ONNX Model done #315
Comments
Thank you so much for this!
Amazing work! Do you want to send a pull request mentioning this at the top of the README page?
We are currently preparing an update to speed up the ONNX model. When we release it, we will submit a pull request with a guide on how to export the model.
@senya-ashukha We have released an updated ONNX model and submitted PR #316
Hey @OPHoperHPO @senya-ashukha, first of all, thanks for this conversion. I was trying the ONNX model, and it seems to differ from the original model and doesn't perform that well. Since PyTorch is now able to export `fft_rfftn` layers to ONNX with the new `dynamo_export` function, do you think there is a way to export the model directly? I tried doing that, but it errors out; seems like `dynamo_export` is still in beta ;-;
Hello @K-prog,
Could you share your code and the images where the ONNX model performs worse than the original? Many code implementations run into issues with image preprocessing/postprocessing, because some of those operations differ from the ones used with the original model (see the code of our Hugging Face demo Space). I suggest checking this aspect of your code first.
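To illustrate the kind of preprocessing mismatch meant here, below is a minimal sketch of a typical LaMa-style input pipeline. The exact parameters are assumptions for illustration (float32 input in `[0, 1]`, NCHW layout, sides padded to a multiple of 8); the real values depend on how the specific ONNX graph was exported.

```python
import numpy as np

def pad_to_modulo(img, mod=8):
    """Reflect-pad H and W up to the next multiple of `mod`."""
    h, w = img.shape[:2]
    ph, pw = (mod - h % mod) % mod, (mod - w % mod) % mod
    return np.pad(img, ((0, ph), (0, pw), (0, 0)), mode="reflect")

def preprocess(img_uint8, mod=8):
    """uint8 HWC in [0, 255] -> float32 NCHW batch in [0, 1]."""
    x = pad_to_modulo(img_uint8, mod).astype(np.float32) / 255.0
    return np.transpose(x, (2, 0, 1))[None]  # HWC -> 1xCxHxW

img = np.random.randint(0, 256, (510, 509, 3), dtype=np.uint8)
batch = preprocess(img)
print(batch.shape)  # (1, 3, 512, 512)
```

If two pipelines disagree on even one of these steps (value range, padding, channel order), the model outputs will diverge even though the weights are identical.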
Regarding export via `dynamo_export`: if you compare […]
Thanks for the clarification @OPHoperHPO. I used the same preprocessing/postprocessing functions as provided in your conversion notebook for the ONNX inference; for the PyTorch inference, I followed Sanster's lama-cleaner code, as it's clearer and better structured. As far as I have experimented with the ONNX implementation on multiple images, it seems to fall short at preserving structures; I'm sharing an example with both the PyTorch and ONNX results here. I'll also be drafting you an email with more context and information.
Interestingly, I checked the original code (without exporting to ONNX) and couldn't get a result close to yours using Torch. The quality I achieved was similar to ONNX; here is a Colab notebook demonstrating this. The issue is not with ONNX, since the original code gives the same result. The problem lies with the operations in the ExportLama layers, the dependency versions, or the preprocessing operations. Could you provide a Colab notebook similar to this one, where the code for working with the original model is isolated from external factors and produces your Torch result with such a mask?
@OPHoperHPO, I have created an isolated notebook and removed all the external factors (courtesy here). I also tried replacing just the inference part with ONNX while keeping the same preprocessing logic, but it gives weird outputs. Also, I have sent you an email at [email protected]; waiting for your acknowledgement!
`inpainted_image /= 255.0`

I checked your notebook and noticed an error in the ONNX postprocessing. You just need to divide the output by 255, as the ONNX model multiplies by it. This will give you the same results as Torch LaMa. So it's not an issue with the ONNX model; it's a postprocessing issue that causes the difference in results. See this corrected version.
Haah, thanks for catching my little blunder ;-; Big, big thanks for your help!
@OPHoperHPO It works well with fp32, but fp16 sometimes produces completely black results, possibly due to numerical overflow. Is there a way to solve this?
@ljdang fp16 doesn't show any visible losses for me; is your conversion correct?
@K-prog I use MNN to convert the model to FP16 format, as mentioned in this issue: alibaba/MNN#2977. Only some images have problems.
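The black-output symptom is consistent with float16 overflow: fp16 tops out at 65504, and large intermediate activations (e.g. in FFT-based layers) can exceed that, becoming inf/NaN and propagating to the final image. A minimal numpy illustration of the failure mode (not MNN-specific):

```python
import numpy as np

fp16_max = np.finfo(np.float16).max  # 65504.0

# An intermediate activation that fits comfortably in fp32...
act = np.float32(1e5)

# ...overflows to inf when the graph runs entirely in fp16.
act_fp16 = np.float16(act)
print(fp16_max, act_fp16)  # 65504.0 inf

# Downstream arithmetic then turns inf into NaN, which typically
# renders as black once cast to an integer image format.
print(act_fp16 - act_fp16)  # nan
```

A common mitigation (hedged, since the right knob depends on the converter) is mixed precision: keep the overflow-prone ops in fp32 and convert only the rest of the graph to fp16, which would explain why only some images trigger the issue.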
Hello everyone,
We at @Carve-Photos have successfully ported LaMa (big-lama) to ONNX with results closely resembling the original. Check it out here: https://huggingface.co/Carve/LaMa-ONNX
Update: here is an HF Space that uses our model: https://huggingface.co/spaces/Carve/LaMa-Demo-ONNX