I have a private dataset of 300k semantically similar images. With the default model (5 ResNet blocks, default settings, trained on a GTX 1060) I was able to achieve 1.75 bpsp.
I also trained a model with 10 ResNet blocks and a 256px crop size, which achieved 1.44 bpsp. I think it could go further, but my GPUs are now working on something else.
Note: my eval dataset consists of only 6 images.
Check out the ablations here:
https://wandb.ai/impudentstrumpet/compress/reports/SReC-Ablations--Vmlldzo1NTQyMTM?accessToken=fbmtc0gz2w1f9dt2lmbyz0avsw4nbrsd32j0a38d6rhk07ebuuoqbjf1yq0le4o8
When trying to compress a large image, it quickly runs out of memory. I'd like to modify this code to chop large images into smaller tiles that fit into GPU memory.
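
Something like this rough sketch is what I have in mind, with `compress_tensor()` standing in for whatever call actually compresses a single image tensor in this repo (hypothetical name, not the real API), and ignoring that SReC may need tile sizes padded to a multiple of its downsampling factor:

```python
# Minimal tiling sketch (not the repo's actual API). `compress_tensor` is a
# hypothetical stand-in for the per-image compression call; padding edge
# tiles to the model's required multiple is not handled here.
from PIL import Image
import torchvision.transforms.functional as TF

def compress_in_tiles(path, compress_tensor, tile=512):
    img = Image.open(path).convert("RGB")
    x = TF.to_tensor(img)              # (3, H, W), floats in [0, 1]
    _, h, w = x.shape
    pieces = []
    for top in range(0, h, tile):
        for left in range(0, w, tile):
            # Edge tiles can be smaller than tile x tile.
            patch = x[:, top:top + tile, left:left + tile]
            pieces.append(((top, left), compress_tensor(patch)))
    return pieces                      # [((row, col), compressed_bytes), ...]
```

Each tile would then be decoded and stitched back by its (row, col) offset; smaller tiles would likely cost a little bpsp since pixels near tile borders lose context.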
This project is super cool, thanks so much!