
10 resnet blocks, w/ semantically similar dataset reaches 1.5bpsp #13

Open
lidless-vision opened this issue Mar 27, 2021 · 0 comments

I have a private dataset of 300k semantically similar images. With the default model (5 ResNet blocks, default settings, trained on a GTX 1060) I was able to achieve 1.75 bpsp.

I also made a model with 10 ResNet blocks and a 256px crop size, and it has achieved 1.44 bpsp. I think it could go further, but my GPUs are now working on something else.
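For anyone comparing numbers: bpsp is bits per subpixel, i.e. total compressed bits divided by H × W × C. A quick illustrative calculation (the example numbers are made up, not from my runs):

```python
def bpsp(compressed_bytes: int, h: int, w: int, channels: int = 3) -> float:
    """Bits per subpixel: total bits divided by the H * W * C subpixel count."""
    return compressed_bytes * 8 / (h * w * channels)

# e.g. a 512x768 RGB image compressed to 221,184 bytes:
print(bpsp(221_184, 512, 768))  # 1.5
```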

Note: my eval dataset consists of only 6 images.

Check it out:
https://wandb.ai/impudentstrumpet/compress/reports/SReC-Ablations--Vmlldzo1NTQyMTM?accessToken=fbmtc0gz2w1f9dt2lmbyz0avsw4nbrsd32j0a38d6rhk07ebuuoqbjf1yq0le4o8

When trying to compress a large image, it quickly runs out of memory. I'd like to modify this code to chop large images into smaller tiles that fit on the GPU; a sketch of the idea is below.
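Something like this could work for the chopping step (just a sketch; `iter_tiles` and the 256 default are my own placeholders, not anything in this repo):

```python
# A minimal sketch of the tiling idea, independent of SReC's actual API:
# split an image into fixed-size crops so each crop can be compressed
# within GPU memory. Border crops may be smaller than `tile`.
import numpy as np

def iter_tiles(img: np.ndarray, tile: int = 256):
    """Yield (y, x, crop) triples covering an H x W x C image."""
    h, w = img.shape[:2]
    for y in range(0, h, tile):
        for x in range(0, w, tile):
            yield y, x, img[y:y + tile, x:x + tile]

# Each crop would be compressed independently; storing the (y, x) offsets
# next to the per-crop bitstreams lets the decoder reassemble the image.
```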

This project is super cool, thanks so much!
