Not getting repaired images #4
Comments
What's the resolution of the input? For the pre-trained model, it is best to use images at 256*256. If you use a different resolution, you should resize the image to 256*256 first, then estimate the flow, and finally resize the estimated flow and multiply it by a factor according to the resolution. For example, if your original resolution is 512*512, you should multiply the flow by 2 after you upsample the flow.
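For reference, a minimal sketch of the resize-and-scale step described above, assuming the decoder outputs a flow tensor of shape (1, 2, 256, 256) and that channel 0 holds the horizontal and channel 1 the vertical displacement (both assumptions, not confirmed by the repository):

import torch
import torch.nn.functional as F

def upscale_flow(flow_256, target_h, target_w):
    # Bilinearly resize the 256x256 flow map to the original image size.
    flow_up = F.interpolate(flow_256, size=(target_h, target_w),
                            mode='bilinear', align_corners=True)
    # The displacements were estimated in 256x256 pixel units, so scale them
    # by the ratio between the target size and 256 (e.g. x2 for 512x512).
    scale = torch.tensor([target_w / 256.0, target_h / 256.0],
                         device=flow_up.device, dtype=flow_up.dtype).view(1, 2, 1, 1)
    return flow_up * scale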
Thank you very much for your work. I have found the same issue: the undistorted images come out very similar to the original images, and multiplying the flow has not produced good results. This is what I have at the moment (I reformatted your evaluation code a bit):

import numpy as np
import torch
import torch.nn as nn
from pathlib import Path
from PIL import Image
from torchvision import transforms

# EncoderNet, DecoderNet, ClassNet and resample.resampling.rectification
# are the classes/functions shipped with this repository.

def load_weights(model, path):
    model = nn.DataParallel(model)  # wrap in DataParallel when loading the pre-trained checkpoints
    model = model.cuda() if torch.cuda.is_available() else model
    model.load_state_dict(torch.load(path))
    return model.eval()

def transform_image(image_path):
    transform = transforms.Compose([transforms.Resize((256, 256)),
                                    transforms.ToTensor(),
                                    transforms.Normalize((0.5, 0.5, 0.5),
                                                         (0.5, 0.5, 0.5))])
    im = Image.open(image_path).convert('RGB')
    im_npy = np.asarray(im.resize((256, 256)))
    im_tensor = transform(im).unsqueeze(0)
    if torch.cuda.is_available():
        im_tensor = im_tensor.cuda()
    return im_tensor, im_npy

def rectify_image(image_path, multi=1):
    from resample.resampling import rectification
    im_tensor, im_npy = transform_image(image_path)
    middle = model_en(im_tensor)
    flow_output = model_de(middle)
    return rectification(im_npy, flow_output.data.cpu().numpy()[0] * multi)

model_en = load_weights(EncoderNet([1, 1, 1, 1, 2]), './model_en.pkl')
model_de = load_weights(DecoderNet([1, 1, 1, 1, 2]), './model_de.pkl')
model_class = load_weights(ClassNet(), './model_class.pkl')

testImgPath = 'test_images'
testImgs = [p for ext in ('*.jpg', '*.png') for p in Path(testImgPath).rglob(ext)]

imgPath = testImgs[2]
resImg, resMsk = rectify_image(imgPath)

This is the output:

[output image]
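A hedged way to combine the author's resize-and-scale advice with the snippet above, reusing transform_image, model_en and model_de from it, and assuming both that the flow tensor is (1, 2, 256, 256) with channel 0 horizontal / channel 1 vertical and that rectification accepts an image array with a flow array of matching resolution (neither verified against the repository):

import numpy as np
import torch
import torch.nn.functional as F
from PIL import Image

def rectify_image_full(image_path):
    from resample.resampling import rectification
    # Run the networks on the 256x256 copy, but resample the full-resolution image.
    im_tensor, _ = transform_image(image_path)
    im_full = np.asarray(Image.open(image_path).convert('RGB'))
    h, w = im_full.shape[:2]
    flow = model_de(model_en(im_tensor))  # assumed shape (1, 2, 256, 256)
    # Upsample the flow to the original size and scale its magnitudes accordingly.
    flow = F.interpolate(flow, size=(h, w), mode='bilinear', align_corners=True)
    scale = torch.tensor([w / 256.0, h / 256.0],
                         device=flow.device, dtype=flow.dtype).view(1, 2, 1, 1)
    flow = flow * scale
    return rectification(im_full, flow.data.cpu().numpy()[0])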
Thank you for your reply.
Hi Bruce, the image that I used in my comment comes from the paper itself, so there shouldn't be a problem. If I multiply the flow it eventually gets it right, but overall the quality is very low. I'm probably running something wrong; hopefully @xiaoyu258 can help us.
@nicolasmetallo, I have been on a long trip this time, sorry for the late reply.
@valeriopaolicelli, did you solve this problem?
@xiaoyu258 I have got the results before fitting. How do I get the results after fitting?
How do I do the model fitting?
Dear @xiaoyu258, thank you very much for your work! I would like to ask whether it is possible to adjust the code to work with horizontal wave distortion. I introduced changes in the distortion_model.py file as well as in dataset_generation.py, but after resampling the image stayed the same.
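In case it helps, one generic way to express a wave that displaces pixels horizontally is a sinusoidal x-displacement that varies with the row index. A minimal, purely illustrative sketch follows; the function name and the amplitude/frequency parameters are hypothetical and do not match the repository's distortion_model.py API:

import numpy as np

def horizontal_wave_flow(width, height, amplitude=10.0, frequency=0.05):
    # Each row is shifted left/right by a sinusoid of its y coordinate;
    # amplitude is in pixels, frequency in radians per pixel.
    ys, _ = np.mgrid[0:height, 0:width]
    flow_x = amplitude * np.sin(frequency * ys)  # horizontal displacement per pixel
    flow_y = np.zeros_like(flow_x)               # no vertical displacement
    return np.stack([flow_x, flow_y], axis=0)    # shape (2, height, width)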
Could you help show how to get the result 'after fitting'? Thank you so much @xiaoyu258.
Hi, could you send me the code for evaluating on my own image? I tried to run eval.py, but all I am getting is just the output.mat.
So why do I get the same image before and after correction? Can anyone help me? Thank you.
Thanks for your work. I ran eval.py with your pretrained model and undistorted the images with resampling.py. However, the undistorted images are almost the same as the original images. Besides, I find the values in the .mat file generated by eval.py are much smaller than those generated by dataset_generated.py.