Video editing output size #9
Comments
The color difference is because the editing vector we borrowed from previous papers is not well disentangled, so it also affects color attributes. You can use a face detector to find the face region, then use alpha blending or Poisson blending to alleviate the color difference.
OK, I'd love to learn more about that. I have hired people to do an easy Windows fork with audio paste-back; the next step is pasting the output video back into the input video. After that I was going to see whether we could get this color shift fixed, but it sounds like that's not possible (I will mask as you suggest). It's not something simple like passing the BGR image to the GAN instead of the RGB? Sorry if that doesn't make sense; I saw it in another repo discussing how to avoid color shifts in GFPGAN: they converted RGB to BGR, passed it through, then converted back to RGB.
I did a lot of research and played with the code, and I now understand what you mean. Masking in a third-party editor is giving me fantastic results, so I could not be happier with the color solution. The only thing still bugging me is the lack of a paste-back: currently I am eyeballing the resizing and the x + y translation to re-align with the original footage. I'll see if I can find a solution.
Why can't I change the output size? I can resize before it's written by adding more code; I just wonder why it's acting like this. I figured those dimensions already came back from the model.
Your size should match your video frame, or the video cannot be saved. You cannot use a smaller size (W, H) unless you downsample the (4W, 4H) output frame to (W, H). You can check the documentation of cv2.VideoWriter for the details, or search Stack Overflow for a solution. I'm not an expert in OpenCV.
That's OK, I'm already coding it and just trying to figure out the math. The scaling is working fine; I just can't seem to get the right calculations.
Your code seems more professional than my original one for video configuration.
So currently I am testing on two videos. The first, Alpha.mp4, is 500x500 with a small face region; I downsample to (W, H) and divide by 2, which gives the correct size to overlay onto the original video. The next, Beta.mp4, is 1920x1080 with a large face region; I downsample to (W, H) and multiply by 2.5, which gives the correct size to overlay onto the original video. I then cropped Beta.mp4 to 800x800, also downsampled to (W, H), and multiplied by 2.5, which produced an 800x800 output that was perfect, so I knew I was on the right track. It seems the size of the face must be to blame, or the input not being divisible by 64. What's your guess?
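One way to replace the guessed 0.5 and 2.5 factors is to derive the per-axis scale directly from the two frame sizes. If the x and y factors disagree (as with the 1920x1080 input vs. 1920x1632 output mentioned in this thread), the model extended one axis and a crop is needed before any uniform resize. This helper is an illustration, not part of the repo:

```python
def overlay_scale(orig_size, out_size):
    """Per-axis factor mapping the model's output frame back onto the
    original clip. Sizes are (width, height) tuples."""
    sx = orig_size[0] / out_size[0]
    sy = orig_size[1] / out_size[1]
    return sx, sy

# Uniform case: a single resize suffices.
print(overlay_scale((500, 500), (1000, 1000)))  # (0.5, 0.5)
# Non-uniform case: only the height changed, so crop before resizing.
print(overlay_scale((1920, 1080), (1920, 1632)))
```

Computing the factor per video, rather than hard-coding one per clip, also explains why different face sizes appeared to need different constants.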
Hi, fantastic job! I don't understand the output resolution in video editing. It looks like it tracks a single face and zooms in; what would be the best way to return to the original size in a video editing app? Could I just do a 2x zoom out and a move in x or y, or something?
For my example the original size is 1920x1080 and the output is 1920x1632.