BlenderNeRF v5
This release mainly focuses on fixing a bug pointed out in issue #18, in which a miscomputation of the camera intrinsics sometimes led to mismatching fields of view between ground truth Blender renders and the corresponding NeRF renders. The mismatch was caused by an incorrect computation of the focal lengths when the camera sensor fit is not Horizontal.
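As a rough illustration of why the sensor fit matters, the sketch below computes per-axis focal lengths in pixels from Blender-style camera settings. The helper name and the exact `AUTO` semantics are my assumptions for illustration, not BlenderNeRF code: `AUTO` is assumed to fit the sensor width to whichever image axis is larger on screen (resolution scaled by pixel aspect), `HORIZONTAL` to fit the sensor width to the image width, and `VERTICAL` to fit the sensor height to the image height.

```python
def focal_lengths_px(lens_mm, sensor_w_mm, sensor_h_mm, fit,
                     res_x, res_y, pa_x=1.0, pa_y=1.0):
    """Hypothetical helper: per-axis focal lengths in pixels.

    fit is one of 'AUTO', 'HORIZONTAL', 'VERTICAL';
    pa_x / pa_y are the pixel aspect components.
    """
    if fit == 'AUTO':
        # Assumption: AUTO applies the sensor width to the larger
        # on-screen dimension and ignores the sensor height setting.
        fit = 'HORIZONTAL' if res_x * pa_x >= res_y * pa_y else 'VERTICAL'
        sensor_h_mm = sensor_w_mm
    if fit == 'HORIZONTAL':
        fx = lens_mm * res_x / sensor_w_mm
        fy = fx * pa_x / pa_y  # non square pixels change the vertical focal
    else:  # VERTICAL
        fy = lens_mm * res_y / sensor_h_mm
        fx = fy * pa_y / pa_x
    return fx, fy
```

Using the wrong branch here (e.g. always the Horizontal one) yields focal lengths, and hence fields of view, that only agree with Blender when the sensor fit happens to be Horizontal, which matches the symptom reported in issue #18.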
To this end, 27 datasets have been captured for debugging purposes, evaluating different aspect ratios, pixel ratios and camera sensor fits. The datasets and the corresponding Blender file are made available here, and a thorough description of the data and capturing process can be found in the contained README.txt file.
The mismatch should now be resolved. Below are a few results in which NeRF renders (left column of each image) are compared to their respective ground truth rendered in Blender with pixel ratio 1:1 (right column). The camera parameters for each set of renders are listed in the table underneath the image. The NGP renders are automatically stretched or squeezed to undo the non uniform pixel ratio of the training frames. Feel free to inspect the data for further details.
| | Left | Middle | Right |
|---|---|---|---|
| Sensor Fit | `Auto` | `Horizontal` | `Vertical` |
| Aspect Ratio | 1 : 1 | 16 : 9 | 2 : 3 |
| Resolution | 300 * 300 | 576 * 324 | 300 * 450 |
| Pixel Ratio | 1 : 1.5 | 1.5 : 1 | 1 : 1 |
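As a minimal sketch of what the stretching or squeezing mentioned above amounts to (the function name and nearest-neighbor resampling are my assumptions, not the actual NGP implementation), a render with a non uniform pixel ratio can be resampled to square pixels as follows:

```python
import numpy as np

def undo_pixel_ratio(img, pa_x, pa_y):
    """Hypothetical sketch: resample a render to square pixels with
    nearest-neighbor indexing, given pixel aspect components pa_x : pa_y."""
    h, w = img.shape[:2]
    if pa_x >= pa_y:
        # pixels wider than tall: stretch the image horizontally
        new_w = round(w * pa_x / pa_y)
        cols = (np.arange(new_w) * w / new_w).astype(int)
        return img[:, cols]
    # pixels taller than wide: stretch the image vertically
    new_h = round(h * pa_y / pa_x)
    rows = (np.arange(new_h) * h / new_h).astype(int)
    return img[rows]
```

For example, a 576 * 324 frame with pixel ratio 1.5 : 1 resamples to 864 * 324, and a 300 * 300 frame with pixel ratio 1 : 1.5 resamples to 300 * 450.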
Below are two relevant observations and takeaways.
- For the middle image, the NeRF volume is partially cropped at the top and bottom of the donut. This is because the training images are stretched out (pixel ratio 1.5:1), and therefore only a smaller region of the donut is visible in these frames (see the corresponding frames in the data). As validated here, the NeRF render reshapes the scene to a uniform pixel ratio, thereby undoing the stretching effect.
- The `Vertical` sensor fit often results in somewhat distorted NeRF volumes. I am however unsure of the cause, and consequently recommend avoiding it if possible.
This release additionally includes a new feature and a few warning/error fixes.
- The number of training frames used from the training camera with the TTC method can now be set independently of the number of testing frames. The latter remains determined by the frame range of the Blender scene.
- The issues highlighted in pull request #19 have been resolved.