
render_to_depth_image() on MacBook (macOS) gives a half-width, half-height scene, but at the right resolution #6999

Closed
jerome-godbout-lychens opened this issue Oct 4, 2024 · 3 comments · Fixed by #7001
Labels
bug Not a build issue, this is likely a bug.

Comments


jerome-godbout-lychens commented Oct 4, 2024


Describe the issue

When using both render_to_image() and render_to_depth_image() on a scene, each produces an image at the right resolution in my case. But in the depth image, the rendered scene occupies only half the width and half the height of the actual rendered scene. This might be related to the MacBook's pixel density.

The render_to_image() result (correct: this is exactly what is shown in the viewport):
[Image: captured_image]

The render_to_depth_image() capture: it has the right image resolution, but the actual scene in it is half the width and half the height (values are normalized to make a black-to-white scale visible):
[Image: captured_image_depth]

The overlap: when I scale the depth capture down to half width and half height and align it at image coordinate (0, 0), it lines up with the rendered scene (shown here composited with 98% alpha on top of the rendered image):
[Image: captured_image_overlay]

I also tried moving the window to an external full-HD monitor with no high pixel density, but the result is the same.
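The mismatch described above can be sketched with plain NumPy. This is a toy stand-in, not actual Open3D output; the assumption that the half-size scene sits in the top-left quadrant of the full-resolution buffer matches the screenshots:

```python
import numpy as np

def upscale_nearest(img, factor=2):
    """Nearest-neighbor upscale: repeat each pixel `factor` times
    along both axes, stretching a half-size depth scene back over
    a full-resolution color capture for comparison."""
    return np.repeat(np.repeat(img, factor, axis=0), factor, axis=1)

# Toy stand-in for the bug: a 4x4 "depth" buffer where the scene
# occupies only the top-left 2x2 quadrant.
depth = np.zeros((4, 4), dtype=np.float32)
depth[:2, :2] = 1.0

scene = depth[:2, :2]                # the half-width, half-height scene
stretched = upscale_nearest(scene)   # 4x4 again, aligned with the color image
```

Comparing `stretched` pixel-by-pixel against the color capture (as in the alpha-composited overlay above) is one way to confirm the scene content, not the buffer resolution, is what differs.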

Steps to reproduce the bug

#6996 contains the original question with the files, but the code is copied below. Just run it and wait a few seconds; it will generate both the depth capture and the color capture.

import open3d as o3d
import numpy as np
import threading

def normalized_depth_image(depth):
    normalized_depth = (depth - depth.min()) / (depth.max() - depth.min()) * 255
    normalized_depth = normalized_depth.astype(np.uint8)
    return normalized_depth

class Example:

    def __init__(self):
        self._app = o3d.visualization.gui.Application.instance
        self._app.initialize()
        self._window = o3d.visualization.gui.Application.instance.create_window("example", 1024, 768)
        self._scene = o3d.visualization.gui.SceneWidget()
        self._scene.scene = o3d.visualization.rendering.Open3DScene(self._window.renderer)
        self._window.add_child(self._scene)
        self._geom_mat = o3d.visualization.rendering.MaterialRecord()
        self._geom_mat.shader = 'defaultUnlit'
        self._geom_mat.point_size = 2.0
        self._capture_image = None
        self._capture_depth = None
        self.add_cube()

    def add_cube(self):
        mesh_box = o3d.geometry.TriangleMesh.create_box(width=3.0, height=3.0, depth=3.0)
        mesh_box.compute_vertex_normals()
        mesh_box.paint_uniform_color([0.7, 0.1, 0.1])
        self._scene.scene.add_geometry("cube", mesh_box, self._geom_mat)
        self._scene.look_at([1.5, 1.5, 1.5], [1.5, -6, 1.5], [0, 1, 0]) 

    def display(self):
        self._app.run()
    
    def image_captured(self, img):
        self._capture_image = np.asarray(img)
        o3d.io.write_image("demo_captured_image.png", o3d.geometry.Image(self._capture_image))
        self.check_capture_completed()

    def depth_captured(self, depth):
        self._capture_depth = np.asarray(depth)
        o3d.io.write_image("demo_captured_image_depth.png", o3d.geometry.Image(normalized_depth_image(self._capture_depth)))
        self.check_capture_completed()

    def capture_depth(self):
        self._scene.scene.scene.render_to_depth_image(self.depth_captured)

    def capture_image(self):
        self._scene.scene.scene.render_to_image(self.image_captured)
    
    def call_main_thread(self, fct):
        o3d.visualization.gui.Application.instance.post_to_main_thread(self._window, fct)

    def check_capture_completed(self):
        if self._capture_image is None:
            self.call_main_thread(self.capture_image)
            return
        if self._capture_depth is None:
            self.call_main_thread(self.capture_depth)
            return
        # do actual work with both image and depth here

if __name__ == "__main__":
    e = Example()
    t = threading.Timer(2, e.check_capture_completed)
    t.start()
    e.display()

Error message

NA

Expected behavior

The depth capture and the color image should have the same resolution and the same scene dimensions, so that the two captures can be superposed.
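A minimal sanity check for this expectation could compare not just the array shapes but where the scene's content sits in each capture. This is a sketch, not Open3D API; the bounding box of "foreground" pixels would catch the half-scale bug even though the resolutions match:

```python
import numpy as np

def content_bbox(mask):
    """Bounding box (rmin, rmax, cmin, cmax) of the True pixels
    in a 2-D boolean mask."""
    rows = np.flatnonzero(mask.any(axis=1))
    cols = np.flatnonzero(mask.any(axis=0))
    return rows[0], rows[-1], cols[0], cols[-1]

# Synthetic captures: the scene fills the color image, but covers
# only the top-left quadrant of the (same-resolution) depth image.
color_mask = np.ones((8, 8), dtype=bool)
depth_mask = np.zeros((8, 8), dtype=bool)
depth_mask[:4, :4] = True

same_footprint = content_bbox(color_mask) == content_bbox(depth_mask)
```

With the bug present, `same_footprint` is false even though both captures are 8x8; once fixed, the two bounding boxes should agree.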

Open3D, Python and System information

- Operating system: macOS 14.6.1 (Sonoma)
- Python version: 3.10.15 (main, Sep 16 2024, 16:25:53) [Clang 15.0.0 (clang-1500.3.9.4)]
- Open3D version: output from python: 0.18.0
- System architecture: apple-silicon (macbook M3 Pro)
- Is this a remote workstation?: no
- How did you install Open3D?: pip
- Compiler version (if built from source): clang 15.0.0

Additional information

Tested on a normal external monitor (with the MacBook screen still the primary monitor). No difference, sadly. I do not have another computer on hand to test on a Windows/Linux PC with a "normal" screen.

Personal code note

Not sure if this helps; I did not debug the whole thing, but here is something that might differ between render-to-image and render-depth-to-image.

Depth:
https://github.com/isl-org/Open3D/blob/db00e339c1645440dea6951c2971ffa759934112/cpp/open3d/visualization/rendering/Renderer.cpp#L113C17-L116C72

Compare with the buffer size used directly for the color image:

image->data_ = std::vector<uint8_t>(buffer.bytes,

I would need to dig into what render->Configure(..., bool depth_image, ...) does exactly, because the rest seems to be the same at a high level. The depth_image flag only seems to affect the view's ConfigureForColorPicking().

Hopefully this can be fixed rather quickly, since it blocks me from doing anything with it so far.

Contributor

rxba commented Oct 5, 2024

Hey @jerome-godbout-lychens, I was able to reproduce the issue you are having on my M3 Pro.
It only occurs when the MacBook's built-in display is open and set as the primary display.
If you connect an external monitor as the default monitor and close the MacBook display, the images render fine for me in this configuration; maybe this works as a workaround for you.

There is also an active PR #6587 for a GLFW upgrade which you could try to see if it fixes the issue.

Update: Just tested the aforementioned GLFW upgrade, it did not change the behaviour.
Update 2: I assume this may be related to hacks of getting pixel screen coordinates on Retina displays with GLFW. I'll investigate in that direction a little bit more.
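For context on the Retina direction: on HiDPI displays, GLFW reports window sizes in logical points while the framebuffer is larger by a content scale factor in physical pixels. A sketch of the arithmetic (the scale factor of 2 is an assumption for typical Retina displays, and this is plain Python, not GLFW/Open3D API):

```python
def framebuffer_size(window_points, scale=2):
    """Physical framebuffer size for a window given in logical points,
    on a display with the given content scale factor."""
    w, h = window_points
    return (w * scale, h * scale)

def scene_fraction(read_size, fb_size):
    """Fraction of each framebuffer axis that a read-back of
    `read_size` pixels would cover."""
    return (read_size[0] / fb_size[0], read_size[1] / fb_size[1])

# The repro script's 1024x768 window on a scale-2 display:
fb = framebuffer_size((1024, 768))       # physical pixels
frac = scene_fraction((1024, 768), fb)   # portion covered by a logical-size read
```

Reading back at the logical size from a scale-2 framebuffer covers exactly half of each axis, which matches the half-width, half-height scene in the depth capture.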

Contributor

rxba commented Oct 6, 2024

Found a fix for this issue; I would be thankful if you could test it with your other RGBD data and see if it works as intended. Thanks!

jerome-godbout-lychens (Author) commented

The workaround of closing the lid works just fine indeed. This solves the issue for now: the code will ultimately run on a headless Linux server, so I can develop the algorithm with the lid closed, but having a fix would still be great. It seems that enabling shadowing fixes it on Mac, which would be nice and would avoid being limited to a single screen while developing. Thanks for the quick turnaround on this.
