How to use SfMPerspectiveCameras for renderer? #18
Hi @wangg12, could you please share the rendering script? More specifically, it is not obvious what the position of the LM object is in world coordinates. Furthermore, the "R" matrix you are using does not seem to be a valid rotation matrix.
In case the LM object is centered around (0, 0, 0), I would suggest setting the camera to a trivial one: identity rotation and a translation with a non-zero z-component, e.g. T = (0, 0, 2). This would point the camera directly at the object center and render it from a distance of 2 world units.
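A minimal sketch of that suggestion, assuming the object mesh is centered at the origin (the camera class is the `SfMPerspectiveCameras` discussed in this thread):

```python
import torch
from pytorch3d.renderer import SfMPerspectiveCameras

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Trivial extrinsics: identity rotation and the camera placed 2 world
# units along +Z, looking straight at an object centered at the origin.
R = torch.eye(3, device=device)[None]                 # (1, 3, 3)
T = torch.tensor([[0.0, 0.0, 2.0]], device=device)    # (1, 3)

cameras = SfMPerspectiveCameras(R=R, T=T, device=device)
```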
Sorry,
@wangg12 could you please share a small script that reproduces the error and that we can run easily? For example, load and render one mesh with the settings you have mentioned. This will enable us to help you much faster :)
Finally, I got the correct results with a modified OpenGL real perspective camera. BTW, @nikhilaravi, how can I render the differentiable depth values?
Haven't tried this myself since I am working with RGB, but line 37 of renderer.py exposes the fragments, which should have a zbuf attribute; the documentation describes this as well.
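For reference, a small self-contained sketch of reading depth out of the rasterizer's fragments (the single-triangle mesh and trivial camera below are placeholders just to make the snippet runnable):

```python
import torch
from pytorch3d.renderer import MeshRasterizer, RasterizationSettings, SfMPerspectiveCameras
from pytorch3d.structures import Meshes

# A single triangle as a stand-in mesh, and a trivial camera 2 units away.
verts = torch.tensor([[-0.5, -0.5, 0.0], [0.5, -0.5, 0.0], [0.0, 0.5, 0.0]])
faces = torch.tensor([[0, 1, 2]])
mesh = Meshes(verts=[verts], faces=[faces])
cameras = SfMPerspectiveCameras(T=torch.tensor([[0.0, 0.0, 2.0]]))

raster_settings = RasterizationSettings(image_size=256, blur_radius=0.0, faces_per_pixel=1)
rasterizer = MeshRasterizer(cameras=cameras, raster_settings=raster_settings)

# Fragments hold the per-pixel rasterization outputs, including zbuf.
fragments = rasterizer(mesh)
depth = fragments.zbuf[..., 0]   # (1, 256, 256): depth of the closest face, -1 where no face
```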
@wangg12 Glad you got it to work. I can take a look at your script later! In future, please try to share the minimum amount of code needed to explain the issue so we can review more quickly :) Thanks @aluo-x for explaining the zbuf/fragments :) As mentioned, we expose the z-buffer as part of the fragments returned by the rasterizer.
@wangg12 Thanks for sharing your script. I am trying to do something similar with it, but my image size is (960, 1280) and the rendering does not come out right. I believe it has something to do with the rasterizer, since it only seems to work for square images.
@bhadresh74 Yes, the rasterizer currently only works for square images!
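A sketch of the crop-based workaround used earlier in the thread (render at a square resolution equal to the longer side, then crop to the target size); the tensor below is a stand-in for the renderer output, and whether a center crop or an offset crop is appropriate depends on where the principal point lies:

```python
import torch

H, W = 960, 1280
S = max(H, W)                     # square render size: 1280

# Stand-in for the renderer output; in practice this would be
# `renderer(mesh)` with RasterizationSettings(image_size=S, ...).
images = torch.zeros(1, S, S, 4)  # (N, S, S, 4) RGBA

# Center crop to H x W; adjust the offsets if the principal point
# is not at the image center.
top, left = (S - H) // 2, (S - W) // 2
images_crop = images[:, top:top + H, left:left + W, :]   # (1, 960, 1280, 4)
```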
Hi @bhadresh74, since 57a22e7 we have updated the cameras and added a note explaining the coordinate spaces and definitions here: https://github.com/facebookresearch/pytorch3d/blob/master/docs/notes/cameras.md Note that by default PyTorch3D cameras assume parameters in NDC space for rendering. We have updated cameras.py such that SfMPerspective and SfMOrthographic cameras can now accept parameters in both NDC and screen space.
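To illustrate the NDC convention for a square render, here is a sketch of converting screen-space intrinsics to NDC; the intrinsic values are placeholders and the conversion formulas are an assumption that should be checked against the cameras note linked above:

```python
import torch
from pytorch3d.renderer import SfMPerspectiveCameras

# Placeholder screen-space intrinsics in pixels (not actual calibration values).
fx, fy, cx, cy = 600.0, 600.0, 320.0, 240.0
image_size = 640                              # square render size in pixels

# Assumed screen -> NDC conversion for a square image: focal lengths scale
# by 2 / image_size; the principal point is recentered and sign-flipped
# since PyTorch3D NDC has +X pointing left and +Y pointing up.
fx_ndc = fx * 2.0 / image_size
fy_ndc = fy * 2.0 / image_size
px_ndc = -(cx - image_size / 2.0) * 2.0 / image_size
py_ndc = -(cy - image_size / 2.0) * 2.0 / image_size

# Trivial extrinsics just to keep the example self-contained.
R = torch.eye(3)[None]
T = torch.tensor([[0.0, 0.0, 2.0]])

cameras = SfMPerspectiveCameras(
    focal_length=((fx_ndc, fy_ndc),),
    principal_point=((px_ndc, py_ndc),),
    R=R,
    T=T,
)
```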
I want to use this differentiable renderer to render a LINEMOD object. The .ply model vertices are loaded and then scaled from mm to m by a factor of 0.001.

The camera intrinsics:

Rotation and translation:

I followed the camera position tutorial to set up the cameras and renderers, except that the camera I used is `SfMPerspectiveCameras`. And since I want to get a 480x640 image, I set `image_size=640` and crop to 480x640 after rendering. However, there is nothing in the rendered results. I could get correct results with an OpenGL renderer with the same settings.

So I wonder how I can correctly use `SfMPerspectiveCameras`?
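For context, a hedged sketch of the kind of setup described above. The mesh path, pose, and intrinsic values are placeholders rather than actual LINEMOD data, and the transpose plus x/y axis flip applied to an OpenCV-style pose is an assumption to be verified against the cameras note:

```python
import torch
from pytorch3d.io import load_ply
from pytorch3d.structures import Meshes
from pytorch3d.renderer import (
    HardPhongShader,
    MeshRasterizer,
    MeshRenderer,
    PointLights,
    RasterizationSettings,
    SfMPerspectiveCameras,
    TexturesVertex,
)

device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

# Load the .ply model and scale vertices from mm to m (hypothetical path).
verts, faces = load_ply("obj_01.ply")
verts = verts * 0.001
textures = TexturesVertex(verts_features=torch.ones_like(verts)[None])
mesh = Meshes(verts=[verts], faces=[faces], textures=textures).to(device)

# Placeholder object pose. If R_cv, t_cv follow an OpenCV-style convention,
# they are assumed to need a transpose and an x/y axis flip to match
# the PyTorch3D camera convention.
R_cv = torch.eye(3)
t_cv = torch.tensor([0.0, 0.0, 1.0])
R = R_cv.t()[None].clone()
T = t_cv[None].clone()
R[:, :, :2] *= -1
T[:, :2] *= -1

# NDC intrinsics for a 640x640 render (placeholder values; see the
# conversion sketch earlier in the thread).
cameras = SfMPerspectiveCameras(
    focal_length=((1.875, 1.875),),
    principal_point=((0.0, 0.0),),
    R=R,
    T=T,
    device=device,
)

raster_settings = RasterizationSettings(image_size=640, blur_radius=0.0, faces_per_pixel=1)
lights = PointLights(device=device, location=[[0.0, 0.0, -2.0]])
renderer = MeshRenderer(
    rasterizer=MeshRasterizer(cameras=cameras, raster_settings=raster_settings),
    shader=HardPhongShader(device=device, cameras=cameras, lights=lights),
)

images = renderer(mesh)                   # (1, 640, 640, 4) RGBA
image_480x640 = images[:, 80:560, :, :]   # crop 640x640 -> 480x640
```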