object depth information #35
Hi @Zozobozo, yes, you can get the per-pixel depth (for the top K faces that overlap each pixel) from the output of the mesh rasterizer. To retrieve this output, you can initialize a rasterizer and use only that. Or, if you want the full image as well as the depth, you can extend the renderer. We also have a setting to enable perspective-correct depth interpolation (set `perspective_correct=True` in `RasterizationSettings`). If this answers your question, please close this issue! :)
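For reference, the `zbuf` returned by the rasterizer has shape `(N, H, W, K)`: the K nearest face depths per pixel, sorted near-to-far and padded with -1 where fewer than K faces overlap. A minimal NumPy sketch of reading it (the shapes and values here are synthetic stand-ins, not real rasterizer output):

```python
import numpy as np

# Synthetic stand-in for fragments.zbuf with N=1, H=W=2, K=3.
# Real values would come from MeshRasterizer; -1 marks "no face here".
H, W, K = 2, 2, 3
zbuf = np.full((1, H, W, K), -1.0)
zbuf[0, 0, 0] = [1.5, 2.0, -1.0]   # two faces overlap this pixel
zbuf[0, 1, 1] = [0.8, -1.0, -1.0]  # one face overlaps this pixel

# Faces are sorted near-to-far, so slot 0 is the closest depth per pixel.
nearest = zbuf[..., 0]
print(nearest[0, 0, 0])  # 1.5
print(nearest[0, 0, 1])  # -1.0 (background pixel)
```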
Thanks for the quick answer! I'll give it a try!
@nikhilaravi I have tried rendering depth along with images, but found they are not in the same coordinate frame; it seems the y-axis is flipped. Is this an issue, or should I change it manually?
@wangsen1312 this y-flip issue has now been fixed - see #78 for further discussion.
@nikhilaravi Got it, nice work!
Is this kind of depth image differentiable? @nikhilaravi
@Bob-Yeah yes, it should be differentiable.
I'm actually kind of curious now. For the zbuf output (when we optimize with respect to another 2.5D depth-map target), is it differentiable ONLY at pixels where there is a face? Or is it like the SoftSilhouetteShader, where the boundaries can also be optimized?
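To make the "where do gradients exist" question concrete: inside a covering face, the rasterized depth at a pixel is a barycentric combination of that face's vertex depths, so the gradient with respect to each vertex depth is just its barycentric weight, while a pixel with no covering face has zbuf = -1 and receives no gradient. A tiny NumPy illustration of the interpolation (the weights and depths below are made up):

```python
import numpy as np

# Barycentric depth interpolation at one pixel covered by one triangle.
w = np.array([0.2, 0.3, 0.5])   # barycentric weights (sum to 1)
z = np.array([1.0, 2.0, 3.0])   # depths of the triangle's three vertices

depth = w @ z  # interpolated depth at the pixel
# d(depth)/d(z_i) = w_i, so gradients flow to the covering face's vertices.
print(depth)  # 2.3
```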
Following this. I am using this shader, and this, and my output is this. I was wondering how I can extract something with a white gradient and a dark background, like this. What determines your color choice (purple and green), and how can I change it?
When you do
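One way to get a white-gradient-on-dark-background depth image (rather than matplotlib's default colormap) is to normalize the valid depths yourself and display the result with a gray colormap; a sketch on a synthetic zbuf (the values and the 0.2 floor are arbitrary choices, not library defaults):

```python
import numpy as np

# Synthetic 2x2 zbuf; -1 marks background pixels.
depth = np.array([[-1.0, 2.0],
                  [ 3.0, -1.0]])

valid = depth > -1
img = np.zeros_like(depth)  # background stays 0 (black)
dmin, dmax = depth[valid].min(), depth[valid].max()
# Map the nearest depth to 1 (white), the farthest to 0.2,
# so every valid pixel stays brighter than the background.
img[valid] = 0.2 + 0.8 * (1.0 - (depth[valid] - dmin) / max(dmax - dmin, 1e-8))

# plt.imshow(img, cmap='gray') would then render white-on-dark.
print(img)
```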
We are landing a change now that introduces `MeshRendererWithFragments`, which returns both images and fragments, into the renderer library; you can use it in the future.
Thank you so much |
Hi @nikhilaravi, sorry to bother you. I met a problem while trying to convert the rendered depth to a point cloud after getting the zbuf: the point cloud calculated from the rendered depth looks distorted compared with the original mesh. Here is the code I use:

import numpy as np
import matplotlib.pyplot as plt
from pytorch3d.io import load_objs_as_meshes, load_obj
from pytorch3d.renderer import (
    FoVPerspectiveCameras, look_at_view_transform,
    RasterizationSettings, BlendParams,
    MeshRenderer, MeshRasterizer, HardPhongShader
)
import open3d as o3d

width = 512
height = 512
fov = 60
obj_path = './data/examples/models/model_normalized.obj'

verts, faces, aux = load_obj(obj_path)
meshes = load_objs_as_meshes([obj_path])

R, T = look_at_view_transform(2.7, 10, 20)
cameras = FoVPerspectiveCameras(R=R, T=T, fov=fov)
raster_settings = RasterizationSettings(
    image_size=(height, width),
    blur_radius=0.0,
    faces_per_pixel=1,
    # max_faces_per_bin=20000
)
rasterizer = MeshRasterizer(
    cameras=cameras,
    raster_settings=raster_settings
)
depth = rasterizer(meshes).zbuf.cpu().squeeze().numpy()

cx = width / 2
cy = height / 2
# fov is in degrees, so convert to radians before taking the tangent
fx = cx / np.tan(np.radians(fov) / 2)
fy = cy / np.tan(np.radians(fov) / 2)

row = height
col = width
# pixel grid: v indexes rows (image y), u indexes columns (image x)
v, u = np.mgrid[0:row, 0:col]

X_ = (u - cx) / fx
X_ = X_[depth > -1]  # exclude background (zbuf == -1)
Y_ = (v - cy) / fy
Y_ = Y_[depth > -1]  # exclude background (zbuf == -1)
depth_ = depth[depth > -1]
X = X_ * depth_  # multiply the normalized ray by depth exactly once
Y = Y_ * depth_
Z = depth_
coords_g = np.stack([X, Y, Z])  # shape: 3 x num_points

pcd = o3d.geometry.PointCloud()
pcd.points = o3d.utility.Vector3dVector(coords_g.T)
o3d.visualization.draw_geometries([pcd])

Any suggestion would be helpful. Please reply at your convenience. Thanks!
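The back-projection math in the snippet above can be checked in isolation with a synthetic pinhole camera: project a known 3D point to a pixel, then un-project it with the same intrinsics and confirm it comes back. Two details matter: fov must be converted to radians before np.tan, and the depth should multiply the normalized ray exactly once. A self-contained sketch (the point and intrinsics are made up):

```python
import numpy as np

width, height, fov = 512, 512, 60.0
cx, cy = width / 2, height / 2
# fov is in degrees: convert to radians before taking the tangent
fx = fy = cx / np.tan(np.radians(fov) / 2)

# A known camera-space point in front of the camera
X, Y, Z = 0.3, -0.2, 2.0

# Pinhole projection to pixel coordinates
u = fx * X / Z + cx
v = fy * Y / Z + cy

# Back-projection: multiply the normalized ray by depth exactly once
X_back = (u - cx) / fx * Z
Y_back = (v - cy) / fy * Z

print(round(X_back, 6), round(Y_back, 6))  # 0.3 -0.2
```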
I get a "Process finished with exit code 139 (interrupted by signal 11: SIGSEGV)" error when using the rasterizer for zbuf. Is anyone familiar with this who can help? The code is from the posts here and similar renderer demos.
This issue is closed. Please open new issues with all the details for help with other things.
Hi,
Thanks for sharing your great work!
I was wondering - can one render a mesh and get per-pixel depth too, like e.g. mesh-renderer allows you to do?
Thanks a lot!
Z.