
Jagged edges + "glitchy" overlap when rendering multiple meshes in a single scene?  #657

Closed
@collinskatie


❓ Questions on how to use PyTorch3D

Hi, thank you for making such an amazing tool!

I'm currently trying to render a scene of block towers. I am following the method proposed in Issue #15: concatenating several meshes (in this case, cubes) into a single mesh to render. However, the edges of these cubes sometimes look very jagged, particularly at low resolutions, and adding blur/blending still leaves overlap artifacts.

I'm curious whether anyone on the PyTorch3D dev team (or any other user!) has recommendations on how to better render multiple meshes into a single image, where each mesh may have a different color. Does the renderer have issues resolving face overlap, or cases where one block sits directly on top of another (and if so, how might this be resolved on the fly)?
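For what it's worth, the core of the Issue #15 approach is just shifting each mesh's face indices by the number of vertices that precede it, and that offset must *accumulate* across meshes. A minimal NumPy sketch of that bookkeeping (using toy triangle data, not real cube meshes):

```python
import numpy as np

def join_meshes(mesh_list):
    """Concatenate (verts, faces) pairs into one mesh.

    Each mesh's face indices are shifted by the total number of
    vertices in all *prior* meshes; overwriting the offset instead
    of accumulating it is a common source of glitchy geometry.
    """
    all_verts, all_faces = [], []
    vert_offset = 0
    for verts, faces in mesh_list:
        all_faces.append(faces + vert_offset)
        all_verts.append(verts)
        vert_offset += verts.shape[0]  # accumulate, don't overwrite
    return np.concatenate(all_verts), np.concatenate(all_faces)

# Three copies of a toy one-triangle "mesh" (hypothetical data)
tri = (np.zeros((3, 3)), np.array([[0, 1, 2]]))
verts, faces = join_meshes([tri, tri, tri])
print(faces)  # third triangle's indices start at 6, not 3
```

If the offset is overwritten rather than accumulated, every mesh after the second reuses vertex indices from the wrong cube, which shows up as exactly this kind of glitchy, interpenetrating geometry.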

Thank you!

Below are some examples of cube "glitches" (the top is shown at 32x32 resolution and the bottom at 256x256). I've also included a cube mesh in isolation, showing the "jagged" edges.

[Screenshots: the cube glitches at 32x32 and 256x256 resolution, and an isolated cube with jagged edges]

Here is a sampling of the code used to combine the meshes, in case anything is particularly amiss there.

    # Modified from PyTorch3D tutorial
    # https://github.com/facebookresearch/pytorch3d/blob/master/docs/tutorials/render_textured_meshes.ipynb
    # Assumes standard imports: torch, numpy as np,
    # pytorch3d.renderer (cameras, rasterizer, shader, BlendParams, TexturesVertex)
    # and pytorch3d.structures (Meshes, join_meshes_as_batch)
    device = sizes.device
    R, T = look_at_view_transform(1.0, 90, 180,
                                  up=((0.0, -1.0, 0.0),),
                                  at=((0.0, 1, -0.2),))  # view top to see stacking
    cameras = FoVPerspectiveCameras(device=device, R=R, T=T,
                                    fov=45.0)
    # Settings for rasterizer (optional blur)
    # https://github.com/facebookresearch/pytorch3d/blob/1c45ec9770ee3010477272e4cd5387f9ccb8cb51/pytorch3d/renderer/mesh/shader.py
    blend_params = BlendParams(sigma=1e-3, gamma=1e-4, background_color=(0.0, 0.0, 0.0))
    raster_settings = RasterizationSettings(
        image_size=im_size,  # crisper objects + texture w/ higher resolution
        blur_radius=np.log(1. / 1e-4 - 1.) * blend_params.sigma,
        faces_per_pixel=1,  # increase at cost of GPU memory,
        bin_size=0
    )
    lights = PointLights(device=device, location=[[0.0, 3.0, 0.0]])  # top light
    # Compose renderer and shader
    renderer = MeshRenderer(
        rasterizer=MeshRasterizer(
            cameras=cameras,
            raster_settings=raster_settings
        ),
        shader=SoftPhongShader(
            device=device,
            cameras=cameras,
            lights=lights,
            blend_params=blend_params
        )
    )
    # Create one mesh per element in the batch
    meshes = []
    for batch_idx, n_cubes in enumerate(num_cubes):
        # Combine obj meshes into single mesh from rendering
        # https://github.com/facebookresearch/pytorch3d/issues/15
        vertices = []
        faces = []
        textures = []
        vert_offset = 0 # offset by vertices from prior meshes
        for i, (position, size, color) in enumerate(zip(positions[batch_idx, :n_cubes, :], sizes[batch_idx, :n_cubes],
                                                        colors[batch_idx, :n_cubes, :])):
            cube_vertices, cube_faces = get_cube_mesh(position, size)
            # For now, apply the same color to every mesh vertex (v \in V)
            texture = torch.ones_like(cube_vertices) * color  # [V, 3]
            # Offset face indices (accounts for different indexing, b/c treating as one mesh)
            cube_faces = cube_faces + vert_offset
            vert_offset += cube_vertices.shape[0]  # accumulate across all prior cubes
            vertices.append(cube_vertices)
            faces.append(cube_faces)
            textures.append(texture)
        # Concatenate data into single mesh
        vertices = torch.cat(vertices)
        faces = torch.cat(faces)
        textures = torch.cat(textures)[None]  # (1, num_verts, 3)
        textures = TexturesVertex(verts_features=textures)
        # each element of the verts list is a different mesh in the batch
        mesh = Meshes(verts=[vertices], faces=[faces], textures=textures)
        meshes.append(mesh)
    batched_mesh = join_meshes_as_batch(meshes)
    # Render image
    img = renderer(batched_mesh)   # (B, H, W, 4)
    # Drop the alpha channel and return (B, im_size, im_size, 3)
    img = img[..., :3]

Thanks for any help/advice!
