Automated Low-Detail Mesh Generation using Point Cloud Reconstruction

Generating low-detail meshes of large-scale environments can be a labor-intensive process. Together with Daan Niphuis at Force Field VR, I developed a method that almost completely automates this process while preserving the shape, color, and lighting of the original 3D environment. Reducing an environment for distant LODs, which previously might have taken one to several full days, is now a 30-60 minute process, depending on the complexity and size of the scene.

Developed at Force Field VR during my graduation internship. Daan Niphuis, Engine Programmer, helped me implement the system in Unreal Engine 4.

In this example, I used the UE4 plugin we developed on a scene from Unreal Tournament. As you can see, the shape, color, and lighting of the original environment are kept intact, while the geometry is greatly reduced.

The basic principle of this method is similar to that of photogrammetry, or photoscanning, but it skips the cumbersome step of reconstructing the 3D point cloud from images: the point cloud is already available in the game engine. First, in Unreal Engine 4, we set up several locations from which the point cloud is captured. This works by rendering the scene from six different angles per capture point and gathering world locations, normals, and base colors. We also capture high-detail images of the final lit scene; these are later projected onto the generated mesh for the final textures.
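To make the capture step concrete: each of the six renders can be turned into world-space points by unprojecting its depth buffer through the inverse view-projection matrix. Below is a minimal Python/NumPy sketch of that math; the function name and buffer layout are illustrative, not the plugin's actual UE4 code.

```python
import numpy as np

def unproject_depth(depth, inv_view_proj):
    """Turn one capture's depth buffer into world-space points.

    depth:         (H, W) array of NDC depths
    inv_view_proj: 4x4 inverse of that capture's view-projection matrix
    """
    h, w = depth.shape
    # Map pixel centers to NDC x/y in [-1, 1]
    xs = (np.arange(w) + 0.5) / w * 2.0 - 1.0
    ys = 1.0 - (np.arange(h) + 0.5) / h * 2.0
    gx, gy = np.meshgrid(xs, ys)
    # Homogeneous NDC coordinates, one per pixel
    ndc = np.stack([gx, gy, depth, np.ones_like(depth)], axis=-1)
    world = ndc @ inv_view_proj.T              # back to world space
    return (world[..., :3] / world[..., 3:4]).reshape(-1, 3)
```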

The resulting images and point cloud are then brought into the open-source program MeshLab. Some manual labor is required in this step: the original environment might not be watertight, which causes leaks in the point cloud. After those are cleaned up, it looks like this.

The large purple fields are blockers: they render black in the final UE4 image, but their base color is purple. This lets you cut holes in the mesh where doorways should be. This is the point cloud in 3D (much simplified here; the original point cloud contained over 14,000,000 points).
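Because the blockers are keyed by base color, cutting them out later is a simple color test: any face close enough to the key color gets flagged for deletion. A hypothetical sketch; the exact key color and tolerance in the real pipeline may differ.

```python
import numpy as np

def blocker_face_mask(face_colors, tol=0.25):
    """Flag faces whose base color is close to the blocker key color.

    face_colors: (N, 3) array of linear RGB face colors in [0, 1].
    The magenta key and tolerance are assumptions for illustration.
    """
    key = np.array([1.0, 0.0, 1.0])  # pure magenta
    return np.linalg.norm(face_colors - key, axis=1) < tol
```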

Next, we can begin reconstructing the mesh. We used MeshLab's implementation of the Poisson Surface Reconstruction algorithm, which takes several minutes to compute but results in a very smooth mesh. No textures are applied yet; all you see are the vertex colors. We can now cut out the magenta faces and reduce the mesh to a low polycount. We use Quadric Edge Collapse decimation for this step.
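Both filters are also scriptable through MeshLab's Python bindings, pymeshlab, if you want to automate this step as well. A minimal sketch, assuming the point cloud carries normals and colors; note that pymeshlab filter names have changed across releases (the identifiers below follow the 2022.2+ naming).

```python
import pymeshlab

ms = pymeshlab.MeshSet()
ms.load_new_mesh("pointcloud.ply")  # points with normals and colors

# Poisson surface reconstruction; higher depth = more detail, slower
ms.generate_surface_reconstruction_screened_poisson(depth=10)

# Quadric edge collapse down to the target triangle budget
ms.meshing_decimation_quadric_edge_collapse(targetfacenum=10000)

ms.save_current_mesh("lod_mesh.obj")
```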

The end result is a much simpler mesh: only 10,000 triangles remain of the original geometry. This mesh is then exported to a 3D package to be UV-unwrapped. For Blender, I found that Vilém Duha's Auto Seams Unwrap add-on works fantastically; it produces nice, medium-sized islands that pack together well.
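If you prefer a fully scripted unwrap, Blender's built-in Smart UV Project is a serviceable substitute for the add-on, though its islands tend to pack less cleanly. A sketch assuming the imported LOD mesh is the active object:

```python
import bpy

obj = bpy.context.active_object
bpy.ops.object.mode_set(mode='EDIT')
bpy.ops.mesh.select_all(action='SELECT')
# angle_limit is in radians in recent Blender versions (~66 degrees here)
bpy.ops.uv.smart_project(angle_limit=1.15, island_margin=0.02)
bpy.ops.object.mode_set(mode='OBJECT')
```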

After the UVs are done, we use MeshLab's texture projection filter to take the captured frames and project them onto the texture. This process uses the GPU to determine the best pictures for each part of the final texture and blends between them. The final texture looks like this:

[Final texture captures: Boiler Room, Arena, Outside]
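Stripped of the GPU details, the idea behind the projection step above is a weighted blend: every captured frame that sees a given texel contributes its color, weighted by how good its view of that surface is. A toy sketch of such a blend follows; the weighting scheme is an assumption, not MeshLab's exact formula.

```python
import numpy as np

def blend_projections(samples, weights):
    """Blend candidate colors for one texel from K captured frames.

    samples: (K, 3) candidate RGB colors
    weights: (K,) view-quality weights, e.g. cos(view angle) / distance
    """
    w = np.maximum(weights, 0.0)
    return (samples * w[:, None]).sum(axis=0) / max(w.sum(), 1e-8)
```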
