Rendering Complex Geometry

by Dave De Breuck on 06/21/22 06:43:00 pm

The following blog post, unless otherwise noted, was written by a member of Gamasutra’s community.
The thoughts and opinions expressed are those of the writer and not Gamasutra or its parent company.

 

Rendering Solution for Complex Geometry

Using sphere tracing


Website: www.davedebreuck.com
Article Website: www.rendering-complex-geometry.com
Github: github/Dyronix

 

Hello, my name is Dave and I'm a Software Engineer @ Twikit NV. We create products for all kinds of industries, ranging from automotive and medical to sportswear and even games. Our specialty is designing configurators that let a customer design their own 3D geometry and print it on any 3D printer currently available on the market. However, my role does not involve the configurator itself. Because of my background in games, having graduated from Digital Arts and Entertainment as a Game Developer, I am responsible for the graphics: anything that is visible on-screen, I create. At the time of writing, I am pursuing my master's degree in Game Technology at Breda University of Applied Sciences. In this article, I will elaborate on how one can achieve the results of my master's thesis.

Introduction

Modern society is full of media devices. Having a responsive software product on any of them would benefit many industries: individualising three-dimensional (3D) printed products, presenting a medical diagnosis in a 3D environment, and even games looking to support a more extensive player base.

We will explore an example from the Additive Manufacturing (AM) industry. One of the complex structures AM has recently taken an interest in is the lattice. Lattices are space-filling unit cells that can be tessellated along any axis with no gaps between cells, and they can generate all types of geometry without losing structural integrity. However, visualising lattice structures with computer graphics can become hard on sub-optimal hardware. Depending on the size of the lattice, the polycount can reach millions of triangles, which is not feasible to visualise in real-time on consumer hardware. Additionally, representing such geometry in modelling tools such as Maya or Blender requires a high level of knowledge, patience, foresight, and meticulous preparation to ensure that models have adequate control vertices where detail is desired.

Figure 1: Example of a lattice structure in the shape of a shoe – Carbon 3D

In this article, we propose a solution for developers to create a 3D renderer that uses a volumetric approach to visualise these structures. Our target platform is the web, focusing on Chromium-based browsers. This also means that state-of-the-art technology such as mesh shaders or ray tracing is not available. On the other hand, it ensures that our solution is compatible with all kinds of platforms, as a more generic approach is used to achieve the desired outcome. Volumes have shown promising results in visualising high-fidelity geometry without the cost of uploading the required surface-based data to the GPU. An added benefit of volumes is that they can perform Boolean operations more efficiently.

Preliminaries

Before we can talk about the process of creating a volumetric rendering pipeline, some key mathematics and programming ideas used in this article have to be explained.

Distance Fields

A distance field is a scalar field that specifies the minimum distance to the surface of a shape. In more detail, a distance field can be represented by a function F such that, for any given point P, it returns the distance d from P to the object represented by the function. We store the distances returned by such a function in a 3D matrix or, as it is more commonly known within graphics programming, a 3D texture. Each texture cell stores the distance from that grid element to the nearest surface. Therefore, a grid element containing a value of 0 lies on the surface of the shape.
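As a minimal sketch of that idea (the grid resolution and function name are illustrative, and an analytic sphere stands in for a baked mesh), such a field can be sampled into a 3D grid as follows:

```cpp
#include <glm/glm.hpp>
#include <vector>

// Minimal sketch: bake the analytic distance field of a sphere into a 3D grid.
// In practice the grid would be uploaded as a 3D texture and the field would
// come from a real mesh (see the Strategy section below).
std::vector<float> bakeSphereField(int resolution, float radius)
{
    std::vector<float> grid(resolution * resolution * resolution);
    for (int z = 0; z < resolution; ++z)
        for (int y = 0; y < resolution; ++y)
            for (int x = 0; x < resolution; ++x)
            {
                // Map the cell centre into [-1, 1]^3 around the origin.
                glm::vec3 p = glm::vec3(x, y, z) / float(resolution - 1) * 2.0f - 1.0f;
                // Signed distance to a sphere centred at the origin:
                // negative inside, zero on the surface, positive outside.
                float d = glm::length(p) - radius;
                grid[(z * resolution + y) * resolution + x] = d;
            }
    return grid;
}
```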

Figure 2: A circle is represented by a 2D distance field – Inigo Quilez

Sphere Tracing

Visualising a distance field can be achieved with an algorithm called sphere tracing. Sphere tracing is a technique for rendering implicit surfaces using geometric distances, which is exactly what we stored within our 3D texture. To find the distance towards a shape, we need to define a distance function for it or have a generated volume available to trace against. For example, a sphere of radius r centred at the world origin can be represented, for any point P, as follows:

\(\lVert P \rVert^2 - r^2 = 0\)

This equation is what is called an implicit function, and a sphere represented in this form is also called an implicit shape. An implicit equation only tells us whether a particular point is inside the shape (negative values), outside the shape (positive values), or precisely on the surface (a value of 0). The collection of points where the implicit function equals x is called an iso-surface of value x (or an iso-contour in 2 dimensions). Sphere tracing is a method of drawing a surface based solely on this data.
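As a minimal sketch (names are illustrative, and the analytic sphere above stands in for a baked volume), the sphere-tracing loop marches a ray forward by the sampled distance until it either hits the surface or leaves the scene:

```cpp
#include <glm/glm.hpp>

// Signed distance to a unit-radius sphere at the origin.
float sceneDistance(const glm::vec3& p)
{
    return glm::length(p) - 1.0f;
}

// Minimal sphere-tracing loop: step along the ray by the distance returned
// by the field. Returns the travelled distance, or a negative value on a miss.
float sphereTrace(const glm::vec3& origin, const glm::vec3& direction,
                  float maxDistance = 100.0f, int maxSteps = 128)
{
    float t = 0.0f;
    for (int i = 0; i < maxSteps && t < maxDistance; ++i)
    {
        float d = sceneDistance(origin + direction * t);
        if (d < 0.001f)     // close enough: treat this as a hit
            return t;
        t += d;             // safe step: the sphere of radius d around p is empty
    }
    return -1.0f;           // no surface found along this ray
}
```

Because each step moves exactly the distance to the nearest surface, the ray can never tunnel through geometry, which is what makes the technique robust for arbitrary distance fields.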

Boolean Operations

Volumetric data can easily represent shapes defined via Boolean operations. These operations are often used in CAD software through a technique called Constructive Solid Geometry (CSG), which applies the same operations to surface-based geometry rather than to volumes; that makes the algorithm far more CPU-intensive, as new geometry has to be constructed on the fly. Complex shapes that are naturally modelled by assembling simple primitives such as spheres, cubes, and planes would be hard to achieve if we had to model the geometry by hand. Being able to blend implicit shapes is a quality that parametric surfaces lack, and thus one of the main motivations for using them.
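On signed distance fields, these Boolean operations reduce to simple per-point min/max combinations. A brief sketch of the standard formulation (the function names are mine):

```cpp
#include <algorithm>

// Standard CSG operations on signed distances: each function takes the
// distances of two shapes evaluated at the same point and returns the
// distance of the combined shape.
float opUnion(float d1, float d2)        { return std::min(d1, d2); }
float opIntersection(float d1, float d2) { return std::max(d1, d2); }
float opSubtraction(float d1, float d2)  { return std::max(d1, -d2); } // shape 1 minus shape 2
```

Compared with mesh-based CSG, no new geometry is ever constructed: the combined shape exists implicitly in the combined distance function.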

Deferred Shading

Deferred rendering, or deferred shading, is based on the idea that we defer most heavy calculations (such as light calculations) to a later stage. We can achieve deferred shading with one geometry pass and one light pass. The geometry pass renders the scene once and stores distinct data about the displayed geometry in different textures, commonly known as the G-buffer. Position vectors, color vectors, normal vectors, and/or specular values make up the majority of this data. In the second pass, we render a full-screen quad and calculate the final image using the provided G-buffer. Because the G-buffer contains all the data of the topmost fragment, we only need to do our light calculations once per pixel. If deferred rendering is still a little fuzzy, I highly recommend reading the article Learn OpenGL: Deferred Shading by Joey de Vries.
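Conceptually, the pipeline looks something like the sketch below (the structure and helper names are illustrative, not the article's actual code):

```cpp
// Hypothetical graphics-API helpers, declarations only.
void bindFramebuffer(unsigned int fbo);
void bindDefaultFramebuffer();
void drawScene();
void drawFullScreenQuad();

// Illustrative layout of a G-buffer and the two deferred passes.
struct GBuffer
{
    unsigned int framebuffer;   // FBO with multiple colour attachments
    unsigned int positionTex;   // world-space position per pixel
    unsigned int normalTex;     // surface normal per pixel
    unsigned int albedoSpecTex; // albedo in RGB, specular factor in A
};

void renderFrameDeferred(const GBuffer& gbuffer)
{
    // 1. Geometry pass: render the scene once, writing positions, normals
    //    and material data into the G-buffer instead of shading anything.
    bindFramebuffer(gbuffer.framebuffer);
    drawScene();

    // 2. Lighting pass: draw a single full-screen quad that reads the
    //    G-buffer textures and evaluates all lights once per pixel.
    bindDefaultFramebuffer();
    drawFullScreenQuad();
}
```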

Strategy

Now that we've caught up, we can start working on a volume renderer. Any watertight (closed, non-self-intersecting, manifold) mesh can be used to construct a volume. There are already plenty of tools that can produce a volume for us, such as the Unity built-in SDF Bake Tool. However, we opted for a more programmatic approach and used a library called IGL (https://libigl.github.io). This library is written in C++ and can produce a volume as part of our pipeline. The steps for creating a volume with the IGL library are as follows. First, we import a mesh (which is also possible using the IGL function igl::readOBJ). Next, we feed the imported data into IGL's signed distance function:
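A minimal sketch of that call, assuming Eigen matrices V/F for the imported mesh and a matrix P of grid-point positions to sample (the variable and function names here are mine, not the original code):

```cpp
#include <igl/readOBJ.h>
#include <igl/signed_distance.h>
#include <Eigen/Core>
#include <string>

// Minimal sketch: load a mesh and evaluate its signed distance at a set of
// query points P (for example, the cell centres of our 3D texture).
Eigen::VectorXd sampleSignedDistance(const std::string& path, const Eigen::MatrixXd& P)
{
    Eigen::MatrixXd V;   // mesh vertices
    Eigen::MatrixXi F;   // mesh faces
    igl::readOBJ(path, V, F);

    Eigen::VectorXd S;   // signed distance per query point
    Eigen::VectorXi I;   // index of the closest triangle
    Eigen::MatrixXd C;   // closest point on the mesh
    Eigen::MatrixXd N;   // normal at the closest point
    igl::signed_distance(P, V, F, igl::SIGNED_DISTANCE_TYPE_PSEUDONORMAL, S, I, C, N);

    // S can now be uploaded as the 3D texture that our sphere tracer samples.
    return S;
}
```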

When this function is executed properly, a volume is created.

Figure 3: Generated signed distance field of the Stanford Bunny - IGL.

As previously indicated, we employ a deferred rendering approach to incorporate our volumetric renderer into a conventional rendering pipeline. This means that our volumetric framebuffer produces a G-Buffer, which is built by running our sphere tracer inside the fragment shader of the render pass. This render pass might be created using the following pseudocode:
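A hedged sketch of such a pass (the helper names are illustrative, not the article's actual code):

```cpp
// Hypothetical graphics-API helpers, declarations only.
void bindFramebuffer(unsigned int fbo);
void clearColorAndDepth();
void bindShader(unsigned int shader);
void bindTexture3D(unsigned int texture, int unit);
void setCameraUniforms(unsigned int shader);
void drawFullScreenQuad();

// Illustrative sketch of the volumetric geometry pass.
void volumetricGeometryPass(unsigned int gbufferFbo,
                            unsigned int volumeTexture3D,
                            unsigned int traceShader)
{
    // Render into the G-buffer attachments instead of the screen.
    bindFramebuffer(gbufferFbo);
    clearColorAndDepth();

    // The fragment shader sphere-traces the 3D texture and writes position,
    // normal and albedo into the G-buffer attachments.
    bindShader(traceShader);
    bindTexture3D(volumeTexture3D, 0);
    setCameraUniforms(traceShader);

    // A single full-screen quad is enough: every pixel spawns one ray.
    drawFullScreenQuad();
}
```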


Accompanying this render pass is a shader that traces against our generated volume.
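Its core logic mirrors the sphere-tracing loop shown earlier, except that it samples the baked 3D texture and writes its results to the G-buffer. Below is a hedged, C++-style sketch of what such a fragment shader computes (sampleVolume, writeGBuffer and discardFragment are illustrative placeholders for the texture lookup and the output writes):

```cpp
#include <glm/glm.hpp>

float sampleVolume(const glm::vec3& p);  // trilinear lookup into the baked 3D texture
void  writeGBuffer(const glm::vec3& position, const glm::vec3& normal, const glm::vec3& albedo);
void  discardFragment();                 // the ray missed: output nothing

void traceFragment(const glm::vec3& cameraPos, const glm::vec3& rayDir)
{
    const float eps = 0.01f;             // offset used for the normal estimate
    float t = 0.0f;
    for (int i = 0; i < 128 && t < 100.0f; ++i)
    {
        glm::vec3 p = cameraPos + rayDir * t;
        float d = sampleVolume(p);
        if (d < 0.001f)
        {
            // The surface normal is the normalised gradient of the distance
            // field, estimated here with central differences.
            glm::vec3 n = glm::normalize(glm::vec3(
                sampleVolume(p + glm::vec3(eps, 0, 0)) - sampleVolume(p - glm::vec3(eps, 0, 0)),
                sampleVolume(p + glm::vec3(0, eps, 0)) - sampleVolume(p - glm::vec3(0, eps, 0)),
                sampleVolume(p + glm::vec3(0, 0, eps)) - sampleVolume(p - glm::vec3(0, 0, eps))));
            writeGBuffer(p, n, glm::vec3(0.8f));  // flat albedo; simple diffuse shading
            return;
        }
        t += d;
    }
    discardFragment();
}
```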

We now have all of the data we need to develop a high-quality renderer. The data in the G-Buffer is given to the lighting pass, which calculates all the lighting information needed to illuminate our scene. Furthermore, the produced frame can be enhanced with other rendering techniques such as ambient occlusion, reflection, or subsurface scattering. Other material attributes, such as roughness and metalness, could be added to the lookup table in addition to albedo and specular values. This would allow us to build a PBR material for the traced volume (we opted for simple diffuse shading, since light propagation and varied visual effects are not the focus of this post). Finally, to create a depth buffer, the travelled distance can be converted back into a camera-space depth. This depth buffer can be used to create a hybrid approach that combines surface-based geometry with volumetric data in the same scene.

Lattified Geometry

There is only one step left: producing a lattified version of the given volume. I mentioned earlier that volumetric data offers the advantage of performing Boolean operations more efficiently. Three sets of operations are used to create a lattice structure:

  1. A union of line segments to create the unit cell we want to visualise; as previously said, "a lattice is a tessellated unit cell...", and at this point we create that single unit cell.
  2. A repetition operation to tessellate our unit cell along every axis, as described in the article Inigo Quilez: Infinite Repetition.
  3. An intersection operation to mold our lattice into the correct shape, as described in the article Inigo Quilez: Primitive Combinations.

The implementation of these steps within our fragment shader could look as follows:
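Here is a hedged sketch of that implementation, written as C++-style code mirroring the shader. The capsule distance and repetition formulas follow the Inigo Quilez articles referenced above; the unit cell is a simple three-beam cross chosen for illustration, and sampleVolume is an illustrative placeholder for the 3D-texture lookup:

```cpp
#include <glm/glm.hpp>

float sampleVolume(const glm::vec3& p);  // illustrative: trilinear lookup into the baked volume

// Distance to a line segment (capsule) from a to b with the given radius.
float sdCapsule(const glm::vec3& p, const glm::vec3& a, const glm::vec3& b, float radius)
{
    glm::vec3 pa = p - a;
    glm::vec3 ba = b - a;
    float h = glm::clamp(glm::dot(pa, ba) / glm::dot(ba, ba), 0.0f, 1.0f);
    return glm::length(pa - ba * h) - radius;
}

// 1. Union of line segments: a simple "cross" unit cell built from three beams.
float sdUnitCell(const glm::vec3& p, float cellSize, float beamRadius)
{
    float h = cellSize * 0.5f;
    float d = sdCapsule(p, glm::vec3(-h, 0, 0), glm::vec3(h, 0, 0), beamRadius);
    d = glm::min(d, sdCapsule(p, glm::vec3(0, -h, 0), glm::vec3(0, h, 0), beamRadius));
    d = glm::min(d, sdCapsule(p, glm::vec3(0, 0, -h), glm::vec3(0, 0, h), beamRadius));
    return d;
}

// 2. Infinite repetition: fold space into a single cell before evaluating it.
float sdLattice(const glm::vec3& p, float cellSize, float beamRadius)
{
    glm::vec3 q = glm::mod(p + 0.5f * cellSize, cellSize) - 0.5f * cellSize;
    return sdUnitCell(q, cellSize, beamRadius);
}

// 3. Intersection with the baked volume: max() keeps only the lattice that
//    lies inside the original shape.
float sdLattifiedVolume(const glm::vec3& p, float cellSize, float beamRadius)
{
    return glm::max(sampleVolume(p), sdLattice(p, cellSize, beamRadius));
}
```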

Summary

All relevant data of our volumetric renderer can be stored within a G-Buffer; this allows us to use the output of our framebuffer in a deferred rendering pipeline.

Figure 4: Example of the G-Buffer.

We can use this G-Buffer to calculate all lighting information in our scene, as well as any additional image effects that might be required, such as ambient occlusion.

Figure 5: Example of a lit scene, generated from a given G-Buffer.

To achieve a hybrid renderer, we can export a depth buffer by converting the "true" distance travelled through the scene back to a camera-space depth. Additionally, we can create a lattified version of the given volume if desired.

Figure 6: Light fixtures (cubes) added on top of a volume rendered frame.
Figure 7: Lattified version of the given volume.

Conclusion

Distance field rendering in its current state is neither robust nor fast enough for large-scale commercial video game productions. Nonetheless, in comparison to today's industry norms, the simplicity of these techniques makes them desirable for rendering and other use cases such as modeling. Algorithms and rendering technology will advance over time, allowing for efficient hybrid or full-on volume rendering within game development.

