When I started work on the engine powering Privacy, I took inspiration from the DOOM (2016) – Graphics Study done by Adrian Courrèges. I initially wanted Privacy to have a very fast rendering engine so that I could target VR as well as regular PCs. As such, Privacy originally used a pretty simple forward renderer, much like DOOM.

Over time, what I wanted from the rendering in Privacy evolved from “has to be super fast” to “has to look super nice”. To achieve that super nice look, I settled on implementing a hybrid raytracer, taking some inspiration from the work done by Kostas Anagnostou and Dennis Gustafsson.

When I say hybrid raytracer, what I mean is that the engine uses ray tracing, accelerated by (standard) rasterization, to achieve something that approaches a ‘movie quality’ image – but in real time. In this post, I’ll describe everything that needs to happen before Privacy can display a single frame.

Shadow Maps

The first step is drawing the shadow maps used to project shadows from the sun onto the scene. Most light sources in Privacy now use ray tracing to compute shadows, but for the sun, good old shadow maps proved to produce higher-quality shadows overall.

Shadow Map 1
Shadow Map 2
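
For readers who haven’t implemented one before: a sun shadow map is rendered from the sun’s point of view, and while shading, each point’s depth as seen by the sun is compared against what is stored in the map. The sketch below shows roughly what that test looks like. It is plain C++ using GLM for the math and is not Privacy’s actual code; in particular, sampleShadowMap is a hypothetical stand-in for the engine’s texture lookup.

```cpp
// Minimal sketch of the standard sun shadow-map test (not Privacy's actual code).
// Uses GLM for the matrix math; assumes a GL-style [-1, 1] depth range.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

// Build the light's view-projection: an orthographic box looking along the sun direction.
glm::mat4 sunViewProjection(const glm::vec3& sunDir, const glm::vec3& sceneCenter, float radius)
{
    glm::mat4 view = glm::lookAt(sceneCenter - sunDir * radius,   // eye placed "behind" the scene
                                 sceneCenter,                      // looking at the scene
                                 glm::vec3(0.0f, 1.0f, 0.0f));     // world up
    glm::mat4 proj = glm::ortho(-radius, radius, -radius, radius, 0.0f, 2.0f * radius);
    return proj * view;
}

// Returns 1.0 if the point is lit by the sun, 0.0 if it is in shadow.
// 'sampleShadowMap' is a hypothetical stand-in for the engine's shadow-map lookup.
float sunVisibility(const glm::vec3& worldPos, const glm::mat4& lightViewProj,
                    float (*sampleShadowMap)(glm::vec2 uv))
{
    glm::vec4 lightClip = lightViewProj * glm::vec4(worldPos, 1.0f);
    glm::vec3 ndc = glm::vec3(lightClip) / lightClip.w;          // [-1, 1] in x/y/z
    glm::vec2 uv = glm::vec2(ndc) * 0.5f + 0.5f;                 // [0, 1] shadow-map UV
    float sceneDepth = ndc.z * 0.5f + 0.5f;                      // depth of this point, as seen by the sun
    float storedDepth = sampleShadowMap(uv);                     // nearest occluder depth at this texel
    const float bias = 0.002f;                                   // avoids self-shadowing ("shadow acne")
    return (sceneDepth - bias) <= storedDepth ? 1.0f : 0.0f;
}
```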

Normals, Velocity and Depth

In parallel with the shadow maps, Privacy starts rendering the main image by drawing a buffer containing (view-space) normals and a second buffer containing the screen-space velocity of every pixel. I couldn’t get a decent capture of the velocity buffer, but it looks similar to the one used in DOOM.

The normals are required to calculate the diffuse reflections in the next step. The velocity buffer is used further down the line to aid in the temporal filtering of the diffuse reflections and again when applying the temporal anti-aliasing.

Normal Buffer
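
For the curious, a velocity buffer stores, per pixel, how far the visible surface has moved on screen since the previous frame. Conceptually it amounts to projecting the same point with this frame’s and last frame’s camera and taking the difference. The sketch below illustrates that idea; it is plain C++ with GLM, not Privacy’s actual shader, and it ignores object motion for brevity.

```cpp
// Rough sketch of how a per-pixel screen-space velocity can be computed
// (illustration only, not Privacy's actual shader).
#include <glm/glm.hpp>

// Given the same world-space point transformed by this frame's and last frame's
// view-projection matrices, the velocity is the difference in screen-space (UV) position.
// For moving objects, the previous model matrix would also be folded into prevViewProj.
glm::vec2 screenSpaceVelocity(const glm::vec3& worldPos,
                              const glm::mat4& currViewProj,
                              const glm::mat4& prevViewProj)
{
    glm::vec4 currClip = currViewProj * glm::vec4(worldPos, 1.0f);
    glm::vec4 prevClip = prevViewProj * glm::vec4(worldPos, 1.0f);
    glm::vec2 currUV = (glm::vec2(currClip) / currClip.w) * 0.5f + 0.5f; // [0,1] screen position now
    glm::vec2 prevUV = (glm::vec2(prevClip) / prevClip.w) * 0.5f + 0.5f; // [0,1] screen position last frame
    return currUV - prevUV; // later passes reproject by subtracting this from the current UV
}
```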

The final buffer rendered in this pass is the depth buffer, which stores the distance to the nearest object at each pixel. The depth buffer is used when calculating the diffuse reflections in the next step and to speed up rendering of the main pass.

Depth Buffer
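
One common way a depth buffer gets used by a pass like the diffuse reflections is to reconstruct each pixel’s 3D position from its depth value, so rays know where to start. The sketch below shows that reconstruction; it is illustrative only and not necessarily how Privacy does it.

```cpp
// Sketch: reconstructing a view-space position from the depth buffer
// (one common use of the depth buffer; not necessarily Privacy's exact approach).
#include <glm/glm.hpp>

// 'uv' is the pixel's position in [0,1], 'depth' the stored depth value in [0,1],
// and 'invProjection' the inverse of the camera's projection matrix.
glm::vec3 viewPositionFromDepth(glm::vec2 uv, float depth, const glm::mat4& invProjection)
{
    // Back to normalized device coordinates ([-1,1] on all axes for a GL-style depth range).
    glm::vec4 ndc = glm::vec4(uv * 2.0f - 1.0f, depth * 2.0f - 1.0f, 1.0f);
    glm::vec4 view = invProjection * ndc;   // un-project
    return glm::vec3(view) / view.w;        // perspective divide gives the view-space position
}
```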

Diffuse Reflections

A key element in producing the look of Privacy happens next, namely ray tracing diffuse reflections. Calculating these diffuse reflections properly, as they do in film, would require thousands of (primary) rays per pixel as well as secondary rays whenever a primary ray hits an object. That is the main reason why rendering a single frame of CGI for a movie takes hours. Since Privacy is a game, we cannot spend hours on rendering a single frame – hence I have resorted to a bunch of tricks.

The first trick is that the previous frame is used as an input to this step, which allows me to have ‘secondary’ reflections without needing to trace any additional rays.

Secondly, Privacy fires only 12 rays per pixel (using stratified random sampling). Together, these rays capture diffuse light reflections (i.e. light bouncing from one object to another) as well as ambient occlusion (areas where light cannot reach). However, because the number of rays is so low, the result ends up looking rather noisy:

Raw Diffuse Reflection Buffer
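
To make ‘12 rays per pixel with stratified random sampling’ a bit more concrete, the sketch below generates 12 ray directions over the hemisphere above a surface normal by splitting the sampling domain into a 4×3 grid and jittering one sample inside each cell. It is not Privacy’s actual sampler; in particular, the cosine weighting is an assumption on my part.

```cpp
// Sketch of stratified sampling over the hemisphere around a surface normal,
// as one might generate the 12 diffuse rays per pixel mentioned above.
// Not Privacy's actual sampler; cosine-weighted purely for illustration.
#include <glm/glm.hpp>
#include <cmath>
#include <random>
#include <vector>

std::vector<glm::vec3> stratifiedHemisphereRays(const glm::vec3& n, int gridX = 4, int gridY = 3)
{
    // Build an orthonormal basis (t, b, n) around the normal.
    glm::vec3 helper = std::abs(n.x) > 0.9f ? glm::vec3(0, 1, 0) : glm::vec3(1, 0, 0);
    glm::vec3 t = glm::normalize(glm::cross(helper, n));
    glm::vec3 b = glm::cross(n, t);

    static thread_local std::mt19937 rng{std::random_device{}()};
    std::uniform_real_distribution<float> jitter(0.0f, 1.0f);

    std::vector<glm::vec3> rays;
    for (int y = 0; y < gridY; ++y)
        for (int x = 0; x < gridX; ++x)
        {
            // Stratified: one jittered sample per grid cell, so the 12 rays cover the domain evenly.
            float u1 = (x + jitter(rng)) / gridX;
            float u2 = (y + jitter(rng)) / gridY;

            // Map to a cosine-weighted direction on the local hemisphere.
            float r = std::sqrt(u1);
            float phi = 2.0f * 3.14159265f * u2;
            glm::vec3 local(r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0f - u1));

            rays.push_back(glm::normalize(local.x * t + local.y * b + local.z * n));
        }
    return rays;
}
```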

Fortunately, we can reduce the noise by applying a filter. Privacy uses a technique called Edge-Avoiding À-Trous Wavelet Transform to ‘denoise’ the diffuse reflection buffer.

Diffuse Reflection Buffer after denoising
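
In a nutshell, the à-trous (‘with holes’) filter runs a small blur kernel several times, spreading its taps further apart each pass, while weights based on differences in normal and depth stop the blur from bleeding across edges; that is the ‘edge-avoiding’ part. The sketch below shows a single pass on the CPU. The real filter runs on the GPU and typically also weighs color differences, so treat this purely as an illustration.

```cpp
// Very condensed sketch of one edge-avoiding à-trous pass over a CPU-side image.
// Images are flat row-major arrays; 'stepSize' doubles every pass (1, 2, 4, ...),
// which is what makes the kernel "à trous" (with holes).
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>
#include <vector>

void atrousPass(const std::vector<glm::vec3>& in, std::vector<glm::vec3>& out,
                const std::vector<glm::vec3>& normals, const std::vector<float>& depths,
                int width, int height, int stepSize)
{
    // 5-tap B-spline weights used along each axis of the 5x5 kernel.
    const float kernel[5] = {1.0f / 16, 1.0f / 4, 3.0f / 8, 1.0f / 4, 1.0f / 16};

    for (int y = 0; y < height; ++y)
        for (int x = 0; x < width; ++x)
        {
            int center = y * width + x;
            glm::vec3 sum(0.0f);
            float weightSum = 0.0f;

            for (int j = -2; j <= 2; ++j)
                for (int i = -2; i <= 2; ++i)
                {
                    int sx = std::clamp(x + i * stepSize, 0, width - 1);
                    int sy = std::clamp(y + j * stepSize, 0, height - 1);
                    int tap = sy * width + sx;

                    // Edge-avoiding part: taps whose normal or depth differ a lot get tiny weights,
                    // so the blur does not bleed across geometric edges.
                    float wNormal = std::pow(std::max(0.0f, glm::dot(normals[center], normals[tap])), 32.0f);
                    float wDepth  = std::exp(-std::abs(depths[center] - depths[tap]) * 100.0f);

                    float w = kernel[i + 2] * kernel[j + 2] * wNormal * wDepth;
                    sum += in[tap] * w;
                    weightSum += w;
                }

            out[center] = sum / weightSum;
        }
}
```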

To further reduce noise visible in the final image, I also apply a temporal filter which blends between the current and previously calculated diffuse reflections.

Diffuse Reflection buffer after temporal filtering
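
The temporal filter itself is conceptually tiny: look up where the pixel was last frame (using the velocity buffer), fetch last frame’s result there, and blend it with the current one. A minimal sketch, again not the engine’s actual code:

```cpp
// Sketch of the temporal blend for the diffuse reflections (not the engine's actual code).
// 'historySample' is last frame's filtered result, fetched at (uv - velocity);
// 'current' is this frame's denoised buffer.
#include <glm/glm.hpp>

glm::vec3 temporalBlend(const glm::vec3& current,
                        const glm::vec3& historySample,
                        float blendFactor = 0.9f)   // how much of the history to keep
{
    // A plain exponential moving average: mostly the previous result, a little of the new one.
    // Real implementations also clamp or reject the history when it no longer matches
    // (disocclusions, lighting changes), otherwise the image ghosts.
    return glm::mix(current, historySample, blendFactor);
}
```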

Main Pass

Enough preparations! It is now time to render the main pass. A lot happens in this step! A single ray is fired to trace specular (i.e. ‘shiny’) reflections, and yet more rays are fired to capture incoming light from light sources. Also, shadows from the sun are projected onto the entire scene. All of this is then multiplied by the PBR textures that define each object’s material(s) to produce an image that already looks quite close to the final frame. The colors still look a little weird, because everything is rendered in linear space – which is not how we are used to viewing images.

Main Pass
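
Conceptually, the terms of the main pass come together something like the sketch below. This is a simplified composition for illustration, not Privacy’s actual shading code.

```cpp
// Simplified sketch of how the main pass's terms might be combined per pixel
// (a conceptual composition, not Privacy's actual shading code).
#include <glm/glm.hpp>

glm::vec3 shadePixel(const glm::vec3& albedo,          // from the object's PBR textures
                     const glm::vec3& directSunLight,  // sun contribution
                     float sunVisibility,              // 0..1 from the shadow maps
                     const glm::vec3& diffuseGI,       // the denoised, temporally filtered diffuse reflections
                     const glm::vec3& specular)        // from the single traced specular ray
{
    // Direct sunlight is masked by the shadow maps; the ray traced diffuse term adds bounced
    // light and ambient occlusion; the specular reflections come on top.
    glm::vec3 diffuse = albedo * (directSunLight * sunVisibility + diffuseGI);
    return diffuse + specular; // still in linear space at this point
}
```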

Anti-Aliasing

The second-to-last thing that happens is temporal anti-aliasing, which removes all those nasty jaggies by very slightly moving the camera every frame and integrating the current image with the previous one. The anti-aliased image is then fed into the final step.

Temporal Anti-Aliasing
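
The ‘very slightly moving the camera’ part is usually done by offsetting the projection by a sub-pixel amount each frame, following a low-discrepancy sequence. The sketch below uses the Halton sequence, a common choice; the post doesn’t say which sequence Privacy uses, so take it as an assumption.

```cpp
// Sketch of the sub-pixel camera jitter used for temporal anti-aliasing
// (a common Halton-based approach; not necessarily the sequence Privacy uses).
#include <glm/glm.hpp>

// Radical inverse in a given base: the building block of the Halton sequence.
float radicalInverse(int index, int base)
{
    float result = 0.0f, f = 1.0f / base;
    while (index > 0)
    {
        result += f * (index % base);
        index /= base;
        f /= base;
    }
    return result;
}

// Sub-pixel jitter for this frame, returned in clip-space units so it can be added to the
// projection; successive frames then sample slightly different positions inside each pixel.
glm::vec2 taaJitter(int frameIndex, int width, int height)
{
    glm::vec2 halton(radicalInverse(frameIndex + 1, 2), radicalInverse(frameIndex + 1, 3));
    glm::vec2 offset = halton - 0.5f;                       // [-0.5, 0.5] pixel offset
    return offset * glm::vec2(2.0f / width, 2.0f / height); // converted to clip-space units
}
```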

Post Processing

The final step of rendering applies a number of effects simultaneously (a rough sketch of the whole chain follows the list):

  • Exposure (to control overall brightness)
  • A color filter (to optionally change the color of the image)
  • Saturation (to optionally add or remove color from the image)
  • Contrast
  • Filmic tonemapping (to emulate the pleasing look of film stock)
  • Gamma correction (to convert the image from true linear to the perceptually linear ‘gamma-space’ that your monitor uses to display an image)
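
Put together, such a chain looks roughly like the sketch below. None of this is Privacy’s actual code; in particular, the tonemapping curve here is the widely used ACES fit by Krzysztof Narkowicz, standing in for whatever filmic curve the engine really uses, and the order of operations is a plausible guess.

```cpp
// Rough sketch of a post-processing chain like the one listed above (not Privacy's actual code).
// The tonemapping curve is Krzysztof Narkowicz's ACES fit, used here as a stand-in.
#include <glm/glm.hpp>

glm::vec3 acesFilm(glm::vec3 x)
{
    const float a = 2.51f, b = 0.03f, c = 2.43f, d = 0.59f, e = 0.14f;
    return glm::clamp((x * (a * x + b)) / (x * (c * x + d) + e), 0.0f, 1.0f);
}

glm::vec3 postProcess(glm::vec3 color,              // linear-space color from the anti-aliased image
                      float exposure,               // overall brightness
                      const glm::vec3& colorFilter, // per-channel tint
                      float saturation,             // 1 = unchanged, 0 = grayscale
                      float contrast)               // 1 = unchanged
{
    color *= exposure;
    color *= colorFilter;

    // Saturation: blend between the grayscale luminance and the original color.
    float luminance = glm::dot(color, glm::vec3(0.2126f, 0.7152f, 0.0722f));
    color = glm::mix(glm::vec3(luminance), color, saturation);

    // Contrast: push values away from (or towards) middle grey.
    color = (color - 0.5f) * contrast + 0.5f;

    // Filmic tonemapping, then gamma correction for display.
    color = acesFilm(glm::max(color, glm::vec3(0.0f)));
    return glm::pow(color, glm::vec3(1.0f / 2.2f));
}
```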

The Final Image

And there we have it! Privacy has rendered a single frame. Now we just need to do it all again for the next frame, and the next, and the next… ideally 60 times per second. Phew!

At the time of writing, the rendering of Privacy maxes out at about 25 frames per second when running in 1080p on my trusty Vega 56. My goal is for the final game to comfortably run at 30fps on this card, hopefully faster. There is still plenty of time to work on increasing performance and I already have a whole bunch of optimizations in mind. More on that later…

For now, I hope you’ve enjoyed this little trip down the graphics pipeline. I for one never cease to be amazed at how much stuff we can do in a single frame nowadays. When I first started making games, the first 3Dfx cards had only just come out and allowed us to render 2,000, maybe 3,000, triangles per frame. In the images above, the little cactus alone comprises 30,000 triangles!