**How much boilerplate code do you need to write a ray-tracer?**
# Introduction
If you want to write a ray-tracer from scratch, you would usually start with something simple, like **Ray Tracing in One Weekend**.
There will be spheres, planes, rays. Eventually you will want to render a bunny, so you add simple 3D model loading, which requires
adding triangles and routines for triangle intersection, then area light sources, environment images, participating media…
The list could go on forever. And then at some point you realize that you want to render caustics (either refractive from glass,
or reflective from metals). And now you need to learn about bidirectional renderers and add even more code to make them work.
In this post I want to give you an idea of how much code is needed to write a bidirectional ray-tracer, which can produce images like this:

The purpose of this post is to serve as a checklist and starting point rather than to scare and frustrate people who are learning ray tracing.
Also, it describes my personal experience.
Let’s take a closer look!
# Code for unidirectional ray-tracer
Let’s pretend we are writing a unidirectional path tracer from scratch.
## Images
The first thing you want is to see the result of your work. Therefore, you need the ability to save an image.
Right now we are not talking about UI and visualizing images on the screen, which is a rather important feature
together with debug visualization. Let’s just start with saving the result to a file. A lot of projects use something like the PPM file format, which
is very simple, but in the end you pay for that simplicity with extra code (tone mapping, converting from floating-point values to uint8, etc.).
That’s why I propose keeping the output in `float4` (RGBA32F) format and using the [tinyexr](https://github.com/syoyo/tinyexr) library for writing
HDR images in EXR format. This library will also be useful later for loading EXR images.
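To give an idea of how little code this takes, here is a minimal sketch of dumping an RGBA32F buffer with tinyexr’s `SaveEXR` convenience function (the `save_output` wrapper and the buffer layout are my own illustration):

```cpp
// Minimal sketch: write a float4 (RGBA32F) framebuffer to an EXR file using tinyexr.
// TINYEXR_IMPLEMENTATION must be defined in exactly one translation unit.
#define TINYEXR_IMPLEMENTATION
#include "tinyexr.h"

#include <cstdio>
#include <vector>

bool save_output(const std::vector<float>& rgba, int width, int height, const char* filename) {
  const char* err = nullptr;
  // 4 components per pixel (RGBA); save_as_fp16 = 0 keeps full 32-bit floats.
  int result = SaveEXR(rgba.data(), width, height, 4, /*save_as_fp16=*/0, filename, &err);
  if (result != TINYEXR_SUCCESS) {
    std::fprintf(stderr, "Failed to save EXR: %s\n", err ? err : "unknown error");
    if (err != nullptr) {
      FreeEXRErrorMessage(err);
    }
    return false;
  }
  return true;
}
```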
## Intersections
Now you want to actually trace something. There are at least three ways to do it:
– write all intersection and BVH code by yourself;
– build and use an open-source third-party library;
– use something like [Intel Embree](https://www.embree.org/).
I’d personally recommend going with the third option. A direct comparison between Embree and a couple of other libraries from GitHub shows that
with Embree a ray-tracer can easily be 2x faster even when used in the most naive way (tracing one ray at a time without batching).
And integrating and using Embree is quite simple.
But of course, it highly depends on your personal motivation and purposes. If you want the experience of writing everything from scratch – just do it!
Having the intersection routines, you can now hardcode a couple of triangles to see if everything works, and then move on to the actual geometry.
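For illustration, here is a minimal sketch of that “hardcode a triangle and trace one ray” step with the Embree 3 style API (Embree 4 changes the intersection call slightly; error handling is omitted):

```cpp
// Minimal sketch: build a one-triangle Embree scene and trace a single ray (Embree 3 API).
#include <embree3/rtcore.h>
#include <limits>

bool trace_single_triangle() {
  RTCDevice device = rtcNewDevice(nullptr);
  RTCScene scene = rtcNewScene(device);

  // One hardcoded triangle in front of the origin.
  RTCGeometry geometry = rtcNewGeometry(device, RTC_GEOMETRY_TYPE_TRIANGLE);
  float* vertices = (float*)rtcSetNewGeometryBuffer(geometry, RTC_BUFFER_TYPE_VERTEX, 0,
    RTC_FORMAT_FLOAT3, 3 * sizeof(float), 3);
  unsigned* indices = (unsigned*)rtcSetNewGeometryBuffer(geometry, RTC_BUFFER_TYPE_INDEX, 0,
    RTC_FORMAT_UINT3, 3 * sizeof(unsigned), 1);
  const float v[9] = {-1.0f, -1.0f, 1.0f, 1.0f, -1.0f, 1.0f, 0.0f, 1.0f, 1.0f};
  const unsigned idx[3] = {0, 1, 2};
  for (int k = 0; k < 9; ++k) vertices[k] = v[k];
  for (int k = 0; k < 3; ++k) indices[k] = idx[k];
  rtcCommitGeometry(geometry);
  rtcAttachGeometry(scene, geometry);
  rtcReleaseGeometry(geometry);
  rtcCommitScene(scene);

  // Trace one ray from the origin along +Z (the most naive way, no batching).
  RTCIntersectContext context;
  rtcInitIntersectContext(&context);
  RTCRayHit ray_hit = {};
  ray_hit.ray.dir_z = 1.0f;
  ray_hit.ray.tfar = std::numeric_limits<float>::max();
  ray_hit.ray.mask = ~0u;
  ray_hit.hit.geomID = RTC_INVALID_GEOMETRY_ID;
  rtcIntersect1(scene, &context, &ray_hit);

  bool hit = (ray_hit.hit.geomID != RTC_INVALID_GEOMETRY_ID);
  rtcReleaseScene(scene);
  rtcReleaseDevice(device);
  return hit;
}
```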
## Scene
Now you want to load some geometry from files. The easiest way to do it is to use something like [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader).
I think OBJ is still one of the most popular formats around because of its simplicity. Of course, you might want to load GLTF files, or even something more complicated.
But the point is that now you have a bunch of triangles and vertices which you upload to your intersection library and are desperate to trace.
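A minimal sketch of pulling triangle positions out of an OBJ file with tinyobjloader’s `ObjReader` interface (the `load_positions` helper and the flat output layout are my own illustration):

```cpp
// Minimal sketch: load an OBJ file with tinyobjloader and flatten it into a triangle soup.
// TINYOBJLOADER_IMPLEMENTATION must be defined in exactly one translation unit.
#define TINYOBJLOADER_IMPLEMENTATION
#include "tiny_obj_loader.h"

#include <cstdio>
#include <string>
#include <vector>

std::vector<float> load_positions(const std::string& path) {
  tinyobj::ObjReader reader;
  if (!reader.ParseFromFile(path)) {
    std::fprintf(stderr, "Failed to load %s: %s\n", path.c_str(), reader.Error().c_str());
    return {};
  }

  const tinyobj::attrib_t& attrib = reader.GetAttrib();
  std::vector<float> positions; // x, y, z per vertex, three vertices per triangle
  for (const tinyobj::shape_t& shape : reader.GetShapes()) {
    for (const tinyobj::index_t& index : shape.mesh.indices) {
      positions.push_back(attrib.vertices[3 * index.vertex_index + 0]);
      positions.push_back(attrib.vertices[3 * index.vertex_index + 1]);
      positions.push_back(attrib.vertices[3 * index.vertex_index + 2]);
    }
  }
  return positions;
}
```

Faces are triangulated by the default `ObjReaderConfig`, so the resulting list can be fed to the intersection library as-is.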
At this point you can hardcode some camera parameters, throw some rays, and actually see a result. But it’s a good time to add a proper camera to your ray-tracer.
## Camera
The easiest way to set up a camera in the scene is to define a position, a view point, and a field of view.
If you are loading a format that contains camera data (like GLTF) – it is even simpler.
But either way, you need code that builds the camera data (like matrices).
And with the camera data in place, you can easily generate rays for a specific camera configuration. Going through each pixel of the image you need to
generate a ray for that specific pixel, so there should be two functions – one to get NDC coordinates from a pixel, and one to generate a ray out of these coordinates.
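Here is a minimal sketch of these two functions for a simple pinhole camera (no depth of field); the `Camera` fields and names are illustrative, and glm is used for the vector math:

```cpp
// Minimal sketch: pixel -> NDC -> primary ray for a pinhole camera.
#include <glm/glm.hpp>

struct Camera {
  glm::vec3 position;
  glm::vec3 forward;   // normalize(view_point - position)
  glm::vec3 right;     // normalize(cross(forward, world_up))
  glm::vec3 up;        // cross(right, forward)
  float tan_half_fov;  // tan(0.5 * vertical_fov)
  float aspect;        // width / height
};

struct Ray {
  glm::vec3 origin;
  glm::vec3 direction;
};

// Pixel coordinate plus an in-pixel jitter in [0,1), mapped to NDC in [-1, 1].
glm::vec2 pixel_to_ndc(glm::vec2 pixel, glm::vec2 jitter, glm::vec2 image_size) {
  glm::vec2 uv = (pixel + jitter) / image_size;              // [0, 1]
  return glm::vec2(2.0f * uv.x - 1.0f, 1.0f - 2.0f * uv.y);  // flip y so +y points up
}

Ray generate_ray(const Camera& camera, glm::vec2 ndc) {
  glm::vec3 direction = camera.forward
    + ndc.x * camera.tan_half_fov * camera.aspect * camera.right
    + ndc.y * camera.tan_half_fov * camera.up;
  return {camera.position, glm::normalize(direction)};
}
```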
At this point you’d probably see something like that in the output:

And this is definitely a good start, but let’s move on.
## Multithreading
So far, we have been using a simple loop over all pixels of the image. But this is not the fastest way. We usually want to spread the workload over all the threads we
have and get a result as fast as possible. Any option is suitable – you could write your own scheduler or use a library. I am using
[enkiTS](https://github.com/dougbinks/enkiTS) with a wrapper around it.
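A minimal sketch of what this looks like with enkiTS: one task set covering all pixels, which the scheduler splits into ranges across worker threads (`trace_pixel` is a hypothetical per-pixel kernel):

```cpp
// Minimal sketch: distribute the per-pixel loop over all hardware threads with enkiTS.
#include "TaskScheduler.h"
#include <cstdint>

struct RenderTask : enki::ITaskSet {
  uint32_t image_width = 0;

  RenderTask(uint32_t pixel_count, uint32_t width)
    : enki::ITaskSet(pixel_count), image_width(width) {}

  void ExecuteRange(enki::TaskSetPartition range, uint32_t thread_num) override {
    for (uint32_t i = range.start; i < range.end; ++i) {
      uint32_t x = i % image_width;
      uint32_t y = i / image_width;
      // trace_pixel(x, y, thread_num); // hypothetical per-pixel kernel goes here
      (void)x; (void)y; (void)thread_num;
    }
  }
};

void render_frame(uint32_t width, uint32_t height) {
  // In a real renderer the scheduler lives as long as the application.
  enki::TaskScheduler scheduler;
  scheduler.Initialize();

  RenderTask task(width * height, width);
  scheduler.AddTaskSetToPipe(&task);
  scheduler.WaitforTask(&task);
}
```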
## Materials
At this point you could write a fully functional ambient occlusion integrator that launches primary rays into the scene, and secondary rays to see if geometry
is occluded. And once it is working – it is a good time to add colors to your renderings. For that purpose, you will need to introduce materials.
Usually, you start with simple diffuse materials, and then eventually add more and more of them.
Also, now we can enable very simple light sources. Just use the emissive color of a material, and if a ray hits such a material – it contributes to the ray’s
result.
And since we are using materials – we can now start using importance sampling. The very first method is to sample the material’s BSDF
to determine the next ray direction, rather than selecting a direction uniformly in the hemisphere around the normal.
Each material has its own “preferred” directions, but in general – the most commonly used ones are:
– a cosine-weighted direction on the hemisphere (for diffuse materials);
– perfect specular reflections and refractions (basically the `reflect` and `refract` functions) – for mirror and glass materials;
– some normal distribution (GGX is the most commonly used one now) for more complex reflections and refractions.
Also, if we want to render realistic metals or glass materials – we have to take care of the Fresnel coefficients.
They are calculated differently for metals and non-metals, which adds a bit more boilerplate code.
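For reference, a minimal sketch of two pieces from this list: cosine-weighted hemisphere sampling in a local frame around the normal, and the Schlick approximation of the Fresnel term for dielectrics (metals need the full conductor Fresnel with a complex IOR, which is a bit more code):

```cpp
// Minimal sketch: cosine-weighted hemisphere sampling and Schlick's Fresnel approximation.
#include <glm/glm.hpp>
#include <algorithm>
#include <cmath>

// Maps two uniform random numbers in [0,1) to a direction around +Z with pdf = cos(theta) / pi.
// The result is in the local shading frame and has to be transformed around the surface normal.
glm::vec3 sample_cosine_hemisphere(float u1, float u2) {
  float r = std::sqrt(u1);
  float phi = 2.0f * 3.14159265358979f * u2;
  float z = std::sqrt(std::max(0.0f, 1.0f - u1));
  return glm::vec3(r * std::cos(phi), r * std::sin(phi), z);
}

// Schlick's approximation for dielectrics; f0 is the reflectance at normal incidence.
float fresnel_schlick(float cos_theta, float f0) {
  float m = 1.0f - cos_theta;
  return f0 + (1.0f - f0) * m * m * m * m * m;
}
```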
## Light sources
So far so good. We are sending rays that bounce around our scene and gather light. And after waiting for some time, our algorithm is able
to produce an image like this:

Look at all that noise. The whole image is noisy even though only diffuse materials are used. To fix this issue we have Next Event Estimation,
or simply sending additional rays directly to light sources. At first sight it looks simple – we have a list of emissive triangles in the scene,
we randomly pick one and send a “shadow” ray to it. If this ray reaches the emissive triangle – we add its contribution to the ray’s radiance.
But! If a scene has two emissive triangles, one of which is really huge and covers half of the scene while the second one is microscopic, and we pick them
randomly (each with 50% probability) – then the contribution from the microscopic triangle will be negligible, and the image will still have a lot of noise.
What can we do in this situation? Pick lights not uniformly at random, but based on their contribution to the overall lighting in the scene. And here comes
additional code which will also serve us well in the future. I am talking about generating and sampling distribution tables
(check the links section for some great posts about generating them). So, another piece of boilerplate code is generating and sampling random numbers
based on some value distribution. In our example we would want to sample the huge emissive triangle in 95% of cases, and the small one in only 5%.
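A minimal sketch of such a distribution table, built from arbitrary weights (for example, emitter powers) and sampled by inverting the CDF; `Distribution1D` is an illustrative name:

```cpp
// Minimal sketch: discrete 1D distribution sampled by CDF inversion.
#include <algorithm>
#include <cstdint>
#include <vector>

struct Distribution1D {
  std::vector<float> cdf;     // cumulative weights, normalized to 1 at the end
  float total_weight = 0.0f;

  explicit Distribution1D(const std::vector<float>& weights) {
    cdf.resize(weights.size());
    float sum = 0.0f;
    for (size_t i = 0; i < weights.size(); ++i) {
      sum += weights[i];
      cdf[i] = sum;
    }
    total_weight = sum;
    for (float& value : cdf)
      value /= (sum > 0.0f) ? sum : 1.0f;
  }

  // Returns an index with probability proportional to its weight; xi is uniform in [0,1).
  uint32_t sample(float xi, float& pdf) const {
    if (cdf.empty() || total_weight <= 0.0f) {
      pdf = 0.0f;
      return 0;
    }
    uint32_t index = uint32_t(std::lower_bound(cdf.begin(), cdf.end(), xi) - cdf.begin());
    pdf = cdf[index] - ((index > 0) ? cdf[index - 1] : 0.0f);
    return index;
  }
};
```

With weights proportional to emitted power, the two-triangle example above would return the huge triangle about 95% of the time and the microscopic one about 5% of the time, together with the correct probability for each.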
Now we have a bunch of helpers which allow us to sample the proper light sources, and we are ready to plug in Multiple Importance Sampling.
The basic idea behind it is to weight a light contribution according to the probabilities of sampling the light source and sampling the BSDF.
And here we need another piece of boilerplate code which randomly selects an emitter and evaluates the probability of sampling it.
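The MIS weight itself is tiny; the power heuristic with exponent 2 is the usual choice:

```cpp
// Minimal sketch: power heuristic (beta = 2) for weighting two sampling strategies.
float power_heuristic(float pdf_a, float pdf_b) {
  float a = pdf_a * pdf_a;
  float b = pdf_b * pdf_b;
  return (a + b) > 0.0f ? a / (a + b) : 0.0f;
}
```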
Now we can render the same image with the same 128 samples per pixel, and get this result:
 
## Textures
Now, to add another dimension to our rendering, we want to use textures. For that we need routines to load images and evaluate them at given
coordinates. I suggest using libraries like tinyexr and [stb_image](https://github.com/nothings/stb). Also keep in mind that
different materials might refer to the same images, which adds another piece of boilerplate code – managing images to save some memory.
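A minimal sketch of such a cache: files are loaded once and materials only store handles into the pool (the structure names are illustrative; stb_image is used here, EXR files would go through tinyexr in a second branch):

```cpp
// Minimal sketch: a tiny image pool so materials referring to the same file share one copy.
// STB_IMAGE_IMPLEMENTATION must be defined in exactly one translation unit.
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"

#include <cstdint>
#include <string>
#include <unordered_map>
#include <vector>

struct Image {
  int width = 0;
  int height = 0;
  std::vector<float> rgba; // width * height * 4 values
};

struct ImagePool {
  std::unordered_map<std::string, uint32_t> lookup;
  std::vector<Image> images;

  uint32_t load(const std::string& path) {
    auto existing = lookup.find(path);
    if (existing != lookup.end())
      return existing->second; // already loaded - reuse it

    int w = 0, h = 0, channels = 0;
    float* data = stbi_loadf(path.c_str(), &w, &h, &channels, 4); // force RGBA
    Image image;
    if (data != nullptr) {
      image.width = w;
      image.height = h;
      image.rgba.assign(data, data + size_t(w) * size_t(h) * 4);
      stbi_image_free(data);
    }
    uint32_t handle = uint32_t(images.size());
    images.emplace_back(std::move(image));
    lookup[path] = handle;
    return handle;
  }
};
```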
## Environment images / distant emitters
So far, we’ve been using only emissive triangles to illuminate our scene. But there are other types of emitters which add more realism to a scene.
We will focus on distant emitters (like the Sun) and environment maps. The first ones just require adding another piece of code for sampling
and evaluating them.
But adding environment emitters requires additional effort. Since the environment image is literally everywhere in the scene (it surrounds it) –
the question is: which direction should be sampled? If the image is uniform (contains one color everywhere), then there is no difference, and
a random direction on a sphere will work. But let’s say the image contains a very bright light source (like the Sun among clouds), or, in the corner case,
there is only one pixel in the image which is not black. Now selecting a random direction on a sphere will not work, since most of the time we will
miss this bright pixel, and the scene will be poorly lit with a lot of noise.
To fix this issue we need to build a two-dimensional sampling table, similar to the one we used to select emitters in the scene, but now it will be used
to select bright pixels in the image. To build this two-dimensional sampling table we go through each row of the image and build a 1D sampling table
for that row; while building it, we also accumulate the total weight of the row. Then we build another 1D sampling
table over these row weights. Basically, one table is used to select a row, and then we select the exact pixel from the chosen row. This is the basic idea, but
there are also a lot of implementation details you should be aware of (like row weights for equirectangular textures, etc.).
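A minimal sketch of the two-dimensional table, reusing `Distribution1D` from the light sampling sketch above; each row is additionally weighted by sin(theta), assuming an equirectangular layout:

```cpp
// Minimal sketch: 2D sampling table for an environment map (requires Distribution1D from above).
#include <cmath>
#include <vector>

struct Distribution2D {
  std::vector<Distribution1D> rows; // conditional distribution per image row
  Distribution1D marginal;          // distribution over accumulated row weights

  Distribution2D(const std::vector<float>& luminance, int width, int height)
    : marginal(std::vector<float>()) {
    std::vector<float> row_weights(size_t(height), 0.0f);
    rows.reserve(size_t(height));
    for (int y = 0; y < height; ++y) {
      // Rows near the poles cover less solid angle in an equirectangular map.
      float sin_theta = std::sin(3.14159265f * (float(y) + 0.5f) / float(height));
      std::vector<float> weights(size_t(width), 0.0f);
      for (int x = 0; x < width; ++x)
        weights[size_t(x)] = luminance[size_t(y) * size_t(width) + size_t(x)] * sin_theta;
      rows.emplace_back(weights);
      row_weights[size_t(y)] = rows.back().total_weight;
    }
    marginal = Distribution1D(row_weights);
  }

  // xi1 and xi2 are uniform in [0,1); returns a pixel and the discrete probability of picking it.
  void sample(float xi1, float xi2, int& x, int& y, float& pdf) const {
    float pdf_row = 0.0f;
    float pdf_column = 0.0f;
    y = int(marginal.sample(xi1, pdf_row));
    x = int(rows[size_t(y)].sample(xi2, pdf_column));
    pdf = pdf_row * pdf_column;
  }
};
```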
After implementing sampling tables for the environment maps you will be able to render images like this:

(this image is actually produced by bidirectional path tracing, read about it below)
## Medium
Medium rendering is another dimension of a ray-tracer. Adding participating media to the scene can greatly enhance the atmosphere.
Compare these three images:

The left image was rendered without participating media and without tuning reflection and refraction colors,
the middle one was rendered with manually tuned material parameters, and the right one was rendered using participating media and with default (white)
values for reflections and refractions. The differences are noticeable and, more importantly, participating media adds realism to the glass material
(just look at the bottom of the teapot!).
A medium can be homogeneous (uniform in the whole volume) or heterogeneous (variable density within the volume). Adding medium rendering to a ray-tracer
requires effort, because not only does a lot of new boilerplate code have to be introduced, but the whole rendering algorithm has to be adjusted.
But let’s focus on the boilerplate code that is required for rendering a medium. First, you obviously need to add something that stores the medium’s parameters
(usually absorption and outscattering coefficients and the phase function asymmetry value). The next methods are for medium sampling – different for
homogeneous and heterogeneous media (actually, most of the methods differ between these two types, so I won’t mention this again).
The next thing (which is probably common to different medium types) is evaluating the phase function (which is a kind of replacement for the BSDF of “solid”
materials), and sampling the phase function to select the next ray direction.
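A minimal sketch of these helpers for the homogeneous case: free-flight distance sampling with the matching Beer-Lambert transmittance, plus the Henyey-Greenstein phase function:

```cpp
// Minimal sketch: homogeneous medium helpers.
#include <algorithm>
#include <cmath>

constexpr float kPi = 3.14159265358979f;

// Sample a scattering distance with pdf sigma_t * exp(-sigma_t * t); xi is uniform in [0,1).
float sample_distance(float sigma_t, float xi) {
  return -std::log(1.0f - xi) / sigma_t;
}

// Beer-Lambert transmittance over a segment of length t in a homogeneous medium.
float transmittance(float sigma_t, float t) {
  return std::exp(-sigma_t * t);
}

// Henyey-Greenstein phase function. cos_theta is measured between the incoming direction
// of propagation and the outgoing direction, so g > 0 favors forward scattering.
float henyey_greenstein(float cos_theta, float g) {
  float denom = std::max(1.0f + g * g - 2.0f * g * cos_theta, 1e-6f);
  return (1.0f - g * g) / (4.0f * kPi * denom * std::sqrt(denom));
}
```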
All of this works well for a homogeneous medium, when you just want to fill some volume with it. But you’d also want to render something more complicated,
like clouds or smoke. To make this possible you can either generate/simulate a density grid or load it from a file. The most common format is
VDB, and the library for loading it is OpenVDB. It takes some time to build and link this library, but in the end loading density grid data with
it is very simple.
# Bidirectional path tracing
So far, we’ve been talking only about the unidirectional method. It traces light from the camera into the scene and samples/hits emitters. For certain
types of materials and/or emitters it is hard for this method to capture enough light, and the rendering can stay noisy even after a lot of samples.
Good examples are glass surfaces, like water. When a ray travels below the water surface, it has no chance to sample a light source above the water, and therefore
not enough light information is available. The rendering will be noisy and won’t have nice caustics. Compare these two images:

The left one is rendered with unidirectional path tracing, the right one with bidirectional. Both have 256 samples per pixel.
Actually, not that much extra boilerplate code is required for bidirectional methods compared to the unidirectional one, but implementing those methods can be
quite a challenge. Let’s focus on what is required to make them work.
## Output image wrapper
In “regular” path tracing we go through each pixel of the image, so we know precisely which pixel will be updated, and each
thread changes only that specific pixel. In bidirectional methods, paths from light sources are connected to random points on the image plane,
and therefore each thread could change a random pixel of the image. Without proper handling this approach results in race conditions and
lost light information. Therefore, something special should be added to the code to avoid race conditions: atomic writes or
atomic additions to the image data, or multiple images (one for each thread) which are then merged into the output image.
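A minimal sketch of the atomic-addition variant – a compare-exchange loop is used so it also works before C++20 introduced `fetch_add` for atomic floats (`Film` and `splat` are illustrative names):

```cpp
// Minimal sketch: an output image any thread can safely splat into.
#include <atomic>
#include <vector>

struct Film {
  int width = 0;
  int height = 0;
  std::vector<std::atomic<float>> pixels; // width * height * 4 channels (RGBA)

  Film(int w, int h) : width(w), height(h), pixels(size_t(w) * size_t(h) * 4) {
    for (std::atomic<float>& value : pixels)
      value.store(0.0f, std::memory_order_relaxed);
  }

  // Atomically add a contribution to one channel of one pixel.
  void splat(int x, int y, int channel, float value) {
    std::atomic<float>& target = pixels[(size_t(y) * size_t(width) + size_t(x)) * 4 + channel];
    float expected = target.load(std::memory_order_relaxed);
    while (!target.compare_exchange_weak(expected, expected + value, std::memory_order_relaxed)) {
      // on failure, expected is reloaded with the current value - retry
    }
  }
};
```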
## New methods for emitters and camera
As mentioned before, in bidirectional methods vertices from the light path are connected to randomly selected points on the image plane. Therefore,
another method should be added to our code base which randomly selects points there. This also includes calculating probabilities
for multiple importance sampling.
And one more thing needed to build paths from emitters is actually selecting the place where the path begins. The methods for this are similar to sampling
light sources from specific points, but now we generate points directly on the emitters (either on triangles, or, in the case of environment emitters,
on a disk with the radius of the whole scene, oriented according to a sampled direction).
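A minimal sketch of starting a light path on a triangle emitter: a uniformly sampled point on the triangle plus a cosine-weighted outgoing direction (it reuses `sample_cosine_hemisphere` from the materials sketch; the structure and the returned pdfs are illustrative):

```cpp
// Minimal sketch: sample an emission origin and direction on a triangle emitter
// (requires sample_cosine_hemisphere from the materials sketch).
#include <glm/glm.hpp>
#include <cmath>

struct EmissionSample {
  glm::vec3 position;
  glm::vec3 direction;
  float pdf_position;  // 1 / area
  float pdf_direction; // cos(theta) / pi
};

EmissionSample sample_triangle_emission(glm::vec3 a, glm::vec3 b, glm::vec3 c,
                                        float u1, float u2, float u3, float u4) {
  // Uniform point on the triangle via the square-root parameterization.
  float su = std::sqrt(u1);
  float b0 = 1.0f - su;
  float b1 = u2 * su;
  glm::vec3 position = a * b0 + b * b1 + c * (1.0f - b0 - b1);

  glm::vec3 normal = glm::cross(b - a, c - a);
  float area = 0.5f * glm::length(normal);
  normal = glm::normalize(normal);

  // Build a frame around the normal and map a cosine-weighted local direction into it.
  glm::vec3 tangent = glm::normalize(std::fabs(normal.x) > 0.5f
    ? glm::cross(normal, glm::vec3(0.0f, 1.0f, 0.0f))
    : glm::cross(normal, glm::vec3(1.0f, 0.0f, 0.0f)));
  glm::vec3 bitangent = glm::cross(normal, tangent);
  glm::vec3 local = sample_cosine_hemisphere(u3, u4);
  glm::vec3 direction = local.x * tangent + local.y * bitangent + local.z * normal;

  return {position, direction, 1.0f / area, local.z / 3.14159265358979f};
}
```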
# Conclusion
As a recap, here is a list of the methods/routines used in a ray-tracer:
– writing output images;
– intersection routines;
– scene loading code;
– camera code:
  – generating rays (considering depth of field);
  – sampling image plane;
  – evaluating pdf and importance for a given ray;
– multithreading code (considering multithreaded writes to output image);
– materials and BSDF code:
  – sampling BSDF;
  – evaluating BSDF for a given ray;
  – evaluating probabilities for a given ray;
  – **everything above times N materials**;
– light sources:
  – building a distribution table for light sources in scene;
  – sampling light sources according to the distribution table;
  – evaluating light source contribution for a given ray;
  – sampling an emission from light source (position, direction, contribution, probabilities);
  – **everything above times M light source types**;
– medium rendering:
  – calculating transmittance;
  – sampling medium;
  – evaluating and sampling phase functions;
  – code for loading .vdb files for heterogeneous medium.
As you can see, the actual amount of code is really big. And we did not even touch GPU code.
But don’t be scared, because each piece of code can be added on top of the existing code to improve your renderings.
Start with small steps and eventually you will be able to render beautiful images!
# Implementation and support
Most of the things I described here are already implemented in my ray-tracer: [etx-tracer](https://github.com/sergeyreznik/etx-tracer).
You can support my open-source development by becoming a [sponsor on GitHub](https://github.com/sponsors/sergeyreznik).
Thank you for reading!
# Useful Libraries
– [Intel Embree](https://github.com/embree/embree) – High Performance Ray Tracing Kernels
– [enkiTS](https://github.com/dougbinks/enkiTS) – A permissively licensed C and C++ Task Scheduler for creating parallel programs
– [glm](https://github.com/g-truc/glm) – OpenGL Mathematics
– [stb_image](https://github.com/nothings/stb) – stb single-file public domain libraries for C/C++
– [tinyexr](https://github.com/syoyo/tinyexr) – Tiny OpenEXR image loader/saver library
– [tinyobjloader](https://github.com/tinyobjloader/tinyobjloader) – Tiny but powerful single file wavefront obj loader
– [mikktspace](https://github.com/mmikk/MikkTSpace) – A common standard for tangent space used in baking tools to produce normal maps.
# Links
[Physically Based Rendering: From Theory To Implementation](https://www.pbr-book.org/)
[Generating Random Numbers From a Specific Distribution By Inverting the CDF](https://blog.demofox.org/2017/08/05/generating-random-numbers-from-a-specific-distribution-by-inverting-the-cdf/)