News
[Jan 2022] Code released on GitHub.
Abstract
Neural graphics primitives, parameterized by fully connected neural networks, can be costly to train and evaluate. We reduce this cost with a versatile new input encoding that permits the use of a smaller network without sacrificing quality, thus significantly reducing the number of floating point and memory access operations. A small neural network is augmented by a multiresolution hash table of trainable feature vectors whose values are optimized through stochastic gradient descent. The multiresolution structure allows the network to disambiguate hash collisions, making for a simple architecture that is trivial to parallelize on modern GPUs. We leverage this parallelism by implementing the whole system using fully-fused CUDA kernels with a focus on minimizing wasted bandwidth and compute operations. We achieve a combined speedup of several orders of magnitude, enabling training of high-quality neural graphics primitives in a matter of seconds, and rendering in tens of milliseconds at a resolution of 1920×1080.
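To make the idea concrete, below is a minimal NumPy sketch of the multiresolution hash-table lookup described in the abstract. The hash primes and the floor(base_res · growth^level) per-level resolutions follow the paper's defaults, but the function and variable names, the growth factor, and the CPU-style loops are illustrative only; this is a reference sketch, not the fully-fused CUDA implementation.

```python
# Minimal NumPy sketch of a multiresolution hash encoding lookup.
# Illustrative reference only; the real system runs as fused CUDA kernels,
# and coarse levels use dense grids rather than hashing.

import numpy as np

PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)  # per-dimension hash primes

def hash_grid_encode(x, tables, base_res=16, growth=1.5):
    """Encode 3D points x in [0,1]^3 using one trainable hash table per level.

    x       : (N, 3) float array of input coordinates
    tables  : list of (T, F) float arrays, the trainable feature vectors
    returns : (N, L * F) encoded features, fed to the small MLP
    """
    N, d = x.shape
    features = []
    for level, table in enumerate(tables):
        T, F = table.shape
        res = int(np.floor(base_res * growth ** level))   # grid resolution at this level
        cell = x * res                                     # position in grid units
        corner = np.floor(cell).astype(np.uint64)          # lower grid corner
        frac = cell - corner                               # d-linear interpolation weights

        # Accumulate trilinear interpolation over the 2^d corners of the cell.
        out = np.zeros((N, F))
        for offset in range(2 ** d):
            bits = np.array([(offset >> i) & 1 for i in range(d)], dtype=np.uint64)
            idx = corner + bits                            # integer corner coordinate
            # Spatial hash: XOR of coordinate * prime, modulo table size.
            h = np.zeros(N, dtype=np.uint64)
            for i in range(d):
                h ^= idx[:, i] * PRIMES[i]
            h %= np.uint64(T)
            w = np.prod(np.where(bits == 1, frac, 1.0 - frac), axis=1)
            out += w[:, None] * table[h]
        features.append(out)
    return np.concatenate(features, axis=1)
```

The tables would be initialized to small random values and optimized jointly with the weights of the small MLP by stochastic gradient descent, as outlined in the abstract.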
Real-time training progress on the image task where the neural network learns the mapping from 2D coordinates to RGB colors of a high-resolution image. Note that in this video, the network is trained from scratch – but converges so quickly you may miss it if you blink!
(a) None: 411k parameters, 10:45 (mm:ss)
(b) Multiresolution grid: 10k + 16.3M parameters, 1:26 (mm:ss)
(c) Frequency: 438k + 0 parameters, 13:53 (mm:ss)
(d) Hashtable (T=2^14): 10k + 494k parameters, 1:40 (mm:ss)
(e) Hashtable (T=2^19): 10k + 12.6M parameters, 1:45 (mm:ss)
A demonstration of the reconstruction quality of different encodings. Each configuration was trained for 11000 steps using our fast NeRF implementation, varying only the input encoding and the neural network size. The number of trainable parameters (neural network weights + encoding parameters) and training time are shown below each image. Our encoding (d) with a similar total number of trainable parameters as the frequency encoding (c) trains over 8 times faster, due to the sparsity of updates to the parameters and smaller neural network. Increasing the number of parameters (e) further improves approximation quality without significantly increasing training time.
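As a rough sanity check on these parameter counts (assuming the paper's defaults of L = 16 resolution levels and F = 2 features per table entry, which the caption does not state), the encoding stores at most

```latex
N_{\text{enc}} \;\le\; L \cdot T \cdot F,
\qquad 16 \cdot 2^{14} \cdot 2 \approx 524\text{k},
\qquad 16 \cdot 2^{19} \cdot 2 \approx 16.8\text{M}
```

parameters. The observed counts (494k and 12.6M) are smaller because coarse levels whose dense grid has fewer than T vertices store only that many feature vectors.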
Real-time training progress on eight synthetic NeRF datasets.
Fly-throughs of trained real-world NeRFs. Large, natural 360° scenes (left) as well as complex scenes with many disocclusions and specular surfaces (right) are well supported.
We also support training NeRF-like radiance fields from the noisy output of a volumetric path tracer. Rays are fed in real-time to the network during training, which learns a denoised radiance field.
Real-time training progress on various SDF datasets. Training data is generated on the fly from the ground-truth mesh using the NVIDIA OptiX ray tracing framework.
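For illustration, here is a rough CPU sketch of how such on-the-fly SDF training batches could be generated from a ground-truth mesh. The paper uses the NVIDIA OptiX framework for these queries; `trimesh` is used below purely as a stand-in, and the surface-biased plus uniform sampling strategy is an assumption rather than the paper's exact scheme.

```python
# Rough sketch: generate (point, signed distance) training pairs from a mesh.
# Illustrative only; the paper generates these samples with OptiX on the GPU.

import numpy as np
import trimesh

def sample_sdf_batch(mesh, n_surface=2**16, n_uniform=2**14, noise=0.01):
    """Return one training batch of points and their signed distances."""
    # Points near the surface: jitter surface samples with Gaussian noise.
    surf, _ = trimesh.sample.sample_surface(mesh, n_surface)
    near = surf + np.random.normal(scale=noise, size=surf.shape)

    # Points sampled uniformly inside the mesh's bounding box.
    lo, hi = mesh.bounds
    uniform = np.random.uniform(lo, hi, size=(n_uniform, 3))

    pts = np.vstack([near, uniform])
    # Note: trimesh's convention is positive inside the mesh, negative outside;
    # other SDF conventions flip this sign.
    sdf = trimesh.proximity.signed_distance(mesh, pts)
    return pts.astype(np.float32), sdf.astype(np.float32)
```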
Direct visualization of a neural radiance cache, in which the network predicts outgoing radiance at the first non-specular vertex of each pixel’s path, and is trained online from rays generated by a real-time path tracer. On the left, we show results using the triangle wave encoding of [Müller et al. 2021]; on the right, the new multiresolution hash encoding allows the network to learn much sharper details, for example in the shadow regions.
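A minimal PyTorch sketch of what one such online training step might look like, assuming the path tracer streams batches of (encoded position, view direction, noisy radiance estimate) at the first non-specular path vertex. The network shape, names, and plain L2 loss are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch: one online training step of a radiance cache against noisy targets.
# Illustrative assumptions: encoded_pos comes from the input encoding,
# targets are noisy per-ray radiance estimates from the path tracer.

import torch

class RadianceCache(torch.nn.Module):
    def __init__(self, enc_dim, dir_dim=3, hidden=64):
        super().__init__()
        self.mlp = torch.nn.Sequential(
            torch.nn.Linear(enc_dim + dir_dim, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, hidden), torch.nn.ReLU(),
            torch.nn.Linear(hidden, 3),  # predicted outgoing RGB radiance
        )

    def forward(self, encoded_pos, view_dir):
        return self.mlp(torch.cat([encoded_pos, view_dir], dim=-1))

def train_step(cache, optimizer, encoded_pos, view_dir, noisy_radiance):
    """One gradient step; the noise in the targets averages out over many rays."""
    optimizer.zero_grad()
    pred = cache(encoded_pos, view_dir)
    loss = torch.nn.functional.mse_loss(pred, noisy_radiance)
    loss.backward()
    optimizer.step()
    return loss.item()
```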
Instant Neural Graphics Primitives with a Multiresolution Hash Encoding
Thomas Müller, Alex Evans, Christoph Schied, Alexander Keller
Code
Please send feedback and questions to Thomas Müller
Results
Gigapixel Image
Neural Radiance Fields
Both models can be rendered in real time and were trained in under 5 minutes from casually captured data: the left one from an iPhone video and the right one from 34 photographs.
Signed Distance Function
Neural Radiance Cache
Paper
Acknowledgements
We would like to thank Towaki Takikawa,
David Luebke,
Koki Nagano and
Nikolaus Binder for profound discussions, and
Anjul Patney,
Jacob Munkberg,
Jonathan Granskog,
Jonathan Tremblay,
Marco Salvi,
James Lucas and
Towaki Takikawa
for proof-reading, feedback, profound discussions, and early testing.
We also thank Joey Litalien for providing us with the framework for this website.
Girl With a Pearl Earring renovation by Koorosh Orooj (CC BY-SA 4.0 License)
Tokyo gigapixel photograph by Trevor Dobson (CC BY-NC-ND 2.0 License)
Lucy model from the Stanford 3D scan repository
Factory robot dataset by Arman Toornias and Saurabh Jain.
Disney Cloud model by Walt Disney Animation Studios. (CC BY-SA 3.0)
Bearded Man model by Oliver Laric. (CC BY-NC-SA 3.0)