Instant Neural Graphics Primitives with a Multiresolution Hash Encoding

[https://nvlabs.github.io/instant-ngp/ Project Webpage]

Allows training a NeRF or other neural representation in seconds by optimizing features in a hash table.<br>
This is faster than an octree since it doesn't require pointer chasing, while using less space than enumerating voxels.<br>
Furthermore, the size of the hash table allows control over the amount of detail.
==Method==
===Multi-resolution Hash Encoding===
# For each level/resolution
## Find which voxel we are in and get indices to the corners.
## Hash each corner and retrieve features (<math>\in \mathbb{R}^{F}</math>) from the hash map.
## Do trilinear interpolation of the corner features to get a single interpolated feature for the level.
# Concatenate features from all levels (<math display="inline">\in \mathbb{R}^{LF}</math>) along with auxiliary inputs (<math display="inline">\in \mathbb{R}^{E}</math>, e.g. view direction) to produce a feature vector <math display="inline">\mathbf{y} \in \mathbb{R}^{LF+E}</math>.
# Pass <math>\mathbf{y}</math> through your small feature decoding neural network.
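A minimal NumPy sketch of the per-level lookup, trilinear blend, and concatenation described above. The level count, resolutions, table size, and the placeholder <code>corner_index</code> map are illustrative assumptions (the paper's actual XOR hash is sketched after the details below); the auxiliary inputs and the decoding MLP are omitted.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sizes, not the paper's defaults.
L, F, T = 4, 2, 2**14                        # levels, features per entry, table size
resolutions = [16, 32, 64, 128]              # grid resolution per level
rng = np.random.default_rng(0)
# One trainable hash table of feature vectors per level (random init here).
tables = [rng.normal(scale=1e-4, size=(T, F)) for _ in range(L)]

def corner_index(ijk):
    """Placeholder corner -> table index map; the paper's XOR hash is sketched below."""
    i, j, k = (int(v) for v in ijk)
    return (i + 1_000_003 * j + 1_000_033 * k) % T   # arbitrary stand-in

def encode(x):
    """Multiresolution hash encoding of a point x in [0, 1]^3, returning a vector in R^{L*F}."""
    per_level = []
    for level, N in enumerate(resolutions):
        p = x * N
        base = np.floor(p).astype(np.int64)   # lowest corner of the enclosing voxel
        w = p - base                          # fractional position inside the voxel
        feat = np.zeros(F)
        for c in range(8):                    # the voxel's 8 corners
            offset = np.array([(c >> d) & 1 for d in range(3)])
            weight = np.prod(np.where(offset == 1, w, 1.0 - w))  # trilinear weight
            feat += weight * tables[level][corner_index(base + offset)]
        per_level.append(feat)
    return np.concatenate(per_level)          # concatenated features from all levels

y = encode(np.array([0.3, 0.7, 0.5]))
print(y.shape)                                # (8,) = L * F
</syntaxhighlight>

In practice the tables are optimized jointly with the decoding MLP, so <code>encode</code> would be differentiable with respect to the table entries rather than using fixed random features.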
;Interesting details
* Hash collisions are ignored.
* The hash function used is: <math>h(\mathbf{x}) = \left( \bigoplus_{i=1}^{d} x_i \pi_i \right) \mod T</math>
** <math>\bigoplus</math> is bitwise XOR.
** <math>\pi_i</math> are unique large primes.
** <math>T</math> is the size of the hash table.
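A short sketch of this hash for 3D inputs, assuming the prime constants used in the paper (<math display="inline">\pi_1 = 1</math>, <math display="inline">\pi_2 = 2654435761</math>, <math display="inline">\pi_3 = 805459861</math>); the products are taken in unsigned 64-bit arithmetic, where wraparound is the intended behaviour.

<syntaxhighlight lang="python">
import numpy as np

# Primes from the paper; pi_1 = 1, so the first coordinate enters the XOR unchanged.
PRIMES = np.array([1, 2654435761, 805459861], dtype=np.uint64)

def spatial_hash(ijk, T):
    """h(x) = (XOR_i x_i * pi_i) mod T for an integer corner coordinate ijk."""
    x = np.asarray(ijk, dtype=np.uint64)
    return int(np.bitwise_xor.reduce(x * PRIMES) % np.uint64(T))

# Example: table index of corner (17, 4, 255) for a table of size T = 2**19.
print(spatial_hash((17, 4, 255), 2**19))
</syntaxhighlight>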


==Experiments==
==References==
[[Category: Papers]]