Neural Fields

Neural Fields refers to using neural networks or neural methods to represent a signal, such as an image, a 3D shape, or a 3D scene, as a continuous function of spatial (or spatio-temporal) coordinates.
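
As a concrete illustration, here is a minimal sketch of a coordinate-based neural field (PyTorch; the layer sizes, frequency count, and placeholder target image are illustrative assumptions, not taken from any particular paper). It fits a small MLP f(x, y) -> RGB to a single image, the "Identity" forward-map case listed below.

  # Minimal coordinate-MLP neural field: fit f(x, y) -> RGB to one image.
  # Illustrative sketch only; sizes and frequencies are arbitrary choices.
  import math
  import torch
  import torch.nn as nn

  def positional_encoding(x, num_freqs=6):
      # Map coordinates to sin/cos features so the MLP can fit high frequencies.
      freqs = 2.0 ** torch.arange(num_freqs) * math.pi
      angles = x[..., None] * freqs                      # (N, 2, num_freqs)
      return torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1).flatten(-2)

  class ImageField(nn.Module):
      def __init__(self, num_freqs=6, hidden=256):
          super().__init__()
          self.num_freqs = num_freqs
          self.mlp = nn.Sequential(
              nn.Linear(2 * 2 * num_freqs, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, 3), nn.Sigmoid(),
          )

      def forward(self, xy):                             # xy in [0, 1]^2, shape (N, 2)
          return self.mlp(positional_encoding(xy, self.num_freqs))

  # Training loop: sample pixel coordinates and regress their colors.
  image = torch.rand(256, 256, 3)                        # placeholder target image
  ys, xs = torch.meshgrid(torch.linspace(0, 1, 256), torch.linspace(0, 1, 256), indexing="ij")
  coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
  colors = image.reshape(-1, 3)

  field = ImageField()
  opt = torch.optim.Adam(field.parameters(), lr=1e-3)
  for step in range(1000):
      idx = torch.randint(0, coords.shape[0], (4096,))
      loss = ((field(coords[idx]) - colors[idx]) ** 2).mean()
      opt.zero_grad()
      loss.backward()
      opt.step()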


Techniques

Forward Maps

Forward maps are the differentiable functions that convert the underlying representation into an observed signal, e.g. rendering a radiance field into an image.

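For radiance fields, the forward map is volume rendering along camera rays. Below is a minimal sketch of that quadrature (PyTorch; the toy field and the sampling settings are placeholder assumptions standing in for a trained network):

  # Volume rendering as a differentiable forward map: field -> pixel color,
  # using the standard alpha-compositing quadrature of NeRF-style methods.
  import torch

  def render_ray(field, origin, direction, near=0.1, far=4.0, n_samples=64):
      t = torch.linspace(near, far, n_samples)            # sample depths along the ray
      points = origin + t[:, None] * direction            # (n_samples, 3)
      sigma, rgb = field(points)                          # densities and colors per sample
      delta = torch.full_like(t, (far - near) / n_samples)
      alpha = 1.0 - torch.exp(-sigma * delta)             # opacity of each segment
      trans = torch.cumprod(torch.cat([torch.ones(1), 1.0 - alpha[:-1]]), dim=0)
      weights = trans * alpha                             # compositing weights
      return (weights[:, None] * rgb).sum(dim=0)          # final pixel RGB

  # Toy field: a soft sphere at the origin, standing in for a trained MLP.
  def toy_field(points):
      d = points.norm(dim=-1)
      sigma = 10.0 * torch.relu(1.0 - d)                  # dense inside the unit sphere
      rgb = torch.sigmoid(points)                         # arbitrary per-point color
      return sigma, rgb

  color = render_ray(toy_field, torch.tensor([0.0, 0.0, -3.0]), torch.tensor([0.0, 0.0, 1.0]))
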
Shapes

Occupancy Grids or Voxel Grids
Signed Distance Functions
Primary-ray (PRIF)

3D Scenes

Radiance Fields (NeRF)
Light Fields

Identity

Images

Architectures

Neural Networks

MLP
CNN + MLP
Progressive Architectures

Hybrid Representations

Voxel Grids

These typically combine an octree or voxel grid with an MLP.
Some of these are essentially feature grids, where the grid stores latent features that a small MLP decodes; a minimal sketch follows the list below.

  • Neural Sparse Voxel Fields
  • KiloNeRF
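
A minimal sketch of the feature-grid-plus-MLP pattern referenced above (PyTorch; a dense grid queried with trilinear interpolation, with arbitrary sizes and no particular paper's layout implied):

  # Hybrid representation sketch: a dense feature grid queried by trilinear
  # interpolation, decoded by a small MLP. Real systems replace the dense grid
  # with sparse, octree, or hashed storage.
  import torch
  import torch.nn as nn
  import torch.nn.functional as F

  class FeatureGridField(nn.Module):
      def __init__(self, resolution=64, features=8, hidden=64, out_dim=4):
          super().__init__()
          # Grid laid out as (1, C, D, H, W) for grid_sample.
          self.grid = nn.Parameter(0.01 * torch.randn(1, features, resolution, resolution, resolution))
          self.mlp = nn.Sequential(
              nn.Linear(features, hidden), nn.ReLU(),
              nn.Linear(hidden, out_dim),                 # e.g. density + color
          )

      def forward(self, xyz):                             # xyz in [-1, 1]^3, shape (N, 3)
          coords = xyz.view(1, -1, 1, 1, 3)               # grid_sample expects (1, N, 1, 1, 3)
          feats = F.grid_sample(self.grid, coords, mode="bilinear", align_corners=True)
          feats = feats.view(self.grid.shape[1], -1).t()  # (N, C)
          return self.mlp(feats)

  field = FeatureGridField()
  out = field(torch.rand(1024, 3) * 2 - 1)                # query 1024 random points
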
Point Clouds
Mesh

Feature Grids

Plenoxels
Plenoctrees
Hash (Instant-NGP)
Vector Quantization

[https://nv-tlabs.github.io/vqad/ Variable Bitrate Neural Fields (VQAD)]

Factorized Feature Grids
  • TensoRF

Generalization

Generalization mainly focuses on learning a prior over a distribution of scenes or shapes, similar to what existing image generation networks do.
This enables tasks such as novel view synthesis from a single image, shape completion, or inpainting; a sketch of latent-code conditioning appears after the list below.

CNN
  • pixelNeRF
Latent Codes
Hyper Networks
  • Light Field Networks
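
A minimal sketch of the latent-code conditioning idea mentioned above (PyTorch, auto-decoder style; the embedding table, sizes, and SDF output are illustrative assumptions rather than any specific method):

  # Conditioning sketch: one shared MLP plus a per-object latent code, so the
  # network learns a prior over a family of fields rather than a single scene.
  import torch
  import torch.nn as nn

  class ConditionedField(nn.Module):
      def __init__(self, num_objects, latent_dim=64, hidden=128):
          super().__init__()
          self.codes = nn.Embedding(num_objects, latent_dim)   # learned per-object codes
          self.mlp = nn.Sequential(
              nn.Linear(3 + latent_dim, hidden), nn.ReLU(),
              nn.Linear(hidden, hidden), nn.ReLU(),
              nn.Linear(hidden, 1),                            # e.g. an SDF value
          )

      def forward(self, xyz, obj_id):                          # xyz: (N, 3), obj_id: (1,) long tensor
          z = self.codes(obj_id).expand(xyz.shape[0], -1)      # broadcast the code to every point
          return self.mlp(torch.cat([xyz, z], dim=-1))

  # Shape completion / inversion: freeze the MLP and optimize only a new code
  # against whatever partial observations are available.
  field = ConditionedField(num_objects=100)
  sdf = field(torch.rand(512, 3), torch.tensor([3]))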

Applications

3D Generation

  • EG3D - Adapting StyleGAN2, NeRF, and a super-resolution network for generating 3D scenes
  • Dream Fields - CLIP-guided NeRF generation
  • DreamFusion - Adapting text-to-image diffusion models to generate NeRFs


Resources

* [https://www.youtube.com/watch?v=PeRRp1cFuH4 CVPR 2022 Tutorial on Neural Fields in Computer Vision]
* [https://arxiv.org/abs/2004.03805 State of the Art on Neural Rendering (Tewari et al., 2020)]
* [https://arxiv.org/abs/2111.05849 Advances in Neural Rendering (Tewari et al., 2021)]