Image-based rendering
Image-based rendering focuses on rendering scenes from existing captured or rasterized images, typically from a new viewpoint.
Recent research also enables adding new objects, relighting, and other AR effects.
==Implicit Representations==
===Light Fields===
Light fields aim to capture the radiance of the light rays within the scene.
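A light field is commonly stored under the two-plane (u, v, s, t) parameterization. The sketch below is a minimal illustration of looking up a ray's radiance in such a discretized light field; the array shape, the <code>sample_ray</code> helper, and the nearest-neighbour lookup are illustrative assumptions rather than any particular system.
<syntaxhighlight lang="python">
import numpy as np

# Hypothetical discretized light field: radiance indexed by (u, v) on the
# camera plane and (s, t) on the image/focal plane, with 3 color channels.
light_field = np.random.rand(8, 8, 64, 64, 3)

def sample_ray(lf, u, v, s, t):
    """Return the radiance of the ray through continuous coordinates
    (u, v, s, t) using nearest-neighbour lookup; quadrilinear interpolation
    over all four axes would be the usual refinement."""
    U, V, S, T, _ = lf.shape
    ui = int(round(float(np.clip(u, 0, U - 1))))
    vi = int(round(float(np.clip(v, 0, V - 1))))
    si = int(round(float(np.clip(s, 0, S - 1))))
    ti = int(round(float(np.clip(t, 0, T - 1))))
    return lf[ui, vi, si, ti]

# A novel view is rendered by shooting one ray per output pixel and
# looking up (or interpolating) its radiance in the captured light field.
color = sample_ray(light_field, 3.2, 4.7, 10.1, 20.6)
</syntaxhighlight>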
===NeRF===
{{ main | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis }}
NeRF preprocesses unstructured light fields into a neural network (MLP) representation which predicts/interpolates unknown light rays based on the known light rays in the scene.
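A heavily simplified sketch, not the authors' implementation: an MLP maps a 3D position and viewing direction to a density and color, and a volume-rendering quadrature composites samples along each camera ray into a pixel color. The class and function names, layer sizes, and uniform ray sampling below are illustrative assumptions; the real model adds positional encoding, skip connections, and hierarchical sampling.
<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """Toy stand-in for the NeRF MLP: maps a 3D position and viewing
    direction to a volume density and a view-dependent RGB color."""
    def __init__(self, hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.sigma_head = nn.Linear(hidden, 1)    # density depends on position only
        self.rgb_head = nn.Linear(hidden + 3, 3)  # color also depends on view direction

    def forward(self, xyz, view_dir):
        h = self.trunk(xyz)
        sigma = torch.relu(self.sigma_head(h))
        rgb = torch.sigmoid(self.rgb_head(torch.cat([h, view_dir], dim=-1)))
        return sigma, rgb

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Integrate color along one ray with the volume-rendering quadrature
    NeRF uses, here with plain uniform sampling between near and far."""
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction                 # (n_samples, 3)
    dirs = direction.expand(n_samples, 3)
    sigma, rgb = model(pts, dirs)
    delta = t[1] - t[0]
    alpha = 1.0 - torch.exp(-sigma.squeeze(-1) * delta)   # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                      # accumulated transmittance
    weights = alpha * trans                                # compositing weights
    return (weights[:, None] * rgb).sum(dim=0)             # final pixel color

model = TinyNeRF()
pixel = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
</syntaxhighlight>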
;Resources
* [https://github.com/yenchenlin/awesome-NeRF yenchenlin/awesome-NeRF]
* [https://dellaert.github.io/NeRF/ NeRF explosion 2020]
==Layered Representations==
Notable researchers in this area include Noah Snavely and Richard Tucker.
Representations here vary from implicit (MPI, MSI) to explicit (LDI, Point Clouds).
===Multi-plane Image (MPI)===
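An MPI represents the scene as a stack of fronto-parallel RGBA planes at fixed depths; a novel view is rendered by warping each plane into the target camera and alpha-compositing the planes back to front with the over operator. A minimal sketch of the compositing step, with made-up plane count and resolution:
<syntaxhighlight lang="python">
import numpy as np

def composite_mpi(rgba_planes):
    """Composite an MPI (a list of fronto-parallel RGBA planes ordered
    back-to-front) into one image with the standard 'over' operator.
    Rendering a novel view would additionally warp each plane into the
    target camera with a per-plane homography before this step."""
    out = np.zeros(rgba_planes[0].shape[:2] + (3,))
    for plane in rgba_planes:                # back to front
        rgb, alpha = plane[..., :3], plane[..., 3:4]
        out = rgb * alpha + out * (1.0 - alpha)
    return out

# Hypothetical 32-plane MPI at 256x256 resolution.
mpi = [np.random.rand(256, 256, 4) for _ in range(32)]
image = composite_mpi(mpi)
</syntaxhighlight>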
===Layered Depth Image (LDI)===
* One-shot 3D Photography
* Casual 3D Photography
===Multi-sphere Image (MSI)===
* MatryODShka (ECCV 2020) - renders 6-DoF video from omnidirectional stereo (ODS) video.
===Point Clouds===
==Classical Reconstruction==
Reconstruction aims to recreate the 3D scene from a set of input images.
Techniques include structure from motion (SfM) and multi-view stereo (MVS).
This is also known as photogrammetry.
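As a small illustration of the geometry these pipelines rely on, the sketch below triangulates a single 3D point from two calibrated views using the standard linear (DLT) method; the camera matrices and point coordinates are made-up example values, and a full structure-from-motion system would first estimate the camera poses and then refine everything with bundle adjustment.
<syntaxhighlight lang="python">
import numpy as np

def triangulate_point(P1, P2, x1, x2):
    """Linear (DLT) triangulation: recover the 3D point observed at pixel x1
    by a camera with 3x4 projection matrix P1 and at pixel x2 by a camera
    with projection matrix P2."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                       # dehomogenize

# Made-up example: two cameras observing the same 3D point.
K = np.eye(3)                                                   # intrinsics
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])               # camera at the origin
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0], [0]])])   # centre shifted along x
X_true = np.array([0.2, 0.1, 4.0, 1.0])                         # homogeneous 3D point
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate_point(P1, P2, x1, x2))       # approximately [0.2, 0.1, 4.0]
</syntaxhighlight>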