Image-based rendering


Image-based rendering focuses on rendering scenes from existing captured or rasterized images, typically from a new viewpoint.
Recent research also enables adding new objects, relighting the scene, and other AR effects.

==Implicit Representations==

===Light Fields===
{{main|Light field}}

Light fields aim to capture the radiance along the light rays within the scene.
They are traditionally stored as a grid of images or videos.
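
The sketch below (not from any particular library) shows one way such a grid might be queried under the classic two-plane parameterization: a ray is identified by where it crosses the camera plane (u, v) and the image plane (s, t), and its radiance is looked up from the stored views. The array layout and nearest-neighbor lookup are illustrative simplifications.

<syntaxhighlight lang="python">
import numpy as np

def sample_light_field(images, u, v, s, t):
    """Query a light field stored as a (U, V) grid of (H, W, 3) images.

    (u, v) selects a camera on the camera plane (which image in the grid) and
    (s, t) selects a pixel on the image plane, all normalized to [0, 1].
    Nearest-neighbor lookup for simplicity; real renderers interpolate over
    the four nearest cameras and pixels (quadrilinear interpolation).
    """
    U, V, H, W, _ = images.shape
    ui = int(round(u * (U - 1)))
    vi = int(round(v * (V - 1)))
    si = int(round(s * (H - 1)))
    ti = int(round(t * (W - 1)))
    return images[ui, vi, si, ti]  # RGB radiance along the ray (u, v, s, t)

# Example: a synthetic 8x8 grid of 64x64 RGB views.
lf = np.random.rand(8, 8, 64, 64, 3)
print(sample_light_field(lf, 0.5, 0.5, 0.25, 0.75))
</syntaxhighlight>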

===Light Field Networks===

This is an implicit representation similar to NeRF.
However, the network directly predicts a color for each light ray instead of performing volume rendering, so a pixel can be rendered with a single network evaluation.
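
A minimal sketch of this idea, assuming rays are encoded as 6D Plücker coordinates (as in Sitzmann et al.'s Light Field Networks); the layer widths and activations here are arbitrary choices, not the paper's exact architecture.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class LightFieldNetwork(nn.Module):
    """Map a ray directly to an RGB color, with no volume rendering.

    Rays are encoded as 6D Plucker coordinates (direction, origin x direction),
    which give a unique code per oriented line in space.
    """
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3),
        )

    def forward(self, origins, directions):
        d = directions / directions.norm(dim=-1, keepdim=True)
        moment = torch.cross(origins, d, dim=-1)      # Plucker moment
        plucker = torch.cat([d, moment], dim=-1)      # (N, 6) ray code
        return torch.sigmoid(self.mlp(plucker))       # (N, 3) RGB in [0, 1]

# One forward pass per ray: a single evaluation renders a pixel.
rays_o = torch.zeros(4, 3)
rays_d = torch.randn(4, 3)
print(LightFieldNetwork()(rays_o, rays_d).shape)  # torch.Size([4, 3])
</syntaxhighlight>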

===NeRF===

NeRF encodes an unstructured light field into a neural network (MLP) which predicts density and radiance at sampled 3D points; pixel colors are then obtained by volume rendering along each camera ray.
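
A minimal sketch of the volume-rendering step, assuming per-sample densities and colors have already been produced by the MLP; hierarchical sampling and view-direction inputs are omitted.

<syntaxhighlight lang="python">
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample (sigma, rgb) along a ray into one pixel color.

    densities: (N,) volume density sigma at each sample
    colors:    (N, 3) RGB predicted at each sample
    deltas:    (N,) distance between consecutive samples
    """
    alpha = 1.0 - np.exp(-densities * deltas)          # opacity of each segment
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    weights = trans * alpha                            # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)     # expected ray color

# Toy example: 64 uniform samples along a ray with random MLP outputs.
n = 64
print(render_ray(np.random.rand(n), np.random.rand(n, 3), np.full(n, 0.05)))
</syntaxhighlight>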

====Resources====

==Layered Representations==

Notable researchers in this area include Noah Snavely and Richard Tucker.
Representations vary from implicit (MPI, MSI) to explicit (LDI, point clouds).

===Multi-plane Image (MPI)===

A stack of fronto-parallel planes at fixed depths (perpendicular to the reference camera's viewing axis), each carrying color and transparency, which are composited together from back to front.
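
A minimal sketch of the compositing step, assuming the planes have already been warped into the target view (the per-plane homography is omitted).

<syntaxhighlight lang="python">
import numpy as np

def composite_mpi(layers):
    """Composite MPI layers into an RGB image.

    layers: (D, H, W, 4) RGBA planes, index 0 being the closest plane.
    Uses standard back-to-front "over" compositing.
    """
    out = np.zeros(layers.shape[1:3] + (3,))
    for rgba in layers[::-1]:                  # iterate from farthest to nearest
        rgb, a = rgba[..., :3], rgba[..., 3:4]
        out = rgb * a + out * (1.0 - a)        # "over" operator
    return out

# Toy example: 32 planes of 64x64 RGBA.
print(composite_mpi(np.random.rand(32, 64, 64, 4)).shape)  # (64, 64, 3)
</syntaxhighlight>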

===Layered Depth Image (LDI)===

Multiple meshes, each with some transparency. Unlike an MPI, these layers are not restricted to planes, though they still may not correspond directly to individual scene objects.
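
A minimal container for this idea is sketched below; the field names are illustrative, not taken from any particular paper or library.

<syntaxhighlight lang="python">
from dataclasses import dataclass
import numpy as np

@dataclass
class Layer:
    """One semi-transparent layer: a mesh plus an RGBA texture."""
    vertices: np.ndarray   # (V, 3) positions
    faces: np.ndarray      # (F, 3) vertex indices
    uvs: np.ndarray        # (V, 2) texture coordinates
    rgba: np.ndarray       # (H, W, 4) color + transparency

# A layered scene is an ordered list of such layers, rendered back to
# front with alpha blending (as in the MPI compositing sketch above).
scene: list[Layer] = []
</syntaxhighlight>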

===Multi-sphere Image (MSI)===

Similar to an MPI but using concentric spheres instead of planes, which suits 360° or panoramic content.

===Point Clouds===

==Classical Reconstruction==

Reconstruction aims to recreate the 3D scene from a set of input images.
Techniques include structure from motion (SfM) and multi-view stereo (MVS).
This type of reconstruction is also studied in the field of photogrammetry.
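
Both structure from motion and multi-view stereo ultimately rely on triangulation: recovering a 3D point from its projections in two or more calibrated views. Below is a minimal sketch of linear (DLT) triangulation, assuming the camera projection matrices are already known.

<syntaxhighlight lang="python">
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: (3, 4) camera projection matrices
    x1, x2: (2,) pixel coordinates of the same point in each image
    Returns the 3D point minimizing the algebraic error A @ X_h = 0.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    Xh = Vt[-1]
    return Xh[:3] / Xh[3]    # dehomogenize

# Toy example: two cameras observing the point (0, 0, 5).
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])                    # camera at origin
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])    # shifted 1 unit in x
X = np.array([0.0, 0.0, 5.0, 1.0])
x1 = (P1 @ X)[:2] / (P1 @ X)[2]
x2 = (P2 @ X)[:2] / (P2 @ X)[2]
print(triangulate(P1, P2, x1, x2))  # approximately [0, 0, 5]
</syntaxhighlight>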