Light field


A small survey of light fields. Light fields are also known as integral images of 3D scenes.
Light fields were originally introduced by Gortler et al.[1] and Levoy et al.[2]

Introduction

A light field describes all light in a scene.
Given a position \(\displaystyle (x,y,z)\) in 3D and angles \(\displaystyle (\theta, \phi)\) describing a ray direction, the 5D plenoptic function \(\displaystyle L(x,y,z,\theta,\phi)\) gives the radiance at that point along that ray. The radiance is represented as an RGB value in \(\displaystyle \mathbb{R}^3\).

For many scenes, we can assume the air is transparent so that the radiance is constant along a ray.
In these situations, the light field reduces to a 4D function defined only on rays in some enclosed scene.
You can create a 4D parameterization using a two-plane parameterization, typically written \(\displaystyle (s,t,u,v)\), or a (plane, angle) parameterization to define each ray.
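
As a concrete illustration of the two-plane parameterization, the sketch below intersects a ray with two parallel planes and reads \(\displaystyle (s,t,u,v)\) off the two intersection points. The plane depths z_st and z_uv and the function name are assumptions for illustration; in general the two planes need not be axis-aligned.

```python
import numpy as np

def ray_to_two_plane(origin, direction, z_st=0.0, z_uv=1.0):
    """Map a ray to (s, t, u, v) by intersecting the planes z = z_st and z = z_uv.

    The (x, y) coordinates of the two intersection points uniquely identify
    the ray, assuming the ray is not parallel to the planes.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    a_st = (z_st - origin[2]) / direction[2]   # ray parameter at the ST plane
    a_uv = (z_uv - origin[2]) / direction[2]   # ray parameter at the UV plane
    s, t = (origin + a_st * direction)[:2]
    u, v = (origin + a_uv * direction)[:2]
    return s, t, u, v

# Example: a ray starting behind the ST plane, pointing straight forward.
print(ray_to_two_plane(origin=[0.2, 0.1, -1.0], direction=[0.0, 0.0, 1.0]))
```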

Representations

Light fields can be represented as a set of images along a 2D grid. Each image is captured from a slightly different viewpoint but faces the same object, using a shear projection or a rotation.
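
Stored this way, the light field is simply a 5D array indexed by camera (angular) position and pixel (spatial) position. The sketch below shows one possible in-memory layout; the 5x5 camera grid and 512x512 images are assumptions matching the glossary examples, not a required format.

```python
import numpy as np

# Hypothetical layout: angular resolution (S, T) = (5, 5), spatial resolution
# (U, V) = (512, 512), RGB stored in the last axis.
S, T, U, V = 5, 5, 512, 512
lightfield = np.zeros((S, T, U, V, 3), dtype=np.float32)

# A sub-aperture image fixes one viewpoint (s, t):
sub_aperture = lightfield[2, 3]        # shape (512, 512, 3)

# An epipolar plane image (EPI) fixes one angular and one spatial coordinate:
epi = lightfield[2, :, 256, :, :]      # shape (5, 512, 3)
```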

The easiest way to collect light field data is to simulate it from within a virtual environment using software.

Alternative ways to represent light fields are as a radiance field (see NeRF) or as a light field network.
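
A light field network replaces the discrete grid of images with a neural network that maps a ray parameterization directly to radiance. The sketch below only illustrates that idea, assuming two-plane \(\displaystyle (s,t,u,v)\) inputs and a plain ReLU MLP; published light field networks use different ray parameterizations and architectures.

```python
import torch
import torch.nn as nn

class LightFieldMLP(nn.Module):
    """Illustrative sketch: map a two-plane ray (s, t, u, v) to an RGB radiance."""

    def __init__(self, hidden=256, depth=6):
        super().__init__()
        layers, in_dim = [], 4
        for _ in range(depth):
            layers += [nn.Linear(in_dim, hidden), nn.ReLU()]
            in_dim = hidden
        layers += [nn.Linear(in_dim, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.mlp = nn.Sequential(*layers)

    def forward(self, rays):
        # rays: (N, 4) tensor of (s, t, u, v) coordinates.
        return self.mlp(rays)

model = LightFieldMLP()
print(model(torch.rand(8, 4)).shape)  # torch.Size([8, 3])
```

Unlike a radiance field, which must be integrated along each ray, this maps a ray to a color in a single forward pass.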

Aperture

Pinhole Rendering

How to render with a virtual pinhole camera (a minimal sketch follows the list below):

  1. Define a virtual camera position.
  2. Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
  3. For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
  4. For each pixel, sample from the single real camera closest to the target camera position to get the RGB value.
  5. You can also do bilinear interpolation along the ST plane and along the XY plane, resulting in quadrilinear interpolation for a softer image.
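
Below is a minimal sketch of this procedure under simplifying assumptions not in the steps above: the captured images are rectified to the focus plane (so a focus-plane point maps to the same pixel in every camera), the ST plane sits at z = z_st and the focus plane at z = z_focus, and the virtual pixels are parameterized directly by points on the focus plane.

```python
import numpy as np

def render_pinhole(lf, s_coords, t_coords, cam_pos, z_st=0.0, z_focus=1.0,
                   xy_range=(-1.0, 1.0), out_res=(256, 256)):
    """Nearest-camera (pinhole) rendering from a light field.

    lf       : (S, T, H, W, 3) images, assumed rectified to the focus plane.
    s_coords : (S,) s positions of the real cameras on the ST plane.
    t_coords : (T,) t positions of the real cameras on the ST plane.
    cam_pos  : (3,) virtual camera position.
    """
    S, T, H, W, _ = lf.shape
    cam_pos = np.asarray(cam_pos, dtype=float)
    out_h, out_w = out_res
    lo, hi = xy_range
    out = np.zeros((out_h, out_w, 3), dtype=lf.dtype)
    for i in range(out_h):
        for j in range(out_w):
            # Focus-plane (XY) point that this virtual pixel looks at.
            x = lo + (hi - lo) * (j + 0.5) / out_w
            y = lo + (hi - lo) * (i + 0.5) / out_h
            # Intersect the ray from the virtual camera through (x, y, z_focus)
            # with the ST plane to get the target camera position (step 3).
            d = np.array([x, y, z_focus]) - cam_pos
            a = (z_st - cam_pos[2]) / d[2]
            s_target, t_target = (cam_pos + a * d)[:2]
            # Sample from the single closest real camera (step 4).
            si = int(np.argmin(np.abs(s_coords - s_target)))
            ti = int(np.argmin(np.abs(t_coords - t_target)))
            # Rectified images: the focus-plane point maps to the same pixel.
            row = int(np.clip(round((y - lo) / (hi - lo) * (H - 1)), 0, H - 1))
            col = int(np.clip(round((x - lo) / (hi - lo) * (W - 1)), 0, W - 1))
            out[i, j] = lf[si, ti, row, col]
    return out
```

Replacing the nearest-camera and nearest-pixel lookups with bilinear interpolation on each plane gives the quadrilinear variant in step 5.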

Variable Aperture Rendering

See Isaksen et al.[3] and Implementing a Light Field Renderer. A minimal per-pixel sketch follows the list below.

  1. Define a virtual camera position.
  2. Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
  3. For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
  4. Project each pixel's target world position back to each real camera to sample an RGB value.
  5. For each pixel, add the sampled colors from each camera with weights according to the distance along the camera plane. For example, you can pass the Euclidean distance through the normal PDF to get a weight. Normalize the weights so that they sum to 1.
  6. The focus is defined by the depth of the focus plane and the aperture is controlled by the weights. For the aperture, you can add a multiplicative factor to the distance before pushing it into the normal PDF.
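
A minimal per-pixel sketch of the weighting in steps 5 and 6 is below, under the same rectified-image assumption as the pinhole sketch above (so the reprojection in step 4 reduces to reading the same pixel from every camera); the function name and the aperture parameter are illustrative assumptions.

```python
import numpy as np

def sample_with_aperture(lf, s_coords, t_coords, s_target, t_target,
                         row, col, aperture=1.0):
    """Blend one pixel over all real cameras with normal-PDF weights.

    lf         : (S, T, H, W, 3) images rectified to the focus plane.
    (row, col) : pixel corresponding to the target world point on the focus plane.
    aperture   : standard deviation of the Gaussian; larger values act like a
                 wider synthetic aperture.
    """
    ss, tt = np.meshgrid(s_coords, t_coords, indexing="ij")   # (S, T) grids
    # Euclidean distance from each real camera to the target ST position.
    dist = np.sqrt((ss - s_target) ** 2 + (tt - t_target) ** 2)
    w = np.exp(-0.5 * (dist / aperture) ** 2)  # normal-PDF-shaped weights
    w /= w.sum()                               # normalize so weights sum to 1
    samples = lf[:, :, row, col, :]            # (S, T, 3): one sample per camera
    return np.tensordot(w, samples, axes=([0, 1], [0, 1]))  # blended RGB
```

Increasing aperture widens the Gaussian, pulling in more cameras and blurring everything away from the focus plane, which mimics opening up a physical aperture.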

Focus

See http://www.plenoptic.info/pages/refocusing.html

Glossary

  • Spatial resolution - resolution of the image plane in the two-plane parameterization (i.e. the resolution of a sub-aperture image, e.g. 512x512).
  • Angular resolution - resolution of the angular plane in the two-plane parameterization (e.g. 5x5 if you have 25 cameras).
  • Sub-aperture image - an individual image from a single viewpoint, i.e. from a fraction of the aperture.
  • Epipolar plane image (EPI) - an image whose y-axis is the u-axis of the angular plane and whose x-axis is the x-axis of the spatial plane; useful for visualizing disparity.
  • Microlens array - a set of lenses behind the main lens in light field cameras.

Resources

References


  1. Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen (1996). The Lumigraph. SIGGRAPH 1996. PDF
  2. Marc Levoy, Pat Hanrahan (1996). Light Field Rendering. SIGGRAPH 1996. PDF
  3. Aaron Isaksen, Leonard McMillan, Steven J. Gortler (2000). Dynamically Reparameterized Light Fields. SIGGRAPH 2000. PDF