Light field


A short survey of light fields. Light fields, also known as integral images of 3D scenes, were first introduced by Gortler et al.[1] and Levoy and Hanrahan.[2]


Introduction

A light field describes all light in a scene.
Given a position \(\displaystyle (x,y,z)\) in 3D and angles \(\displaystyle (\theta, \phi)\) describing a ray direction, the 5D plenoptic function \(\displaystyle L(x,y,z,\theta,\phi)\) gives the radiance along that ray. The radiance is represented as an RGB value in \(\displaystyle \mathbb{R}^3\).

For many scenes, we can assume the air is transparent so that radiance is constant along each ray.
In these situations, the light field reduces to a 4D function defined only on rays in some enclosed scene.
Each ray can be parameterized using either a two-plane parameterization, typically written \(\displaystyle (s,t,u,v)\), or a (plane, angle) parameterization.
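As an illustration, the following minimal Python sketch maps a two-plane sample to a ray, assuming (these conventions are not from the article) axis-aligned ST and UV planes at fixed depths:

  import numpy as np

  def ray_from_two_plane(s, t, u, v, st_z=0.0, uv_z=1.0):
      """Map a two-plane sample (s, t, u, v) to a ray (origin, direction).

      Assumes the ST plane sits at z = st_z and the UV plane at z = uv_z,
      both axis-aligned; the ray passes through (s, t) and (u, v).
      """
      origin = np.array([s, t, st_z], dtype=float)
      through = np.array([u, v, uv_z], dtype=float)
      direction = through - origin
      return origin, direction / np.linalg.norm(direction)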

Representations

A light field can be represented as a set of images captured along a 2D grid. Each image is taken from a slightly different viewpoint but faces the same object, using either a sheared (off-axis) projection or a rotation.
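Concretely, a common in-memory layout (an assumption here, not prescribed by the article) is a 5D array indexed by grid position and then pixel position; the rendering sketches below reuse this layout:

  import numpy as np

  # Hypothetical light field: an 8x8 grid of 256x256 RGB source images.
  # Axes: (s, t, row, col, channel): grid indices first, then pixels.
  S, T, H, W = 8, 8, 256, 256
  light_field = np.zeros((S, T, H, W, 3), dtype=np.float32)

  # The radiance along the ray through grid camera (s, t) = (3, 4) and
  # image pixel (row, col) = (128, 128) is then a direct lookup:
  rgb = light_field[3, 4, 128, 128]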

The easiest way to collect light field data is to simulate it from within a virtual environment using software.

Aperture

Pinhole Rendering

How to render with a virtual pinhole camera (a code sketch follows the list):

  1. Define a virtual camera position.
  2. Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
  3. For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
  4. For each pixel, sample from the single source camera closest to the target camera position to get the RGB value.
  5. You can also do bilinear interpolation along the ST plane and along the XY plane for a smoother/softer image.
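A minimal sketch of these steps under assumed conventions that are not from the article: the light field uses the (S, T, H, W, 3) layout above, source cameras span the unit square of the ST plane at z = st_z, each source image is a rectified view of the unit square of the XY plane at z = xy_z, and one ray is cast per output pixel.

  import numpy as np

  def render_pinhole(lf, eye, st_z=0.0, xy_z=1.0, res=128):
      """Pinhole light field rendering with nearest-camera sampling."""
      S, T, H, W, _ = lf.shape
      out = np.zeros((res, res, 3), dtype=lf.dtype)
      for row in range(res):
          for col in range(res):
              # Step 3: this pixel's target world position on the XY
              # focus plane (the virtual camera sees its unit square).
              x, y = (col + 0.5) / res, (row + 0.5) / res
              # Intersect the ray eye -> (x, y, xy_z) with the ST plane
              # to get the target camera position (s, t).
              d = np.array([x, y, xy_z]) - eye
              a = (st_z - eye[2]) / d[2]
              s, t = eye[0] + a * d[0], eye[1] + a * d[1]
              # Step 4: sample only the closest source camera; step 5
              # would bilinearly interpolate in ST and XY instead.
              si = int(np.clip(np.rint(s * (S - 1)), 0, S - 1))
              ti = int(np.clip(np.rint(t * (T - 1)), 0, T - 1))
              vi = int(np.clip(np.rint(y * (H - 1)), 0, H - 1))
              ui = int(np.clip(np.rint(x * (W - 1)), 0, W - 1))
              out[row, col] = lf[si, ti, vi, ui]
      return out

For example, render_pinhole(light_field, eye=np.array([0.5, 0.5, -1.0])) renders the scene from one unit behind the camera plane.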

Variable Aperture Rendering

See Isaksen et al.[3] and Implementing a Light Field Renderer. A code sketch follows the list below.

  1. Define a virtual camera position.
  2. Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
  3. For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
  4. Project each pixel's target world position back to each real camera to sample an RGB value.
  5. For each pixel, blend the sampled colors from all cameras with weights based on each camera's distance along the camera plane. For example, pass the Euclidean distance through a normal (Gaussian) PDF to get a weight, then normalize the weights so they sum to 1.
  6. The focus is set by the depth of the focus plane, and the aperture is controlled by the weights: for a wider or narrower aperture, scale the distances by a multiplicative factor before evaluating the normal PDF.
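A sketch of the variable-aperture version under the same assumed layout. Following the dynamic reparameterization of Isaksen et al.[3], the UV plane where the source images are parameterized (z = uv_z) is decoupled from the focus plane (z = f_z), and sigma acts as the aperture; the multiplicative factor from step 6 corresponds to scaling sigma.

  import numpy as np

  def render_aperture(lf, eye, sigma=0.05, st_z=0.0, uv_z=1.0,
                      f_z=1.5, res=128):
      """Variable-aperture rendering: Gaussian-weighted camera blend."""
      S, T, H, W, _ = lf.shape
      cam_s, cam_t = np.meshgrid(np.linspace(0, 1, S),
                                 np.linspace(0, 1, T), indexing="ij")
      I, J = np.arange(S)[:, None], np.arange(T)[None, :]
      out = np.zeros((res, res, 3))
      for row in range(res):
          for col in range(res):
              # Steps 2-3: target world position on the focus plane.
              px, py = (col + 0.5) / res, (row + 0.5) / res
              # Step 4: re-project (px, py, f_z) into every source
              # camera, i.e. intersect each camera-to-point ray with
              # the UV plane where the source images are parameterized.
              frac = (uv_z - st_z) / (f_z - st_z)
              u = cam_s + frac * (px - cam_s)
              v = cam_t + frac * (py - cam_t)
              ui = np.clip(np.rint(u * (W - 1)), 0, W - 1).astype(int)
              vi = np.clip(np.rint(v * (H - 1)), 0, H - 1).astype(int)
              colors = lf[I, J, vi, ui]                  # (S, T, 3)
              # Step 5: Gaussian weights from each camera's distance
              # to the central ray's ST-plane crossing; normalized.
              a = (st_z - eye[2]) / (f_z - eye[2])
              s = eye[0] + a * (px - eye[0])
              t = eye[1] + a * (py - eye[1])
              w = np.exp(-((cam_s - s)**2 + (cam_t - t)**2)
                         / (2 * sigma**2))
              w /= w.sum()
              out[row, col] = (w[..., None] * colors).sum(axis=(0, 1))
      return out

Shrinking sigma toward zero recovers the pinhole renderer, while moving f_z changes which depth is in focus (step 6).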

Focus

Resources

References

  1. Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen (1996). The Lumigraph. SIGGRAPH 1996. PDF
  2. Marc Levoy, Pat Hanrahan (1996). Light Field Rendering. SIGGRAPH 1996. PDF
  3. Aaron Isaksen, Leonard McMillan, Steven J. Gortler (2000). Dynamically Reparameterized Light Fields. SIGGRAPH 2000. PDF