Light field
A short survey of light fields. Light fields are also known as integral images of 3D scenes.
Light fields were originally introduced by Gortler et al.[1] and Levoy et al.[2]
Introduction
A light field describes all light in a scene.
Given a position \(\displaystyle (x,y,z)\) in 3D and angles \(\displaystyle (\theta, \phi)\) describing a ray direction, the 5D plenoptic function \(\displaystyle L(x,y,z,\theta,\phi)\) gives the radiance along that ray. The radiance is typically represented as an RGB value in \(\displaystyle \mathbb{R}^3\).
For many scenes, we can assume the air is transparent, so that the radiance is constant along a ray.
In these situations, the light field can be reduced to a 4D function defined only on the rays of an enclosed scene.
A 4D parameterization can be constructed with a two-plane parameterization, typically written as \(\displaystyle (s,t,u,v)\), or with a (plane, angle) parameterization to define each ray.
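As an illustration, the sketch below converts a ray given by an origin and direction into two-plane coordinates by intersecting it with two parallel planes. The plane depths (z = 0 and z = 1) and the function name are arbitrary choices for this example, not taken from the references.

```python
import numpy as np

def ray_to_two_plane(origin, direction, z_uv=0.0, z_st=1.0):
    """Intersect a ray with two parallel planes z = z_uv and z = z_st.

    Returns (u, v, s, t): the (x, y) hit points on the camera (uv) plane
    and on the image (st) plane.  The plane depths are illustrative choices.
    """
    origin = np.asarray(origin, dtype=float)
    direction = np.asarray(direction, dtype=float)
    if abs(direction[2]) < 1e-9:
        raise ValueError("ray is parallel to the parameterization planes")

    # Parametric intersection: origin + t * direction hits z = z_plane
    # when t = (z_plane - origin_z) / direction_z.
    t_uv = (z_uv - origin[2]) / direction[2]
    t_st = (z_st - origin[2]) / direction[2]
    u, v = (origin + t_uv * direction)[:2]
    s, t = (origin + t_st * direction)[:2]
    return u, v, s, t

# Example: a ray starting at (0, 0, -1) pointing roughly toward +z.
print(ray_to_two_plane([0.0, 0.0, -1.0], [0.1, 0.0, 1.0]))
```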
Representations
Light fields can be represented as a set of images captured along a 2D grid. Each image is taken from a slightly different viewpoint but faces the same object, using a shear projection or a rotation.
The easiest way to collect light field data is to simulate it from within a virtual environment using software.
Alternative ways to represent light fields are as a radiance field (see NeRF) or as a light field network.
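Stored as a grid of images, a light field is effectively a 4D (plus color) array indexed by camera position and pixel position. A minimal sketch, assuming a 5x5 camera grid of 512x512 RGB images; the axis ordering and names are illustrative:

```python
import numpy as np

# Hypothetical light field: a 5x5 grid of cameras, each capturing a
# 512x512 RGB image -> shape (5, 5, 512, 512, 3).
# Axes: (v_cam, u_cam, t_pixel, s_pixel, channel).
light_field = np.zeros((5, 5, 512, 512, 3), dtype=np.float32)

def sample(lf, u_cam, v_cam, s, t):
    """Return the RGB radiance of the ray identified by camera (u_cam, v_cam)
    and pixel (s, t) in the two-plane parameterization."""
    return lf[v_cam, u_cam, t, s]

# A sub-aperture image is one slice of the grid:
sub_aperture = light_field[2, 2]   # center camera, shape (512, 512, 3)
```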
Parameterizations
- Two-plane parameterization
In the two-plane parameterization, one plane represents the camera positions (u,v), whose sampling determines the angular resolution, and the other plane represents the pixels (s,t), whose sampling determines the spatial resolution.
This is also known as a light slab or lumigraph. For 360° light fields, multiple light slabs are needed to cover different directions. A limitation of this method is that transitions between light slabs may not be smooth.
- Spherical
- Plücker
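For reference, a minimal sketch of Plücker coordinates, which represent a ray by its normalized direction together with its moment about the origin. This is illustrative only; it is not how the cited papers store light fields.

```python
import numpy as np

def plucker(origin, direction):
    """Plücker coordinates of the ray through `origin` with `direction`:
    a 6-vector (d, m) with d the normalized direction and m = origin x d
    the moment.  Any point on the line gives the same (d, m), and the
    coordinates satisfy the constraint d . m = 0."""
    d = np.asarray(direction, dtype=float)
    d = d / np.linalg.norm(d)
    m = np.cross(np.asarray(origin, dtype=float), d)
    return np.concatenate([d, m])

print(plucker([0.0, 0.0, -1.0], [0.1, 0.0, 1.0]))
```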
Aperture
Pinhole Rendering
How to render with a virtual pinhole camera (a minimal sketch follows the steps below):
- Define a virtual camera position.
- Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
- For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
- For each pixel, sample from a single captured camera, i.e. the one closest to the target camera position, to get the RGB value.
- You can also do bilinear interpolation along the ST plane and along the XY plane, resulting in quadrilinear interpolation and a softer image.
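Below is a minimal sketch of this nearest-camera renderer. It makes simplifying assumptions that are not stated above: the captured cameras lie on a regular grid in the plane z = 0, all face +z with a shared focal length and centered principal point, and the light field is stored as the 4D array from the Representations section.

```python
import numpy as np

def render_pinhole(lf, cam_grid_uv, focal, virt_pos, virt_rays, z_focus):
    """Render one RGB value per virtual-camera ray by sampling the single
    captured camera closest to where the ray crosses the camera plane.

    lf          : (Nv, Nu, H, W, 3) light field array
    cam_grid_uv : (Nv, Nu, 2) camera positions on the camera plane z = 0
    focal       : focal length (in pixels) shared by the captured cameras
    virt_pos    : (3,) virtual camera center
    virt_rays   : (N, 3) ray directions, one per virtual pixel
    z_focus     : depth of the focus plane
    """
    Nv, Nu, H, W, _ = lf.shape
    virt_pos = np.asarray(virt_pos, dtype=float)
    out = np.zeros((len(virt_rays), 3), dtype=lf.dtype)
    for i, d in enumerate(virt_rays):
        # Intersect the ray with the camera plane (z = 0) and the focus plane.
        t_cam = (0.0 - virt_pos[2]) / d[2]
        t_foc = (z_focus - virt_pos[2]) / d[2]
        uv = (virt_pos + t_cam * d)[:2]          # target camera position
        xyz = virt_pos + t_foc * d               # target world position

        # Nearest captured camera to the target camera position.
        dist = np.linalg.norm(cam_grid_uv - uv, axis=-1)
        iv, iu = np.unravel_index(np.argmin(dist), dist.shape)
        cam = cam_grid_uv[iv, iu]

        # Project the world point into that camera (all cameras face +z,
        # principal point at the image center).
        s = focal * (xyz[0] - cam[0]) / xyz[2] + W / 2
        t = focal * (xyz[1] - cam[1]) / xyz[2] + H / 2
        si = int(round(np.clip(s, 0, W - 1)))
        ti = int(round(np.clip(t, 0, H - 1)))
        out[i] = lf[iv, iu, ti, si]
    return out
```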
Variable Aperture Rendering
See Isaksen et al.[3] and Implementing a Light Field Renderer. A minimal sketch follows the steps below.
- Define a virtual camera position.
- Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) with predefined depth from the camera.
- For each pixel of the virtual camera, project to the (ST) camera plane and (XY) focus plane to get its target camera position and world position.
- Project each pixel's target world position back to each real camera to sample an RGB value.
- For each pixel, blend the sampled colors from all cameras, weighting each by the distance between that camera and the target camera position along the camera plane. For example, you can pass the Euclidean distance through a normal PDF to get each weight. Normalize the weights so that they sum to 1.
- The focus is defined by the depth of the focus plane, and the aperture is controlled by the weights; for the aperture, you can scale the distance by a multiplicative factor before evaluating the normal PDF.
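A minimal sketch of the weighted blend, reusing the assumptions and array layout from the pinhole sketch above; here the Gaussian width plays the role of the aperture.

```python
import numpy as np

def render_aperture(lf, cam_grid_uv, focal, virt_pos, virt_rays,
                    z_focus, aperture=1.0):
    """Like render_pinhole, but every captured camera contributes, weighted
    by a normal PDF of its distance to the target camera position.
    Smaller `aperture` -> narrower Gaussian -> sharper, pinhole-like image."""
    Nv, Nu, H, W, _ = lf.shape
    virt_pos = np.asarray(virt_pos, dtype=float)
    out = np.zeros((len(virt_rays), 3), dtype=np.float64)
    for i, d in enumerate(virt_rays):
        t_cam = (0.0 - virt_pos[2]) / d[2]
        t_foc = (z_focus - virt_pos[2]) / d[2]
        uv = (virt_pos + t_cam * d)[:2]          # target camera position
        xyz = virt_pos + t_foc * d               # point on the focus plane

        # Gaussian weight for every captured camera, normalized to sum to 1.
        dist = np.linalg.norm(cam_grid_uv - uv, axis=-1)      # (Nv, Nu)
        w = np.exp(-0.5 * (dist / aperture) ** 2)
        w /= w.sum()

        # Project the focus-plane point into every camera and blend.
        color = np.zeros(3)
        for iv in range(Nv):
            for iu in range(Nu):
                cam = cam_grid_uv[iv, iu]
                s = focal * (xyz[0] - cam[0]) / xyz[2] + W / 2
                t = focal * (xyz[1] - cam[1]) / xyz[2] + H / 2
                si = int(round(np.clip(s, 0, W - 1)))
                ti = int(round(np.clip(t, 0, H - 1)))
                color += w[iv, iu] * lf[iv, iu, ti, si]
        out[i] = color
    return out
```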
Focus
See http://www.plenoptic.info/pages/refocusing.html
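In the two-plane parameterization, refocusing amounts to a shift-and-add over the aperture. One standard formulation (from the synthetic refocusing literature; not taken from the linked page) expresses the image focused at a relative depth \(\displaystyle \alpha\) as

\(\displaystyle E_\alpha(s,t) \propto \iint L\!\left(u,\, v,\, u + \frac{s-u}{\alpha},\, v + \frac{t-v}{\alpha}\right) \, du \, dv,\)

i.e. each sub-aperture image is shifted in proportion to its camera position and the results are averaged.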
Glossary
- Spatial resolution - resolution of the image-plane in the two-plane parameterization (i.e. resolution of the sub-aperture image) (e.g. 512x512).
- Angular resolution - resolution in the angular-plane in the two-plane parameterization (e.g. 5x5 if you have 25 cameras).
- Sub-aperture image - an individual image from a single viewpoint, corresponding to a fraction of the full aperture.
- Epipolar plane image (EPI) - an image obtained by fixing one angular and one spatial coordinate, e.g. with the u-axis vertical and the s-axis horizontal; useful for visualizing disparity (see the sketch after this list).
- Microlens array - an array of small lenses placed behind the main lens in light field cameras.
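For example, reusing the illustrative light_field array from the Representations section, fixing the camera row and the pixel row gives an EPI:

```python
# Fix the camera row (v) and the pixel row (t); the remaining (u, s) slice is an EPI.
# Scene points at different depths trace lines with different slopes in it.
epi = light_field[2, :, 256, :, :]   # shape (Nu, W, 3): u vertical, s horizontal
```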
Resources
References
- ↑ Steven J. Gortler, Radek Grzeszczuk, Richard Szeliski, Michael F. Cohen (1996). The Lumigraph (SIGGRAPH 1996) PDF
- ↑ Marc Levoy, Pat Hanrahan (1996). Light Field Rendering (SIGGRAPH 1996) PDF
- ↑ Aaron Isaksen, Leonard McMillan, Steven J. Gortler (2000). Dynamically Reparameterized Light Fields (SIGGRAPH 2000) PDF