Light field

A small survey of light fields. Light fields are also known as integral images of 3D scenes.

==Introduction==
Given a position <math>(x,y,z)</math> and angle <math>(\theta, \phi)</math> describing a ray, the 5D function <math>L(x,y,z,\theta,\phi)</math> gives the radiance along that ray.

For reasonable scenes (free space, with no occluders along the ray), the radiance is constant along a ray, so light fields are effectively 4D functions.
You can define each ray with a 4D parameterization, using either a two-plane parameterization or a (plane, angle) parameterization.
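
As a concrete illustration, here is a minimal sketch of the two-plane parameterization, assuming (for simplicity) that both planes are parallel to the xy-plane at fixed z-depths; the function and parameter names are illustrative, not from the referenced papers:

<syntaxhighlight lang="python">
import numpy as np

def two_plane_ray(s, t, x, y, st_depth=0.0, xy_depth=1.0):
    """Build the unique ray through (s, t) on the camera (ST) plane
    and (x, y) on the focus (XY) plane.

    Assumes (illustratively) that both planes are parallel to the
    xy-plane, at depths st_depth and xy_depth along the z-axis."""
    origin = np.array([s, t, st_depth])
    through = np.array([x, y, xy_depth])
    direction = through - origin
    direction /= np.linalg.norm(direction)  # unit-length ray direction
    return origin, direction

# The 4D coordinates (s, t, x, y) identify exactly one ray, so the
# radiance can be stored as a 4D function L(s, t, x, y) instead of 5D.
origin, direction = two_plane_ray(0.2, -0.1, 0.0, 0.0)
</syntaxhighlight>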

==Representations==

The easiest way to collect light field data is to simulate it from within a virtual environment using software.
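
A minimal sketch of such a capture, assuming a regular grid of cameras on the ST plane; <code>render_view</code> is a hypothetical stand-in for whatever renderer your virtual environment provides:

<syntaxhighlight lang="python">
import numpy as np

def capture_light_field(render_view, grid_n=8, spacing=0.1, st_depth=0.0):
    """Render an N x N grid of views on the camera (ST) plane.

    render_view(position) -> (H, W, 3) image is assumed to be
    provided by the virtual environment (e.g. a ray tracer)."""
    positions, images = [], []
    for i in range(grid_n):
        for j in range(grid_n):
            st = np.array([(i - grid_n // 2) * spacing,
                           (j - grid_n // 2) * spacing])
            positions.append(st)
            images.append(render_view(np.array([st[0], st[1], st_depth])))
    return np.array(positions), images
</syntaxhighlight>
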
==Fixed Aperture Rendering==
# Define a virtual camera position.
# Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) at predefined depths from the camera.
# For each pixel of the virtual camera, project to the (ST) camera plane and the (XY) focus plane to get its target camera position and world position.
# For each pixel, sample from a single source camera, i.e. the one closest to the target camera position, to get the RGB value.
# You can also do bilinear interpolation along the ST plane and along the XY plane for a smoother/softer image (see the sketch after this list).
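
A minimal sketch of steps 1-4 above, assuming the source cameras sit on the ST plane and both planes are perpendicular to the z-axis; <code>sample_rgb</code> is a hypothetical helper that looks up which pixel of a source image sees a given point on the focus plane:

<syntaxhighlight lang="python">
import numpy as np

def render_fixed_aperture(cam_pos, pixel_dirs, st_depth, xy_depth,
                          source_st, source_images, sample_rgb):
    """Nearest-camera light field rendering.

    cam_pos       : (3,) virtual camera position
    pixel_dirs    : (H, W, 3) ray direction per output pixel
    st_depth      : z-depth of the camera (ST) plane
    xy_depth      : z-depth of the focus (XY) plane
    source_st     : (N, 2) source camera positions on the ST plane
    source_images : list of N source images
    sample_rgb(image, xy) -> (3,) RGB seen at focus-plane point xy (assumed)."""
    H, W, _ = pixel_dirs.shape
    out = np.zeros((H, W, 3))
    for i in range(H):
        for j in range(W):
            d = pixel_dirs[i, j]
            # Step 3: intersect the pixel ray with both planes.
            st = cam_pos[:2] + d[:2] * (st_depth - cam_pos[2]) / d[2]
            xy = cam_pos[:2] + d[:2] * (xy_depth - cam_pos[2]) / d[2]
            # Step 4: take the RGB value from the single closest camera.
            k = np.argmin(np.sum((source_st - st) ** 2, axis=1))
            out[i, j] = sample_rgb(source_images[k], xy)
    return out
</syntaxhighlight>

Replacing the argmin with bilinear weights over the four surrounding grid cameras, plus bilinear lookup inside each source image, gives the smoother version of step 5.
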
==Variable Aperture Rendering==
See [http://www.cs.harvard.edu/~sjg/papers/drlf.pdf Dynamically Reparameterized Light Fields (SIGGRAPH 2000)] and [https://www.youtube.com/watch?v=p2w1DNkITI8 Implementing a Light Field Renderer].
# Define a virtual camera position.
# Define a camera plane (ST-plane) and a (not necessarily parallel) focus plane (XY-plane) at predefined depths from the camera.
# For each pixel of the virtual camera, project to the (ST) camera plane and the (XY) focus plane to get its target camera position and world position.
# Project each pixel's target world position back to each source camera to sample an RGB value.
# For each pixel, blend the sampled colors from all cameras with weights based on distance along the camera plane; for example, pass the Euclidean distance through a normal (Gaussian) PDF to get a weight, then normalize the weights so they sum to 1 (see the sketch after this list).
# The focus is defined by the depth of the focus plane, and the aperture is controlled by the weights: you can multiply the distance by a factor before evaluating the normal PDF, where a larger factor gives a smaller effective aperture.
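
A minimal sketch of the weighting in steps 5-6, using a Gaussian falloff over ST-plane distance; the names and the <code>aperture</code> factor are illustrative assumptions, not the papers' exact formulation:

<syntaxhighlight lang="python">
import numpy as np

def blend_weights(source_st, target_st, aperture=1.0):
    """Per-camera blend weights for one pixel.

    source_st : (N, 2) source camera positions on the ST plane
    target_st : (2,)  the pixel's target position on the ST plane
    aperture  : multiplicative factor on distance; a larger value
                narrows the falloff, i.e. a smaller synthetic aperture."""
    dist = np.linalg.norm(source_st - target_st, axis=1)  # Euclidean distance
    w = np.exp(-0.5 * (aperture * dist) ** 2)             # unnormalized normal PDF
    return w / w.sum()                                    # normalize to sum to 1

# Final pixel color: weighted sum of the per-camera samples, e.g.
# color = sum(w[k] * sample_rgb(source_images[k], target_xy) for k in range(N))
</syntaxhighlight>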

==Resources==
* [https://graphics.stanford.edu/papers/light/light-lores-corrected.pdf Light Field Rendering by Marc Levoy and Pat Hanrahan (SIGGRAPH 1996)]
* [http://www.cs.harvard.edu/~sjg/papers/drlf.pdf Dynamically Reparameterized Light Fields (SIGGRAPH 2000)]