Revision as of 15:22, 30 August 2021
Light Field Networks: Neural Scene Representations with Single-Evaluation Rendering
Authors: Vincent Sitzmann, Semon Rezchikov, William T. Freeman, Joshua B. Tenenbaum, Frédo Durand
Affiliations: MIT
Links https://arxiv.org/abs/2106.02634
Method
Background
See NeRF and SIREN.
Light Field Networks
The idea here is to use light field rendering instead of volume rendering or SDF ray marching.
In this case, the input to the network is still a single point and direction, but it represents the entire ray rather than the radiance at a particular point.
Thus, it is not necessary to sample across the entire ray and composite the samples.
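A minimal sketch of what single-evaluation rendering looks like in code: one network call per pixel, with no sampling or compositing along the ray. The `light_field_network` function here is a hypothetical placeholder standing in for the trained MLP, not the paper's actual model.

```python
import numpy as np

# Hypothetical stand-in for a trained light field MLP: maps a 6-D ray
# encoding directly to an RGB color in a single forward pass.
def light_field_network(ray_6d):
    # Placeholder computation; a real model would be a trained network.
    return np.tanh(ray_6d[:3])

def render_pixel(origin, direction):
    """Render one pixel with a single network evaluation.

    Contrast with NeRF-style volume rendering, which evaluates the
    network at many sample points along the ray and composites them.
    """
    d = direction / np.linalg.norm(direction)   # unit ray direction
    ray = np.concatenate([origin, d])           # naive (point, direction) encoding
    return light_field_network(ray)             # one evaluation -> color
```

The naive (point, direction) encoding shown here is what the Plücker parameterization below replaces.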
Plücker coordinates
They use Plücker coordinates to encode rays instead of directly inputting the (point, direction) representation or using a two-plane parameterization.
The benefit is that Plücker coordinates are invariant to the choice of point along the ray and can represent the full 360° set of rays.
\(\displaystyle \mathbf{r} = (\mathbf{d},\mathbf{m}) \in \mathbb{R}^6\) where \(\displaystyle \mathbf{m}=\mathbf{p} \times \mathbf{d}\)
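The invariance claim is easy to verify numerically: shifting the base point along the ray by \(t\mathbf{d}\) leaves the moment unchanged, since \((\mathbf{p} + t\mathbf{d}) \times \mathbf{d} = \mathbf{p} \times \mathbf{d}\). A small sketch (not from the paper's code):

```python
import numpy as np

def plucker(p, d):
    """Plücker coordinates r = (d, m) with moment m = p x d."""
    d = d / np.linalg.norm(d)            # normalize the direction
    return np.concatenate([d, np.cross(p, d)])

p = np.array([1.0, 2.0, 3.0])
d = np.array([0.0, 0.0, 1.0])
r1 = plucker(p, d)
r2 = plucker(p + 5.0 * d, d)             # a different point on the same ray
assert np.allclose(r1, r2)               # moment is invariant to the chosen point
```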