Light Field Duality: Concept and Applications



==CHL Light Field Rendering==
Each camera <math>i</math> is a point in world coordinates which corresponds to a hyperline <math>\mathbf{l}_i=(a_i, b_i, c_i, d_i)</math> in the light field.
If we want to render from a new viewpoint, then for each render pixel we have a target ray, which corresponds to a point in the light field <math>\mathbf{r} = (s_r, t_r, u_r, v_r)</math>.
For each camera/hyperline <math>i</math>, you can find the ray/hyperpoint which minimizes some distance function: 
<math>
\begin{aligned}
d^2 &= \Vert (s_i,t_i,u_i,v_i) - (s_r,t_r,u_r,v_r) \Vert^2_2\\
&= (s_i - s_r)^2 + (t_i - t_r)^2 + (u_i - u_r)^2 + (v_i - v_r)^2
\end{aligned}
</math>. 
There is a closed form solution provided in the paper. 
Then simply blend the per-camera samples, weighting each by inverse distance; a minimal sketch of this procedure is shown below.
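The following sketch illustrates the idea for a single target ray. It assumes (purely for illustration, and not necessarily the paper's formulation) that the hyperline of camera <math>i</math> is the set of rays satisfying <math>u = a_i s + b_i</math> and <math>v = c_i t + d_i</math>; under that assumption the minimization decouples into two 1-D least-squares problems with a simple closed form.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch only: we assume the hyperline of camera i is the set of rays
# (s, t, u, v) with u = a_i*s + b_i and v = c_i*t + d_i. Under this assumption the
# minimization decouples per axis and has a simple closed form; the paper's own
# closed-form solution may be parameterized differently.

def nearest_hyperpoint(line, ray):
    """Point on hyperline (a, b, c, d) closest to target ray (s_r, t_r, u_r, v_r)."""
    a, b, c, d = line
    s_r, t_r, u_r, v_r = ray
    # Minimize (s - s_r)^2 + (a*s + b - u_r)^2 over s (and likewise for t).
    s = (s_r + a * (u_r - b)) / (1.0 + a * a)
    t = (t_r + c * (v_r - d)) / (1.0 + c * c)
    return np.array([s, t, a * s + b, c * t + d])

def render_pixel(lines, samples, ray, eps=1e-8):
    """Blend per-camera samples with inverse-distance weights for one target ray."""
    ray = np.asarray(ray, dtype=float)
    weights = []
    for line in lines:
        p = nearest_hyperpoint(line, ray)
        weights.append(1.0 / (np.linalg.norm(p - ray) + eps))
    w = np.array(weights)
    w /= w.sum()
    # 'samples' holds the color each camera contributes; in a full renderer this
    # would come from looking up camera i's image at the (u, v) of its hyperpoint.
    return w @ np.asarray(samples, dtype=float)
</syntaxhighlight>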
 
===Dynamic focal plane===
You can add a slope loss to the distance function to make sure rays point in the same direction: 
<math>
\begin{aligned}
d^2 &= \Vert (s_i,t_i,u_i,v_i) - (s_r,t_r,u_r,v_r) \Vert^2_2 +\\
&\hspace{5mm}\beta\Vert (s_i-u_i, t_i-v_i) - (s_r-u_r, t_r-v_r) \Vert^2_2
\end{aligned}
</math>.
 
If the virtual camera is not at the same position as the source cameras, then the source and target rays will not be parallel and will intersect at some depth. 
<math>\beta</math> can be used to control the depth at which they intersect, i.e. the focal plane; a sketch of the corresponding closed form follows.
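Under the same assumed parameterization as the sketch above, the slope-augmented distance is still a 1-D least-squares problem per axis, so the minimizer only gains <math>\beta</math>-weighted terms; setting <math>\beta = 0</math> recovers the plain Euclidean case.

<syntaxhighlight lang="python">
import numpy as np

def nearest_hyperpoint_focal(line, ray, beta):
    """Closest point on the (assumed) hyperline u = a*s + b, v = c*t + d under the
    slope-augmented distance; beta = 0 reduces to the plain Euclidean case."""
    a, b, c, d = line
    s_r, t_r, u_r, v_r = ray
    # Minimize (s - s_r)^2 + (a*s + b - u_r)^2 + beta*((s - (a*s + b)) - (s_r - u_r))^2
    # over s; setting the derivative to zero gives the closed form below (likewise for t).
    s = (s_r + a * (u_r - b) + beta * (1 - a) * (b + s_r - u_r)) \
        / (1.0 + a * a + beta * (1 - a) ** 2)
    t = (t_r + c * (v_r - d) + beta * (1 - c) * (d + t_r - v_r)) \
        / (1.0 + c * c + beta * (1 - c) ** 2)
    return np.array([s, t, a * s + b, c * t + d])
</syntaxhighlight>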


==GHL Light Field Rendering==