Monocular Neural Image Based Rendering with Continuous View Control

Authors: Xu Chen, Jie Song, Otmar Hilliges
Affiliations: AIT Lab, ETH Zurich


Method

The main idea is to create a transforming autoencoder.
The goal of the transforming autoencoder is to create a point cloud of latent features from a 2D source image, so that camera transformations can be applied directly in latent space.

  1. Encode the image \(I_s\) into a latent representation \(z = E_{\theta_{e}}(I_s)\).
  2. Rotate and translate the latent representation to get \(z_{T} = T_{s \to t}(z)\).
  3. Decode the transformed latent representation into a depth map \(D_t\) for the target view.
  4. Compute correspondences between the source and target views by projecting the decoded depth map.
    • Uses the camera intrinsics \(K\) and extrinsics \(T_{s\to t}\) to yield a dense backward flow map \(C_{t \to s}\).
  5. Warp the source image with the correspondences to get the target image \(\hat{I}_{t}\) (see the sketch after the mapping equation below).

In total, the mapping is: \[ \begin{equation} M(I_s) = B(P_{t \to s}(D_{\theta_{d}}(T_{s \to t}(E_{\theta_{e}}(I_s)))), I_s) = \hat{I}_{t} \end{equation} \]

where:

  • \(B(F, I)\) is a bilinear warp of image \(I\) using backwards flow \(F\)
  • \(P_{t \to s}(I)\) is the projection of \(I\) from the target view \(t\) to the source view \(s\)
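
To make the mapping concrete, below is a minimal PyTorch sketch of the projection \(P_{t \to s}\) and the bilinear warp \(B\), composed into the full pipeline. The function names and tensor shapes here are my assumptions for illustration, and the encoder, latent transform, and decoder are taken as given callables; this is a sketch of the technique, not the authors' implementation.

<syntaxhighlight lang="python">
# Minimal sketch, assuming: I_s source image (B, 3, H, W),
# D_t target depth (B, 1, H, W), K intrinsics (3, 3), poses (B, 4, 4).
import torch
import torch.nn.functional as F

def project_t_to_s(D_t, K, T_t_to_s):
    """P_{t->s}: project every target pixel into the source view using the
    decoded depth, yielding a dense backward flow map C_{t->s}."""
    B_, _, H, W = D_t.shape
    # Homogeneous pixel grid (u, v, 1) for the target view.
    v, u = torch.meshgrid(
        torch.arange(H, dtype=D_t.dtype, device=D_t.device),
        torch.arange(W, dtype=D_t.dtype, device=D_t.device),
        indexing="ij")
    pix = torch.stack([u, v, torch.ones_like(u)]).reshape(3, -1)   # (3, HW)
    # Back-project to 3D points in the target camera: X_t = D_t * K^{-1} p.
    X_t = (torch.inverse(K) @ pix).unsqueeze(0) * D_t.reshape(B_, 1, -1)
    # Rigid transform into the source frame: X_s = R X_t + t.
    R, t = T_t_to_s[:, :3, :3], T_t_to_s[:, :3, 3:]
    X_s = R @ X_t + t                                              # (B, 3, HW)
    # Perspective projection back to source pixel coordinates.
    p_s = K @ X_s
    p_s = p_s[:, :2] / p_s[:, 2:].clamp(min=1e-6)
    # Normalize to [-1, 1], the coordinate convention grid_sample expects.
    u_s = 2 * p_s[:, 0] / (W - 1) - 1
    v_s = 2 * p_s[:, 1] / (H - 1) - 1
    return torch.stack([u_s, v_s], dim=-1).reshape(B_, H, W, 2)

def bilinear_warp(C_t_to_s, I_s):
    """B(F, I): bilinearly sample the source image at the flow locations."""
    return F.grid_sample(I_s, C_t_to_s, mode="bilinear", align_corners=True)

def render_target(I_s, K, T_s_to_t, encode, transform, decode):
    """M(I_s): the full composition from the equation above; encode,
    transform, and decode stand in for E, T_{s->t}, and D."""
    z_s = encode(I_s)                    # 1. latent 3D points, (B, n, 3)
    z_t = transform(z_s, T_s_to_t)       # 2. rotate/translate latent points
    D_t = decode(z_t)                    # 3. target-view depth map (B, 1, H, W)
    C = project_t_to_s(D_t, K, torch.inverse(T_s_to_t))   # 4. backward flow
    return bilinear_warp(C, I_s)         # 5. warped target image \hat{I}_t
</syntaxhighlight>

Note that \(P_{t \to s}\) uses the inverse pose \(T_{t \to s} = T_{s \to t}^{-1}\): each target pixel is back-projected with the decoded depth, moved into the source frame, and re-projected with \(K\), giving exactly the backward flow that the bilinear warp consumes.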

Transforming Auto-encoder

The latent code \(z_s\) is represented as a set of 3D points: \(z_s \in \mathbb{R}^{n \times 3}\).
This allows it, in homogeneous coordinates \(\tilde{z}_s\), to be multiplied by the transformation matrix \(T_{s \to t} = [R|t]_{s \to t}\): \[ \begin{equation} z_t = [R|t]_{s\to t} \tilde{z}_s \end{equation} \]
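
As a small worked example of this equation, here is a sketch of the latent transform, matching the transform callable assumed in the pipeline sketch above; the name transform_latent and the batched shapes are my assumptions.

<syntaxhighlight lang="python">
import torch

def transform_latent(z_s, T_s_to_t):
    """z_t = [R|t]_{s->t} z~_s: lift the latent 3D points to homogeneous
    coordinates, apply the rigid transform, and keep the 3D part.
    z_s: (B, n, 3) latent points, T_s_to_t: (B, 4, 4) relative pose."""
    z_h = torch.cat([z_s, torch.ones_like(z_s[..., :1])], dim=-1)  # (B, n, 4)
    # Row-vector form of z_t = [R|t] z~_s: multiply by [R|t]^T on the right.
    return z_h @ T_s_to_t[:, :3, :].transpose(1, 2)                # (B, n, 3)
</syntaxhighlight>

Because the transform is a plain matrix product, it is fully differentiable, which is what lets the encoder learn a latent code that behaves like a 3D point cloud.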

Depth Guided Appearance Mapping

Architecture

References