Monocular Neural Image Based Rendering with Continuous View Control
Authors: Xu Chen, Jie Song, Otmar Hilliges
Affiliations: AIT Lab, ETH Zurich
Method
The main idea is a transforming autoencoder: it lifts a single 2D source image into a point cloud of latent features that can be explicitly rotated and translated.
- Encode the source image into a latent representation (the latent point cloud)
- Rotate and translate the latent representation according to the desired camera transformation
- Decode the transformed latent representation into a depth map for the target view
- Project via the depth map to compute dense correspondences between the source and target views
- Warp the source image along these correspondences to produce the target image
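The geometric core of the steps above can be sketched as follows. This is a toy illustration, not the authors' network: the point cloud here is a handful of hand-picked 3D points standing in for the learned latent points, and the function names (`transform_points`, `project`) are hypothetical. It shows the rigid transform of the latent point cloud and the pinhole projection that yields source/target correspondences; the actual method learns the encoder, decoder, and warping end to end.

```python
import numpy as np

def rotation_z(theta):
    """Rotation about the camera z-axis by angle theta (radians)."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0],
                     [s,  c, 0.0],
                     [0.0, 0.0, 1.0]])

def transform_points(points, R, t):
    """Rigid transform of a point cloud: X' = R @ X + t (row-wise)."""
    return points @ R.T + t

def project(points, f=1.0, cx=0.0, cy=0.0):
    """Pinhole projection of 3D points (z > 0) to 2D image coordinates."""
    u = f * points[:, 0] / points[:, 2] + cx
    v = f * points[:, 1] / points[:, 2] + cy
    return np.stack([u, v], axis=1)

# Toy "latent point cloud" in front of the camera (stand-in for learned features).
pts = np.array([[0.0, 0.0, 2.0],
                [0.5, 0.0, 2.0],
                [0.0, 0.5, 2.0]])

R = rotation_z(np.pi / 2)         # rotate the cloud 90 degrees about z
t = np.array([0.0, 0.0, 1.0])     # and push it one unit further from the camera

src_uv = project(pts)                           # pixel locations in the source view
tgt_uv = project(transform_points(pts, R, t))   # corresponding locations in the target view
```

Pairing `src_uv` with `tgt_uv` gives per-point correspondences; in the paper these come from the decoded target-view depth map and are used to warp source pixels (e.g. via bilinear sampling) into the target image.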