Revision as of 12:06, 26 June 2020

SynSin: End-to-end View Synthesis from a Single Image (CVPR 2020)

Authors: Olivia Wiles, Georgia Gkioxari, Richard Szeliski, Justin Johnson

Affiliations: University of Oxford, Facebook AI Research, Facebook, University of Michigan

* [http://www.robots.ox.ac.uk/~ow/synsin.html Website]
* [http://openaccess.thecvf.com/content_CVPR_2020/html/Wiles_SynSin_End-to-End_View_Synthesis_From_a_Single_Image_CVPR_2020_paper.html CVF Mirror] [https://arxiv.org/abs/1912.08804 Arxiv mirror]
* [http://openaccess.thecvf.com/content_CVPR_2020/supplemental/Wiles_SynSin_End-to-End_View_CVPR_2020_supplemental.zip Supp]
Method
- First, a depth map and a feature vector for each pixel are predicted from the input image by a depth network \(d\) and a feature network \(f\).
- The predicted depths lift the per-pixel features into a 3D point cloud of features \(P\).
- The point cloud is moved into the target viewpoint using the relative camera transformation \(T\).
- The repositioned features are rendered to a 2D feature map by a differentiable neural point cloud renderer.
- The rendered features are passed through a refinement network \(g\) to produce the output image.
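The geometric part of the pipeline above (unproject with depth, transform by \(T\), render) can be sketched in NumPy. This is an illustrative approximation, not the paper's implementation: the networks \(d\), \(f\), and \(g\) are learned CNNs, SynSin's renderer is a soft differentiable splatter rather than the hard z-buffer used here, and all function names and the pinhole intrinsics \(K\) are assumptions for the sketch.

```python
import numpy as np

def unproject(depth, feats, K):
    """Lift per-pixel features into a 3D point cloud using predicted depth.
    depth: (H, W), feats: (H, W, C), K: (3, 3) pinhole intrinsics (assumed)."""
    H, W = depth.shape
    ys, xs = np.mgrid[0:H, 0:W]
    pix = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)
    rays = pix @ np.linalg.inv(K).T            # back-projected camera rays
    points = rays * depth.reshape(-1, 1)       # scale each ray by its depth
    return points, feats.reshape(-1, feats.shape[-1])

def transform(points, T):
    """Apply the 4x4 relative camera transformation T to the point cloud."""
    homo = np.concatenate([points, np.ones((len(points), 1))], axis=1)
    return (homo @ T.T)[:, :3]

def render(points, feats, K, H, W):
    """Naive point-cloud renderer: project each point and z-buffer splat its
    feature to the nearest pixel (hard stand-in for SynSin's soft renderer)."""
    proj = points @ K.T
    z = proj[:, 2]
    valid = z > 1e-6                           # keep points in front of the camera
    uv = np.round(proj[valid, :2] / z[valid, None]).astype(int)
    out = np.zeros((H, W, feats.shape[-1]))
    zbuf = np.full((H, W), np.inf)
    for (u, v), zval, f in zip(uv, z[valid], feats[valid]):
        if 0 <= u < W and 0 <= v < H and zval < zbuf[v, u]:
            zbuf[v, u] = zval                  # nearer point wins the pixel
            out[v, u] = f
    return out
```

With the identity transformation and unit depth, rendering reproduces the input feature map, which is a useful sanity check; in SynSin the rendered feature map would then be refined by \(g\) into the final RGB image.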