Image-based rendering
Image-based rendering focuses on rendering scenes from existing captured or rasterized images.<br>
View synthesis aims to render the scene from a new viewpoint based on the captured information.<br>
Other research addresses adding new objects, relighting, stylization, and other AR effects.
==Implicit Representations==
===Light Fields===
{{ main | Light field}}
Light fields capture the accumulated radiance of light rays within the scene.<br>
They are traditionally stored as a grid of images or videos.
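Rendering a new ray from such a grid amounts to interpolating between the nearest captured rays. Below is a minimal sketch of this lookup, assuming a two-plane parameterization packed into a single NumPy array; the array layout and function name are illustrative assumptions, not a reference implementation.

<syntaxhighlight lang="python">
import numpy as np

def sample_light_field(lf, s, t, u, v):
    """Quadrilinearly interpolate a two-plane light field.

    lf      : array of shape (S, T, V, U, 3) -- a grid of images indexed by
              camera position (s, t) and pixel position (u, v)  [assumed layout].
    s,t,u,v : continuous sample coordinates in index units.
    Returns the interpolated RGB radiance of the ray (s, t, u, v).
    """
    coords = np.array([s, t, v, u], dtype=float)
    lo = np.clip(np.floor(coords).astype(int), 0, np.array(lf.shape[:4]) - 2)
    frac = np.clip(coords - lo, 0.0, 1.0)

    out = np.zeros(3)
    for corner in range(16):                     # 16 corners of the 4D cell
        offs = np.array([(corner >> i) & 1 for i in range(4)])
        weight = np.prod(np.where(offs == 1, frac, 1.0 - frac))
        idx = lo + offs
        out += weight * lf[idx[0], idx[1], idx[2], idx[3]]
    return out
</syntaxhighlight>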
===Light Field Networks===
Light field networks are an implicit representation similar to NeRF.<br>
However, they predict colors directly from light rays instead of performing volume rendering.
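As a rough sketch of the idea (not the architecture of any specific paper), the network below maps a ray's Plücker coordinates straight to an RGB color in a single forward pass; the layer sizes and ray embedding are assumptions for illustration.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn
import torch.nn.functional as F

class LightFieldNetwork(nn.Module):
    """Toy light field network: maps a ray directly to a color."""
    def __init__(self, hidden=256):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3), nn.Sigmoid(),   # RGB in [0, 1]
        )

    def forward(self, origins, directions):
        d = F.normalize(directions, dim=-1)
        m = torch.cross(origins, d, dim=-1)       # Pluecker moment = o x d
        return self.mlp(torch.cat([d, m], dim=-1))

# One forward pass per ray -- no points are sampled along the ray and no
# volume rendering integral is evaluated.
colors = LightFieldNetwork()(torch.randn(4, 3), torch.randn(4, 3))
</syntaxhighlight>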
===NeRF===
{{ main | NeRF: Representing Scenes as Neural Radiance Fields for View Synthesis }}
NeRF preprocesses unstructured light fields into a neural network (MLP) representation which predicts radiance at sampled points along each ray during volume rendering.
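The compositing step along each ray follows the standard volume rendering quadrature. The sketch below assumes densities and colors have already been predicted at the sampled depths; the function name is illustrative.

<syntaxhighlight lang="python">
import numpy as np

def volume_render(sigmas, colors, t_vals):
    """Composite radiance predicted at samples along one ray.

    sigmas : (N,) volume densities, colors : (N, 3) radiance values,
    t_vals : (N,) sample depths along the ray, ordered near to far.
    """
    deltas = np.diff(t_vals, append=1e10)           # last interval is open-ended
    alphas = 1.0 - np.exp(-sigmas * deltas)         # opacity of each interval
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))  # transmittance
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)  # expected color of the ray
</syntaxhighlight>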
;Resources
===Multi-plane Image (MPI)===
Multiple fronto-parallel planes (perpendicular to the viewing direction), each with some transparency, which are composited together, as sketched after the references below.
* [https://arxiv.org/abs/1805.09817 Stereo Magnification (SIGGRAPH 2018)]
* [https://openaccess.thecvf.com/content_CVPR_2019/html/Flynn_DeepView_View_Synthesis_With_Learned_Gradient_Descent_CVPR_2019_paper.html DeepView (CVPR 2019)]
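Once the planes have been warped into the target view, rendering reduces to back-to-front alpha compositing. A minimal sketch, assuming the layers are already stored as RGBA images ordered from far to near (the homography warp is omitted):

<syntaxhighlight lang="python">
import numpy as np

def composite_mpi(planes_rgba):
    """Alpha-composite MPI layers ordered back to front.

    planes_rgba : (D, H, W, 4) RGBA layers already warped into the target view.
    """
    out = np.zeros(planes_rgba.shape[1:3] + (3,))
    for layer in planes_rgba:                       # back to front
        rgb, a = layer[..., :3], layer[..., 3:4]
        out = rgb * a + out * (1.0 - a)             # standard "over" operator
    return out
</syntaxhighlight>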
===Layered Depth Image (LDI)===
Multiple meshes, each with some transparency. Unlike MPI, these meshes are not necessarily planes, though they still may not correspond directly to scene objects.
* [https://facebookresearch.github.io/one_shot_3d_photography/ One-shot 3D photography]
* Casual 3D Photography
===Multi-sphere Image (MSI)===
Similar to MPI but using concentric spheres instead of planes; see the sampling sketch after the reference below.
* [http://visual.cs.brown.edu/projects/matryodshka-webpage/ Matryodshka (ECCV 2020)] - Renders 6-DoF video from ODS videos.
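To render a novel view, each camera ray is intersected with the concentric sphere layers to find where to sample them. A small sketch of that intersection, assuming the spheres are centered at the world origin and the ray starts inside all of them:

<syntaxhighlight lang="python">
import numpy as np

def intersect_concentric_spheres(origin, direction, radii):
    """Distances along a ray to concentric spheres centered at the origin.

    Assumes the ray origin lies inside every sphere, so each sphere has
    exactly one intersection in the forward direction.
    """
    d = direction / np.linalg.norm(direction)
    b = np.dot(origin, d)
    c = np.dot(origin, origin) - np.asarray(radii, dtype=float) ** 2
    return -b + np.sqrt(b * b - c)                  # larger quadratic root
</syntaxhighlight>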
==Classical Reconstruction==
Reconstruction aims to recreate the 3D scene from a set of input images, typically as a mesh or point cloud.
Techniques include structure from motion and multi-view stereo.
This type of reconstruction is also studied in the field of photogrammetry.
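As a concrete example of the geometry involved, the snippet below triangulates a single 3D point from two views by linear (DLT) triangulation; a full structure-from-motion pipeline (feature matching, pose estimation, bundle adjustment), as implemented in tools such as COLMAP, is omitted here.

<syntaxhighlight lang="python">
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point observed in two views.

    P1, P2 : (3, 4) camera projection matrices; x1, x2 : matching pixel
    coordinates (u, v) in each image.  Returns the 3D point.
    """
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]        # dehomogenize
</syntaxhighlight>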