* [https://openaccess.thecvf.com/content_ECCV_2018/html/Sameh_Khamis_StereoNet_Guided_Hierarchical_ECCV_2018_paper.html StereoNet (ECCV 2018)] ([[StereoNet: Guided Hierarchical Refinement for Real-Time Edge-Aware Depth Prediction |My Summary]]) is a method by Google's Augmented Perception team.
* [http://visual.cs.ucl.ac.uk/pubs/casual3d/ Casual 3D photography (SIGGRAPH ASIA 2017)] includes a method for refining cost volumes and a system for synthesizing views from a few dozen photos.
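For context on the methods above, a disparity cost volume is the 3D structure (disparity × height × width) that stereo methods like these build and then refine. A minimal sketch of constructing one for rectified grayscale images, using a plain absolute-difference matching cost and a winner-take-all readout — illustrative choices only, not the cost or refinement used in either paper:

```python
import numpy as np

def cost_volume(left, right, max_disp):
    """Build a (max_disp, H, W) cost volume for rectified grayscale images.

    Slice d holds the per-pixel matching cost |left(x) - right(x - d)|;
    positions where x - d falls off the image stay at infinity.
    """
    H, W = left.shape
    vol = np.full((max_disp, H, W), np.inf)
    for d in range(max_disp):
        # Candidate match for left pixel x is right pixel x - d.
        vol[d, :, d:] = np.abs(left[:, d:] - right[:, : W - d])
    return vol

# Toy check: a right image that is the left image shifted by 2 pixels.
rng = np.random.default_rng(0)
left = rng.random((4, 8))
right = np.roll(left, -2, axis=1)  # right(x) == left(x + 2)

# Winner-take-all: pick the lowest-cost disparity per pixel.
disp = np.argmin(cost_volume(left, right, max_disp=4), axis=0)
```

Refinement methods such as the two above improve on this raw volume (and on the hard `argmin`) with filtering, learned hierarchies, or edge-aware upsampling.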
==Depth from Motion==