From David's Wiki
* [https://arxiv.org/abs/1806.01260 Arxiv mirror]
* [https://openaccess.thecvf.com/content_ICCV_2019/html/Godard_Digging_Into_Self-Supervised_Monocular_Depth_Estimation_ICCV_2019_paper.html CVF Mirror]
* [https://github.com/nianticlabs/monodepth2 Github]



Latest revision as of 12:51, 10 August 2020

Digging Into Self-Supervised Monocular Depth Estimation (ICCV 2019)
Monodepth2

Authors: Clement Godard, Oisin Mac Aodha, Michael Firman, Gabriel Brostow
Affiliations: UCL, Caltech, Niantic

Method

They train depth estimation self-supervised: the predicted depth is used for view synthesis, reconstructing the target frame from nearby source frames, and the reconstruction is compared against the actual target image.

Given a source view \(I_{t'}\) and a target view \(I_t\), define the following:

  • \(T_{t \to t'}\) the relative camera pose from view \(t\) to view \(t'\)
  • \(D_t\) the predicted depth map of view \(t\)
  • \(I_{t' \to t} = I_{t'}\langle \mathrm{proj}(D_t, T_{t \to t'}, K)\rangle\) the source image warped into the target view, with \(K\) the camera intrinsics
  • \(L_p = \sum_{t'} pe(I_t, I_{t' \to t})\) the cumulative reprojection error
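The page does not define \(pe\); in the paper it is the usual weighted combination of SSIM and L1 photometric errors:

\( pe(I_a, I_b) = \frac{\alpha}{2}\left(1 - \operatorname{SSIM}(I_a, I_b)\right) + (1 - \alpha)\,\lVert I_a - I_b \rVert_1 \) with \(\alpha = 0.85\).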

Building on this setup, they make the following contributions:

Per-Pixel Minimum Reprojection Loss

Suppose you have three consecutive frames in a sequence: frame1, frame2, frame3, with frame2 as the target.
Each source frame gives a per-pixel loss:

loss1 = abs(frame2 - warp(frame1))
loss2 = abs(frame2 - warp(frame3))
# Take the per-pixel minimum over source views, then average
loss = mean(min(loss1, loss2))

Taking the minimum rather than averaging keeps pixels that are occluded or out of view in one source frame from dominating the loss.
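The per-pixel minimum above can be sketched in NumPy (using a plain L1 photometric error for simplicity; the paper's actual \(pe\) also includes an SSIM term):

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum reprojection loss (L1 photometric error only).

    target: (H, W) or (H, W, C) array, the target frame.
    warped_sources: list of arrays of the same shape, each a source
    frame already warped into the target view.
    """
    # Per-pixel absolute error for each warped source view
    errors = np.stack([np.abs(target - w) for w in warped_sources], axis=0)
    # Minimum over source views at every pixel, then mean over all pixels
    return errors.min(axis=0).mean()
```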

Auto-Masking Stationary Pixels
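This section is empty in the revision above. As a rough sketch of the paper's auto-masking: a binary mask keeps only the pixels where the minimum warped reprojection error is lower than the error of the raw, unwarped source frames, which filters out pixels that are static relative to the camera (e.g. objects moving at the same speed, or frames where the camera is not moving). A minimal version using L1 error only:

```python
import numpy as np

def auto_mask(target, sources, warped_sources):
    """Binary mask that is True where warping actually improves the error.

    sources: raw source frames; warped_sources: the same frames warped
    into the target view. All arrays share the target's shape.
    """
    # Minimum per-pixel error of the warped source views
    warped_err = np.stack([np.abs(target - w) for w in warped_sources]).min(axis=0)
    # Minimum per-pixel error of the unwarped sources (identity reprojection)
    identity_err = np.stack([np.abs(target - s) for s in sources]).min(axis=0)
    # Keep pixels only where warping beats doing nothing
    return warped_err < identity_err
```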

Multi-scale Estimation
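This section is also empty in the revision above. In the paper, the decoder predicts depth at several scales, but each low-resolution depth map is upsampled to the input resolution before reprojection, so the photometric loss is always computed at full resolution; the per-scale losses are then averaged. A sketch, with a hypothetical warp_fn standing in for the full reprojection step:

```python
import numpy as np

def upsample_nearest(depth, factor):
    """Nearest-neighbour upsample of an (H, W) depth map by an integer factor."""
    return np.kron(depth, np.ones((factor, factor)))

def multi_scale_photometric_loss(target, warp_fn, depths):
    """Average the full-resolution L1 loss over scales.

    depths: dict mapping downsampling factor -> low-resolution depth map.
    warp_fn(depth): hypothetical helper that warps the source image into
    the target view given a full-resolution depth map.
    """
    losses = []
    for factor, depth in depths.items():
        # Upsample first, so the error is measured at input resolution
        full_res_depth = upsample_nearest(depth, factor)
        warped = warp_fn(full_res_depth)
        losses.append(np.abs(target - warped).mean())
    return sum(losses) / len(losses)
```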

Architecture

Evaluation

Resources

References