Digging Into Self-Supervised Monocular Depth Estimation

Revision as of 16:58, 10 July 2020

Digging Into Self-Supervised Monocular Depth Estimation (ICCV 2019)
Monodepth2

Authors: Clement Godard, Oisin Mac Aodha, Michael Firman, Gabriel Brostow

Affiliations: UCL, Caltech, Niantic

==Method==

They train the network in a self-supervised manner: the predicted depth is used to synthesize the target view from neighboring frames, and the synthesized images are compared against the actual target image.

Given a source view \(I_{t'}\) and a target view \(I_t\), define the following:

* \(T_{t \to t'}\), the relative camera pose from view \(t\) to view \(t'\)
* \(D_t\), the depth map predicted for view \(t\)
* \(I_{t' \to t} = I_{t'}\langle proj(D_t, T_{t \to t'}, K)\rangle\), the source view warped into the target view, where \(K\) is the camera intrinsics and \(\langle \cdot \rangle\) is a bilinear sampling operator
* \(L_p = \sum_{t'} pe(I_t, I_{t' \to t})\), the cumulative photometric reprojection error
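The warping \(I_{t' \to t}\) can be sketched numerically: backproject the target pixels with \(D_t\), transform them by \(T_{t \to t'}\), project them through \(K\), and sample the source image at the resulting coordinates. A minimal NumPy sketch with hypothetical function names, using nearest-neighbor sampling for a grayscale image rather than the differentiable bilinear sampler the paper relies on:

```python
import numpy as np

def backproject(depth, K_inv):
    """Lift each pixel (u, v) to a 3D camera-space point using its depth."""
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    pix = np.stack([u, v, np.ones_like(u)], axis=0).reshape(3, -1)  # homogeneous pixels
    return (K_inv @ pix) * depth.reshape(1, -1)  # 3 x (H*W) points

def warp(src, depth_t, T, K):
    """Synthesize the target view from `src` using the target depth map,
    the relative 4x4 pose T (target -> source) and intrinsics K."""
    H, W = depth_t.shape
    pts = backproject(depth_t, np.linalg.inv(K))          # points in target camera frame
    pts_h = np.vstack([pts, np.ones((1, pts.shape[1]))])  # homogeneous 3D points
    cam_src = (T @ pts_h)[:3]                             # points in source camera frame
    proj = K @ cam_src
    uv = proj[:2] / np.clip(proj[2:], 1e-6, None)         # perspective divide
    # Nearest-neighbor sampling; clamp to the image bounds
    u = np.clip(np.round(uv[0]).astype(int), 0, W - 1)
    v = np.clip(np.round(uv[1]).astype(int), 0, H - 1)
    return src[v, u].reshape(H, W)
```

As a sanity check, warping with the identity pose and any constant depth returns the source image unchanged.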

Building on this formulation, they make the following contributions:

===Per-Pixel Minimum Reprojection Loss===

Consider three consecutive frames in a sequence: frame1, frame2, frame3. The middle frame frame2 can be reconstructed by warping either neighbor, and each warp yields a reprojection loss:
<pre>
loss1 = abs(frame2 - warp(frame1))
loss2 = abs(frame2 - warp(frame3))
# Take the elementwise minimum of the two losses, then average over pixels
loss = mean(min(loss1, loss2))
</pre>
Taking the per-pixel minimum rather than averaging both losses avoids penalizing pixels that are occluded or out of view in one of the source frames.
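The pseudocode above can be made runnable as a small NumPy sketch (the function name is my own), generalized to any number of warped source frames:

```python
import numpy as np

def min_reprojection_loss(target, warped_sources):
    """Per-pixel minimum reprojection loss over a list of warped source frames."""
    errors = np.stack([np.abs(target - w) for w in warped_sources])  # (S, H, W)
    # Elementwise minimum over sources, then mean over all pixels
    return np.mean(np.min(errors, axis=0))
```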

===Auto-Masking Stationary Pixels===
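The paper's auto-masking compares, per pixel, the reprojection error of the warped source frames against the error of the raw, unwarped source frames, and keeps only pixels where warping actually helps. This filters out pixels that do not change appearance between frames, e.g. when the camera is static or an object moves at the same speed as the camera. A minimal NumPy sketch with a hypothetical function name:

```python
import numpy as np

def auto_mask(target, sources, warped_sources):
    """Boolean mask that is True where the best warped source matches the
    target better than the best raw (unwarped) source does."""
    warped_err = np.min(np.stack([np.abs(target - w) for w in warped_sources]), axis=0)
    identity_err = np.min(np.stack([np.abs(target - s) for s in sources]), axis=0)
    return warped_err < identity_err  # apply the reprojection loss only where True
```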

===Multi-scale Estimation===
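In the paper, the loss is computed at several decoder scales, but each intermediate low-resolution depth map is first upsampled to the input resolution, so the photometric comparison always happens on full-resolution images (reducing texture-copy artifacts). A minimal NumPy sketch; `upsample_nn` and `loss_at_full_res` are hypothetical stand-ins (the paper uses bilinear upsampling, and the loss would warp the source frames with the depth and compare photometrically):

```python
import numpy as np

def upsample_nn(depth, out_shape):
    """Nearest-neighbor upsample of a low-resolution depth map
    (assumes each output dimension is an integer multiple of the input's)."""
    H, W = out_shape
    h, w = depth.shape
    return np.repeat(np.repeat(depth, H // h, axis=0), W // w, axis=1)

def multiscale_loss(target, depths, loss_at_full_res):
    """Average the loss over decoder scales, upsampling every intermediate
    depth map to the input resolution before the loss is evaluated."""
    return np.mean([loss_at_full_res(target, upsample_nn(d, target.shape))
                    for d in depths])
```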

==Architecture==

==Evaluation==

==Resources==

==References==