Deep Image Prior


Website
Paper
Supplementary Material Mirror
Deep image prior GitHub

The basic idea is that the choice of neural network architecture itself imposes a prior on the output.
That is, a CNN naturally tends to produce smooth, natural-looking images.
The authors show this by fitting a network to pure noise versus fitting it to natural or hand-drawn images.
CNNs are much slower to fit noise than natural images.
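
To make this concrete, here is a minimal sketch of that experiment, assuming PyTorch. The tiny three-layer CNN, the 64x64 input, and the hyperparameters are illustrative stand-ins, not the paper's actual setup.

```python
import torch
import torch.nn as nn

def make_net(out_channels=3):
    # Illustrative three-layer CNN; the paper uses deeper
    # encoder-decoder architectures (see the supplementary material).
    return nn.Sequential(
        nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
        nn.Conv2d(64, out_channels, 3, padding=1), nn.Sigmoid(),
    )

def fit(target, steps=2000, lr=1e-3):
    """Fit a fresh network to `target` from a fixed random input;
    return the MSE loss recorded at every step."""
    net = make_net(target.shape[1])
    z = torch.rand(1, 32, *target.shape[2:])  # fixed random input
    opt = torch.optim.Adam(net.parameters(), lr=lr)
    losses = []
    for _ in range(steps):
        opt.zero_grad()
        loss = ((net(z) - target) ** 2).mean()
        loss.backward()
        opt.step()
        losses.append(loss.item())
    return losses

# Placeholders: `natural` should be a real photo loaded as a
# (1, 3, H, W) float tensor in [0, 1]; `noise` is a pure-noise target.
natural = torch.rand(1, 3, 64, 64)  # substitute an actual image here
noise = torch.rand(1, 3, 64, 64)
natural_losses, noise_losses = fit(natural), fit(noise)
```

Plotting natural_losses against noise_losses (with a real photo substituted in) shows the qualitative gap the paper reports: the loss on the photo falls much faster.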
This leads to numerous applications, including denoising, reconstruction, super-resolution, and inpainting.

For each application, the idea is to train a CNN to reconstruct that particular (corrupted) image from a fixed random input.
However, you perform early stopping so that the network does not also learn to reconstruct the noise or artifacts within the image.
Since CNNs find it easier to reconstruct low-frequency details, which correspond to the image structure, they reconstruct those first during the optimization process, and the noise only later.
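
The whole procedure fits in a short loop. Below is a minimal denoising sketch, again assuming PyTorch; the architecture, the step count used for early stopping, and the learning rate are illustrative assumptions rather than the paper's values.

```python
import torch
import torch.nn as nn

# Placeholder: `noisy` is the corrupted observation as a
# (1, 3, H, W) float tensor in [0, 1]; substitute a real noisy image.
noisy = torch.rand(1, 3, 64, 64)

# Illustrative CNN standing in for the paper's encoder-decoder.
net = nn.Sequential(
    nn.Conv2d(32, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 128, 3, padding=1), nn.ReLU(),
    nn.Conv2d(128, 3, 3, padding=1), nn.Sigmoid(),
)
z = torch.rand(1, 32, 64, 64)  # fixed random input, never changed

opt = torch.optim.Adam(net.parameters(), lr=1e-2)
num_steps = 1800  # early stopping: halt before the net fits the noise
for step in range(num_steps):
    opt.zero_grad()
    loss = ((net(z) - noisy) ** 2).mean()  # match the noisy observation
    loss.backward()
    opt.step()

# Low-frequency structure is fitted first, so stopping early leaves
# the noise largely unreconstructed.
restored = net(z).detach()
```

For inpainting and other reconstruction tasks, the same loop applies with the loss adapted to the degradation model, e.g. computing the MSE only over the known (unmasked) pixels.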

Architectures

See the supplementary material