Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains


Authors: Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, et al.
Affiliations: UC Berkeley, Google Research, UC San Diego

This is mainly a follow-up paper expanding on the positional encoding idea from NeRF. Its purpose is to address the spectral bias of neural networks: MLPs fed raw coordinates learn the low-frequency content of a target function much faster than its high-frequency content.
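For reference, a minimal NumPy sketch of the axis-aligned positional encoding from NeRF that this paper generalizes (the function name and the `num_freqs` default are illustrative):

```python
import numpy as np

def nerf_positional_encoding(v, num_freqs=10):
    """Axis-aligned NeRF-style positional encoding (illustrative helper).

    Each coordinate p is mapped to
    (sin(2^0 pi p), cos(2^0 pi p), ..., sin(2^(L-1) pi p), cos(2^(L-1) pi p)).
    """
    v = np.asarray(v, dtype=np.float64)
    freqs = (2.0 ** np.arange(num_freqs)) * np.pi   # 2^k * pi for k = 0..L-1
    angles = v[..., None] * freqs                   # shape (..., d, L)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*v.shape[:-1], -1)           # shape (..., 2*d*L)

# Example: encode a batch of 2D coordinates.
coords = np.random.rand(4, 2)
print(nerf_positional_encoding(coords).shape)       # (4, 40) for num_freqs=10
```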

Background

I don't really understand much here. Read the paper instead.

Kernel regression

In the infinite-width limit, an MLP trained with gradient descent behaves like kernel regression with the neural tangent kernel (NTK).

Spectral bias

For conventional MLPs the NTK's eigenvalues fall off rapidly, so components of the target function along low-eigenvalue (high-frequency) eigenvectors are learned very slowly. This is the spectral bias the paper sets out to fix.
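Concretely (a sketch of the NTK argument the paper builds on): with eigendecomposition \(\mathbf{K} = \mathbf{Q} \boldsymbol{\Lambda} \mathbf{Q}^\top\) of the NTK on the training points, the network's training-set predictions after \(t\) steps of gradient descent with learning rate \(\eta\) evolve approximately as

\[ \hat{\mathbf{y}}^{(t)} \approx \mathbf{Q} \left( \mathbf{I} - e^{-\eta \boldsymbol{\Lambda} t} \right) \mathbf{Q}^\top \mathbf{y}, \]

so the error along the \(i\)-th eigenvector decays as \(e^{-\eta \lambda_i t}\): large-eigenvalue (low-frequency) components converge first.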

Setup

Their focus is on MLPs whose inputs are low-dimensional coordinates, for example mapping \((x,y)\) pixel coordinates to \((r,g,b)\) color, or \((x,y,z)\) positions to density.
Instead of feeding coordinates to the MLP directly, the paper first applies a Fourier feature mapping \(\gamma(\mathbf{v}) = [\cos(2\pi \mathbf{B}\mathbf{v}), \sin(2\pi \mathbf{B}\mathbf{v})]^\top\), where each entry of \(\mathbf{B} \in \mathbb{R}^{m \times d}\) is sampled from \(\mathcal{N}(0, \sigma^2)\).
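A minimal NumPy sketch of this mapping (the function name, \(m = 256\), and \(\sigma = 10\) here are illustrative; the paper treats \(\sigma\) as a task-dependent hyperparameter):

```python
import numpy as np

def gaussian_fourier_features(v, B):
    """Gaussian Fourier feature mapping: gamma(v) = [cos(2 pi B v), sin(2 pi B v)]."""
    proj = 2.0 * np.pi * (v @ B.T)                                # shape (..., m)
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)  # shape (..., 2m)

# Example: map normalized 2D pixel coordinates before feeding them to the MLP.
rng = np.random.default_rng(0)
sigma = 10.0                                  # frequency scale; tuned per task
B = rng.normal(0.0, sigma, size=(256, 2))     # m = 256 random frequencies, d = 2 input dims
coords = rng.uniform(size=(1024, 2))
features = gaussian_fourier_features(coords, B)   # shape (1024, 512)
```

The encoded features (dimension \(2m\)) replace the raw coordinates as the MLP's input.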

Results

See their website.

Resources

  • NeRF
  • SIREN
    • SIREN reports very similar results and findings, but uses sinusoidal activation functions inside the network instead of a Fourier feature input mapping.
  • Reddit discussion