Fourier Features Let Networks Learn High Frequency Functions in Low Dimensional Domains
Authors: Matthew Tancik, Pratul Srinivasan, Ben Mildenhall, Sara Fridovich-Keil, and more... Affiliations: UC Berkeley, Google Research, UC San Diego
This is mainly a follow-up paper expanding on the positional encoding idea from NeRF. Its purpose is to explain and address the spectral bias of neural networks, i.e. their tendency to learn low-frequency functions first.
Background
I don't really understand much here. Read the paper instead.
Kernel regression
Spectral bias
Setup
Their focus is on MLPs whose inputs are low-dimensional coordinates.
For example, mapping \((x,y)\) to \((r,g,b)\) for an image, or \((x,y,z)\) to density for a 3D scene.
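The paper's Gaussian Fourier feature mapping is \(\gamma(\mathbf{v}) = [\cos(2\pi \mathbf{B}\mathbf{v}), \sin(2\pi \mathbf{B}\mathbf{v})]\), where the rows of \(\mathbf{B}\) are sampled from \(\mathcal{N}(0, \sigma^2)\). A minimal NumPy sketch of that mapping; the scale `10.0` and the 256 frequencies are illustrative choices, not values from the paper:

```python
import numpy as np

def fourier_features(v, B):
    """Map low-dim coords v of shape (N, d) to features of shape (N, 2m)
    via gamma(v) = [cos(2*pi*B@v), sin(2*pi*B@v)] with B of shape (m, d)."""
    proj = 2.0 * np.pi * v @ B.T  # (N, m) projections onto random frequencies
    return np.concatenate([np.cos(proj), np.sin(proj)], axis=-1)

rng = np.random.default_rng(0)
scale = 10.0                                 # sigma: bandwidth knob (assumed value)
B = scale * rng.standard_normal((256, 2))    # 256 random frequencies for (x, y) input
xy = rng.uniform(size=(4, 2))                # a few 2D coordinates in [0, 1)
print(fourier_features(xy, B).shape)         # (4, 512)
```

The encoded coordinates are then fed to the MLP in place of the raw \((x,y)\); larger \(\sigma\) lets the network fit higher-frequency detail at the risk of noisy artifacts.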
Results
See their website.
Resources
- NeRF
- SIREN
  - Its results and findings are very similar, but SIREN uses sinusoidal activations instead of an input encoding.
- Reddit discussion