# Case 1: If H is a discrete variable, then <math>g^*(x, z) = \eta^*(x)^t \psi(z^*)</math>.
# Case 2: There exist <math>\eta,\psi</math> such that <math>E(\eta(x)^t \psi(z) - g^*(x,z))^2 \leq o(\frac{1}{m})</math>.
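The inner-product form in the two cases above can be made concrete with a small numerical check. The sketch below is illustrative only (not from the lecture): it assumes <math>x</math> and <math>z</math> each take finitely many values, so that <math>g^*</math> is just a matrix <math>G</math>, and uses a truncated SVD to build features <math>\eta(x), \psi(z)</math> whose inner product approximates <math>G</math> in mean squared error. The variable names and sizes are made up for the example.

<syntaxhighlight lang="python">
import numpy as np

# Illustrative sketch: when x and z each take finitely many values, g*(x, z)
# is a matrix G with entries G[x, z].  A rank-k truncated SVD factors G into
# eta(x)^T psi(z), i.e. an inner product of k-dimensional features of x and z,
# and we can measure how small the squared approximation error is.

rng = np.random.default_rng(0)
n_x, n_z, k = 50, 40, 5

# Build a g* that is exactly rank k plus a small perturbation.
G = rng.normal(size=(n_x, k)) @ rng.normal(size=(k, n_z)) \
    + 0.01 * rng.normal(size=(n_x, n_z))

U, s, Vt = np.linalg.svd(G, full_matrices=False)
eta = U[:, :k] * s[:k]   # eta[x] is a k-dimensional feature of x
psi = Vt[:k, :].T        # psi[z] is a k-dimensional feature of z

approx = eta @ psi.T     # approx[x, z] = eta(x)^T psi(z)
mse = np.mean((G - approx) ** 2)
print(f"mean squared error of the rank-{k} inner-product approximation: {mse:.2e}")
</syntaxhighlight>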


==Meta Learning==


4: <math>A</math> is a black box (e.g. LSTM).
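The following is a minimal sketch (illustrative, not the construction from the notes) of this black-box view: the adaptation procedure <math>A</math> is an LSTM that reads the support set <math>(x_i, y_i)</math> as a sequence and then outputs a prediction for a query point. The toy regression task, network sizes, and names are assumptions made for the example.

<syntaxhighlight lang="python">
import torch
import torch.nn as nn

class BlackBoxMetaLearner(nn.Module):
    """A is a black box: an LSTM maps a support set plus a query to a prediction."""
    def __init__(self, x_dim=1, y_dim=1, hidden=64):
        super().__init__()
        # Each time step sees a concatenated (x, y) pair; the query step gets y = 0.
        self.lstm = nn.LSTM(x_dim + y_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, y_dim)

    def forward(self, support_x, support_y, query_x):
        # support_x: (B, K, x_dim), support_y: (B, K, y_dim), query_x: (B, x_dim)
        query = torch.cat([query_x.unsqueeze(1),
                           torch.zeros_like(query_x).unsqueeze(1)], dim=-1)
        seq = torch.cat([torch.cat([support_x, support_y], dim=-1), query], dim=1)
        out, _ = self.lstm(seq)
        return self.head(out[:, -1])  # prediction for the query point

def sample_episode(batch=32, shots=5):
    # Toy task distribution: each task is y = a * x for a random slope a.
    a = torch.rand(batch, 1, 1) * 4 - 2
    sx = torch.rand(batch, shots, 1) * 2 - 1
    sy = a * sx
    qx = torch.rand(batch, 1) * 2 - 1
    qy = a.squeeze(1) * qx
    return sx, sy, qx, qy

model = BlackBoxMetaLearner()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    sx, sy, qx, qy = sample_episode()
    loss = ((model(sx, sy, qx) - qy) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final query MSE:", loss.item())
</syntaxhighlight>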
==LSTMs and transformers==
Lecture 23 (November 19, 2020)
===Recurrent Neural Networks (RNNs)===


==Misc==