
Machine Learning Glossary



==A==
* Attention - A component of transformers that computes the product of query embeddings and key embeddings to model the interaction between sequence elements (see the sketch below).
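A minimal sketch of this query–key interaction in Python with NumPy. The scaling by the square root of the embedding dimension and the softmax normalization follow the standard transformer formulation and are not spelled out in the entry above; the function and variable names are illustrative, not part of the glossary.

<syntaxhighlight lang="python">
import numpy as np

def dot_product_attention(queries, keys, values):
    """queries, keys, values: arrays of shape (seq_len, d_model)."""
    d_model = queries.shape[-1]
    # Interaction scores: product of query and key embeddings,
    # scaled by sqrt(d_model) as in the standard transformer.
    scores = queries @ keys.T / np.sqrt(d_model)
    # Softmax over the key axis turns scores into attention weights.
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    # Each output element is a weighted sum of the value embeddings.
    return weights @ values

# Example: 4 sequence elements with 8-dimensional embeddings.
rng = np.random.default_rng(0)
q = rng.normal(size=(4, 8))
k = rng.normal(size=(4, 8))
v = rng.normal(size=(4, 8))
out = dot_product_attention(q, k, v)  # shape (4, 8)
</syntaxhighlight>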


==B==