==A==
* Attention - A component of [[Transformer_(machine_learning_model)|transformers]] that computes the product of query and key embeddings to model the interactions between elements of a sequence.
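The query-key product described above can be illustrated with a minimal sketch of scaled dot-product attention for a single query (an illustrative example, not part of the glossary; the function name and scaling by the square root of the key dimension follow common transformer convention):

```python
import math

def attention(query, keys, values):
    # One query vector attending over a list of key/value vectors.
    d_k = len(query)
    # Scores: dot product of the query and each key embedding,
    # scaled by sqrt(d_k) as in scaled dot-product attention.
    scores = [sum(q * k for q, k in zip(query, key)) / math.sqrt(d_k)
              for key in keys]
    # Softmax turns the scores into attention weights that sum to 1.
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]
    # Output: weighted average of the value vectors.
    return [sum(w * v[i] for w, v in zip(weights, values))
            for i in range(len(values[0]))]
```

The query that is most similar to a key receives the largest weight, so the output is pulled toward that key's value vector.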
==B==