Natural language processing

===Transformer===
{{ main | Transformer (machine learning model)}}
[https://arxiv.org/abs/1706.03762 Attention is all you need paper]<br>
A neural network architecture introduced by Google which uses encoder-decoder attention and self-attention.
It currently achieves state-of-the-art results on most NLP tasks and has largely replaced RNNs for them.
However, its computational complexity is quadratic in the number of input and output tokens because of the attention mechanism.
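The sketch below is a minimal, illustrative NumPy version of single-head scaled dot-product self-attention, not the paper's reference implementation; the function and variable names are chosen here for clarity, and multi-head projections and masking are omitted. The (n × n) score matrix, with one entry per pair of tokens, is where the quadratic cost in sequence length comes from.

<syntaxhighlight lang="python">
import numpy as np

def scaled_dot_product_self_attention(x, w_q, w_k, w_v):
    """Single-head self-attention over a sequence of token embeddings.

    x: (n, d) matrix of n token embeddings of dimension d.
    w_q, w_k, w_v: (d, d_k) projection matrices for queries, keys, values.
    """
    q = x @ w_q                      # queries, shape (n, d_k)
    k = x @ w_k                      # keys,    shape (n, d_k)
    v = x @ w_v                      # values,  shape (n, d_k)
    d_k = q.shape[-1]

    # Every token attends to every other token, so the score matrix is
    # (n, n): this is the source of the quadratic cost in sequence length.
    scores = q @ k.T / np.sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v               # weighted sum of values, shape (n, d_k)

# Toy usage with random embeddings and projection matrices.
rng = np.random.default_rng(0)
n, d, d_k = 5, 16, 8
x = rng.normal(size=(n, d))
out = scaled_dot_product_self_attention(
    x, rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)), rng.normal(size=(d, d_k)))
print(out.shape)  # (5, 8)
</syntaxhighlight>

In the full architecture this block is repeated with multiple heads and combined with feed-forward layers and residual connections; the encoder-decoder attention differs only in that queries come from the decoder while keys and values come from the encoder.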


;Guides and explanations