Natural language processing
Classical natural language processing (NLP) consists of creating a pipeline of processors that produce annotations from text files.
Below is an example of a few processors.
- Tokenization: convert a paragraph of text or a file into an array of words.
- Part-of-speech annotation
- Named Entity Recognition
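The processors above can be sketched as a minimal pipeline in plain Python. The tiny rule-based taggers below are illustrative stand-ins, not a real NLP library; production pipelines use trained models for each stage.

```python
import re

def tokenize(text):
    """Convert a paragraph of text into an array of words."""
    return re.findall(r"[A-Za-z']+|[.,!?;]", text)

def pos_tag(tokens):
    """Toy part-of-speech annotation via a lookup table (illustrative only)."""
    lexicon = {"the": "DET", "sat": "VERB", "on": "ADP", "cat": "NOUN", "mat": "NOUN"}
    return [(t, lexicon.get(t.lower(), "X")) for t in tokens]

def ner(tokens):
    """Toy named entity recognition: flag capitalized non-sentence-initial tokens."""
    return [(t, "ENTITY" if i > 0 and t[:1].isupper() else "O")
            for i, t in enumerate(tokens)]

def pipeline(text):
    # Each processor adds an annotation layer on top of the previous one.
    tokens = tokenize(text)
    return {"tokens": tokens, "pos": pos_tag(tokens), "ner": ner(tokens)}

doc = pipeline("The cat sat on the mat .")
```

Each stage consumes the output of the previous one, which is the defining property of the classical pipeline design.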
Datasets and Challenges
The Stanford Question Answering Dataset (SQuAD) is a reading-comprehension dataset. There are two versions, 1.1 and 2.0; version 2.0 adds unanswerable questions.
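A sketch of iterating over SQuAD's JSON layout. The inline sample below mirrors the real file structure (articles containing paragraphs containing question-answer entries), but its text is made up; in version 2.0, unanswerable questions carry `"is_impossible": true` and an empty answer list.

```python
# Illustrative sample following the SQuAD JSON schema (contents are invented).
squad = {
    "version": "2.0",
    "data": [{
        "title": "Example",
        "paragraphs": [{
            "context": "The Transformer was introduced in 2017.",
            "qas": [
                {"id": "q1", "question": "When was the Transformer introduced?",
                 "is_impossible": False,
                 "answers": [{"text": "2017", "answer_start": 34}]},
                {"id": "q2", "question": "Who wrote the report?",
                 "is_impossible": True, "answers": []},
            ],
        }],
    }],
}

def iter_examples(dataset):
    """Flatten the nested article -> paragraph -> QA structure."""
    for article in dataset["data"]:
        for para in article["paragraphs"]:
            for qa in para["qas"]:
                yield {
                    "question": qa["question"],
                    "context": para["context"],
                    "answers": [a["text"] for a in qa["answers"]],
                    # v1.1 files lack this key, so default to False
                    "impossible": qa.get("is_impossible", False),
                }

examples = list(iter_examples(squad))
```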
The Transformer is a neural network architecture from Google that uses encoder-decoder attention and self-attention.
It achieves state-of-the-art results on many NLP tasks and has largely replaced RNNs for them.
However, its computational complexity is quadratic in the number of input and output tokens due to attention.
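The quadratic cost can be seen in a minimal sketch of scaled dot-product self-attention: the score matrix has one entry per (query, key) pair, so n tokens cost O(n²) time and memory. Identity projections stand in for the learned weight matrices to keep the example short.

```python
import numpy as np

def self_attention(x):
    """x: (n, d) token embeddings; returns (n, d).
    Uses identity projections for Q, K, V (real models use learned W_Q, W_K, W_V)."""
    n, d = x.shape
    q, k, v = x, x, x
    scores = q @ k.T / np.sqrt(d)          # (n, n): the quadratic bottleneck
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)   # row-wise softmax
    return weights @ v                     # (n, d) weighted sum of values

x = np.random.default_rng(0).normal(size=(6, 4))    # 6 tokens, 4 dimensions
out = self_attention(x)
```

Doubling the sequence length quadruples the size of the `scores` matrix, which is why long inputs are expensive for Transformers.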
- Guides and explanations
- ALBERT: A Lite BERT for Self-supervised Learning of Language Representations
ALBERT is a parameter-reduced version of BERT, using factorized embedding parameterization and cross-layer parameter sharing.
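A back-of-the-envelope calculation shows the effect of ALBERT's factorized embedding parameterization: BERT ties the embedding size to the hidden size (a V×H matrix), while ALBERT projects a small V×E embedding up to the hidden size via an E×H matrix, with E much smaller than H. The numbers below are BERT-base-like and illustrative (vocab 30000, hidden 768, ALBERT's embedding size 128).

```python
# Embedding parameter counts under the two parameterizations.
V, H, E = 30000, 768, 128      # vocab size, hidden size, ALBERT embedding size

bert_embedding_params = V * H            # single V x H embedding matrix
albert_embedding_params = V * E + E * H  # V x E embedding plus E x H projection

print(bert_embedding_params, albert_embedding_params)
```

With these numbers the embedding parameters drop from about 23M to about 3.9M, roughly a 6x reduction in that component alone.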