Natural language processing (NLP)
Classical NLP
Classical NLP consists of building a pipeline of processors that add annotations to text files.
Below are a few example processors; a minimal pipeline sketch follows the list.
- Tokenization
  - Converts a paragraph of text or a file into an array of words (tokens).
- Part-of-speech annotation
- Named Entity Recognition
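As a concrete illustration of such a pipeline, the sketch below uses the spaCy library, which is not mentioned in the article and is chosen here only as one possible implementation; the model name en_core_web_sm is an assumption and must be downloaded separately.

    import spacy

    # Assumes: pip install spacy && python -m spacy download en_core_web_sm
    nlp = spacy.load("en_core_web_sm")

    doc = nlp("Google released BERT in 2018 to improve search queries.")

    # Tokenization: the Doc object is already a sequence of tokens
    tokens = [token.text for token in doc]

    # Part-of-speech annotation
    pos_tags = [(token.text, token.pos_) for token in doc]

    # Named Entity Recognition
    entities = [(ent.text, ent.label_) for ent in doc.ents]

    print(tokens)
    print(pos_tags)
    print(entities)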
Machine Learning
Datasets and Challenges
SQuAD
Link
The Stanford Question Answering Dataset, a reading-comprehension benchmark. There are two versions of the dataset: 1.1 and 2.0, where 2.0 adds questions that have no answer in the passage. A loading sketch is shown below.
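As one way to obtain the data, this sketch loads SQuAD with the Hugging Face datasets library; the library and the dataset identifiers "squad" and "squad_v2" are assumptions, not part of the original article.

    from datasets import load_dataset

    # SQuAD 1.1 is published as "squad"; SQuAD 2.0 as "squad_v2"
    squad = load_dataset("squad")

    example = squad["train"][0]
    print(example["question"])
    print(example["context"][:200])
    print(example["answers"])  # answer text plus character offset into the context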
Transformer
Attention is all you need paper
A neural network architecture introduced by Google in the "Attention Is All You Need" paper.
It currently achieves state-of-the-art results on many NLP tasks and has largely replaced RNNs for them; the core attention operation is sketched after the guides below.
- Guides and explanations
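To make the architecture concrete, here is a minimal NumPy sketch of the scaled dot-product attention at the heart of the Transformer; the shapes and random inputs are illustrative assumptions, not values from the paper.

    import numpy as np

    def scaled_dot_product_attention(Q, K, V):
        """Core Transformer operation: softmax(Q K^T / sqrt(d_k)) V."""
        d_k = Q.shape[-1]
        scores = Q @ K.T / np.sqrt(d_k)  # similarity of each query to each key
        weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
        weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
        return weights @ V  # weighted sum of the value vectors

    # Toy example: 3 query positions, 4 key/value positions, dimension 8
    rng = np.random.default_rng(0)
    Q = rng.normal(size=(3, 8))
    K = rng.normal(size=(4, 8))
    V = rng.normal(size=(4, 8))
    print(scaled_dot_product_attention(Q, K, V).shape)  # (3, 8)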
Google Bert
Github Link
Paper
Blog Post
BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding
A pretrained NLP neural network that can be fine-tuned for downstream tasks.
Note that the official code is written in TensorFlow 1; a sketch using a third-party wrapper follows.
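Because the official repository targets TensorFlow 1, the sketch below instead loads a pretrained BERT through the Hugging Face transformers library with PyTorch; that library, the model name bert-base-uncased, and the class names are assumptions, not taken from the article.

    from transformers import BertModel, BertTokenizer

    # Assumption: using the Hugging Face wrapper rather than the original TF1 code
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = BertModel.from_pretrained("bert-base-uncased")

    inputs = tokenizer("Natural language processing with BERT.", return_tensors="pt")
    outputs = model(**inputs)

    # One contextual embedding per input token
    print(outputs.last_hidden_state.shape)  # (1, sequence_length, 768)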
Albert
Github (https://github.com/google-research/google-research/tree/master/albert)
A Lite BERT for Self-supervised Learning of Language Representations
This is a parameter-reduced version of BERT.
Libraries
Apache OpenNLP
Link (https://opennlp.apache.org/)