Natural Language Processing projects, which include concepts and scripts about:
- Word2vec: gensim, fastText and tensorflow implementations (a minimal gensim sketch follows this list). See Chinese notes.
- Sentence2vec: doc2vec, word2vec averaging and Smooth Inverse Frequency implementations.
- Text_classification: tensorflow LSTM (see Chinese notes 1 and Chinese notes 2) and fastText implementations.
- Chinese_word_segmentation: HMM Viterbi implementation (see the Viterbi sketch after this list). See Chinese notes.
- Sequence_labeling-NER: brand NER via bi-directional LSTM + CRF, tensorflow implementation. See Chinese notes.
- Machine_reading_comprehension: introduction, BiDAF+ELMo implementation.
- Knowledge_graph: introduction.
- Pretraining_LM: introduction, principles of ELMo, ULMFit, GPT and BERT.
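
The Word2vec and Sentence2vec items above boil down to a few lines of gensim. The sketch below is illustrative only: the toy corpus and the hyper-parameters (`vector_size`, `window`, `sg`) are placeholders rather than the settings used in the project scripts, and it assumes gensim >= 4.0 (older versions use `size` instead of `vector_size`).

```python
# Minimal sketch: train word vectors with gensim, then build a sentence
# vector by word2vec averaging. Toy corpus and hyper-parameters only.
import numpy as np
from gensim.models import Word2Vec

corpus = [
    ["natural", "language", "processing", "is", "fun"],
    ["word", "vectors", "capture", "word", "similarity"],
]

# Skip-gram (sg=1) with a small window; min_count=1 because the corpus is tiny.
model = Word2Vec(sentences=corpus, vector_size=100, window=5, min_count=1, sg=1)

def sentence_vector(tokens, model):
    """Average the vectors of the in-vocabulary tokens (word2vec averaging)."""
    vectors = [model.wv[t] for t in tokens if t in model.wv]
    if not vectors:
        return np.zeros(model.vector_size)
    return np.mean(vectors, axis=0)

print(model.wv.most_similar("word", topn=3))
print(sentence_vector(["language", "processing"], model).shape)  # (100,)
```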
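
The Chinese_word_segmentation item relies on Viterbi decoding over an HMM. Below is a minimal, self-contained sketch of the decoding step, assuming the common B/M/E/S character-tagging scheme; the `start_p`, `trans_p` and `emit_p` dictionaries are stand-ins for whatever parameters the project actually estimates, not values taken from its scripts.

```python
# Minimal Viterbi decoding sketch for an HMM segmenter.
# States follow the common B/M/E/S character-tagging scheme; the model
# parameters are passed in as dicts of probabilities (toy setup).
import math

STATES = ["B", "M", "E", "S"]

def _log(p):
    # Guard against log(0) for unseen transitions/emissions.
    return math.log(max(p, 1e-12))

def viterbi(chars, start_p, trans_p, emit_p):
    """Return the most probable B/M/E/S tag sequence for a character list."""
    # V[t][s]: best log-probability of any tag path ending in state s at step t
    V = [{s: _log(start_p.get(s, 0)) + _log(emit_p[s].get(chars[0], 0)) for s in STATES}]
    back = [{}]  # back[t][s]: best previous state

    for t in range(1, len(chars)):
        V.append({})
        back.append({})
        for s in STATES:
            prev, score = max(
                ((p, V[t - 1][p] + _log(trans_p[p].get(s, 0)) + _log(emit_p[s].get(chars[t], 0)))
                 for p in STATES),
                key=lambda x: x[1])
            V[t][s], back[t][s] = score, prev

    # Trace back from the best final state.
    state = max(STATES, key=lambda s: V[-1][s])
    tags = [state]
    for t in range(len(chars) - 1, 0, -1):
        state = back[t][state]
        tags.append(state)
    return list(reversed(tags))
```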
- Attention == weighted averages
- The attention review 1 and review 2 summarize the attention mechanism into several types:
  - Additive vs Multiplicative attention
  - Self attention
  - Soft vs Hard attention
  - Global vs Local attention
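
A compact way to read "attention == weighted averages": scores between a query and the keys are turned into softmax weights, and the output is the weighted average of the values. The NumPy sketch below is illustrative only (random vectors, no training); the `W` and `v` in the additive branch are made-up stand-ins for the learned parameters of Bahdanau-style attention.

```python
# Attention as a weighted average: scores -> softmax weights -> weighted sum.
import numpy as np

rng = np.random.default_rng(0)
d = 8
query = rng.normal(size=d)          # what we are looking for
keys = rng.normal(size=(5, d))      # what each position offers for matching
values = rng.normal(size=(5, d))    # what each position contributes

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

# Multiplicative (dot-product) attention: score = q . k
mult_scores = keys @ query

# Additive attention (simplified Bahdanau-style): score = v . tanh(W [q; k])
W = rng.normal(size=(d, 2 * d))
v = rng.normal(size=d)
add_scores = np.array([v @ np.tanh(W @ np.concatenate([query, k])) for k in keys])

for name, scores in [("multiplicative", mult_scores), ("additive", add_scores)]:
    weights = softmax(scores)   # attention weights sum to 1
    context = weights @ values  # the weighted average of the values
    print(name, weights.round(3), context.shape)
```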
- Parallelization [1]
  - RNNs
    - Why not good?
      - The last step's output is the input of the current step
    - Solutions
      - Simple Recurrent Units (SRU)
        - Perform parallelization on each hidden-state neuron independently
      - Sliced RNNs
        - Split the sequence into windows, run an RNN inside each window, and run another RNN on top of the windows
        - Same idea as CNNs
  - CNNs
    - Why good?
      - Parallel across the different windows of one filter
      - Parallel across different filters
- Long-range dependency [1]
  - CNNs
    - Why not good?
      - A single convolution can only capture window-range dependency
    - Solutions
      - Dilated CNNs
      - Deep CNNs
        - `N * [Convolution + skip-connection]`
        - For example, with window size 3 and stride 1, the second convolution covers 5 words (i.e., 1-2-3, 2-3-4, 3-4-5)
  - Transformer > RNNs > CNNs
- Position [1]
  - CNNs
    - Why not good?
      - Convolution preserves relative-order information, but max-pooling discards it
    - Solutions
      - Discard max-pooling and use deep CNNs with skip-connections instead
      - Add position embedding, just like in ConvS2S
  - Transformer (self-attention)
    - Why not good?
      - In self-attention, one word attends to the other words and generates the summarization vector without relative-position information
    - Solutions
      - Add position embedding to the input (a brief sketch follows this list)
- Semantic features extraction [2]
  - Transformer > CNNs == RNNs
- [1] Review
- [2] Why self-attention? A targeted evaluation of neural machine translation architectures
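
On the "add position embedding" solution above: ConvS2S learns position embeddings, while the Transformer adds fixed sinusoidal encodings to the input embeddings. The sketch below shows the sinusoidal variant and is a minimal illustration, not a copy of either paper's implementation.

```python
# Sinusoidal position encoding sketch (Transformer-style):
#   PE[pos, 2i]   = sin(pos / 10000^(2i / d_model))
#   PE[pos, 2i+1] = cos(pos / 10000^(2i / d_model))
import numpy as np

def positional_encoding(max_len, d_model):
    positions = np.arange(max_len)[:, None]                 # (max_len, 1)
    dims = np.arange(0, d_model, 2)[None, :]                # (1, d_model/2)
    angles = positions / np.power(10000.0, dims / d_model)  # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe

# Added to the token embeddings so self-attention can see word order.
token_embeddings = np.random.randn(10, 16)  # (sequence length, d_model)
inputs = token_embeddings + positional_encoding(10, 16)
print(inputs.shape)  # (10, 16)
```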
Layer normalization is a normalization method in deep learning that is similar to batch normalization. In layer normalization, the statistics are computed over the features of each individual example, independently of the other examples in the batch, so every input gets its own normalization.
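
A minimal NumPy sketch of this idea (not a drop-in replacement for a framework layer; `gamma` and `beta` would be learned parameters in practice):

```python
# Layer normalization sketch: statistics are computed over the feature
# dimension of each example, so no example depends on the others.
import numpy as np

def layer_norm(x, gamma, beta, eps=1e-5):
    # x: (batch, features); mean/var are per example, across its features
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    x_hat = (x - mean) / np.sqrt(var + eps)
    return gamma * x_hat + beta  # learned scale and shift

x = np.random.randn(4, 6)  # batch of 4 examples, 6 features each
out = layer_norm(x, gamma=np.ones(6), beta=np.zeros(6))
print(out.mean(axis=-1).round(6), out.std(axis=-1).round(3))  # ~0 mean, ~1 std per example
```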
- spaCy
- gensim
- Install tensorflow with one line: `conda install tensorflow-gpu`