Minimal Seq2Seq model with attention for neural machine translation in PyTorch.
This implementation focuses on the following features:
- Modular structure that can be reused in other projects
- Minimal code for readability
- Full utilization of batches and the GPU
This implementation relies on torchtext to minimize dataset management and preprocessing.
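For context, here is a minimal sketch of the kind of torchtext pipeline this implies, using the classic Field/BucketIterator API; the dataset (Multi30k), batch size, and token settings are illustrative assumptions, not taken from this repository.

```python
# Sketch of a torchtext data pipeline (legacy Field/BucketIterator API).
# Multi30k, batch_size, and special tokens are illustrative assumptions.
import spacy
from torchtext.data import Field, BucketIterator
from torchtext.datasets import Multi30k

spacy_de = spacy.load('de')
spacy_en = spacy.load('en')

def tokenize_de(text):
    return [tok.text for tok in spacy_de.tokenizer(text)]

def tokenize_en(text):
    return [tok.text for tok in spacy_en.tokenizer(text)]

# Fields define how raw text is tokenized and numericalized.
SRC = Field(tokenize=tokenize_de, init_token='<sos>', eos_token='<eos>', lower=True)
TRG = Field(tokenize=tokenize_en, init_token='<sos>', eos_token='<eos>', lower=True)

train, val, test = Multi30k.splits(exts=('.de', '.en'), fields=(SRC, TRG))
SRC.build_vocab(train, min_freq=2)
TRG.build_vocab(train, min_freq=2)

# BucketIterator batches sentences of similar length to reduce padding,
# and places batch tensors directly on the GPU.
train_iter, val_iter, test_iter = BucketIterator.splits(
    (train, val, test), batch_size=32, device='cuda')
```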
The model consists of:
- Encoder: bidirectional GRU
- Decoder: GRU with an attention mechanism
- Attention: as described in "Neural Machine Translation by Jointly Learning to Align and Translate" (Bahdanau et al., 2015); a minimal sketch follows this list
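Here is a minimal sketch of additive (Bahdanau-style) attention over the bidirectional encoder's outputs; the module name, dimensions, and tensor layouts are illustrative assumptions rather than this repository's exact code.

```python
# Sketch of additive (Bahdanau-style) attention for the decoder.
# Names and shapes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class Attention(nn.Module):
    def __init__(self, hidden_dim):
        super().__init__()
        # The encoder is bidirectional, so its outputs have 2 * hidden_dim features.
        self.attn = nn.Linear(hidden_dim * 2 + hidden_dim, hidden_dim)
        self.v = nn.Linear(hidden_dim, 1, bias=False)

    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden:  [batch, hidden_dim]
        # encoder_outputs: [src_len, batch, hidden_dim * 2]
        src_len = encoder_outputs.shape[0]
        hidden = decoder_hidden.unsqueeze(0).repeat(src_len, 1, 1)
        # Score each source position against the current decoder state.
        energy = torch.tanh(self.attn(torch.cat((hidden, encoder_outputs), dim=2)))
        scores = self.v(energy).squeeze(2)           # [src_len, batch]
        weights = F.softmax(scores, dim=0)           # normalize over source positions
        # Weighted sum of encoder outputs -> one context vector per batch element.
        context = (weights.unsqueeze(2) * encoder_outputs).sum(dim=0)
        return context, weights
```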
Requirements:
- GPU & CUDA
- Python 3
- PyTorch
- torchtext
- spaCy
- numpy
- Visdom (optional)
Download the spaCy tokenizer models:

```sh
sudo python3 -m spacy download de
sudo python3 -m spacy download en
```
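Assuming a spaCy 2.x install, where `de`/`en` are shortcut names for the default German/English models, a quick way to verify the downloads is:

```python
# Sanity check that the tokenizer models downloaded above load
# (the plain 'de'/'en' shortcut names are a legacy spaCy 2.x convention).
import spacy

for name in ('de', 'en'):
    nlp = spacy.load(name)
    print(name, '->', [tok.text for tok in nlp.tokenizer('Hello world!')])
```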
Based on the following implementations: