---
layout: page
title: Features
---

While OpenNMT initially focused on standard sequence to sequence models applied to machine translation, it has been extended to support many additional models and features. The tables below highlight some of the key features of each implementation.

* TOC
{:toc}

## Tasks

|                          | OpenNMT-py  | OpenNMT-tf |
| ------------------------ | ----------- | ---------- |
| Image to text            | ✓ (v1 only) |            |
| Language modeling        |             |            |
| Sequence classification  |             |            |
| Sequence tagging         |             |            |
| Sequence to sequence     |             |            |
| Speech to text           | ✓ (v1 only) |            |
| Summarization            |             |            |

## Models

|                           | OpenNMT-py  | OpenNMT-tf |
| ------------------------- | ----------- | ---------- |
| ConvS2S                   |             |            |
| DeepSpeech2               | ✓ (v1 only) |            |
| GPT-2                     |             |            |
| Im2Text                   | ✓ (v1 only) |            |
| Listen, Attend and Spell  |             |            |
| RNN with attention        |             |            |
| Transformer               |             |            |

## Model configuration

|                                    | OpenNMT-py | OpenNMT-tf |
| ---------------------------------- | ---------- | ---------- |
| Cascaded and multi-column encoder  |            |            |
| Copy attention                     |            |            |
| Coverage attention                 |            |            |
| Hybrid models                      |            |            |
| Multiple source                    |            |            |
| Multiple input features            |            |            |
| Relative position representations  |            |            |
| Tied embeddings                    |            |            |

## Training

|                                 | OpenNMT-py | OpenNMT-tf |
| ------------------------------- | ---------- | ---------- |
| Automatic evaluation            |            |            |
| Automatic model export          |            |            |
| Contrastive learning            |            |            |
| Data augmentation (e.g. noise)  |            |            |
| Early stopping                  |            |            |
| Gradient accumulation           |            |            |
| Supervised alignment            |            |            |
| Mixed precision                 |            |            |
| Moving average                  |            |            |
| Multi-GPU                       |            |            |
| Multi-node                      |            |            |
| On-the-fly tokenization         |            |            |
| Pretrained embeddings           |            |            |
| Scheduled sampling              |            |            |
| Sentence weighting              |            |            |
| Vocabulary update               |            |            |
| Weighted dataset                |            |            |
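
Several of the training features above are enabled through the run configuration rather than code changes. As a rough illustration, the following sketch uses the OpenNMT-tf Python API to train a base Transformer with early stopping, mixed precision, and multiple GPUs; the file paths and threshold values are placeholders, and the exact option names should be checked against the OpenNMT-tf documentation.

```python
import opennmt

# Placeholder run configuration: the vocabularies and parallel training
# files must exist on disk. The "eval" block enables early stopping on loss.
config = {
    "model_dir": "run/",
    "data": {
        "source_vocabulary": "src-vocab.txt",
        "target_vocabulary": "tgt-vocab.txt",
        "train_features_file": "src-train.txt",
        "train_labels_file": "tgt-train.txt",
        "eval_features_file": "src-val.txt",
        "eval_labels_file": "tgt-val.txt",
    },
    "eval": {
        "early_stopping": {"metric": "loss", "min_improvement": 0.01, "steps": 4},
    },
}

# Base Transformer from the model catalog; auto_config fills in the
# recommended training hyperparameters for this architecture.
model = opennmt.models.TransformerBase()
runner = opennmt.Runner(model, config, auto_config=True, mixed_precision=True)

# Train on 2 GPUs and run evaluation alongside training.
runner.train(num_devices=2, with_eval=True)
```

OpenNMT-py covers similar ground through its YAML configuration and the `onmt_train` command line.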

## Decoding

|                            | OpenNMT-py | OpenNMT-tf |
| -------------------------- | ---------- | ---------- |
| Beam search                |            |            |
| Coverage penalty           |            |            |
| CTranslate2 compatibility  |            |            |
| Ensemble                   |            |            |
| Length penalty             |            |            |
| N-best rescoring           |            |            |
| N-gram blocking            |            |            |
| Phrase table               |            |            |
| Random noise               |            |            |
| Random sampling            |            |            |
| Replace unknown            |            |            |
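
Several of these decoding options are also available when running a converted model with CTranslate2. The snippet below is a minimal sketch of beam search with a length penalty through the CTranslate2 Python API; the model directory and tokens are placeholders, and the result attributes assume a recent CTranslate2 release.

```python
import ctranslate2

# Load a model previously converted with one of the CTranslate2 converters
# (for example ct2-opennmt-py-converter or ct2-opennmt-tf-converter).
translator = ctranslate2.Translator("ende_ctranslate2/", device="cpu")

# Inputs are pre-tokenized sentences (lists of string tokens).
batch = [["▁Hello", "▁world", "!"]]

# Beam search with a length penalty, keeping the 2 best hypotheses.
results = translator.translate_batch(
    batch,
    beam_size=5,
    length_penalty=0.6,
    num_hypotheses=2,
    return_scores=True,
)

for result in results:
    for tokens, score in zip(result.hypotheses, result.scores):
        print(score, " ".join(tokens))
```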

For more details on how to use these features, please refer to the documentation of each project:

* [OpenNMT-py documentation](https://opennmt.net/OpenNMT-py/)
* [OpenNMT-tf documentation](https://opennmt.net/OpenNMT-tf/)