A set of notebooks that explores the power of Recurrent Neural Networks (RNNs), with a focus on LSTM, BiLSTM, seq2seq, and Attention.
Performing machine translation between English and Italian using an encoder-decoder seq2seq model combined with an additive attention mechanism. The encoder is a bidirectional LSTM and the decoder is a single LSTM. The additive attention mechanism was proposed by Bahdanau et al. in their 2015 paper *Neural Machine Translation by Jointly Learning to Align and Translate*. The model is implemented in Keras.
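Below is a minimal sketch of this encoder-decoder layout in tf.keras, using the built-in `AdditiveAttention` layer for the Bahdanau-style scoring. The vocabulary sizes, sequence lengths, and unit counts are illustrative assumptions, not the notebook's actual settings.

```python
# Minimal sketch: BiLSTM encoder + LSTM decoder + additive attention.
# All sizes below are assumed placeholders, not the notebook's settings.
import tensorflow as tf
from tensorflow.keras import layers, Model

SRC_VOCAB, TGT_VOCAB = 10000, 10000   # assumed vocabulary sizes
SRC_LEN, TGT_LEN = 20, 20             # assumed (padded) sequence lengths
EMB_DIM, UNITS = 256, 512             # assumed embedding / LSTM sizes

# Encoder: embedding + bidirectional LSTM over the source sentence.
enc_in = layers.Input(shape=(SRC_LEN,), name="encoder_input")
enc_emb = layers.Embedding(SRC_VOCAB, EMB_DIM, mask_zero=True)(enc_in)
enc_out, fh, fc, bh, bc = layers.Bidirectional(
    layers.LSTM(UNITS, return_sequences=True, return_state=True))(enc_emb)
# Concatenate forward/backward states to initialize the decoder LSTM.
state_h = layers.Concatenate()([fh, bh])
state_c = layers.Concatenate()([fc, bc])

# Decoder: embedding + LSTM, fed the shifted target (teacher forcing).
dec_in = layers.Input(shape=(TGT_LEN,), name="decoder_input")
dec_emb = layers.Embedding(TGT_VOCAB, EMB_DIM, mask_zero=True)(dec_in)
dec_out, _, _ = layers.LSTM(
    2 * UNITS, return_sequences=True, return_state=True)(
    dec_emb, initial_state=[state_h, state_c])

# Additive (Bahdanau-style) attention over the encoder outputs.
context = layers.AdditiveAttention()([dec_out, enc_out])
concat = layers.Concatenate()([dec_out, context])
probs = layers.Dense(TGT_VOCAB, activation="softmax")(concat)

model = Model([enc_in, dec_in], probs)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```

At inference time the decoder would instead run one step at a time, feeding each predicted token back in; the sketch above shows only the teacher-forced training graph.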
The word feature vector data is retrieved from nlp.stanford.edu and is approximately 350MB in size. The English-Italian tab-delimited sentence pairs, originally collected for the Tatoeba Project, are retrieved from manythings.org and are approximately 50MB in size. Training takes under 2 hours for 35 epochs.
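A short sketch of loading the two files follows. It assumes the word vectors use the GloVe-style plain-text format (one word followed by its floats per line) and that the pairs file is the tab-delimited download from manythings.org; the file paths are placeholders.

```python
# Sketch of loading the pretrained vectors and the sentence pairs.
# File names/formats are assumptions about the downloads described above.
import numpy as np

def load_word_vectors(path):
    """Parse GloVe-style lines ("word f1 f2 ...") into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.asarray(parts[1:], dtype=np.float32)
    return vectors

def load_sentence_pairs(path):
    """Parse tab-delimited lines into (english, italian) pairs."""
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            cols = line.rstrip("\n").split("\t")
            if len(cols) >= 2:   # files may carry an extra attribution column
                pairs.append((cols[0], cols[1]))
    return pairs
```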
Generating music using a Recurrent Neural Network (RNN) model with LSTM and dense layers, implemented in TensorFlow/Keras and trained separately on 10 of Frédéric Chopin's waltzes (valses) and 9 of his nocturnes (see the sketch after the data list below).
- 10 Valse/Waltz pieces in MIDI: kunstderfuge.com
- 9 Nocturne pieces in MIDI: 8notes.com
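The sketch below shows one plausible shape for such a model: stacked LSTMs over a window of previous notes, with dense layers predicting the next note. It assumes the MIDI files have already been parsed into integer note/chord IDs (music21 is a common choice for this, but that is an assumption here), and the window length and vocabulary size are placeholders.

```python
# Sketch of a next-note prediction model; sizes are assumed placeholders.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, Sequential

SEQ_LEN = 50     # assumed window: predict from the 50 previous notes
N_NOTES = 128    # assumed number of distinct note/chord IDs

model = Sequential([
    layers.Input(shape=(SEQ_LEN, 1)),
    layers.LSTM(256, return_sequences=True),
    layers.LSTM(256),
    layers.Dense(128, activation="relu"),
    layers.Dense(N_NOTES, activation="softmax"),  # next-note distribution
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

def generate(model, seed, n_steps=200):
    """Autoregressive sampling from a seed window of note IDs."""
    seq = list(seed)  # seed must contain at least SEQ_LEN note IDs
    for _ in range(n_steps):
        # Inputs scaled to [0, 1], matching a common training convention.
        x = np.array(seq[-SEQ_LEN:], dtype=np.float32) / N_NOTES
        probs = model.predict(x.reshape(1, SEQ_LEN, 1), verbose=0)[0]
        probs /= probs.sum()  # guard against float rounding in the softmax
        seq.append(int(np.random.choice(N_NOTES, p=probs)))
    return seq[len(seed):]
```

Sampling from the softmax (rather than taking the argmax) keeps the generated piece from looping on a single repeated phrase.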
Listen to one of the waltz pieces generated by the model.