This chapter will show the importance of the continuous latent space of Variational Autoencoders (VAEs) for music generation, compared to standard Autoencoders (AEs). We'll use the MusicVAE model, a hierarchical recurrent VAE from Magenta, to sample sequences and then interpolate between them, effectively morphing smoothly from one to another. We'll then see how to add groove, or humanization, to an existing sequence using the GrooVAE model. We'll finish by looking at the TensorFlow code used to build the VAE model.
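Because a VAE's latent space is continuous, two sequences can be morphed by encoding both, interpolating between their latent vectors, and decoding each intermediate point. Here is a minimal sketch of that interpolation step using plain NumPy vectors as stand-ins for encoder outputs (this is illustrative only, not the actual MusicVAE model or API):

```python
import numpy as np

def lerp(z_start, z_end, num_steps):
    """Linearly interpolate between two latent vectors.

    Returns an array of shape (num_steps, latent_dim) whose first row
    is z_start and last row is z_end; a VAE decoder would turn each
    intermediate row back into a note sequence.
    """
    alphas = np.linspace(0.0, 1.0, num_steps)
    return np.array([(1 - a) * z_start + a * z_end for a in alphas])

# Toy latent vectors standing in for two encoded drum sequences.
z_a = np.zeros(4)
z_b = np.ones(4)
steps = lerp(z_a, z_b, num_steps=5)
print(steps[2])  # midpoint: [0.5 0.5 0.5 0.5]
```

Decoding each of the five rows would yield a sequence of five gradually morphing patterns, which is what MusicVAE's interpolation produces.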
Before you start, follow the installation instructions for Magenta 1.1.7.
This example shows how to sample, interpolate and humanize a drums sequence using MusicVAE and various configurations. For the Python script, run the following while in the Magenta environment (conda activate magenta):
# Runs the example; the output files (plot, MIDI) will be in the "output" folder
python chapter_04_example_01.py
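Conceptually, the "humanize" step shifts note timings away from the rigid quantization grid, the way a real drummer plays slightly ahead of or behind the beat. The toy sketch below imitates that idea with simple random jitter; it is a stand-in for intuition only, not the learned micro-timing that GrooVAE actually applies:

```python
import random

def humanize(onsets_seconds, max_shift=0.02, seed=42):
    """Nudge each quantized onset by a small random offset (in seconds),
    a crude stand-in for the learned groove GrooVAE applies."""
    rng = random.Random(seed)
    return [t + rng.uniform(-max_shift, max_shift) for t in onsets_seconds]

# Quarter-note onsets on a rigid grid at 120 BPM (one beat every 0.5 s).
quantized = [0.0, 0.5, 1.0, 1.5]
humanized = humanize(quantized)
```

GrooVAE goes further than this: it also adjusts velocities and learns its timing offsets from recordings of real drummers, rather than drawing them at random.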
For the Jupyter notebook:
jupyter notebook notebook.ipynb
This example shows how to sample and interpolate a melody sequence using MusicVAE and various configurations. For the Python script, run the following while in the Magenta environment (conda activate magenta):
# Runs the example; the output files (plot, MIDI) will be in the "output" folder
python chapter_04_example_02.py
This example shows how to sample a trio (drums, melody, bass) sequence using MusicVAE and various configurations. For the Python script, run the following while in the Magenta environment (conda activate magenta):
# Runs the example; the output files (plot, MIDI) will be in the "output" folder
python chapter_04_example_03.py
# On Linux, plays the most recent MIDI file in the output/groove folder
fluidsynth -a pulseaudio -g 1 -n -i /usr/share/sounds/sf2/FluidR3_GM.sf2 $( ls -t output/groove/*.mid | head -n1 )
TODO add call examples for the utils note_sequence_utils.py functions