Suggestion: Adding a tutorial for using your own models in the plugins #54
This would be a great contribution for someone to add! Here are the high-level instructions. First, convert your TF Python checkpoint to one compatible with TFJS using the instructions here: https://github.com/magenta/magenta-js/tree/master/music#your-own-checkpoints

Next, update the checkpoint URL referenced in magenta-studio/continue/Model.js (line 30 at commit aaf3282) to point to your converted checkpoint; see the sketch below.

Finally, follow the instructions in the README to rebuild the apps: https://github.com/magenta/magenta-studio

Hope this helps!
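A minimal sketch of that Model.js change, assuming the model is constructed from a checkpoint URL string (the variable names here are placeholders, not the actual code in Model.js):

```js
// Sketch only: swap the hosted Magenta checkpoint URL for your own
// converted TFJS checkpoint. In an Electron app a file:// URL may work;
// otherwise serve the folder over HTTP and use that URL instead.
const {MusicVAE} = require('@magenta/music');

// Folder produced by the TFJS checkpoint converter, containing the
// converted weights plus a config.json (path is a placeholder).
const CHECKPOINT_URL = 'file:///absolute/path/to/my_converted_checkpoint';

async function loadModel() {
  const model = new MusicVAE(CHECKPOINT_URL);
  await model.initialize();  // must resolve before calling sample()/encode()
  return model;
}
```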
Thank you for your response, Adam! I tried digging around and could not find the exact configurations of the models used by Drumify. I trained a MusicVAE model using a configuration based on "groovae_2bar_hits_control_tfds", except with a maximum sequence length of 16 bars (my data has many performances of that length and I want to retain that complexity), no split_bars, tapify=True, and my own drum pitch class mapping to match the data (not as simplified as the Roland drum pitch classes, but not as extensive as the full drum pitch classes either).

I trained it on over 8000 MIDI files, ranging from 2-4 bars all the way up to 16 bars, creating a NoteSequence file of over 18 MB. The loss went from 3000 all the way down to almost 84 overnight.

When I tried to create samples with the MusicVAE generation script, it gave me "ValueError: Cannot feed value of shape () for Tensor 'Placeholder_2:0', which has shape '(?, 43)'", which I have seen elsewhere happens when you try to generate with a multi-track model in MusicVAE.

Will a model like this still work if I plug it into the Drumify plugin? And should I train a few more models with maximum sequence lengths of 1, 2, 3, 4 bars and so on, or will a single model trained on sequences of up to 16 bars suffice?
This is about as far as I can go, given that this is my first foray into Node.js: I tried building the plugins, and there is no "dist" folder to be found anywhere. I hope the attached text file containing the full details of my npm install will help.

I really hope to get this working someday with my own data. I appreciate what the plugin can do for me as a guitarist who wants to spend more time coming up with melodies and riffs instead of programming drum patterns or scouring through 8000 MIDI files to find one that suits a particular part I am playing.
After many challenges, here's the furthest I've gotten after loading my trained model into the plugin, building it, and attempting to use it in Ableton Live Suite. Every other part of Magenta Studio works fine, since I am not changing any of the other models; for testing, I stripped things down to my best guess at the groovae_tap2drum_4bar model and disabled the other three models. First, the config map for music_vae: TOONTRACK_REDUCED_DRUM_PITCH_CLASSES is simply an expanded copy of ROLAND_DRUM_PITCH_CLASSES, adapted to my dataset; the network still only reads 9 drums out of the 59 different hits. I also use a config.json file in the model's folder.
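That file follows the shape magenta-js expects for custom checkpoints. A minimal sketch with illustrative values (numSteps of 64 is my guess for a 4-bar model at 4 steps per quarter, and the exact GrooveConverter argument names should be checked against the magenta-js docs):

```json
{
  "type": "MusicVAE",
  "dataConverter": {
    "type": "GrooveConverter",
    "args": {
      "numSteps": 64,
      "stepsPerQuarter": 4,
      "tapify": true,
      "splitInstruments": false
    }
  }
}
```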
After building the plugin in Max 8 and trying to run it in Ableton Live Suite 11, I get this error message:
Am I configuring the network wrong prior to training? I got the same error when I trained with the full 59 separate drum hits (which produced a slightly larger checkpoint), so I tried going back to what was done originally. It would be a great help to know the exact configuration used for the Drumify and Groove models, and better still, what could be done to let Drumify use a more advanced network that is trained to recognize all 59 different hits and generate them during inference. Anyone's help would be greatly appreciated! This project means a lot to me.
Hey Riley, cool to hear that you're working on this! A few things:
Hope that helps!
Jon
@jrgillick, thank you so much for your help. I was able to get my models working with the plugin. However, here are some issues I encountered while following the instructions Adam provided:
I am pleased with what the model can do at the moment. I hope my journey helps the continued development of this project and helps it evolve into something that finds more and more use in the workflows of many musicians. Thank you all for your continued help!
Hey Riley, glad to hear you got your models working with the plugin, and thanks for the installation tips.
I've been digging around in the code, and I gather that I must convert a satisfactory checkpoint into one suitable for magenta.js. However, since I have no experience working in JS, what is the process for using a locally hosted model in one of the plugins? My end goal is to use Drumify with a model trained on data of my own choosing, so the output is more suitable to my style of music.
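From reading the magenta-js docs, my rough understanding is that loading a local model would look something like the sketch below (the local server command, URL, and port are placeholders, and I may be missing plugin-specific steps):

```js
// Rough sketch: serve the converted checkpoint folder locally, e.g.
//   npx http-server ./my_checkpoint -p 8080
// then point the model at that URL (URL and port are placeholders).
const mm = require('@magenta/music');

async function smokeTestLocalModel() {
  const model = new mm.MusicVAE('http://127.0.0.1:8080');
  await model.initialize();
  // sample() is just a smoke test that the checkpoint loads and runs;
  // Drumify itself goes through its own conversion/inference path.
  const sequences = await model.sample(1);
  console.log('generated', sequences[0].notes.length, 'notes');
}
```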