I'd like to try using this kind of thing to build an automated Beat Saber mapper. The ability to orchestrate the beats very specifically would make for excellent mappings.
Awesome, I've always wanted to make something like this myself but never got around to doing it!
It would be really neat if someone combined this with a very simple sequencer/tracker for creating and manipulating tunes, which could then be dropped (sequencer + synthesizer) directly into projects. It could be a quick and easy way to add music to small-scale retro game projects, for example.
Yes, please! This would be a really neat tool. There are already some Ableton devices that are similar. Makes me want to investigate how to build a device.
I had an idea for a similar project. Basically, a tabletop drum kit. In this case, you’d tap on a few objects on the table that make distinct sounds and then map that back to a virtual drum kit in your DAW using midi. I never got around to building it, but I am still keen on the idea.
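The mapping half of that idea is pretty simple once you have a classifier labeling each tap. A minimal sketch (the object labels and the classifier are hypothetical; the note numbers are the standard General MIDI percussion map, where channel 10 uses 36 = kick, 38 = snare, 42 = closed hi-hat):

```python
# Map a classified tap sound to a General MIDI drum note and build the
# raw note-on message a DAW would accept on the percussion channel.
GM_DRUM_MAP = {
    "table_knock": 36,  # bass drum
    "mug_tap": 42,      # closed hi-hat
    "book_slap": 38,    # snare
}

def tap_to_midi(label: str, velocity: int = 100) -> bytes:
    """Build a raw note-on message on MIDI channel 10 (status byte 0x99)."""
    note = GM_DRUM_MAP[label]
    return bytes([0x99, note, velocity])

print(tap_to_midi("book_slap").hex())  # note-on for the snare: "9926 64"
```

The hard part is the audio classification, not the MIDI side.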
I am actually very interested in its application to musical patterns, i.e. the actual notes rather than the audio. I think there's already a tool that uses this to generate rich, musically correct MIDI on the fly, but I'm having trouble remembering the name/manufacturer now. Future Retro, maybe.
It would be nice if the output MIDI files were broken into verse, chorus, bridge, etc. instead of one big MIDI file per track. It would also be nice if the drums were MIDI instead of rendered audio.
If you ever make an API, I'd be interested in integrating it into some of my software. (I've already said that in previous threads you've posted)
I would love to see a MIDI interpreter for something like this. MIDI is already a series of note events (among other weird things like pitch bend and sustain), so it couldn't be too hard.
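It really is just note events at the byte level. A minimal decoder for the common three-byte channel voice messages (ignoring running status, sysex, and the other weird things):

```python
def decode_event(msg: bytes):
    """Decode a 3-byte MIDI channel voice message.

    The status byte's high nibble is the message type (0x8 = note-off,
    0x9 = note-on, 0xE = pitch bend); its low nibble is the channel.
    The two data bytes are 0-127 (note/velocity, or bend LSB/MSB).
    """
    status, data1, data2 = msg
    kind = {0x8: "note_off", 0x9: "note_on", 0xE: "pitch_bend"}.get(
        status >> 4, "other"
    )
    return kind, status & 0x0F, data1, data2

# Middle C (note 60) played at velocity 100 on channel 0:
print(decode_event(bytes([0x90, 60, 100])))  # ('note_on', 0, 60, 100)
```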
I'm working on a project that maps the steam controller as a synthesizer and midi controller, called couchsynth. I'm also working on a textfile-based music sequencer called textbeat (https://github.com/flipcoder/textbeat).
Definitely. It'd be cool if someone could get the text-to-speech synthesizer to follow a single BPM and time signature. Then throw some drum loops and ambient sounds over that, switching them up every file.
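The "follow a single BPM" part is mostly a matter of snapping each onset to a beat grid. A sketch, assuming 120 BPM and a sixteenth-note grid (both arbitrary choices here):

```python
def quantize(t: float, bpm: float = 120.0, division: int = 4) -> float:
    """Snap a time in seconds to the nearest grid step.

    A beat lasts 60/bpm seconds; division=4 subdivides each beat into
    four steps (a sixteenth-note grid at the given BPM).
    """
    step = 60.0 / bpm / division
    return round(t / step) * step

# At 120 BPM a sixteenth is 0.125 s, so 0.26 s snaps to 0.25 s:
print(quantize(0.26))
```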
This is something that has crossed my mind recently... Did you reach any conclusions/results? How did you layer the different MIDI tracks? It would be pretty cool to build a VSTi or AU with some AI implemented...
Alas so many projects, too little time!