Good afternoon all - hope everybody is doing well.
A quick rundown - I took a module in Processing at uni years ago and thoroughly enjoyed my time with it. I haven't done any sort of programming since, but I really want to get back into creative coding.
Essentially, the ultimate goal is to create live abstract visualisations of audio (the audio being part pre-programmed and part live) and project it onto myself whilst 'gigging'. Alongside learning Processing, I'll be learning Ableton as I'm bored of Cubase and from what I've heard Ableton is a lot more flexible.
So I guess the main question is this... Is it possible/relatively painless to route multiple tracks of audio from Ableton into Processing and then use data from those tracks to trigger visuals? I'm guessing I'd need some sort of virtual audio device (a loopback driver) to send the audio to both the speakers and Processing, which would pick it up and analyse it from there? I know it's possible to do this with MIDI data, so I assume it should be much the same for audio?
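For what it's worth, once the audio does reach Processing via a virtual device, the triggering side usually boils down to measuring the level of each incoming buffer and firing an event when it crosses a threshold. Here is a minimal sketch of that idea in plain Python (the `rms` and `should_trigger` names are just illustrative, not from any particular library); in Processing itself, the Sound library's `Amplitude` object or Minim's `AudioInput`/FFT classes do the equivalent work for you.

```python
import math

def rms(buffer):
    """Root-mean-square level of one audio buffer (samples in [-1, 1])."""
    return math.sqrt(sum(s * s for s in buffer) / len(buffer))

def should_trigger(buffer, threshold=0.3):
    """Fire a visual event when the buffer's level crosses the threshold."""
    return rms(buffer) >= threshold

# A quiet buffer stays below the threshold; a loud one crosses it.
quiet = [0.01 * math.sin(i * 0.1) for i in range(512)]
loud = [0.8 * math.sin(i * 0.1) for i in range(512)]
print(should_trigger(quiet), should_trigger(loud))  # False True
```

Per-track triggering then just means running this per channel of the virtual device; swapping the RMS for an FFT band energy gives frequency-selective triggers (e.g. kick vs. hi-hat) instead of an overall level.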
Also, to note: I'm aware that since I'll be learning Ableton anyway, I could probably just use MaxMSP/Jitter and make life easier for myself. The aim, though, is to teach myself to code - I don't mind jumping through a few hoops to get it up and running, as long as my CPU doesn't explode under the load...
Many thanks all anyway, have a good evening!
Comments
Lots of Processing users have worked with Ableton, including DJ visualization work, e.g.:
See also node-based IDEs:
In particular, if you are interested in the MaxMSP/Jitter paradigm, you might want to try PraxisLive:
From the announcement:
That's perfect - many thanks Jeremy, some good reading to get involved with there :-)