Perhaps "loadSample" spins off a thread to load the audio, and "trigger" forces it to go in real time. In this case, the buffer would exist to say, "If you can't wait for the audio to load, I'll play it back in real time." But then, wouldn't AudioSample would be a pointless and redundant class?
I've been looking at ANAR, and it seems a much more logical way of approaching geometry than the native Processing model. But it looks like ANAR replaces much of Processing itself, which makes me wary of using it. How would I use other libraries, etc.?
Likewise, the Peasy camera takes a similar approach to ANAR, and it makes my life much easier.
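For what it's worth, this is about all Peasy asks of a sketch (the distance of 100 is arbitrary):

import peasy.*;

PeasyCam cam;

void setup() {
  size(400, 400, P3D);
  // wraps the camera without replacing the rest of the Processing API
  cam = new PeasyCam(this, 100);  // 100 = starting distance from the look-at point
}

void draw() {
  background(0);
  box(50);  // mouse-drag orbit, zoom, and pan come for free
}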
Why does Processing use the stack-based model? (I'm sure there must be some advantage to it.) I've written some code using it, but I feel like a procedural programmer working in LISP.
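To illustrate, placing two boxes means bracketing each one in pushMatrix()/popMatrix() instead of just handing box() a position:

void setup() {
  size(400, 400, P3D);
}

void draw() {
  background(0);
  // save the coordinate system, transform, draw, restore
  pushMatrix();
  translate(100, 200, 0);
  rotateY(PI / 4);
  box(40);
  popMatrix();  // back to the untransformed coordinate system

  pushMatrix();
  translate(300, 200, 0);
  box(40);
  popMatrix();
}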
Does anyone know of a simple, mature wavelet library, preferably one that can be called from Processing? Specifically, I'm looking for a library that allows access to and reconstruction of audio data (by time and amplitude, separated into frequency bands), much as Minim's FFT API does.
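For comparison, here's the kind of analyze/modify/reconstruct round trip I mean, using Minim's FFT; I'd want the wavelet equivalent of this per-band access and resynthesis (band index 10 is arbitrary):

import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioInput in;
FFT fft;

void setup() {
  minim = new Minim(this);
  in = minim.getLineIn();
  // FFT wants a power-of-two buffer size and the sample rate
  fft = new FFT(in.bufferSize(), in.sampleRate());
}

void draw() {
  fft.forward(in.mix.toArray());  // analyze the current audio buffer
  fft.setBand(10, 0);             // per-band access: zero one frequency band
  float[] resynth = new float[fft.timeSize()];
  fft.inverse(resynth);           // reconstruct time-domain samples
}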