GSoC Proposal Concept - p5.js.Sound

edited March 2017 in Summer of Code 2017

Hello-- I am interested in working on p5.js.sound. I would break my idea down into three stages (I am not yet sure if it is feasible to propose all three as one summer project). Any thoughts and feedback would be greatly appreciated!

The full proposal can be viewed here:

https://docs.google.com/document/d/1fC-ZSMrVdsF4MUCxuMoRKNp7_Jx_iXjqxSJM9oT0eC8/edit?usp=sharing

1) Develop EQ and Compression objects (based on Web Audio nodes), and potentially add 3D panning to the SoundFile object. (A rough sketch of the underlying Web Audio nodes follows this list.)

2) Design presets for effects and create Instruments (presets based on Oscillators)

3) A library of modules for algorithmic composition. Programmers would be able to adjust parameters of these modules to generate evolving compositions.
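
To make stage 1 concrete, below is a minimal sketch of the Web Audio building blocks a p5.Compressor or p5.EQ would wrap. Those class names are hypothetical; only getAudioContext(), p5.Oscillator, and the standard DynamicsCompressorNode / BiquadFilterNode calls shown here exist today, and a future class would hide this node plumbing behind a constructor and a few setters.

```javascript
// Minimal sketch of stage 1 built directly on Web Audio nodes.
// p5.Compressor / p5.EQ do not exist yet; this only shows the raw
// nodes they would wrap. getAudioContext() is part of p5.sound.
let osc;

function setup() {
  createCanvas(200, 200);
  const ctx = getAudioContext();

  osc = new p5.Oscillator();
  osc.setType('sawtooth');
  osc.disconnect();                 // detach from p5's master output

  // DynamicsCompressorNode: the core of a future p5.Compressor
  const compressor = ctx.createDynamicsCompressor();
  compressor.threshold.value = -30; // dB
  compressor.ratio.value = 12;

  // One peaking filter: a single band of a future multi-band EQ
  const band = ctx.createBiquadFilter();
  band.type = 'peaking';
  band.frequency.value = 1000;      // Hz
  band.gain.value = -6;             // cut 6 dB around 1 kHz

  osc.connect(compressor);          // p5.Oscillator.connect() accepts Web Audio nodes
  compressor.connect(band);
  band.connect(ctx.destination);
  osc.start();
}
```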

I think these ideas make sense as a proposal for Processing because they fit the goal that the language should make things appear on the screen as quickly as possible. These additions to the sound library will be useful for musicians and non-musicians alike: the former will have a fuller toolset to play with while creating visuals, and the latter will be able to easily produce interesting, non-repetitive sounds as they experiment.

I kept the descriptions brief but let me know if anything is unclear! Many thanks in advance.

Comments

  • Hi jvntf!

    This is sounding great.

    EFFECTS

    I like that you've chosen to start with effects. Some of the basic building blocks are already part of the Web Audio API, like the DynamicsCompressorNode and spatial panner (as you've already mentioned). So the challenge comes in planning out how to present these new features to users through a well-thought-out API, documentation, and examples.

    For example, what will be the relationship between p5.Filter and the new 8-band EQ? Should the stereo panner (which already powers the .pan() method) be made available alongside the 3D panner?

    You also mentioned creating a new class, p5.Effect. I think that's a great idea—inheritance would help ensure that all p5 effects have a similar API. But I don't see this reflected in your timeline. It might be a good place to start, before adding new effects.
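
    For what it's worth, here is a rough sketch (not existing p5.sound code) of what that shared base class could look like: a pair of gain nodes as input and output, a common connect/disconnect surface, and subclasses that only insert their own nodes in between. All names here are placeholders.

    ```javascript
    // Hypothetical Effect base class; these names are not current
    // p5.sound API. Pass in an AudioContext to construct.
    class Effect {
      constructor(ctx) {
        this.ctx = ctx;
        this.input = ctx.createGain();   // sources connect here
        this.output = ctx.createGain();  // this connects onward
      }
      connect(unit) {
        // accept either a raw AudioNode or a node-like object with .input
        this.output.connect(unit.input ? unit.input : unit);
      }
      disconnect() {
        this.output.disconnect();
      }
    }

    // A compressor subclass only wires its node between input and
    // output; an EQ, delay, or reverb would follow the same pattern.
    class Compressor extends Effect {
      constructor(ctx) {
        super(ctx);
        this.compressor = ctx.createDynamicsCompressor();
        this.input.connect(this.compressor);
        this.compressor.connect(this.output);
      }
    }
    ```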

    PRESETS

    This is a great idea.

    How would users find and access the presets? Some possible inspiration:

    • Wad presets

    • ToneJS presets

    Check out these libraries if you haven't already. p5.sound uses some Tone.JS components under the hood.

    ALGORITHMIC COMPOSITION

    There are a few foundational features that I think need to be covered before p5.sound can really support algorithmic composition:

    • Musical timing. The p5 draw loop is not accurate enough for music, and timing is tricky in Web Audio; this article on look-ahead scheduling comes up a lot: https://www.html5rocks.com/en/tutorials/audio/scheduling/ (a minimal sketch of that pattern follows this list). The p5.Part is most often used as a looper / metronome, and it might be more useful to offer that as its own class. The p5.Part was inspired by this book, but I don't think I got it quite right. Andrew Brown, one of the book's authors, gave me some ideas for how to make it better, and this is something I'd like to spend some time on soon.

    • Instruments. We need an easy way to play a note. It sounds like this could be covered by your proposed instrument library, so I'm curious to hear more details about what you have in mind. I think it's important to keep it simple before getting into the timbre of different instruments. Check out the great work of b2renger towards a p5.Synth class; he has put a lot of thought into this and it might be a good jumping-off point: https://github.com/processing/p5.js-sound/issues/47
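
    For reference, here is the look-ahead pattern from that article in its smallest form. Plain Web Audio is used so the two clocks stay visible; this is only a sketch of the approach, not a proposal for the p5.Part API.

    ```javascript
    // A coarse JS timer wakes up every 25 ms and schedules any note
    // that falls inside the next 100 ms on the precise audio clock.
    // (Some browsers keep the context suspended until a user gesture.)
    const ctx = new AudioContext();
    const lookahead = 0.1;       // seconds of audio scheduled ahead
    const timerInterval = 25;    // ms between scheduler wake-ups
    const secondsPerBeat = 0.5;  // 120 bpm
    let nextNoteTime = ctx.currentTime;

    function playNoteAt(time) {
      const osc = ctx.createOscillator();
      osc.frequency.value = 440;
      osc.connect(ctx.destination);
      osc.start(time);           // sample-accurate start
      osc.stop(time + 0.1);      // short blip
    }

    function scheduler() {
      while (nextNoteTime < ctx.currentTime + lookahead) {
        playNoteAt(nextNoteTime);
        nextNoteTime += secondsPerBeat;
      }
      setTimeout(scheduler, timerInterval);
    }
    scheduler();
    ```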

    From there, I wonder to what extent algorithmic composition should be handled by examples vs. included as part of the library? Should the algorithms be implemented specifically for sound? Or could they be implemented in a way that allows them to be used in visuals, text, or other mediums? These are open ended questions.
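
    On the "sound-specific or not" question, one possibility is to keep the generative modules ignorant of audio entirely, so the same module can drive pitches, positions, or text. A toy sketch of that idea (all names made up):

    ```javascript
    // A generator like this knows nothing about audio: it just walks
    // through indices. A sketch could map the output to scale degrees,
    // to x/y positions, or to characters.
    function makeRandomWalk(size, start) {
      let position = start;
      return function step() {
        // move -1, 0, or +1, clamped to [0, size - 1]
        position += Math.floor(Math.random() * 3) - 1;
        position = Math.min(size - 1, Math.max(0, position));
        return position;
      };
    }

    // Sound mapping: indices into a pentatonic scale (MIDI notes)...
    const scale = [60, 62, 64, 67, 69];
    const nextDegree = makeRandomWalk(scale.length, 2);
    const nextMidiNote = () => scale[nextDegree()];

    // ...or a visual mapping from the same module, e.g.:
    // const nextX = makeRandomWalk(width, width / 2);
    ```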

  • hi @talkscheap

    Thanks for your feedback! I have been working more on this and have a couple of changes based on your comments. The number of p5 audio effects is greater than I had originally thought, so re: the abstract effect class, I think it makes most sense to let it be an interface for applying effects to any sound (MIDI or soundfile). It would be similar to the relationship between p5.Oscillator and p5.SinOsc, in that the latter is a more direct way to create what you want. An abstract class will be even more useful alongside the effects preset library, because everything will be reachable from a single call to a p5.Effect constructor. I think this structure will make it more intuitive to combine effects to create new or complicated ones. I see p5.Filter and a potential p5.EQ as being on the same plane: both are subclasses of Effect built on the Web Audio filter node, and the latter has the option to utilize the former (lo-cut, hi-cut, etc.).
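
    To make the lo-cut / hi-cut idea concrete, here is a sketch that approximates part of that behavior today by chaining two documented p5.Filter types; p5.EQ itself is still hypothetical, and a real multi-band version would insert peaking bands in between.

    ```javascript
    // Rough sketch: a highpass (lo-cut) feeding a lowpass (hi-cut),
    // using the existing p5.Filter class.
    let osc, loCut, hiCut;

    function setup() {
      osc = new p5.Oscillator();
      osc.setType('sawtooth');
      osc.disconnect();            // route through the filters instead

      loCut = new p5.Filter('highpass');
      loCut.freq(80);              // remove rumble below 80 Hz
      loCut.disconnect();          // will feed the next filter, not master

      hiCut = new p5.Filter('lowpass');
      hiCut.freq(8000);            // tame everything above 8 kHz
                                   // (hiCut stays wired to master output)
      osc.connect(loCut);
      loCut.connect(hiCut);
      osc.start();
    }
    ```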

    re: presets, I am not sure whether Tone.js's JSON files are a better or worse option than Wad.js's variable architecture. Looking ahead, it is probably more helpful to build a library of JSON files, mainly so everything stays well organized.
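
    If the presets do end up as JSON, they might look something like the sketch below. The file shape and the applyPreset() helper are invented for illustration, but setType(), setADSR(), and freq() are real p5.sound calls.

    ```javascript
    // Hypothetical preset format: plain JSON describing an oscillator,
    // an envelope, and a filter, applied to real p5.sound objects.
    const preset = {
      "name": "soft-pluck",
      "oscType": "triangle",
      "envelope": { "attack": 0.01, "decay": 0.2, "sustain": 0.1, "release": 0.4 },
      "filter":   { "type": "lowpass", "freq": 1200 }
    };

    function applyPreset(p, osc, env, filter) {
      osc.setType(p.oscType);
      env.setADSR(p.envelope.attack, p.envelope.decay,
                  p.envelope.sustain, p.envelope.release);
      filter.setType(p.filter.type);
      filter.freq(p.filter.freq);
    }
    ```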

    re: timing / composition, for playing notes I think we can use envelopes to shape notes and rhythms from a waveform. That way an Instrument can be told to play pitch X for Y seconds, starting at time Z. This requires (as you say) an accurate clock for knowing when playback started and when it is time to play the next note.
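
    Roughly what that could look like with the current p5.Oscillator and p5.Env (renamed p5.Envelope in later releases): playNote() is a made-up helper, not library API, and the "time Z" values below are seconds from now rather than absolute clock time.

    ```javascript
    // Sketch of "play pitch X for Y seconds starting at time Z".
    let osc, env;

    function setup() {
      osc = new p5.Oscillator('triangle');
      env = new p5.Env();
      env.setADSR(0.01, 0.1, 0.5, 0.3);
      env.setRange(0.8, 0);      // attack level, release level
      osc.amp(env);              // envelope controls the oscillator's amplitude
      osc.start();

      // a small two-note phrase, counted in seconds from "now"
      playNote(220, 0.5, 0);     // A3 for half a second, immediately
      playNote(330, 0.5, 1.0);   // E4 for half a second, one second later
    }

    function playNote(freq, dur, when) {
      osc.freq(freq, 0, when);   // jump to the pitch at time `when`
      env.play(osc, when, dur);  // trigger the envelope at the same time
    }
    ```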

    re: instruments, cool! My ideas were fairly similar to what b2renger seems to have already implemented: instruments would just be combinations of the existing p5 effects applied to the raw waveforms from an oscillator. A good resource that I will consult is RTcmix, in which everything is implemented with the above timing scheme: counting onward from time 0.
