Hello, I am Utpal from MNNIT Allahabad. I have been following Processing for a while now and have filed 2 issues and opened 2 pull requests (although they don't affect the code in a major way). I would be interested in contributing to processing-sound this summer and have come up with some ideas that might be of interest. I have had a look at all the existing libraries, like Beads, Minim, and the current Processing sound library, which uses the Methcla interface. Some facilitate synthesizing sound in real time, while others do rigorous analysis of sound data and let users plot it on a graph (like a frequency spectrum or amplitude spectrum). However, after looking into every aspect, I think a library that helps generate algorithmic music is still not up to the mark (I had a look at the Tactu5 library, which seemed promising, but it's not as competitive as jMusic), along with processing text to speech and vice versa. So finally, this is what I think:
2. Generating algorithmic music using jMusic would also be an added functionality. At present I am thinking of implementing piano and guitar sounds with this library. The API for this library is huge, and many features, like rendering notes on a canvas or even exporting them to other formats, already exist. It's an easy-to-use library with no complex built-in dependencies.
I have been a member of an open source organization (Catrobat, subproject Musicdroid) for quite a while now, which produces music on mobile devices, so I am familiar with most music notations and how they are produced. This post doesn't contain any technical details; I just want a heads-up on these ideas. Also, is there any specific format in which I need to submit my proposal, or do I need to make custom-defined tags on my own and then describe them?