Minim 2.1.0 Questions
in Core Library Questions • 2 years ago
I have a couple of questions about Minim 2.1.0.
I would like to create a breathing sound that can be changed in duration with a value mapped from an Arduino Ping sensor.
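To make the duration mapping concrete, here is a minimal plain-Java sketch of scaling a Ping reading into a breath duration. The ranges (5-200 cm in, 0.5-4 s out) and the helper name are hypothetical; in a Processing sketch you would use the built-in `map()` instead.

```java
// Hypothetical helper: re-implements Processing's map() to scale a
// Ping sensor reading (assumed range 5..200 cm) into a breath
// duration in seconds (assumed range 0.5..4.0 s).
public class BreathMap {
    static float map(float v, float inLo, float inHi, float outLo, float outHi) {
        return outLo + (outHi - outLo) * ((v - inLo) / (inHi - inLo));
    }

    public static void main(String[] args) {
        // a reading at the midpoint of the input range maps to the
        // midpoint of the output range
        float dur = map(102.5f, 5f, 200f, 0.5f, 4.0f);
        System.out.println(dur); // 2.25
    }
}
```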
I am already using the Arduino data in the sketch to time some visuals, with a separate thread communicating back to the Arduino to change an LED matrix.
What I need to know specifically is how I can generate the sound in a fairly live manner.
Right now I am looking at the "realtimeControlExample", which I have successfully integrated into my program so that its sound is triggered by my visuals at the right time.
I am also looking at the "waveformShaperInstrument" example and wondering how to adapt it to my needs.
I have another sketch that uses FFT to analyze a prerecorded breath sound from the net. It gives me the frequencies involved in creating the complex sound. This is built from the basic FFT example "bandCenters".
What I would like to do (and you can let me know if this is impossible) is to take the data extracted from the FFT analysis and apply it to a waveform shaper that will then use those frequencies on white noise to create a similar sound. Note that it does not need to be perfect; I just want it to "sound" like breathing, and the lack of precision in the sound actually suits the project. Once the waveform shaper is making a breathing sound from white noise, I then want to apply an ADSR UGen to the sound to control its length, and a BitCrush UGen to crush the sound as the participant gets closer to the Ping sensor.
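One step in that chain can be sketched without Minim at all: picking the strongest bands out of the FFT analysis to feed the noise-shaping stage. This is a minimal plain-Java example; the array names and the choice of "N loudest bands" are my own assumptions, not anything from the "bandCenters" example.

```java
import java.util.Arrays;
import java.util.Comparator;

// Sketch: given band-center frequencies and their magnitudes from an
// FFT pass, pick the n strongest bands to drive the shaping stage.
public class DominantBands {
    static float[] strongest(float[] freqs, float[] mags, int n) {
        Integer[] idx = new Integer[mags.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        // sort band indices by descending magnitude
        Arrays.sort(idx, Comparator.comparingDouble((Integer i) -> -mags[i]));
        float[] out = new float[n];
        for (int i = 0; i < n; i++) out[i] = freqs[idx[i]];
        return out;
    }

    public static void main(String[] args) {
        float[] freqs = {100f, 300f, 800f, 2000f};
        float[] mags  = {0.2f, 0.9f, 0.5f, 0.1f};
        // the two loudest bands are 300 Hz and 800 Hz
        System.out.println(Arrays.toString(strongest(freqs, mags, 2)));
    }
}
```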
Ultimately, the main problem I will run into is syncing the sound and the visuals, since right now the visuals can change instantaneously with a change in the ping distance because of the way I programmed the visuals in their own thread. Ideally I would like the same responsiveness from the audio, but I believe that once the instrument is kicked off you cannot modify the ADSR values anymore. I may have to modify the code for the visuals so that it only alters the range values every so often, so the sound can stay in sync, but I obviously have a few hurdles before I get to that point.
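The "only alter the values every so often" idea amounts to rate-limiting the sensor-driven updates. A minimal sketch of that, independent of Minim (class and method names are hypothetical):

```java
// Only accept a new sensor-driven value every intervalMs milliseconds,
// so audio parameters change at breath boundaries rather than on
// every ping reading.
public class Throttle {
    long intervalMs;
    long lastUpdate = Long.MIN_VALUE; // sentinel: no update accepted yet
    float current;

    Throttle(long intervalMs, float initial) {
        this.intervalMs = intervalMs;
        this.current = initial;
    }

    // returns true when the new value was accepted
    boolean offer(float value, long nowMs) {
        if (lastUpdate == Long.MIN_VALUE || nowMs - lastUpdate >= intervalMs) {
            current = value;
            lastUpdate = nowMs;
            return true;
        }
        return false;
    }
}
```

In a sketch you would call `offer(distance, millis())` each frame and only re-trigger the instrument when it returns true.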
Please let me know any tips that you may have; I would appreciate it.
And if I am approaching this in the wrong way please let me know this too. :)
One other note: I have thought about using another audio program, like Pd, to produce the audio, but I don't think I can send information back and forth between the Arduino (over the USB serial emulator) and Pd at the same time. I guess I could prefix each device's messages with a certain character to look for before listening, but that may get too complex. I would like it all to run from one program.
Thanks in advance,
Greg Parsons