Ok, so implementing @koogs' changes is not enough, as you get back to the problem of empty recorded files. However, that gave me an idea (thanks @koogs) which I tested and it sort of works. I mean, it only works for mp3 files but not for wav files. I then tried a second idea, and it might work for you, although it doesn't give much control over the audio while it is being played. That is what I labeled the second solution, using Sampler objects. It works for both mp3 and wav files (tested).
INSTRUCTIONS: In the code, define the file you want to play. When you run the sketch, press r to begin recording and r again to stop recording. Don't forget to press s to save the recording to an audio file, which will be placed in the data folder.
Kf
//REFERENCE: https://forum.processing.org/one/topic/how-can-i-detect-sound-with-my-mic-in-my-computer.html
//REFERENCE: https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim
/**
* This sketch demonstrates how to use an <code>AudioRecorder</code> to record audio to disk.
* Press 'r' to toggle recording on and off and then press 's' to save to disk.
* The recorded file will be placed in the data folder of the sketch.
* <p>
* For more information about Minim and additional features,
* visit http://code.compartmental.net/minim/
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
FilePlayer player;
AudioOutput out;
AudioRecorder recorder;
void setup()
{
size(512, 200, P3D);
textFont(createFont("Arial", 12));
minim = new Minim(this);
player = new FilePlayer(minim.loadFileStream("energeticDJ.mp3"));
// IT DOESN'T WORK FOR WAV files ====> player = new FilePlayer(minim.loadFileStream("fair1939.wav"));
out = minim.getLineOut();
TickRate rateControl = new TickRate(1.f);
player.patch(rateControl).patch(out);
recorder = minim.createRecorder(out, dataPath("myrecording.wav"),true);
player.loop(0);
}
void draw()
{
background(0);
stroke(255);
// draw a line to show where in the song playback is currently located
float posx = map(player.position(), 0, player.length(), 0, width);
stroke(0, 200, 0);
line(posx, 0, posx, height);
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
} else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of the buffer
// (in the case of buffered recording) or to the end of the file (in the case of streamed recording).
if ( recorder.isRecording() )
{
recorder.endRecord();
} else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// in the case of buffered recording, if the buffer is large,
// this will appear to freeze the sketch for sometime
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
recorder.save();
println("Done saving.");
}
}
//REFERENCE: https://forum.processing.org/one/topic/how-can-i-detect-sound-with-my-mic-in-my-computer.html
//REFERENCE: https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim
//REFERENCE: https://forum.processing.org/two/discussion/21953/why-can-i-only-load-four-audio-files-in-minum
/**
* This sketch demonstrates how to use an <code>AudioRecorder</code> to record audio to disk.
* Press 'r' to toggle recording on and off and then press 's' to save to disk.
* The recorded file will be placed in the data folder of the sketch.
* <p>
* For more information about Minim and additional features,
* visit http://code.compartmental.net/minim/
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
AudioRecorder recorder;
AudioOutput out;
Sampler note;
void setup()
{
size(512, 200, P3D);
textFont(createFont("Arial", 12));
minim = new Minim(this);
out = minim.getLineOut();
note = new Sampler( "energeticDJ.mp3", 4, minim );
//note = new Sampler( "fair1939.wav", 4, minim );
note.patch( out );
recorder = minim.createRecorder(out, dataPath("myrecording.wav"), true);
note.trigger();
}
void draw()
{
background(0);
stroke(255);
if ( recorder.isRecording() )
{
text("Currently recording...", 5, 15);
} else
{
text("Not recording.", 5, 15);
}
}
void keyReleased()
{
if ( key == 'r' )
{
// to indicate that you want to start or stop capturing audio data, you must call
// beginRecord() and endRecord() on the AudioRecorder object. You can start and stop
// as many times as you like, the audio data will be appended to the end of the buffer
// (in the case of buffered recording) or to the end of the file (in the case of streamed recording).
if ( recorder.isRecording() )
{
recorder.endRecord();
} else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
// we've filled the file out buffer,
// now write it to the file we specified in createRecorder
// in the case of buffered recording, if the buffer is large,
// this will appear to freeze the sketch for sometime
// in the case of streamed recording,
// it will not freeze as the data is already in the file and all that is being done
// is closing the file.
// the method returns the recorded audio as an AudioRecording,
// see the example AudioRecorder >> RecordAndPlayback for more about that
recorder.save();
println("Done saving.");
}
}
Keyword: kf_keyword minim sound recording from wav mp3 files
So, I've been trying to make the Minim recorder record the audio from the AudioPlayer object while it is playing, without any success.
Something like a combination of these two examples:
http://code.compartmental.net/minim/audioplayer_method_play.html http://code.compartmental.net/minim/minim_method_createrecorder.html
I've found similar threads but with no answer to them:
https://forum.processing.org/one/topic/minim-how-to-record-minim-s-output.html
The audio plays just fine but the recording comes out empty. If I try the "createrecorder" example, it records the sounds created by Minim, but I need it to record the sound coming from the audio player. This is my greatest attempt:
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
AudioRecorder recorder;
AudioPlayer sonido;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
out = minim.getLineOut();
sonido = minim.loadFile("fijo2.wav");
sonido.loop();
recorder = minim.createRecorder(out, "myrecording.wav");
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
stroke(255);
}
void keyReleased()
{
if ( key == 'r' )
{
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
recorder.save();
println("Done saving.");
}
}
I'm trying the same thing without any success.
This would be the combination suggested by kfrajer, but the recording is coming out empty anyway.
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
AudioRecorder recorder;
AudioPlayer sonido;
void setup()
{
size(512, 200, P3D);
minim = new Minim(this);
out = minim.getLineOut();
sonido = minim.loadFile("fijo2.wav");
sonido.loop();
recorder = minim.createRecorder(out, "myrecording.wav");
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
stroke(255);
}
void keyReleased()
{
if ( key == 'r' )
{
if ( recorder.isRecording() )
{
recorder.endRecord();
}
else
{
recorder.beginRecord();
}
}
if ( key == 's' )
{
recorder.save();
println("Done saving.");
}
}
The main bottleneck, other than the fact that there isn't a handy save function, is that using trigger() on the AudioSample bypasses the AudioOutput even though it's patched in, so I'm unable to attach an AudioRecorder to the output and save from there. (It picks up the beeps from playNote() but not from trigger().)
I'll probably resolve this outside of Processing as I have no current leads (saving the initial audio recordings and doing as post-processing everything I had already done in Processing, all because of the lacking save function), but I'm still curious how one would go about this properly.
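One possible workaround, in line with the Sampler-based sketch earlier in this thread (a sketch under the assumption that switching from AudioSample to Sampler is acceptable; "sample.wav" is just a placeholder file name): Sampler is a UGen, so its trigger() output does pass through the AudioOutput it is patched to, and an AudioRecorder attached to that output can capture it.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Sampler note;
AudioRecorder recorder;

void setup()
{
  size(512, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  // Sampler plays through the line out, unlike AudioSample.trigger()
  note = new Sampler("sample.wav", 4, minim);   // placeholder file name
  note.patch(out);
  // record what the line out produces, into the data folder
  recorder = minim.createRecorder(out, dataPath("captured.wav"), true);
  recorder.beginRecord();
  note.trigger();
}

void draw()
{
  background(0);
}

void keyReleased()
{
  if (key == 's')
  {
    recorder.endRecord();
    recorder.save();
    println("Done saving.");
  }
}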
I am trying to use the Minim library to record an mp3 that is manipulated with a motion sensor. However, the recordings I am making are coming up empty. Here is my code:
import processing.serial.*;
import cc.arduino.*;
import org.firmata.*;
import processing.sound.*;
import ddf.minim.*;
import ddf.minim.ugens.*;
Arduino ardy;
SoundFile quick;
int senVal;
Minim minim;
AudioOutput out;
AudioRecorder recorder;
void setup() {
fullScreen(P3D);
noCursor();
background(0);
//Getting Arduino Data
println(Arduino.list());
ardy = new Arduino(this, "COM3", 57600);
ardy.pinMode(0, Arduino.INPUT);
//sound
quick = new SoundFile(this, "QUICKENING.mp3");
quick.loop();
//record sound
minim = new Minim(this);
out = minim.getLineOut();
recorder = minim.createRecorder(out, "quickening_live.wav");
frameRate(25);
}
void draw() {
//Sensor Data
senVal = ardy.analogRead(0);
println(senVal);
quick.rate(map(senVal, 70, 10, 1, 0.25));
}
void keyReleased(){
if ( key == 'r' ) {
if ( recorder.isRecording() ) {
recorder.endRecord();
} else {
recorder.beginRecord();
}
}
if ( key == 's' ){
recorder.save();
println("Done saving.");
}
}
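A likely cause (an assumption, not verified against this exact setup): the sketch plays the file with the processing.sound SoundFile class, which uses its own audio engine and does not pass through Minim's line out, so the recorder attached to minim.getLineOut() never receives any audio. One option is to do the playback in Minim as well, using a FilePlayer patched through a TickRate into the line out, as in the sketches earlier in this thread, and drive the rate from the sensor. A minimal sketch of that idea (Arduino code omitted, file name taken from the post above):
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
FilePlayer quick;
TickRate rate;
AudioRecorder recorder;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  quick = new FilePlayer(minim.loadFileStream("QUICKENING.mp3"));
  rate = new TickRate(1.f);
  // playback now goes through the line out, which is what the recorder taps
  quick.patch(rate).patch(out);
  recorder = minim.createRecorder(out, dataPath("quickening_live.wav"), true);
  quick.loop(0);
}

void draw() {
  background(0);
  // instead of quick.rate(...), drive the TickRate from the sensor value,
  // e.g. rate.value.setLastValue(map(senVal, 70, 10, 1, 0.25));
}

void keyReleased() {
  if (key == 'r') {
    if (recorder.isRecording()) recorder.endRecord();
    else recorder.beginRecord();
  }
  if (key == 's') {
    recorder.save();
    println("Done saving.");
  }
}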
I have worked on a piece of code that generates notes and beats from tweets, and I am struggling with the beat part, where I use an example called bitCrushBeatExample as the drum sound for this project. I found out that after running this for a long time, like an hour, the speaker produces noise and keeps itself open for a signal. Please click here to see the video; you can hear the noise at the 10th second.
I tried several ways to turn it off, and right now bitCrush.unpatch(out) works for me. I call bitCrush.patch(out) again when the program reaches a scene that should generate sound again. However, the noise from the speaker then comes back and plays along with the other note and beat sounds.
Here is a rewritten, shortened version of the code. It runs fine without noise once the notes stop after 16 seconds, but I include it here because the actual code (a very long one) is similar to this.
// bitCrushBeatExample
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*; // for BandPass
Minim minim;
Summer sum;
BitCrush bitCrush;
Line bitRateLine;
AudioOutput out;
AudioPlayer player;
AudioSample player_copy;
Timer time;
int scene=0;
int round=0;
boolean soundOn= true;
void setup()
{
// initialize the drawing window
size( 512, 200, P2D );
// initialize the minim and out objects
minim = new Minim( this );
out = minim.getLineOut( Minim.MONO );
player=minim.loadFile("dialupSound.mp3");
player_copy=minim.loadSample("dialupSound.mp3");
// make our summer and bit crush ugens
sum = new Summer();
bitCrush = new BitCrush(16.f, out.sampleRate());
// we're going to do 4 measures of 120 bpm, so that's 8 seconds.
// we'll just ramp from half the sample rate to 100 Hz
bitRateLine = new Line(8.f, out.sampleRate()*0.25f, 100 );
// connect the line to the bit crush resolution
bitRateLine.patch( bitCrush.bitRate );
// set up our signal chain
sum.patch( bitCrush ).patch( out );
// pause time when adding a bunch of notes at once
out.pauseNotes();
// we set the tempo of the output so that the time and duration arguments
// of playNote now are expressed in beats
out.setTempo( 60.f );
println(round+"------------time is reset-----------");
time = new Timer (24000);
scene=1;
}
// draw is run many times
void draw()
{
textAlign(CENTER);
background( 0 );
fill(255);
switch(scene) {
case 1:
text("This is scene 1", width/2, (height/2)-30);
text("click anywhere to go to scene 2", width/2, height/2);
break;
case 2:
text("This is scene 2", width/2, (height/2)-30);
text ("play.isPlaying(), at the end will go to scene 3 then, play_loadSample", width/2, height/2);
text("no mouse click", width/2, (height/2)+30);
if (!player.isPlaying()) {
if ( !soundOn) {
bitCrush.patch(out);
playDrum();
soundOn=true;
}
player=minim.loadFile("dialupSound_effect.mp3");
player.play();
scene++;
time.reset();
}
break;
case 3:
text("This is scene 3", width/2, (height/2)-30);
text ("click here to pause/play beat", (width/2)-100, height/2);
text ("click here to go to scene 1", (width/2)+100, height/2);
// draw using a white stroke
stroke( 255 );
// draw the waveforms
for ( int i = 0; i < out.bufferSize() - 1; i++ )
{
// find the x position of each buffer value
float x1 = map( i, 0, out.bufferSize(), 0, width );
float x2 = map( i+1, 0, out.bufferSize(), 0, width );
// draw a line from one buffer position to the next for both channels
line( x1, 50 + out.left.get(i)*50, x2, 50 + out.left.get(i+1)*50);
line( x1, 150 + out.right.get(i)*50, x2, 150 + out.right.get(i+1)*50);
}
if ( time.isDone()) {
//out.pauseNotes();
playDrum();
time.reset();
round++;
println(round+"------------time is reset-----------");
}
text(time.getCurrentTime(), 20, 30);
time.update();
break;
}
}
void playDrum() {
float kickDur = 0.8;
float snareDur = 0.2;
out.pauseNotes();
for (int i = 0; i < 4; i++)
{
out.setNoteOffset( i * 4 );
// we set the note offset so that each loop we are queuing up a new measure
out.playNote( 0, kickDur, new KickInstrument( sum ) );
println("note 1");
out.playNote( 1, snareDur, new SnareInstrument( sum ) );
out.playNote( 1.5, kickDur, new KickInstrument( sum ) );
println("note 2");
out.playNote( 2.5, kickDur, new KickInstrument( sum ) );
println("note 3");
out.playNote( 3, snareDur, new SnareInstrument( sum ) );
out.playNote( 3.5, snareDur, new SnareInstrument( sum ) );
out.playNote( 2.5, kickDur, new KickInstrument( sum ) );
println("note 4");
// every other measure give a little kick at the end
if ( i % 2 == 1 )
{
out.playNote( 3.75, 0.1, new KickInstrument( sum ) );
println("little kick");
}
}
// activate the line and unpause the output!
bitRateLine.activate();
out.resumeNotes();
//out.pauseNotes();
}
void mousePressed() {
switch(scene) {
case 1:
player.play();
scene++;
break;
case 3:
if (mouseX <= width/2) {
if (soundOn) { // play and puase
println("pause");
bitCrush.unpatch(out);
soundOn=false;
} else {
println("play");
time.reset();
bitCrush.patch(out);
playDrum();
soundOn=true;
}
} else { // go back to scene 1 (home)
println("go back");
player=minim.loadFile("dialupSound.mp3");
player.pause();
player.rewind();
bitCrush.unpatch(out);
out.close();
soundOn=false;
scene=1;
time.reset();
}
break;
}
}
void stop()
{
// always close Minim audio classes when you are done with them
out.close();
player.close();
minim.stop();
super.stop();
}
class KickInstrument implements Instrument {}
class SnareInstrument implements Instrument {}
class Timer {}
To explain a bit more: this piece of code has 3 scenes. In the third scene the sound should be produced, so when going back to scene 1 the sound should be removed. The whole project contains other classes as well, such as SineInstrument and Gain, but they are not listed here. The SineInstrument has no problem at this time; I use out.pauseNotes() to stop it. In the noteOn() of both the Kick and Sine instrument classes I add a .reset() call on the Oscil, like this:
void noteOn(float dur) {
// patch our oscil to the summer we were given and start the line
sineOsc.reset();
freqLine.activate();
sineOsc.patch(gain2).patch(out);
}
Is there a way to stop the noise when nothing is playing? Is it possible that changing .setTempo() is what causes this problem? (In my code the tempo is changed over time.)
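One idea that might be worth trying (purely an assumption, not verified on this project): instead of unpatching and re-patching bitCrush, keep the chain connected all the time and silence it with a Multiplier, whose setValue() the sketch's instruments already use. A minimal, hedged sketch of just that chain (no instruments are queued here, so it stays silent, but it compiles and shows the idea):
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Summer sum;
BitCrush bitCrush;
Multiplier mute;        // sits at the end of the chain so it can silence everything
boolean soundOn = true;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  out = minim.getLineOut(Minim.MONO);
  sum = new Summer();
  bitCrush = new BitCrush(16.f, out.sampleRate());
  mute = new Multiplier(1);   // 1 = full level, 0 = silent
  // the chain stays patched for the whole run: sum -> bitCrush -> mute -> out
  sum.patch(bitCrush).patch(mute).patch(out);
}

void draw() {
  background(soundOn ? 0 : 64);
}

void mousePressed() {
  if (soundOn) {
    mute.setValue(0);         // silence without unpatching anything
    out.pauseNotes();
  } else {
    mute.setValue(1);
    out.resumeNotes();
  }
  soundOn = !soundOn;
}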
I seem to be having an issue trying to use both at once in a single .pde. From looking at the examples, there are different methods of setting up and playing a sound file for BeatDetect and for MoogFilter. I cannot patch the sample that is set up in the BeatDetect example online (code.compartmental.net/minim/beatdetect_method_iskick.html) so that it then runs into the MoogFilter.
I get an error if I try to put the file "song" from that example into the filter: song.patch( moog1 ).patch( out );
Below I have set up an example of both methods. I labelled the two separate methods of sample setup as "ugen" or "analysis", and "shared" for any code they share. Could someone kindly point out how to make both work together? The issue lies in the final line of patching code.
import ddf.minim.*;
import ddf.minim.analysis.*; // analysis
import ddf.minim.ugens.*; // ugens
Minim minim; // shared
AudioPlayer song; // analysis
BeatDetect beat; // analysis
FilePlayer filePlayer1; // ugens
AudioOutput out; // ugens
String fileName1 = "Gothik.mp3"; // ugens
MoogFilter moog1; // ugens
void setup()
{
size(600, 600);
minim = new Minim(this); // shared
song = minim.loadFile("Gothik.mp3", 2048); // analysis
song.loop(); // analysis
beat = new BeatDetect(); // analysis
filePlayer1 = new FilePlayer( minim.loadFileStream(fileName1) ); // ugens
out = minim.getLineOut(); // ugens
moog1 = new MoogFilter( 1200, 0 ); // ugens
filePlayer1.patch( moog1 ).patch( out ); // ugens **ISSUE IS HERE**
}
I have no idea why there are two methods of setting up a sample to play and I'm getting a bit confused.
Many thanks
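For what it's worth, here is one way the two approaches might be combined (a sketch under the assumption that a single FilePlayer can feed both playback and analysis; not tested against the original files): keep the ugens-style chain FilePlayer -> MoogFilter -> AudioOutput for playback, and run BeatDetect on out.mix in draw() instead of on an AudioPlayer.
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.ugens.*;

Minim minim;
FilePlayer filePlayer1;
MoogFilter moog1;
AudioOutput out;
BeatDetect beat;

void setup()
{
  size(600, 600);
  minim = new Minim(this);
  out = minim.getLineOut();
  filePlayer1 = new FilePlayer(minim.loadFileStream("Gothik.mp3"));
  moog1 = new MoogFilter(1200, 0);
  // ugens-style playback chain: file -> filter -> line out
  filePlayer1.patch(moog1).patch(out);
  filePlayer1.loop(0);
  // frequency-energy mode, so isKick()/isSnare()/isHat() are available
  beat = new BeatDetect(out.bufferSize(), out.sampleRate());
}

void draw()
{
  background(0);
  // analyse what is actually being sent to the speakers
  beat.detect(out.mix);
  if (beat.isKick())
  {
    fill(255);
    ellipse(width/2, height/2, 200, 200);
  }
}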
Right, okay. So here's the example that demonstrates changing the gain.
/**
* This sketch demonstrates how to use the <code>getGain</code> and <code>setGain</code> methods of a
* <code>Controller</code> object. The class used here is an <code>AudioOutput</code> but you can also
* get and set the gain of <code>AudioSample</code>, <code>AudioSnippet</code>, <code>AudioInput</code>,
* and <code>AudioPlayer</code> objects. <code>getGain</code> and <code>setGain</code> will get and set
* the gain of the <code>DataLine</code> that is being used for input or output, but only if that line has
* a gain control. A <code>DataLine</code> is a low-level JavaSound class that is used for sending audio to,
* or receiving audio from, the audio system. You will notice in this sketch that you will hear the gain
* changing (if it's available) but you will not see any difference in the waveform being drawn. The reason for this
* is that what you see in the output's sample buffers is what it sends to the audio system. The system makes the
* gain change after receiving the samples.
*/
import ddf.minim.*;
import ddf.minim.signals.*;
Minim minim;
AudioOutput out;
Oscillator osc;
WaveformRenderer waveform;
void setup()
{
size(512, 200);
minim = new Minim(this);
out = minim.getLineOut();
// see the example AudioOutput >> SawWaveSignal for more about this class
osc = new SawWave(100, 0.2, out.sampleRate());
// see the example Polyphonic >> addSignal for more about this
out.addSignal(osc);
waveform = new WaveformRenderer();
// see the example Recordable >> addListener for more about this
out.addListener(waveform);
textFont(createFont("Arial", 12));
}
void draw()
{
background(0);
// see waveform.pde for more about this
waveform.draw();
if ( out.hasControl(Controller.GAIN) )
{
// map the mouse position to the audible range of the gain
float val = map(mouseX, 0, width, 6, -48);
// if a gain control is not available, this will do nothing
out.setGain(val);
// if a gain control is not available this will report zero
text("The current gain is " + out.getGain() + ".", 5, 15);
}
else
{
text("The output doesn't have a gain control.", 5, 15);
}
}
When you run this, do you get a changing gain based on the mouse position?
I have some questions regarding the Minim library.
I tried setVolume and setGain; neither changed anything. I want to start with no sound until tuioCursorList.size() is bigger than 0, and then fade the sound in with a simple for loop, but all my attempts at changing the volume failed, so I couldn't implement it in the code.
I got part of the code from Minim > Synthesis > realtimeControlExample, and some of it is out of my reach. In the code below, out.playNote() takes 3 parameters; as I understand them, the first is the start time, the second is the duration, and the third is the note (Instrument) that will be played. How can I make the note play not just for a defined time but for as long as the program is running?
I also tried calling out.playNote() in draw() and it didn't work; it only works when called in setup(), which makes it harder for me to change the parameters.
// import everything necessary to make sound.
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.effects.*;
import TUIO.*;
TuioProcessing tuioClient = new TuioProcessing(this);
ArrayList<TuioCursor> tuioCursorList = new ArrayList<TuioCursor>();
Minim minim;
AudioOutput out;
NoiseInstrument myNoise;
float x = 0;
float y = 0;
float xPos;
float yPos;
void setup()
{
size( 500, 500, P2D );
minim = new Minim( this );
out = minim.getLineOut( Minim.MONO, 512 );
myNoise = new NoiseInstrument( 1.0, out );
out.playNote( 0, 100.0, myNoise );
}
void draw()
{
background( 0 );
tuioCursorList = tuioClient.getTuioCursorList();
for (int i=0; i<tuioCursorList.size(); i++) {
TuioCursor tc = tuioCursorList.get(i);
xPos = tc.getScreenX(width);
yPos = tc.getScreenY(height);
if ( tuioCursorList.size() > 0) {
float freq = map( yPos, 0, height, 200, 120 );
float q = map( xPos, 0, width, 15, 25 );
myNoise.setFilterCF( freq );
myNoise.setFilterQ( q );
println(tuioCursorList.size());
}
}
}
// Every instrument must implement the Instrument interface so
// playNote() can call the instrument's methods.
// This noise instrument uses white noise and two bandpass filters
// to make a "whistling wind" sound. By using the methods that change
// the frequency and the bandwidth of the filters, the sound changes.
class NoiseInstrument implements Instrument
{
// create all variables that must be used throughout the class
Noise myNoise;
Multiplier multiply;
AudioOutput out;
BandPass filt1, filt2;
Summer sum;
float freq1, freq2, freq3;
float bandWidth1, bandWidth2;
float filterFactor;
// constructor for this instrument
NoiseInstrument( float amplitude, AudioOutput output )
{
// equate class variables to constructor variables as necessary
out = output;
// give some initial values to the realtime control variables
freq1 = 150.0;
bandWidth1 = 10.0;
filterFactor = 1.7;
// create new instances of any UGen objects
myNoise = new Noise( amplitude, Noise.Tint.WHITE );
multiply = new Multiplier( 0 );
filt1 = new BandPass( freq1, bandWidth1, out.sampleRate() );
filt2 = new BandPass( freq2(), bandWidth2(), out.sampleRate() );
sum = new Summer();
// patch everything (including the out this time)
myNoise.patch( filt1 ).patch( sum );
myNoise.patch( filt2 ).patch( sum );
sum.patch( multiply );
}
// every instrument must have a noteOn( float ) method
void noteOn( float dur )
{
// set the multiply to 1 to turn on the note
multiply.setValue( 1 );
multiply.patch( out );
}
// every instrument must have a noteOff() method
void noteOff()
{
// set the multiply to 0 to turn off the note
multiply.setValue( 0 );
multiply.unpatch( out );
}
// this is a helper method only used internally to find the second filter
float freq2()
{
// calculate the second frequency based on the first
return filterFactor*freq1;
}
// this is a helper method only used internally
// to find the bandwidth of the second filter
float bandWidth2()
{
// calculate the second bandwidth based on the first
return filterFactor*bandWidth1;
}
// this is a method to set the center frequencies
// of the two filters based on the CF of the first
void setFilterCF( float cf )
{
freq1 = cf;
filt1.setFreq( freq1 );
filt2.setFreq( freq2() );
}
// this is a method to set the bandwidths
// of the two filters based on the BW of the first
void setFilterBW( float bw )
{
bandWidth1 = bw;
filt1.setBandWidth( bandWidth1 );
filt2.setBandWidth( bandWidth2() );
}
// this is a method to set the Q (inverse of bandwidth)
// of the two filters based on the
void setFilterQ( float q )
{
setFilterBW( freq1/q );
}
}
// called when a cursor is added to the scene
void addTuioCursor(TuioCursor tcur) {
}
// called when a cursor is moved
void updateTuioCursor (TuioCursor tcur) {
}
// called when a cursor is removed from the scene
void removeTuioCursor(TuioCursor tcur) {
}
// --------------------------------------------------------------
// these callback methods are called whenever a TUIO event occurs
// there are three callbacks for add/set/del events for each object/cursor/blob type
// the final refresh callback marks the end of each TUIO frame
// called when an object is added to the scene
void addTuioObject(TuioObject tobj) {
}
// called when an object is moved
void updateTuioObject (TuioObject tobj) {
}
// called when an object is removed from the scene
void removeTuioObject(TuioObject tobj) {
}
// --------------------------------------------------------------
// called when a blob is added to the scene
void addTuioBlob(TuioBlob tblb) {
}
// called when a blob is moved
void updateTuioBlob (TuioBlob tblb) {
}
// called when a blob is removed from the scene
void removeTuioBlob(TuioBlob tblb) {
}
// --------------------------------------------------------------
// called at the end of each TUIO frame
void refresh(TuioTime frameTime) {
}
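Regarding the questions above about note duration and volume: one way to get continuous sound with direct volume control (a hedged sketch, not the original project) is to skip playNote()'s duration entirely, keep a UGen chain patched to the output for the lifetime of the sketch, and fade it with a Multiplier, the same setValue() mechanism the NoiseInstrument above already uses. The names here (fadeLevel, level) are only for illustration, and the mouse stands in for the TUIO cursor count.
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Noise noise;
Multiplier fadeLevel;   // illustrative name: acts as a volume control
float level = 0;        // current volume, 0..1

void setup()
{
  size(500, 500);
  minim = new Minim(this);
  out = minim.getLineOut(Minim.MONO, 512);
  noise = new Noise(1.0, Noise.Tint.WHITE);
  fadeLevel = new Multiplier(0);   // start silent
  // the chain stays patched for the lifetime of the sketch,
  // so the sound lasts as long as the program runs
  noise.patch(fadeLevel).patch(out);
}

void draw()
{
  background(0);
  // stand-in for "tuioCursorList.size() > 0": here, holding the mouse fades in
  if (mousePressed) level = min(1, level + 0.01f);
  else              level = max(0, level - 0.01f);
  fadeLevel.setValue(level);
}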
I am in the process of adapting a music visualizer prototype from vanilla Processing to p5.js, and assets clearly work differently in a browser app than in a desktop app. Basically, I need to determine the best way to allow a user to use their own audio selections with my visualizer, and I'd rather ask some knowledgeable individuals before I waste a lot of time trying to get the p5.sound/Web Audio API to do something it's not meant to do.
I need an audio source that I can plug into an FFT object. As far as I see it, there are three approaches available:
(1) Acquire a reference to the output of the client's soundcard (audio monitor), akin to an AudioOutput
object in Minim, and analyze that, allowing the user to play music in their preferred way and simply pull up the visualizer in a new window. If possible, this would be preferable;
(2) Load an audio asset from the user's hard disk into the browser environment;
(3) Accept a URL to a web-based audio asset.
I know that, for security reasons, JavaScript cannot simply dig files up from a user's hard disk, and I imagine there might not be a straight-forward way to capture an audio monitor, either. The URL approach, while not my ideal outcome, sounds like the most feasible, but I'm not sure how to load an external resource like that into the p5.sound environment, which as far as I can tell only accepts a local file accessible from the sketch folder.
I'm not asking for a complete solution or anything, but if someone could point me in the right direction, perhaps towards some relevant reference docs, I'd be very grateful.
float P = 0;
float A = 0;
float spe[];
float den[];
float temp[];
int DC = 0;
import ddf.minim.*;
import ddf.minim.ugens.*;
Record[] records;
Minim minim;
AudioOutput out;
Oscil wave;
//======================================
//SETUP
//======================================
void setup() {
println("Setup");
size(1887, 1000);
println("Loading Data");
//======================================
//SOUND DATA
//======================================
minim = new Minim(this);
// use the getLineOut method of the Minim object to get an AudioOutput object
out = minim.getLineOut();
//======================================
//VISUAL DATA
//======================================
println("");
//======================================
String[] rawData = loadStrings("http://services.swpc.noaa.gov/text/ace-swepam.txt"); //data
records = getRecordsFromRawData(rawData);
}
void draw() {
showSomeData();
}
void stop() {
minim.stop();
super.stop();
}
//======================================
void showSomeData() {
float H = 100; //GRID VAR
float W = 1920; //LENGTH VAR
// println("Data Loaded");
//======================================
// DEBUG
//======================================
// println("Debug;");
// float A = r.temperature/10000;
// float B = r.speed/100;
// float C = r.density;
// println(A);
// println(B);
// println(C);
//======================================
// println("Debug End");
println("Table Start");
//======================================
//grid
//======================================
fill(175, 175, 175); //Gray
stroke(175, 175, 175); //Gray outline
rect(32*3, 0, 0, 1920); //side
//======================================
//HOR
//======================================
stroke(170, 170, 170);
// grid lines at x = 101 + 15*k for k = 3, 7, 11, ..., 119, plus a final one at k = 121
for (int k = 3; k <= 119; k += 4) {
  rect(101+(15*k), 0, 0, W);
}
rect(101+(15*121), 0, 0, W);
//======================================
//VER
//======================================
// grid lines every H pixels, from H*1 to H*20
for (int k = 1; k <= 20; k++) {
  rect(32*3, H*k, W, 0);
}
println("");
println("Done");
//======================================
//Lines Default
//======================================
int i=32; //Data offset
println("");
println("Loop Start");
for (Record r : records) {
// println(r.density, r.speed, r.temperature, r.time); //Debug Storage
//======================================
// Bottom Half
if (r.temperature > 0) { //filter 1
if (r.speed > 0) { //filter 2
stroke(255, 0, 255);
line(i*3+5, r.temperature * .001 + 500, i*3+5, r.speed*1-100); //Line Calc
}
}
//======================================
// Top Half
if (r.density > 0) { //filter 1
if (r.speed > 0) { //filter 2
stroke(0, 255, 255);
line(i*3+5, r.density * 100 + 100, i*3+5, r.speed*1-100); //Line Calc
}
}
//======================================
//Time Overlay
//======================================
if (P < 3) {
P = P + 1;
} else {
fill(100, 100, 100);
text(r.time, (i*3)+8, 994);
P = 0;
}
//======================================
//Line Corrections
//======================================
// Overlay 1A
if ((r.density * 100 + 100) > (r.temperature * .001 + 400)) {
stroke(255, 0, 255);
line(i*3+5, r.speed*1-100, i*3+5, r.temperature * .001 + 500); //Line Calc
//println("< H1 CY >");
}
// Overlay 1B
if ((r.temperature * .001 + 500) > (r.density * 100 + 100)) {
stroke(0, 255, 255);
line(i*3+5, r.speed*1-100, i*3+5, r.density * 100 + 100); //Line Calc
//println("< H1 MA >");
}
// Overlay 2
if ((r.temperature * .001 + 500) < (r.speed*1-100)) {
stroke(255, 0, 255);
line(i*3+5, r.speed*1-100, i*3+5, r.temperature * .001 + 500); //Line Calc
//println("< H2 CY >");
}
line(i*3+5, 900, i*3+5, 900); //separator
float cor = r.temperature;
if (cor < 0) {
stroke(255, 100, 100);
fill(255, 100, 100);
rect(i*3, 0, 10, 980);
}
//======================================
//CALCULATIONS
//======================================
if (r.speed > -1) {
fill(100, 100, 100); // white
// stroke(100, 100, 100); // white
stroke(100, 100, 100); // Gray
ellipse(5 + i*3, (r.speed*1-100), 4, 4); //SPEED
// filter
} else {
ellipse(5 + i*3, 400, 4, 4);
println("Invalid data (SPEED)");
} //debug
//======================================
if (r.temperature > -1) {
fill(255, 0, 250); // magenta
// stroke(255, 0, 250); // magenta
stroke(100, 100, 100); // Gray
ellipse(5 + i*3, (r.temperature * .001 + 500), 4, 4); // TEMP
// filter
} else {
ellipse(5 + i*3, 400, 4, 4);
println("Invalid data (TEMP)");
} //debug
if (r.temperature < 59000) {
r.temperature = 59000;
}
//======================================
if (r.density > -1) {
fill(0, 255, 255); // cyan
// stroke(0, 255, 255); // cyan
stroke(100, 100, 100); // Gray
ellipse(5 + i*3, (r.density * 100 + 100), 4, 4); //DENSITY
// filter
} else {
ellipse(5 + i*3, 400, 4, 4);
println("Invalid data (DENSITY)");
} //debug
//======================================
i+=5; // loop increase
//======================================
//OVERLAY
//======================================
// etc
fill(150, 150, 150); // gray
text("+", 4, 995);
text("h h m m", 100, 983); //bottom left
text("T I M E:", 100, 995);
//======================================
stroke(100, 100, 100); // gray
fill(100, 100, 100); // gray
text("W I N D S P E E D D A T A", 4, 14); //title
//======================================
//SIDEBAR
//======================================
text("0 . 0 p / c c", 4, 128);
text("1 . 0 p / c c", 4, 228);
text("2 . 0 p / c c", 4, 328);
text("3 . 0 p / c c", 4, 428);
text("4 . 0 p / c c", 4, 528);
text("5 . 0 p / c c", 4, 628);
text("6 . 0 p / c c", 4, 728);
text("7 . 0 p / c c", 4, 828);
text("2 0 0 k m / s", 4, 114);
text("3 0 0 k m / s", 4, 214);
text("4 0 0 k m / s", 4, 314);
text("5 0 0 k m / s", 4, 414);
text("6 0 0 k m / s", 4, 514);
text("2 0 0 0 0 0 k", 4, 642);
text("-2 0 0 0 0 k", 4, 342);
text("-1 0 0 0 0 k ", 4, 442);
text("1 0 0 0 0 k ", 4, 542);
text("7 0 0 k m / s", 4, 614);
text("3 0 0 0 0 0 k", 4, 714);
text("4 0 0 0 0 0 k", 4, 814);
text("5 0 0 0 0 0 k", 4, 914);
//======================================
//DENSITY
//======================================
// cyan
fill(0, 255, 255);
stroke(100, 100, 100); // gray outline
rect(80, 118, 12, 12);
rect(80, 218, 12, 12);
rect(80, 318, 12, 12);
rect(80, 418, 12, 12);
rect(80, 518, 12, 12);
rect(80, 618, 12, 12);
rect(80, 718, 12, 12);
rect(80, 818, 12, 12);
//key
rect(80, 956, 12, 12);
text("D e n s i t y", 12, 966);
//======================================
//speed
//======================================
// gray
fill(100, 100, 100);
rect(80, 104, 12, 12);
rect(80, 204, 12, 12);
rect(80, 304, 12, 12);
rect(80, 404, 12, 12);
rect(80, 504, 12, 12);
rect(80, 604, 12, 12);
//key
rect(80, 970, 12, 12);
text("S p e e d", 25, 978);
//======================================
//temp
//======================================
// magenta
fill(255, 0, 255);
rect(80, 332, 12, 12);
rect(80, 432, 12, 12);
rect(80, 532, 12, 12);
rect(80, 632, 12, 12);
rect(80, 704, 12, 12);
rect(80, 804, 12, 12);
rect(80, 904, 12, 12);
//key
rect(80, 984, 12, 12);
text("T e m p ", 32, 990);
//======================================
//Error Count
//======================================
if (r.speed < -1) {
DC = DC + 1 ;
}
}//for
//======================================
// noLoop(); // loop end
println("");
println("Loop End");
println("Error Count : ", DC);
println("");
println("=============== debug ===============");
}
//======================================
//storage setup A
//======================================
Record[] getRecordsFromRawData(String[] data) {
ArrayList<Record> r = new ArrayList<Record>();
for (String line : data) { //var
char c = line.charAt(0); // data line
if (c >= '0' && c <= '9') {
r.add(new Record(line));
}
}
return r.toArray(new Record[r.size()]);
}
//======================================
//Storage setup B
//======================================
class Record {
float density, speed, temperature; //var
int time;
//======================================
public Record(String lineOfData) {
String[] items = lineOfData.split("\\s+");
//======================================
density = float(items[7]); //cyan
speed = float(items[8]); //gray
temperature = float(items[9]); //magenta
int hhmm = int(items[3]);
time = hhmm;
//println(hhmm);
}
}
//======================================
//Mouse
//======================================
void mousePressed() {
int X = mouseX;
int Y = mouseY;
//======================================
int s = 15;
int l = 0;
//println(X);
//======================================
//Sound Playback
//======================================
int i=32; //Data offset
for ( Record r : records) {
if (dist(X, 0, i*3+5, 0) < 8) {
//println("");
//println(r);
//======================================
float S = records[l].speed;
println("Speed : ", S);
float D = records[l].density;
println("Density : ", D);
float T = records[l].temperature;
println("Temperature : ", T);
//======================================
int A = ((int)r.speed/100)+100;
//println("Speed Playback Tone : ", A);
//======================================
if (wave!=null&&out!=null) {
wave.unpatch(out);
}
//======================================
if (A < 10000) {
if (wave!=null&&out!=null) {
wave.unpatch(out);
}
wave = new Oscil((int)map(r.speed, 220, 500, 440, 1000), 0.5f, Waves.SINE );
wave.patch(out);
println(r.speed);
// delay(s*100);
} else {
println("Invalid data set");
}
break;
//======================================
}
l = l + 1;
i=i+5;
}//for
}//func
//======================================
//END
//======================================
/**
* This sketch demonstrates how to create synthesized sound with Minim
* using an AudioOutput and an Oscil. An Oscil is a UGen object,
* one of many different types included with Minim. For many more examples
* of UGens included with Minim, have a look in the Synthesis
* folder of the Minim examples.
*/
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
Oscil wave;
float[] notes ;
void setup()
{
size(512, 200, P3D);
notes = new float[8] ; // create the array to hold the note frequencies
notes[0] = 261.63 ;
notes[1] = 293.67 ;
notes[2] = 329.63 ;
notes[3] = 349.23 ;
notes[4] = 391.99 ;
notes[5] = 440 ;
notes[6] = 493.88 ;
notes[7] = 523.25 ;
minim = new Minim(this);
// use the getLineOut method of the Minim object to get an AudioOutput object
out = minim.getLineOut();
}
void draw()
{
background(0);
stroke(255);
// draw the waveforms
for (int i = 0; i < out.bufferSize() - 1; i++)
{
line( i, 50 + out.left.get(i)*50, i+1, 50 + out.left.get(i+1)*50 );
line( i, 150 + out.right.get(i)*50, i+1, 150 + out.right.get(i+1)*50 );
}
}
void keyPressed() {
//wave.reset();
switch(key) {
case ' ':
if (wave!=null&&out!=null)
wave.unpatch(out);
break;
default:
// all other keys
int asc1 = int(key);
println (asc1);
if (asc1>='a'&&asc1<='h') {
if (wave!=null&&out!=null) {
wave.unpatch(out);
} // if
//wave.unpatch(out);
// create a sine wave Oscil, set to 440 Hz, at 0.5 amplitude
if ((asc1-97) < 8)
wave = new Oscil( notes[asc1-97], 0.5f, Waves.SINE );
// patch the Oscil to the output
wave.patch( out );
} // if
break;
}
}
void stop()
{
minim.stop();
super.stop() ;
}
//
This next post talks about how to get the data that is under the mouse pointer: https://forum.processing.org/two/discussion/23097/how-can-i-determine-which-spectrum-is-generated-from-which-file-on-mouseclick#latest
For data into audio, check: http://code.compartmental.net/minim/javadoc/ddf/minim/AudioOutput.html
You can populate the buffer and then play it. You can see more examples from previous posts: https://forum.processing.org/two/search?Search=audiooutput
Kf
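As a sketch of that idea (hedged; the data values below are made up, not real solar-wind readings), one can map each data point to a pitch and queue it on the AudioOutput with playNote():
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;

// made-up stand-in for one row of the solar-wind data
float[] speeds = { 320, 350, 410, 480, 455, 390 };

void setup() {
  size(512, 200);
  minim = new Minim(this);
  out = minim.getLineOut();
  out.pauseNotes();                                  // queue everything first
  for (int i = 0; i < speeds.length; i++) {
    float hz = map(speeds[i], 200, 700, 220, 880);   // data value -> pitch
    out.playNote(i * 0.25f, 0.2f, hz);               // start time, duration, frequency
  }
  out.resumeNotes();
}

void draw() {
  background(0);
}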
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.analysis.*;
Minim minim;
Oscil myWave1;
Oscil myWave2;
Summer sum;
AudioOutput out;
void setup() {
size (600, 900);
minim = new Minim(this);
sum = new Summer();
myWave1 = new Oscil (1000, 0.5f, Waves.SINE);
myWave2 = new Oscil (1000, 0.5f, Waves.SINE);
out = minim.getLineOut();
myWave1.patch (sum);
myWave2.patch (sum);
sum.patch(out);
}
void draw() {
background (0);
stroke(255, 255, 0);
strokeWeight(1);
for (int i = 0; i < out.bufferSize() - 1; i++)
{
line(i, 20 - out.left.get(i)*50, i+1, 50 - out.left.get(i+1)*50);
line(i, 120 - out.right.get(i)*50, i+1, 150 - out.right.get(i+1)*50);
}
stroke(0, 255, 255);
for (int i = 0; i < out.bufferSize() - 1; i++) {
line(i, 220 - out.left.get(i)*50, i+1, 250 - out.left.get(i+1)*50);
line(i, 320 - out.right.get(i)*50, i+1, 350 - out.right.get(i+1)*50);
}
stroke(255, 0, 255);
for (int i = 0; i < out.bufferSize() - 1; i++) {
line(i, 420 - out.left.get(i)*50, i+1, 450 - out.left.get(i+1)*50);
line(i, 520 - out.right.get(i)*50, i+1, 550 - out.right.get(i+1)*50);
}
}
Hello! I am working on a sketch that I found here and have modified a bit. Basically, at the moment, there are two modes of displaying the image: one only shows the color of the single pixel, and the other shows the image when a key is pressed. What I'd like to do is show both: have the image and the color of the pixel being picked up in the same sketch. Is it possible? How can I modify the code to obtain that?
import ddf.minim.*;
import ddf.minim.ugens.*;
import ddf.minim.signals.*;
PImage img;
PImage [] imgArray = new PImage [1];
int direction = 1;
float signal = 0;
int savedTime;
int totalTime = 2000; // load new image after 2 seconds
Minim minim;
AudioOutput out;
SineWave sine;
boolean show_color = true;
color c;
void setup(){
// fullScreen();
size(800,800);
// frameRate(random (5,10));
savedTime = millis();
img = loadImage("imageName0"+".jpg");
imgArray[0] = img;
img.resize(0,800);
minim = new Minim(this);
// get a line out from Minim, default bufferSize = 1024, default sample rate = 44100, bit depth = 16
out = minim.getLineOut(Minim.STEREO);
// create a sine wave Oscillator, set to 440 Hz, at 0.5 amplitude, sample rate from line out
sine = new SineWave(440, 0.5, out.sampleRate());
// set the portamento speed on the oscillator to 100 milliseconds
sine.portamento(100);
// add the oscillator to the line out
out.addSignal(sine);
noFill();
noStroke();
image(imgArray[int (random(imgArray.length))],0,0);
}
void draw(){
int passedTime = millis () - savedTime;
play();
if (passedTime > totalTime){
thread ("setup");
savedTime =millis ();
}
}
void play () {
// only advance the signal while it is inside the image's pixel range
if (signal >= 0 && signal <= img.width*img.height) {
  //signal += 0.33*direction;
  signal += random(0, 10)*direction;
}
//int sX = int (random (0,800));
// int sY = int(random (0,800));
int sX = int(signal) % width;
int sY = int(signal) / height;
c = img.get(sX, sY);
set(0, 0, img);
stroke(255);
strokeWeight(2);
rect(sX -8, sY -8 , 16,16);
point(sX, sY);
noStroke();
frameRate (random (5,15));
if(show_color) draw_color();
float freq = map(red(img.get(sX, sY)), 0, 255, 250, 900);
float amplitude = map(green(img.get(sX, sY)), 0, 255, 0, 1);
println (amplitude);
sine.setFreq(freq);
out.playNote( 0.0, 0.5, new SineInstrument( freq, amplitude ) );
out.resumeNotes();
float pan = map(blue(img.get(sX, sY)), 0, 255, -1, 1);
sine.setPan(pan);
println( " R = " + red(img.get(sX, sY)) + ", G = " + green(img.get(sX, sY)) + ", B = " + blue(img.get(sX, sY)));
//println (signal);
sine.setAmp(0);
}
void keyPressed()
{
show_color = !show_color;
}
void draw_color()
{
background(c);
}
class SineInstrument implements Instrument
{
Oscil wave;
Line ampEnv;
SineInstrument( float frequency, float amplitude )
{
// make a sine wave oscillator
// the amplitude is zero because
// we are going to patch a Line to it anyway
wave = new Oscil( frequency, amplitude, Waves.SINE );
ampEnv = new Line();
ampEnv.patch( wave.amplitude );
}
// this is called by the sequencer when this instrument
// should start making sound. the duration is expressed in seconds.
void noteOn( float duration )
{
// start the amplitude envelope
ampEnv.activate( duration, 0.5f, 0 );
// attach the oscil to the output so it makes sound
wave.patch( out );
}
// this is called by the sequencer when the instrument should
// stop making sound
void noteOff()
{
wave.unpatch( out );
}
}
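One way to show both at once (a small, untested variation on the sketch above): instead of letting draw_color() paint the whole background with the picked color, which hides the image, draw a swatch of the color on top of the image. Since play() already calls set(0, 0, img) before draw_color(), a swatch drawn there stays visible together with the image. For example, draw_color() could become something like:
// draw a swatch of the picked-up color in a corner, on top of the image
void draw_color()
{
  noStroke();
  fill(c);
  rect(10, 10, 80, 80);   // swatch position and size are arbitrary choices
}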
Thanks for the hint. I have now got the Sampler method to work. Just to round things off for anyone finding this, here is my demo code that uses keys 0 to 8 to play samples 0.wav to 8.wav stored in the data folder.
/**
Sampler method */
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
Sampler [] note = new Sampler[9];
void setup()
{
size(512, 200);
minim = new Minim(this);
out = minim.getLineOut();
for(int i=0;i<9;i++){
note[i] = new Sampler( str(i)+".wav", 4, minim );
note[i].patch( out );
}
}
void draw() { }
void keyPressed()
{
if(key >= '0' && key <'9'){
int s = 0xf & int(key);
note[s].trigger();
}
}
Thanks guys.
This is what I ended up with
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;
Minim minim;
TickRate rateControl;
FilePlayer player;
AudioOutput out;
FFT fft;
float spectrumAvg;
void setup() {
fullScreen();
minim = new Minim(this);
selectInput("Select an audio file:", "fileSelected");
}
void fileSelected(File selection) {
String audioFileName = selection.getAbsolutePath(); //loading the selected file
player = new FilePlayer(minim.loadFileStream(audioFileName)); //initialising the filePlayer object
out = minim.getLineOut(); //initialising the audioOut object, no out, no sound.
fft = new FFT(out.bufferSize(), player.sampleRate()); //initialising the FFT object, setting the out buffersize to the selected file samplerate
rateControl = new TickRate(1.f); //initialising the tickRate object
player.patch(rateControl).patch(out); //building the UGen chain, patching the player through the tickRate and into the out object
rateControl.setInterpolation(true); //stops the audio from being "scratchy" lower speeds
player.loop(0);
}
void draw() {
background(0);
stroke(255);
if (player != null) {
if (fft != null) {
fft.forward(out.mix); //combining the left and right channels
for (int i = 0; i < fft.specSize(); i++) {
float lineStrength = height - fft.getBand(i)*height/2;
spectrumAvg += lineStrength;
line(i, height, i, lineStrength);
}
spectrumAvg = spectrumAvg / fft.specSize();
println(spectrumAvg);
strokeWeight(5);
stroke(255, 125, 0);
line(0, spectrumAvg, width, spectrumAvg);
line(0, height/5+height/5*3, width, height/5+height/5*3);
line(0, height/5+height/5*2, width, height/5+height/5*2);
line(0, height/5+height/5, width, height/5+height/5);
line(0, height/5, width, height/5);
if(spectrumAvg < height && spectrumAvg > height/5+height/5*3) rateControl.value.setLastValue(3.f);
if(spectrumAvg < height/5+height/5*3 && spectrumAvg > height/5+height/5*2) rateControl.value.setLastValue(6.f);
if(spectrumAvg < height/5+height/5*2 && spectrumAvg > height/5+height/5) rateControl.value.setLastValue(12.f);
if(spectrumAvg < height/5+height/5 && spectrumAvg > 0) rateControl.value.setLastValue(24.f);
}
}
}
AudioPlayer is self-contained and as such doesn't seem usable in a UGen chain; this is why I had to use FilePlayer, which has all the same methods and can be used in a UGen chain.
The FFT is used to get the spectrum data (frequency bands) from the input file. I used the bufferSize of the AudioOutput for this purpose because the FilePlayer object doesn't have a bufferSize method.
I took the average of the FFT and am using it in conjunction with TickRate to control playback speed.
AudioOutput, FilePlayer, TickRate & FFT are UGens. AudioPlayer is not.
Hope that's clearer. Some of this might not be correct, as I was teaching myself as I did this. Those examples are wonderful.
I looked at the TickRate example and realised that it accesses the bufferSize through the output, so I gave it a try in my code below. It plays the sound, but nothing appears on screen and fft.forward(player.mix); doesn't work. Is this on the right track, or am I grasping at straws?
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;
Minim minim;
TickRate rateControl;
FilePlayer player;
AudioOutput out;
FFT fft;
float spectrumAvg;
void setup() {
//fullScreen();
size(512, 200);
minim = new Minim(this);
selectInput("Select an audio file:", "fileSelected");
}
void fileSelected(File selection) {
String audioFileName = selection.getAbsolutePath();
player = new FilePlayer(minim.loadFileStream(audioFileName));
out = minim.getLineOut();
fft = new FFT(out.bufferSize(), player.sampleRate());
rateControl = new TickRate(1.f);
player.patch(rateControl).patch(out);
rateControl.setInterpolation(true);
player.loop(0);
}
void draw(){
background(0);
stroke(255);
if (player != null) {
if (fft != null) {
fft.forward(player.mix); //error
for (int i = 0; i < fft.specSize(); i++) {
float lineStrength = height - fft.getBand(i)*height/2;
spectrumAvg += lineStrength;
line(i, height, i, lineStrength);
}
spectrumAvg = spectrumAvg / fft.specSize();
println(spectrumAvg);
}
}
}
That thread you link to was from when I was trying to work out how to speed up playback; your suggestion there led me to Beads, but Beads is unable to play my 90min+ mp3s.
I'm reading the microphone line in and storing the audio buffer. Is there a way to use AudioOutput to play what I'm recording without having to save it to the filesystem?
AudioInput in=minim.getLineIn();
soundIn.add(in.mix);
AudioOutput out=minim.getLineOut();
//something like out.mix=soundIn.get(i);
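If the goal is just to hear the live input, one option (assuming a reasonably recent Minim version) is AudioInput's built-in monitoring, which routes the input to the output without touching the filesystem. A minimal sketch:
import ddf.minim.*;

Minim minim;
AudioInput in;

void setup() {
  size(512, 200);
  minim = new Minim(this);
  in = minim.getLineIn();
  // route the microphone straight to the speakers (beware of feedback)
  in.enableMonitoring();
}

void draw() {
  background(0);
  stroke(255);
  // draw the incoming waveform
  for (int i = 0; i < in.bufferSize() - 1; i++) {
    line(i, 100 + in.mix.get(i)*50, i+1, 100 + in.mix.get(i+1)*50);
  }
}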
@framos --
Here is a simple walkthrough of how to combine collision detection with sound playing:
Add an output and a sample to your sketch header:
import ddf.minim.*;
import ddf.minim.ugens.*;
Minim minim;
AudioOutput out;
Sampler sample;
Load the sample and patch it to your output in setup():
minim = new Minim(this);
out = minim.getLineOut();
sample = new Sampler( "boing.wav", 12, minim );
sample.patch(out);
Trigger the sample in the collision() method, inside the if statement that fires when a collision happens:
sample.trigger();
That's it! Now you have 12 voices worth of "boing" sounds when balls collide. For more, read the minim documentation on Sampler: http://code.compartmental.net/minim/sampler_class_sampler.html
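Putting those pieces together, a self-contained version might look like the sketch below (here a mouse click stands in for the collision() method, and boing.wav is assumed to be in the data folder):
import ddf.minim.*;
import ddf.minim.ugens.*;

Minim minim;
AudioOutput out;
Sampler sample;

void setup() {
  size(400, 400);
  minim = new Minim(this);
  out = minim.getLineOut();
  // up to 12 overlapping voices of the same sample
  sample = new Sampler("boing.wav", 12, minim);
  sample.patch(out);
}

void draw() {
  background(0);
  fill(255);
  text("click to 'collide'", 20, height/2);
}

void mousePressed() {
  // in the real sketch this call goes inside the collision() if statement
  sample.trigger();
}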