We closed this forum 18 June 2010. It has served us well since 2005 as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up and get a new user account. We're sorry about that inconvenience, but we think it's better in the long run. The content on this forum will remain online.
Mic Input>3 second Delay>Headphone Output (Read 1611 times)
Mic Input>3 second Delay>Headphone Output
Dec 1st, 2009, 2:26pm
 
Hi guys, I'm not really a programmer, although I can dissect simple code if I stare at it hard enough.
Basically, I want to recreate a sound installation I made a couple of months ago, which used Logic Express and a Tape Delay effect to take ambient sound captured by a mic, delay its output by 3 seconds, and then play it into the headphones. All without recording anything, and in real time.

I want to recreate this in Processing, and from what I gather, the ESS library looks like it's capable of this. Is it possible to delay the output by x amount in real time and keep streaming along? And will this program be very bandwidth-intensive? I'm looking to host it on my portfolio site, which isn't that large.

Any help would be greatly appreciated :)
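For anyone landing here first: stripped of any particular library, the effect described above is just a delay line, i.e. a ring buffer whose length equals the delay in samples. A minimal plain-Java sketch of that idea (the class and its names are mine, not from ESS or Minim):

```java
// Minimal fixed-delay line: write the incoming sample, read back the one
// written delaySamples calls ago. A 3-second delay at 44100 Hz needs a
// 3 * 44100 = 132300-sample buffer. Plain Java, no audio library assumed.
class DelayLine {
    private final float[] buf;
    private int pos = 0;

    DelayLine(int delaySamples) {
        buf = new float[delaySamples];  // zero-filled, so output is silence until warmed up
    }

    // Returns the sample fed in delaySamples calls ago (0 until the buffer fills).
    float process(float in) {
        float out = buf[pos];  // oldest sample
        buf[pos] = in;         // overwrite it with the newest
        pos = (pos + 1) % buf.length;
        return out;
    }
}
```

The per-sample version above is the concept; the Minim code later in the thread does the same thing block-by-block, which is how audio callbacks actually deliver data.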
Re: Mic Input>3 second Delay>Headphone Output
Reply #1 - Dec 1st, 2009, 4:01pm
 
Just an update: I've managed to get real-time input and output with a delay by increasing the buffer size, using some code I found here, but the output is played twice in succession. What do you think the cause of this is?

Code:
import ddf.minim.*;

Minim minim;
AudioInput in;
AudioOutput out;
WaveformRenderer waveform;

void setup()
{
  size(512, 200);

  minim = new Minim(this);
  minim.debugOn();

  // get a line in from Minim, default bit depth is 16
  in = minim.getLineIn(Minim.STEREO, 44000);
  out = minim.getLineOut(Minim.STEREO, 44000);

  waveform = new WaveformRenderer();
  in.addListener(waveform);

  // adds the signal to the output
  out.addSignal(waveform);
}

void draw()
{
  background(0);
  stroke(255);
}

void stop()
{
  // always close Minim audio classes when you are done with them
  in.close();
  out.close();

  minim.stop();

  super.stop();
}

class WaveformRenderer implements AudioListener, AudioSignal
{
  private float[] left;
  private float[] right;

  WaveformRenderer()
  {
  }

  // called by the input with mono samples
  synchronized void samples(float[] samp)
  {
    left = samp;
  }

  // called by the input with stereo samples
  synchronized void samples(float[] sampL, float[] sampR)
  {
    left = sampL;
    right = sampR;
  }

  // called by the output to fetch mono samples
  void generate(float[] samp)
  {
    System.arraycopy(right, 0, samp, 0, samp.length);
  }

  // called by the output to fetch stereo samples
  void generate(float[] sampL, float[] sampR)
  {
    if (left != null && right != null) {
      System.arraycopy(left, 0, sampL, 0, sampL.length);
      System.arraycopy(right, 0, sampR, 0, sampR.length);
    }
  }
}
Re: Mic Input>3 second Delay>Headphone Output
Reply #2 - Jan 14th, 2010, 3:29pm
 
23kid wrote on Dec 1st, 2009, 4:01pm:
Just an update: I've managed to get real-time input and output with a delay by increasing the buffer size, using some code I found here, but the output is played twice in succession. What do you think the cause of this is?



On my system (Mac OS X 10.6, Processing 1.0.9), Minim makes two calls to generate, then two calls to samples, then two calls to generate, and so on. So the first set of samples is overwritten by the second set, and that second set is then fed to the output twice across the two calls to generate. To fix this, you need to implement a FIFO in WaveformRenderer, e.g.

Code:

import ddf.minim.*;

Minim minim;
AudioInput in;
AudioOutput out;
WaveformRenderer waveform;

void setup()
{
 size(512, 200);

 minim = new Minim(this);
 minim.debugOn();

 // get a line in from Minim, default bit depth is 16
 int buffer_size = 4096;
 in = minim.getLineIn(Minim.STEREO, buffer_size);
 out = minim.getLineOut(Minim.STEREO, buffer_size);

 waveform = new WaveformRenderer(buffer_size);
 in.addListener(waveform);


 // adds the signal to the output
 out.addSignal(waveform);

}

void draw()
{
 background(0);
 stroke(255);

}


void stop()
{
 // always close Minim audio classes when you are done with them
 in.close();
 out.close();

 minim.stop();

 super.stop();
}


class WaveformRenderer implements AudioListener, AudioSignal
{

 private float[] left;
 private float[] right;
 private int buffer_max;
 private int inpos, outpos;
 private int count;
 
 // Assumes that samples will always enter and exit in blocks
 // of buffer_size, so we don't have to worry about splitting
 // blocks across the ring-buffer boundary

 WaveformRenderer(int buffer_size)
 {
    int n_buffers = 4;
    buffer_max = n_buffers * buffer_size;
    left = new float[buffer_max];
    right = new float[buffer_max];
    inpos = 0;
    outpos = 0;
    count = 0;
 }

 synchronized void samples(float[] samp)
 {
   // handle mono by writing samples to both left and right
   samples(samp, samp);
 }

 synchronized void samples(float[] sampL, float[] sampR)
 {
   System.arraycopy(sampL, 0, left, inpos, sampL.length);
   System.arraycopy(sampR, 0, right, inpos, sampR.length);
   inpos += sampL.length;
   if (inpos == buffer_max) {
     inpos = 0;
   }
   count += sampL.length;
   // println("samples: count="+count);
 }


 // synchronized so the FIFO state (outpos, count) stays consistent with
 // the samples() writer, which runs on a different thread
 synchronized void generate(float[] samp)
 {
    // println("generate: count="+count);
    if (count > 0) {
      System.arraycopy(left, outpos, samp, 0, samp.length);
      outpos += samp.length;
      if (outpos == buffer_max) {
        outpos = 0;
      }
      count -= samp.length;
    }
 }

 synchronized void generate(float[] sampL, float[] sampR)
 {
    // handle stereo by copying one channel, then passing the other channel
    // to the mono handler, which will update the pointers
    if (count > 0) {
      System.arraycopy(right, outpos, sampR, 0, sampR.length);
      generate(sampL);
    }
 }


}
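A closing note (my own arithmetic sketch, not from the thread): the FIFO above delays output by however many whole blocks are queued ahead of the read position, so to size it for a specific delay you can work out how many blocks of buffer_size cover the target. The 44100 Hz sample rate is an assumption; the 4096-sample buffer_size is carried over from the code above.

```java
// How many whole blocks the FIFO must hold for a target delay.
// Standalone arithmetic sketch; not part of the Minim code above.
class DelaySizing {
    static int blocksForDelay(float delaySeconds, int sampleRate, int bufferSize) {
        int delaySamples = Math.round(delaySeconds * sampleRate);
        // round up to whole blocks, since samples move in blocks of bufferSize
        return (delaySamples + bufferSize - 1) / bufferSize;
    }
}
```

For a 3-second delay at 44100 Hz with 4096-sample blocks this gives 33 blocks, so n_buffers in the ring buffer would need to be at least that large (the example code's n_buffers = 4 holds well under a second).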