I want to capture real-time audio continuously without interruption, so I can take samples of arbitrary length from that capture, but without waiting for that capture-length of time. My goal is to be able to analyze the signal such that the higher frequency results appear immediately, while allowing some lag for the lower frequency components. The code below shows my attempt to do this using an array to paste together the input samples as they appear, using arrayCopy() to move older samples to the right, eventually falling off. I had expected that each new sample would start where the previous sample had ended, but this appears not to be the case. The sketch shows this as a discontinuity in waveform at the junctions shown in the middle (most easily seen with a sine wave input, varying the frequency). I've checked the timing various ways, and the problem is not with math or drawing speed. An additional waveform discontinuity seen only in the most recent sample at about 10-20% from the left suggests that new samples actually do not start at the end of the old sample.
As you can see I've tried using a "listener" and putting the capture and array processing in a separate thread from draw, as well as checking my assumptions about which end of the capture buffer is most recent. I'm sure there must be some solution to this. GetInputStream looks like a possibility, but I couldn't find any examples showing how to use it. Or am I better off going to Processing 3?
Many Thanks for suggestions!!!
import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
AudioInput in1;
ListenUp listenUp;

int bufSize = 1024, bufMult = 16;
float[] points = new float[bufSize];
float[] bigSrc = new float[bufMult * bufSize];
float[] bigDest = new float[bufMult * bufSize];

class ListenUp implements AudioListener
{
  private float[] left;
  private float[] right;

  ListenUp()
  {
    left = null;
    right = null;
  }

  public synchronized void samples(float[] samp)
  {
    left = samp;
  }

  public synchronized void samples(float[] sampL, float[] sampR)
  {
    left = sampL;
    right = sampR;
  }
}

void setup()
{
  // frameRate(30);
  size(bufSize/2, 400);

  minim = new Minim(this);
  listenUp = new ListenUp();
  in1 = minim.getLineIn(Minim.MONO, bufSize);

  background(0);
  for (int i = 0; i < bufMult*bufSize; i++)
  {
    bigSrc[i] = 0;
    bigDest[i] = 0;
  }
  in1.addListener(listenUp);
}

void stuff()
{
  if (listenUp.left != null)
  {
    points = listenUp.left; // read "listened" input buffer front to back
    // for (int i = 0; i < bufSize; i++) { points[bufSize-1-i] = listenUp.left[i]; } // read "listened" input buffer back to front
    // arrayCopy(bigSrc, bufSize, bigDest, 0, (bufMult-1)*bufSize); // to put most recent input at back of array
    // arrayCopy(points, 0, bigDest, (bufMult-1)*bufSize, bufSize); // ''
    arrayCopy(bigSrc, 0, bigDest, bufSize, (bufMult-1)*bufSize); // to put most recent input at front of array
    arrayCopy(points, 0, bigDest, 0, bufSize); // ''
    arrayCopy(bigDest, bigSrc);
  }
}

void draw()
{
  background(0);
  stroke(255);
  thread("stuff"); // to see if capturing input in a separate thread helps - no difference
  for (int j = 0; j < bufMult/2; j++) // present result as adjacent buffers to show discontinuity in capture
  {
    for (int i = 0; i < bufSize*2; i += 4) // take every 4th point - to fit on screen and to decrease draw time
    {
      float dH = bigDest[i + j*bufSize] * 100;
      line(i/4, height*(j+1)/10 - dH, i/4 + 1, height*(j+1)/10 - dH);
    }
  }
}
Answers
Hello! I've tweaked your code above: removed some fat and added some tricks!
I don't know whether the result is what you expect, but here it comes anyway:
Hi GoToLoop, Thank you, that's some very impressive code, well beyond my skills to understand, but it doesn't do what I'm trying to accomplish. The model code I showed was intended to demonstrate the gaps in captured input that I'm trying to eliminate. I modified your code to put the captured buffers side-by-side and still see the discontinuities at junctions. I want to generate a long array of microphone input that is updated without loss, and that I can access any desired length portion of without losing any of the signal. For example I'd like to access the continuous waveform of anything from 0.02 to 1 second of the most recent past data 50 times per second, and have it accurately updated each of those 50 times. (And, ideally, using code that's comprehensible to a newb!) Many Thanks again - clearly you put some effort into your answer
Oh, so it was meant to show what you didn't want!
For lossless capture, that's going to need another approach. Perhaps a Queue<float[]> container:
http://docs.oracle.com/javase/8/docs/api/java/util/Queue.html
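To make the Queue<float[]> suggestion concrete, here is a minimal sketch of the idea (my own example, not GoToLoop's code; the class name AudioQueue and its methods are made up for illustration). The audio callback enqueues a copy of every incoming buffer, and the drawing/analysis side drains the queue, so no buffer is lost even when the consumer briefly falls behind:

```java
import java.util.Queue;
import java.util.concurrent.ConcurrentLinkedQueue;

// Minimal sketch of the Queue<float[]> idea: the audio thread offers a
// *copy* of each buffer; the render thread drains every pending buffer,
// so nothing is overwritten before it has been read.
public class AudioQueue {
    private final Queue<float[]> queue = new ConcurrentLinkedQueue<>();

    // Called from the audio thread (e.g. inside AudioListener.samples()).
    public void offerCopy(float[] samp) {
        queue.offer(samp.clone()); // clone: the library may reuse its buffer
    }

    // Called from draw(): append every pending buffer to a history array.
    public int drainInto(float[] dest, int pos) {
        float[] buf;
        while ((buf = queue.poll()) != null) {
            System.arraycopy(buf, 0, dest, pos, buf.length);
            pos += buf.length;
        }
        return pos; // new write position
    }

    public static void main(String[] args) {
        AudioQueue q = new AudioQueue();
        float[] history = new float[8];
        q.offerCopy(new float[] {1, 2});
        q.offerCopy(new float[] {3, 4});
        int pos = q.drainInto(history, 0);
        System.out.println(pos);        // 4
        System.out.println(history[2]); // 3.0
    }
}
```

In a real sketch, dest would be a long rolling history array and the consumer would shift it (or wrap around) as new buffers arrive.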
Perhaps increasing ListenUp.FPS to 100 might help you reach that goal. No promises though! Check out the latest version, 2.21, for it.
When dealing with concurrent code, where one portion runs at the same time as the others, we have to use advanced techniques, I'm afraid.
If you have any difficulty understanding some parts, just ask for an explanation here!
New version 2.3, check it out below!
In order to avoid any AudioListener skips, AtomicBoolean with its compareAndSet() was removed.
For correct concurrency, I re-introduced synchronized along with a new method, getWavesClone(), so draw() can safely access a clone() of ListenUp's waves without either side affecting the other.
EDIT:
Version 2.4 added a copyWavesInto() method which replaces getWavesClone().
It's more GC-friendly because it doesn't clone() the waves over and over; rather, it relies on arrayCopy().
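The copyWavesInto() idea can be sketched like this (my own reconstruction for illustration, not the actual version 2.4 source). Both the writer and the reader synchronize on the same object, and the reader passes in a preallocated destination array, so no new array is allocated per frame for the garbage collector to clean up:

```java
// Sketch of the copyWavesInto() idea: the audio thread and the render
// thread share one monitor, and the reader supplies a reusable buffer,
// so no garbage is produced per frame (unlike clone()).
public class Waves {
    private final float[] waves;

    public Waves(int size) { waves = new float[size]; }

    // Audio thread: store the newest samples.
    public synchronized void setWaves(float[] samp) {
        System.arraycopy(samp, 0, waves, 0, waves.length);
    }

    // Render thread: copy into the caller's buffer instead of clone()-ing.
    public synchronized void copyWavesInto(float[] dest) {
        System.arraycopy(waves, 0, dest, 0, waves.length);
    }

    public static void main(String[] args) {
        Waves w = new Waves(4);
        w.setWaves(new float[] {.1f, .2f, .3f, .4f});
        float[] dest = new float[4]; // allocated once, reused every frame
        w.copyWavesInto(dest);
        System.out.println(dest[1]); // 0.2
    }
}
```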
Hi GoToLoop, Thanks again for taking the time and effort. I again modified your new code to show buffers adjacent, and still the ends don't match up, even with ListenUp.FPS set to 100 (and with reversing direction of input buffer). So I have to conclude that there are still gaps. If I tried to make an FFT of the whole, or any part longer than a single buffer I'd see a spurious signal corresponding to the junctions. I just need some way of capturing the input audio stream starting from right now to about a second ago, and doing the same as often as I choose. I'm thinking that you've mainly shown me that what I'm trying to do is beyond my current ability, so I'm marking the question as "answered". I'll take your suggestion and check out http://docs.oracle.com/javase/8/docs/api/java/util/Queue.html but I have the feeling it's gonna take some work before I'm up to using it! Many Thanks Again!
Hi GoToLoop, From your description of the issues I'm pretty sure we've covered everything doable on the Processing coding side of the problem. I can't help wondering if there might be something closer to the hardware that would answer the need. Just so you can see where all this was heading here's the sketch I wanted to improve with the question. I wanted to improve the resolution of the lower pitched sounds without slowing the response to the higher pitched sounds. It works best for visualizing simple music, but can also show frequency components of voice. Thanks Again!!!
Hey Darwexter and world, I stumbled across the exact same problem as you, with more or less the same application in mind. Here's how I resolved it: I ditched the Minim library completely and instead dug into the documentation of the core Java sound library. Using multithreading, it's possible to constantly feed the main loop with the last second of sound with no gaps between readings. I reduced the buffer size a lot so you can get really low response times; I'm getting a theoretical 8.3 ms (which of course gets added to the frame-drawing lag of the main loop). So at 60 fps you have about 25 ms between a sound happening and a response on the screen.
Hope this helps somebody !
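The poster didn't share their code, so here is a sketch of how that design might look (my own guess, with an invented RingBuffer class). A capture thread would read short blocks from a javax.sound.sampled TargetDataLine, convert the 16-bit bytes to floats, and write them into a ring buffer; the draw loop then asks for the most recent N samples whenever it likes, with no gaps, because the writer never skips. Here main() feeds synthetic samples so the ring logic can be shown without a microphone:

```java
// Sketch of the java-sound approach: a capture thread writes every block
// into this ring buffer; any other thread can read the most recent N
// samples at will. At 44.1 kHz, a read block of ~366 frames is ~8.3 ms.
public class RingBuffer {
    private final float[] ring;
    private long written; // total samples ever written

    public RingBuffer(int capacity) { ring = new float[capacity]; }

    // Capture thread: e.g. after converting a TargetDataLine.read()
    // block of 16-bit little-endian bytes to floats in [-1, 1].
    public synchronized void write(float[] block) {
        for (float s : block) {
            ring[(int) (written % ring.length)] = s;
            written++;
        }
    }

    // Any thread: copy the most recent n samples, oldest first
    // (assumes n <= written and n <= capacity).
    public synchronized void latest(float[] dest, int n) {
        for (int i = 0; i < n; i++) {
            long idx = written - n + i;
            dest[i] = ring[(int) (idx % ring.length)];
        }
    }

    public static void main(String[] args) {
        RingBuffer rb = new RingBuffer(44100); // one second at 44.1 kHz
        rb.write(new float[] {1, 2, 3, 4, 5});
        float[] last3 = new float[3];
        rb.latest(last3, 3);
        System.out.println(last3[0]); // 3.0 (oldest of the last three)
    }
}
```

This directly gives the "0.02 to 1 second of the most recent past, 50 times per second" access pattern asked for in the question: latest() can be called with any n up to one second's worth of samples.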
This is a very old forum! But since it's already dug up, I've upgraded "Listen Up" to P3.
I was thinking about how to apply synchronized in Python Mode, and "Listen Up" seemed a nice opportunity to try that out.
And it turns out we can annotate Jython functions with @make_synchronized, whose monitor object becomes their 1st parameter.
So its monitor is dynamic in nature, determined at the moment of invocation.
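For contrast, a minimal Java Mode illustration (my own example) of the point being made: a synchronized instance method's monitor is fixed at the declaration, always the object "this", rather than chosen per invocation as with @make_synchronized:

```java
// In Java, a synchronized instance method always locks on `this`;
// the monitor is fixed at the declaration, not per invocation.
public class Counter {
    private int n;

    public synchronized void inc() { n++; }     // locks on `this`
    public synchronized int get() { return n; } // same monitor

    public static void main(String[] args) throws InterruptedException {
        Counter c = new Counter();
        Runnable r = () -> { for (int i = 0; i < 1000; i++) c.inc(); };
        Thread t1 = new Thread(r), t2 = new Thread(r);
        t1.start(); t2.start();
        t1.join(); t2.join();
        System.out.println(c.get()); // 2000: no lost updates
    }
}
```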
Besides synchronization, we can have Java-style arrays in Jython as well, via the module "jarray".
And we can use arrayCopy() on them too!
So here it is: "Listen Up" for Python Mode, for completeness' sake.
Even though it's impractical due to its sheer slowness!
In my tests here, the slowness is located in the inner loop.
If we just replace it with
for x in ListenUp.COLS_RANGE: pass
, Python Mode almost reaches Java Mode's performance! I guessed it was due to waves[] and/or pixels[] access.
But even if we remove both of them from the inner loop, we won't see any significant improvement.
Then I tested it again with just waves[] & pixels[] inside the inner loop:
This time it was much faster, but still only a fraction of the speed of
for x in ListenUp.COLS_RANGE: pass
. My conclusion is that even simple arithmetic, such as addition, multiplication, etc., slows Jython to a crawl!
Most likely because numbers in Jython are references, not primitives,
and each operation on them instantiates other number objects, unless they're cached.