Stitch together audio samples from microphone to get continuous signal?

edited December 2017 in Library Questions

I want to capture real-time audio continuously, without interruption, so that I can take samples of arbitrary length from that capture without first waiting for that length of time to be captured. My goal is to analyze the signal such that the higher-frequency results appear immediately, while allowing some lag for the lower-frequency components. The code below shows my attempt to do this, using an array to stitch together the input samples as they arrive: arrayCopy() moves older samples to the right until they eventually fall off the end. I had expected that each new sample buffer would start where the previous one ended, but this appears not to be the case. The sketch shows this as a discontinuity in the waveform at the junctions displayed in the middle (most easily seen with a sine-wave input while varying the frequency). I've checked the timing various ways, and the problem is not with math or drawing speed. An additional waveform discontinuity, seen only in the most recent sample at about 10-20% from the left, suggests that new buffers really do not start where the old one ended.
As you can see, I've tried using a "listener" and putting the capture and array processing in a separate thread from draw(), as well as checking my assumptions about which end of the capture buffer is most recent. I'm sure there must be some solution to this. getInputStream() looks like a possibility, but I couldn't find any examples showing how to use it. Or am I better off going to Processing 3? Many thanks for suggestions!!!

import ddf.minim.analysis.*;
import ddf.minim.*;

Minim       minim;
AudioInput in1;
ListenUp listenUp;

int bufSize = 1024, bufMult = 16;

float points[] = new float[bufSize];
float bigSrc[] = new float[bufMult*bufSize];
float bigDest[] = new float[bufMult*bufSize];

class ListenUp implements AudioListener
{
  private float[] left;
  private float[] right;

  ListenUp()
  {
    left = null; 
    right = null;
  }

  synchronized void samples(float[] samp)
  {
    left = samp;
  }

  synchronized void samples(float[] sampL, float[] sampR)
  {
    left = sampL;
    right = sampR;
  }  
}

void setup()
{
 // frameRate(30);
  size(bufSize/2, 400);
  minim = new Minim(this);
  listenUp = new ListenUp();

  in1 = minim.getLineIn(Minim.MONO, bufSize); 

  background(0);

  for (int i=0; i<bufMult*bufSize; i++)
  {
     bigSrc[i]=0;
     bigDest[i]=0;
  }
  in1.addListener(listenUp);
}


void stuff()
{
  if (listenUp.left != null)
  {
    points = listenUp.left; // read "listened" input buffer front to back
    // for (int i = 0; i < bufSize; i++) { points[bufSize-1-i] = listenUp.left[i]; } // read "listened" input buffer back to front

    // arrayCopy(bigSrc, bufSize, bigDest, 0, (bufMult-1)*bufSize); // to put most recent input at back of array
    // arrayCopy(points, 0, bigDest, (bufMult-1)*bufSize, bufSize); // ''

    arrayCopy(bigSrc, 0, bigDest, bufSize, (bufMult-1)*bufSize); // to put most recent input at front of array
    arrayCopy(points, 0, bigDest, 0, bufSize);                   // ''

    arrayCopy(bigDest, bigSrc);
  }
}

void draw()
{
  background(0);
  stroke(255);

  thread("stuff"); //to see if capturing input as separate thread helps - no difference

  for (int j=0; j<bufMult/2; j++) //presenting result as adjacent buffers to show discontinuity in capture
  {
    for(int i = 0; i<bufSize*2; i+=4) //take every 4th point - to fit on screen and to decrease draw time
    { 
      float dH = bigDest[i+j*bufSize]*100;
      line( i/4, height*(j+1)/10-dH, i/4+1, height*(j+1)/10 - dH );
    }
  }
}

Answers

  • edited May 2018

    Hello! I've tweaked your code above. Removed some fat & added some tricks! :-bd
    Dunno if the result is what you expect though. But here it comes anywayz: >-)

    /**
     * Listen Up (v2.21)
     * by  Darwexter (2015/May/19)
     * mod GoToLoop  (2015/May/20)
     *
     * forum.processing.org/two/discussion/10900/
     * stitch-together-audio-samples-from-microphone-to-get-continuous-signal#Item_1
     */
    
    import ddf.minim.Minim;
    import ddf.minim.AudioListener;
    import java.util.concurrent.atomic.AtomicBoolean;
    
    final ListenUp listener = new ListenUp();
    
    void setup() {
      size(ListenUp.COLS, ListenUp.ROWS * ListenUp.GAP, JAVA2D);
      frameRate(ListenUp.FPS);
      new Minim(this).getLineIn(Minim.MONO, ListenUp.COLS).addListener(listener);
    }
    
    void draw() {
      background(0);
      loadPixels();
    
      for (int offset = height / ListenUp.ROWS, y = 0; y < ListenUp.ROWS; ++y) {
        int buf = y * ListenUp.COLS;
        int gap = y * offset + (offset >> 1);
    
        for (int x = 0; x < ListenUp.COLS; ++x) {
          int h = gap + round(ListenUp.AMP * listener.waves[buf + x]);
          int idx = constrain(h*width + x, 0, pixels.length - 1);
          pixels[idx] = ListenUp.INK;
        }
      }
    
      listener.isBusy.set(false);
      updatePixels();
    
      frame.setTitle(str(round(frameRate)));
    }
    
    class ListenUp implements AudioListener {
      static final int COLS = 1024, ROWS = 16, DIM = COLS*ROWS;
      static final int AMP = 100, GAP = 60, FPS = 100;
      static final color INK = #F0A000;
    
      final float[] waves = new float[DIM];
      final AtomicBoolean isBusy = new AtomicBoolean();
    
      @Override void samples(float[] sample) {
        if (isBusy.compareAndSet(false, true)) {
          arrayCopy(waves, 0, waves, COLS, DIM - COLS);
          arrayCopy(sample, waves);
        }
      }
    
      @Override void samples(float[] sampL, float[] sampR) {
        samples(sampL);
      }
    }
    
  • edited May 2015

    Hi GoToLoop, thank you! That's some very impressive code, well beyond my skills to understand, but it doesn't do what I'm trying to accomplish. The model code I showed was intended to demonstrate the gaps in the captured input that I'm trying to eliminate. I modified your code to put the captured buffers side by side, and I still see the discontinuities at the junctions. I want to generate a long array of microphone input that is updated without loss, so that I can access any desired-length portion of it without losing any of the signal. For example, I'd like to access the continuous waveform of anything from 0.02 to 1 second of the most recent past data 50 times per second, and have it accurately updated each of those 50 times. (And, ideally, using code that's comprehensible to a newb!) Many thanks again; clearly you put some effort into your answer.

  • edited May 2015

    The model code I showed was intended to demonstrate the gaps in captured input that I'm trying to eliminate.

    Oh, so it was meant to show what you didn't want! @-)

    I want to generate a long array of microphone input that is updated without loss, ...

    For lossless, that's gonna need another approach. Perhaps a Queue<float[]> container: :-??
    http://docs.oracle.com/javase/8/docs/api/java/util/Queue.html
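
    Something like this rough sketch, perhaps (untested! The QueueListener class & the 1-second history[] array are just my own illustration of the idea):

    import ddf.minim.Minim;
    import ddf.minim.AudioListener;
    import java.util.Queue;
    import java.util.concurrent.ConcurrentLinkedQueue;
    
    final int BUF = 1024, RATE = 44100;
    final float[] history = new float[RATE]; // rolling window of the last ~1 second
    final QueueListener listener = new QueueListener();
    
    void setup() {
      size(512, 200);
      new Minim(this).getLineIn(Minim.MONO, BUF).addListener(listener);
    }
    
    void draw() {
      // Drain every queued buffer, so no samples() callback is ever lost:
      for (float[] buf; (buf = listener.buffers.poll()) != null; ) {
        arrayCopy(history, buf.length, history, 0, history.length - buf.length);
        arrayCopy(buf, 0, history, history.length - buf.length, buf.length);
      }
    
      background(0);
      stroke(#F0A000);
    
      for (int x = 0; x < width; ++x)
        point(x, height/2 + round(80 * history[x * history.length / width]));
    }
    
    class QueueListener implements AudioListener {
      final Queue<float[]> buffers = new ConcurrentLinkedQueue<float[]>();
    
      void samples(float[] samp) {
        buffers.offer(samp.clone()); // defensive copy, in case Minim reuses the array
      }
    
      void samples(float[] sampL, float[] sampR) {
        samples(sampL);
      }
    }

    Each callback enqueues a clone() of its buffer & draw() drains the whole queue. So buffers are never skipped, only appended in order. \m/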

    ... access the continuous waveform of anything from 0.02 to 1 second of the most recent past data 50 times per second, and have it accurately updated each of those 50 times.

    Perhaps increasing ListenUp.FPS to 100 might help ya reach that.
    No promises though! Check out the latest version, 2.21, for it! O:-)

    And, ideally, using code that's comprehensible to a newb!

    When dealing w/ concurrency code, where 1 portion runs at the same time as the others,
    we gotta use advanced techniques I'm afraid. 8-X

    If you've got any difficulties understanding some parts, just ask for an explanation here! :-c

  • edited May 2018 Answer ✓

    New version 2.3, check it out below! :D
    In order to avoid any AudioListener skips, AtomicBoolean w/ its compareAndSet() was removed!
    For correct concurrency, re-introduced synchronized and a new method: getWavesClone().
    So draw() can safely access a clone() of ListenUp's waves w/o the 2 threads affecting each other. \m/

    EDIT:
    Version 2.4 added copyWavesInto() method which replaces getWavesClone().
    It's more GC friendly b/c it doesn't clone() waves over & over. Rather it relies on arrayCopy().

    /**
     * Listen Up (v2.4)
     * by  Darwexter (2015/May/19)
     * mod GoToLoop  (2015/May/20)
     *
     * forum.processing.org/two/discussion/10900/
     * stitch-together-audio-samples-from-microphone-to-get-continuous-signal#Item_4
     */
    
    import ddf.minim.Minim;
    import ddf.minim.AudioListener;
    
    final ListenUp listener = new ListenUp();
    final float[] waves = new float[ListenUp.DIM];
    
    void setup() {
      size(ListenUp.COLS, ListenUp.ROWS * ListenUp.GAP, JAVA2D);
      frameRate(ListenUp.FPS);
      new Minim(this).getLineIn(Minim.MONO, ListenUp.COLS).addListener(listener);
    }
    
    void draw() {
      background(0);
      loadPixels();
    
      //final float[] waves = listener.getWavesClone();
      listener.copyWavesInto(waves);
    
      for (int offset = height / ListenUp.ROWS, y = 0; y < ListenUp.ROWS; ++y) {
        int buf = y * ListenUp.COLS;
        int gap = y * offset + (offset >> 1);
    
        for (int x = 0; x < ListenUp.COLS; ++x) {
          int h = gap + round(ListenUp.AMP * waves[buf + x]);
          int idx = constrain(h*width + x, 0, pixels.length - 1);
          pixels[idx] = ListenUp.INK;
        }
      }
    
      updatePixels();
      frame.setTitle(str(round(frameRate)));
    }
    
    class ListenUp implements AudioListener {
      static final int COLS = 1024, ROWS = 16, DIM = COLS*ROWS;
      static final int AMP = 100, GAP = 60, FPS = 50;
      static final color INK = #F0A000;
    
      final float[] waves = new float[DIM];
    
      synchronized float[] getWavesClone() {
        return waves.clone();
      }
    
      synchronized float[] copyWavesInto(float[] w) {
        arrayCopy(waves, w);
        return w;
      }
    
      @Override synchronized void samples(float[] sample) {
        arrayCopy(waves, 0, waves, COLS, DIM - COLS);
        arrayCopy(sample, waves);
      }
    
      @Override void samples(float[] sampL, float[] sampR) {
        samples(sampL);
      }
    }
    
  • Hi GoToLoop, thanks again for taking the time and effort. I again modified your new code to show the buffers adjacent, and still the ends don't match up, even with ListenUp.FPS set to 100 (and with the direction of the input buffer reversed). So I have to conclude that there are still gaps. If I took an FFT of the whole, or of any part longer than a single buffer, I'd see a spurious signal corresponding to the junctions. I just need some way of capturing the input audio stream from right now back to about a second ago, and doing the same as often as I choose. I'm thinking that you've mainly shown me that what I'm trying to do is beyond my current ability, so I'm marking the question as "answered". I'll take your suggestion and check out http://docs.oracle.com/javase/8/docs/api/java/util/Queue.html but I have the feeling it's gonna take some work before I'm up to using it! Many thanks again!

  • edited May 2015
    • I don't understand the physics behind sound very much, sorry!
    • ListenUp depends on how frequently AudioInput calls back AudioListener's samples().
    • Minim's getLineIn() determines the buffer size for each samples() callback.
    • Right now it's ListenUp.COLS = 1024 for the buffer size.
    • You see, the waves[] array can't get more samples() than are actually sent by AudioInput.
    • There's some reasonable gap between each callback. ^#(^
    • Perhaps increasing the ListenUp.COLS buffer size to 1 second's worth of data might do the trick? Something like the sketch below:
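
    A rough sketch of that idea (untested! I'm assuming here that the input line accepts such a big buffer):

    import ddf.minim.Minim;
    import ddf.minim.AudioInput;
    
    final int RATE = 44100, BUF = RATE; // ~1 second of samples per buffer
    AudioInput in;
    
    void setup() {
      size(1024, 200);
      in = new Minim(this).getLineIn(Minim.MONO, BUF, RATE);
    }
    
    void draw() {
      background(0);
      stroke(#F0A000);
    
      // in.mix always holds the most recent BUF samples, i.e. ~1 second:
      for (int x = 0; x < width; ++x)
        point(x, height/2 + round(80 * in.mix.get(x * BUF / width)));
    }

    The trade-off: w/ a 44100-sample buffer, the input only refreshes about once per second! :-SS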
  • Hi GoToLoop, from your description of the issues, I'm pretty sure we've covered everything doable on the Processing coding side of the problem. I can't help wondering if there might be something closer to the hardware that would answer the need. Just so you can see where all this was heading, here's the sketch I wanted to improve with the question. I wanted to improve the resolution of the lower-pitched sounds without slowing the response to the higher-pitched sounds. It works best for visualizing simple music, but can also show the frequency components of voice. Thanks again!!!

    import ddf.minim.analysis.*;
    import ddf.minim.*;
    
    Minim       minim;
    FFT         fft2;
    AudioInput in1;
    int bufMult = 2;
    
    float points[] = new float[1024*bufMult];
    
    int[][] spect;
    int divs = 8;
    int[] vColor;
    float[] sigAmp;
    int[] vxColor;
    
    PImage img1;
    PImage img2;
    
    SampleCollector waveform;
    boolean          listening;
    
    class SampleCollector implements AudioListener
    {
      private float[] left;
      private float[] right;
      private float[] half;
      private float[] bHalf;
    
      SampleCollector()
      {
        left = null; 
        right = null;
      }
    
      synchronized void samples(float[] samp)
      {
        left = samp;
        half = samp;
        bHalf = concat(samp, samp); //to give double length
      }
    
      synchronized void samples(float[] sampL, float[] sampR)
      { 
       // if (left!=null && half !=null) 
     //   {bHalf = concat(sampL, sampR);}
        left = sampL;
        right = sampR;
        half = left;
      }
    
      synchronized float[] aaa()
      {  
       if (left != null)
       {
        bHalf = concat(left, half); 
        for (int i=0; i<half.length; i++)
        {
         // half[i] = 0.09;//(bHalf[2*i] + bHalf[2*i+1])/2; //to give same array length over double the time
        }
       } 
        return left;
      }  
    }
    
    void setup()
    {
      waveform = new SampleCollector();
      size(1000, 768);
    
      spect = new int[768][3];
      background(0);
      for (int i=0; i<256; i++)
      {  
         int j=255-i;
         stroke(j, i, 0);
         line(width/2, i, width, i);
         spect[i][0] = j;
         spect[i][1] = i;
         spect[i][2] = 0;
    
         stroke(0, j, i);
         line(width/2, i+256, width, i+256);
         spect[i+256][0] = 0;
         spect[i+256][1] = j;
         spect[i+256][2] = i;
    
         stroke(i, 0, j);
         line(width/2, i+512, width, i+512);     
         spect[i+512][0] = i;
         spect[i+512][1] = 0;
         spect[i+512][2] = j;     
      }
      divs = 12;
      vxColor = new int[height];
      vColor = new int[height];
      for (int k=0; k<divs; k++)
      for (int j=0; j<height; j+=divs)
      { 
        int col = j*768/height;
        if (col>767) {col=767;} //spect undefined past 767
        stroke(spect[col][0], spect[col][1], spect[col][2], 255);
        int vert = j/divs+k*height/divs;
        line(50, vert, 70, vert); 
        vColor[vert] = color(spect[col][0], spect[col][1], spect[col][2], 255);
    
      }
    
      minim = new Minim(this);
    
      in1 = minim.getLineIn(Minim.STEREO, 1024*bufMult); 
      fft2 = new FFT( 1024*bufMult, 44100 );
      fft2.logAverages(16,64);//(minbandwidth Hz for an octave, bands per octave)
    
      sigAmp = new float[4000];
    
      img1 = createImage(width, height, RGB);
      img2 = createImage(5, height, RGB);
    
    //  frameRate(30);
      print("buffer size: ", in1.bufferSize(), "sample rate: ", in1.sampleRate(), "fft2.avgSize: ", fft2.avgSize() );
      print("  ",img2.pixels.length);
      in1.addListener( waveform );
    }
    
    void draw()
    {
      background(0);
      stroke(255);
    
      int startTrace = 0;
    
       while ((points[startTrace]>0 || points[startTrace+1]<0) && startTrace<200 )
        {
          startTrace++;
        }
    
      if (startTrace>200) {startTrace=200;}
      if ( waveform.aaa() != null) 
      for(int i = 0; i<800; i++) //i<waveform.aaa().length-startTrace-1; i++) 
      {
        line( i+100, height*5/6-waveform.aaa()[i+startTrace]*1000, i+101, height*5/6 - waveform.aaa()[i+1+startTrace]*1000 );
      } 
      if (waveform.aaa() != null)
      fft2.forward( waveform.aaa() ); //points captured from in1 below
      for(int i = 0; i < fft2.avgSize(); i++)
      {
        stroke(vColor[i]);
        sigAmp[i] = fft2.getAvg(i)*i*i/10000;  
      }
    
      smooth(); // without this, using an alpha value (the 4th stroke() argument) renders as plain black
      for (int i=0; i<height; i++)
      {     
        stroke(vColor[i], sigAmp[i]);
        line(width-5, height-i, width, height-i); 
      }
    
      for(int i = 0; i<in1.bufferSize()-1; i++)
      {
        points[i]=in1.mix.get(i);
      }
    
    
      img2 = get(width-2, 0, 2, height);
      img1.copy(img1, 10, 0, width-2, height, 0, 0, width-2, height-1);
      img1.copy(img2, 0, 0, 2, height, width-10, 0, width, height);
    
      image(img1, 0, 0);
    
    } 
    
  • Hey Darwexter and world, I stumbled across the exact same problem as you, with more or less the same application in mind. Here's how I resolved it: I ditched the Minim library completely and instead dug into the documentation of the core Java Sound API. Using multithreading, it's possible to constantly feed the main loop with the last second of sound with no gaps between readings. I also reduced the buffer size a lot, so you can get really low response times: I'm getting a theoretical 8.3 ms per read (a buffer of 10000/(60*2) ≈ 83 samples at 10000 Hz), which of course gets added to the frame-drawing lag of the main loop. So at 60 fps you have about 25 ms between a sound happening and a response on the screen.

    Hope this helps somebody!

    import javax.sound.sampled.Mixer;
    import javax.sound.sampled.AudioFormat;
    import javax.sound.sampled.AudioSystem;
    import javax.sound.sampled.DataLine;
    import javax.sound.sampled.TargetDataLine;
    
    int fps = 60;
    int audioSamplingRate = 10000;   //in Hz
    float sampleHistorySeconds = 0.5;   //in seconds
    int audioInputID = 2;            //change to fit your system audio input
    
    int buffersPerFrame = 2; //how many audio frames are to be read per frame
    int audioResolution = 8; //in bits
    int bufferSize = audioSamplingRate/(fps*buffersPerFrame);  //size of each chunk of audio read
    int sampleHistoryLength = floor(sampleHistorySeconds * audioSamplingRate); //length of the samples array
    byte[] sampleHistory = new byte[sampleHistoryLength];
    
    float step; //offset distance for drawing the samples
    
    void setup() {
      new SoundInput().start(); //start thread for recording sound
      frameRate(fps);
      size(1920, 200);
      step = (float)width / (float)sampleHistory.length;
    }
    
    void draw() {
      background(0);
      noFill(); stroke(255);
      beginShape();
      float xoff = 0;
      for (byte pressure : sampleHistory) {
        vertex(xoff, height/2+pressure);
        xoff += step;
      }
      endShape();
    }
    
    //Audio handling Thread
    class SoundInput extends Thread {
      public void run() {
        //------setup-audio-input--------
        AudioFormat format = new AudioFormat(audioSamplingRate, audioResolution, 1, true, true); //define the audio format
        DataLine.Info inputLineInfo = new DataLine.Info(TargetDataLine.class, format);  //
        Mixer.Info[] mixerInfo = AudioSystem.getMixerInfo(); //list all audio channels on the system
        Mixer audioSource = AudioSystem.getMixer(mixerInfo[audioInputID]); //choose one and create a mixer object for it
        println("Audio Input port : " + mixerInfo[audioInputID]);
        println("Audio Format : " + format.toString());
        println("Audio Input delay : " +  1000 * (float)bufferSize/(float)audioSamplingRate + "ms");
        //printArray(mixerInfo);  //uncomment to show all audio lines on the system
        try {
          TargetDataLine inputLine = (TargetDataLine) audioSource.getLine(inputLineInfo); //get a line from the mixer
          inputLine.open(format);
          inputLine.start();    //start audio capture 
        //----audio-reading-thread-loop-----
          while (true) {
            int availableBytes = inputLine.available();
            if (availableBytes >= bufferSize) {
              byte[] targetData = new byte[availableBytes];
              inputLine.read(targetData, 0, availableBytes);     //read new samples
              sampleHistory = concat(sampleHistory, targetData); //stitch new samples set on the end of the previous one
              if (sampleHistory.length > sampleHistoryLength) {
                int subsetIndex = sampleHistory.length - sampleHistoryLength;
                sampleHistory = subset(sampleHistory, subsetIndex);  //discard old samples
              }
            }
          }
        //-----------loop-end--------------
        }
        catch (Exception e) {
          System.err.println(e);
        }
      }
    }
    
  • edited May 2018

    This is a very old thread! But since it's already dug up, I've upgraded "Listen Up" to P3: \m/

    /**
     * Listen Up (v3.2)
     * by  Darwexter (2015/May/19)
     * mod GoToLoop  (2015/May/20)
     *
     * https://Forum.Processing.org/two/discussion/10900/
     * stitch-together-audio-samples-from-microphone-to-get-continuous-signal#Item_9
     */
    
    import ddf.minim.Minim;
    import ddf.minim.AudioListener;
    
    class ListenUp implements AudioListener {
      static final int COLS = 1024, ROWS = 16, DIM = COLS*ROWS;
      static final int RATE = 44100, BITS = 16;
      static final int AMP = 50, GAP = 40, FPS = 120;
      static final color INK = #F0A000, BG = PImage.ALPHA_MASK;
    
      final float[] waves = new float[DIM];
    
      synchronized float[] getWavesClone() {
        return waves.clone();
      }
    
      synchronized float[] copyWavesInto(final float[] w) {
        arrayCopy(waves, w);
        return w;
      }
    
      @Override synchronized void samples(final float[] sample) {
        arrayCopy(waves, 0, waves, COLS, DIM - COLS);
        arrayCopy(sample, waves);
        redraw = true;
      }
    
      @Override void samples(final float[] sampL, final float[] sampR) {
        samples(sampL);
      }
    }
    
    final ListenUp listener = new ListenUp();
    final float[] waves = listener.getWavesClone();
    
    void settings() {
      size(ListenUp.COLS, ListenUp.ROWS * ListenUp.GAP, JAVA2D);
      noSmooth();
      //noLoop();
    
      new Minim(this)
        .getLineIn(Minim.MONO, ListenUp.COLS, ListenUp.RATE, ListenUp.BITS)
        .addListener(listener);
    }
    
    void setup() {
      frameRate(ListenUp.FPS);
      loadPixels();
    }
    
    void draw() {
      //background(ListenUp.BG);
      //loadPixels();
      java.util.Arrays.fill(pixels, ListenUp.BG);
    
      //final float[] waves = listener.getWavesClone();
      listener.copyWavesInto(waves);
    
      final int offset = height / ListenUp.ROWS, offhalf = offset >> 1;
      final int len1 = pixels.length - 1;
    
      for (int y = 0; y < ListenUp.ROWS; ++y) {
        int buf = y * ListenUp.COLS;
        int gap = y * offset + offhalf;
    
        for (int x = 0; x < ListenUp.COLS; ++x) {
          int h = gap + round(ListenUp.AMP * waves[x + buf]);
          int idx = constrain(x + h*width, 0, len1);
          pixels[idx] = ListenUp.INK;
        }
      }
    
      updatePixels();
      surface.setTitle(str(round(frameRate)));
    }
    
  • edited May 2018

    I was thinking about how to apply synchronized in Python Mode. ~O)
    And "Listen Up" seems like a nice opportunity to try that out. :-bd

    It turns out we can annotate Jython functions w/ @make_synchronized, \m/
    whose monitor object becomes their 1st parameter.
    So its monitor is dynamic in nature, determined at the moment of invocation.

    Besides synchronization, we can have Java-style arrays in Jython as well, via the module "jarray". <:-P
    And we can use arrayCopy() on them too! :bz

    So here it is: "Listen Up" for Python Mode, for completeness' sake.
    Even though it's impractical due to its über slowness! :o3

    """
     Listen Up (v3.2.2)
     by  Darwexter (2015/May/19)
     mod GoToLoop  (2015/May/20)
    
     https://Forum.Processing.org/two/discussion/10900/
     stitch-together-audio-samples-from-microphone-to-get-continuous-signal#Item_10
    """
    
    add_library('Minim')
    
    from synchronize import make_synchronized
    from jarray import zeros
    from copy import copy as clone
    from java.util import Arrays
    
    class ListenUp(AudioListener):
        COLS, ROWS = 1024, 16
        DIM = COLS * ROWS
        DIM_COLS = DIM - COLS
        COLS_RANGE, ROWS_RANGE = tuple(range(COLS)), tuple(range(ROWS))
    
        RATE, BITS = 44100, 16
        AMP, GAP, FPS = 50, 40, 120
    
        INK, BG = 0xffF0A000, PImage.ALPHA_MASK
    
        def __init__(self, ARR_TYPE='f'): self.waves = zeros(ListenUp.DIM, ARR_TYPE)
    
        @make_synchronized
        def getWavesClone(self): return clone(self.waves)
    
    
        @make_synchronized
        def copyWavesInto(self, w):
            arrayCopy(self.waves, w)
            return w
    
    
        @make_synchronized
        def samples(self, sampL, sampR=zeros(0, 'f')):
            w = self.waves
            arrayCopy(w, 0, w, ListenUp.COLS, ListenUp.DIM_COLS)
            arrayCopy(sampL, w)
            redraw()
    
    
    
    listener = ListenUp()
    waves = listener.getWavesClone()
    
    def settings():
        size(ListenUp.COLS, ListenUp.ROWS * ListenUp.GAP, JAVA2D)
        noSmooth()
        # noLoop()
    
        Minim(this)\
        .getLineIn(Minim.MONO, ListenUp.COLS, ListenUp.RATE, ListenUp.BITS)\
        .addListener(listener)
    
    
    def setup(): frameRate(ListenUp.FPS), loadPixels()
    
    
    def draw():
        Arrays.fill(pixels, ListenUp.BG)
        listener.copyWavesInto(waves)
    
        offset = height / ListenUp.ROWS
        offhalf = offset >> 1
        len1 = len(pixels) - 1
    
        for y in ListenUp.ROWS_RANGE:
            buf = y * ListenUp.COLS
            gap = y * offset + offhalf
    
            for x in ListenUp.COLS_RANGE:
                h = gap + PApplet.round(ListenUp.AMP * waves[x + buf])
                idx = constrain(x + h*width, 0, len1)
                pixels[idx] = ListenUp.INK
    
        updatePixels()
        this.surface.title = `round(frameRate, 1)`
    
  • edited December 2017

    In my tests here, the slowness is located in the inner loop. 3:-O
    If we just replace it w/ for x in ListenUp.COLS_RANGE: pass, Python Mode almost reaches Java Mode's performance! >-)

    I guessed it was due to waves[] and/or pixels[] access.
    But even if we remove both of them from the inner loop, we won't see any significant improvement: @-)

    for x in ListenUp.COLS_RANGE:
        h = gap + PApplet.round(ListenUp.AMP)
        idx = constrain(x + h*width, 0, len1)
    

    Then I've tested it again w/ just waves[] & pixels[] inside the inner loop:

    for x in ListenUp.COLS_RANGE:
        waves[x] = x
        pixels[x] = ListenUp.INK
    

    This time it was much faster, but still only a fraction of the speed of for x in ListenUp.COLS_RANGE: pass. 8-|

    My conclusion is that even simple arithmetic, such as addition, multiplication, etc., slows Jython down to a crawl! :-SS

    Most likely b/c numbers in Jython are references, not primitives! :-&
    And each operation on them instantiates another number object, unless it's cached. 3:-O
