<?xml version="1.0" encoding="utf-8"?>
<rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom">
	<channel>
   <title>Tagged with #fft - Processing 2.x and 3.x Forum</title>
   <link>https://forum.processing.org/two/discussions/tagged/feed.rss?Tag=%23fft</link>
   <pubDate>Sun, 08 Aug 2021 18:09:15 +0000</pubDate>
   <description>Tagged with #fft - Processing 2.x and 3.x Forum</description>
   <language>en-CA</language>
   <atom:link href="/two/discussions/tagged%23fft/feed.rss" rel="self" type="application/rss+xml" />
   <item>
      <title>How to add a trigger to an oscilloscope program?</title>
      <link>https://forum.processing.org/two/discussion/28074/how-to-add-a-trigger-to-an-oscilloscope-program</link>
      <pubDate>Sat, 14 Jul 2018 06:52:29 +0000</pubDate>
      <dc:creator>8bit_coder</dc:creator>
      <guid isPermaLink="false">28074@/two/discussions</guid>
      <description><![CDATA[<p>I'm working on a very bare-bones oscilloscope program using the minim library:</p>

<pre><code>import ddf.minim.*;

Minim minim;
AudioInput in;
color white;

void setup()
{
  size(512, 100, P2D);
  white = color(255);
  colorMode(HSB,100);
  minim = new Minim(this);
  minim.debugOn();

  in = minim.getLineIn(Minim.STEREO, 512);
  background(0);
}

void draw()
{
  background(0);
  // draw the waveforms
  for(int i = 0; i &lt; in.bufferSize() - 1; i++)
  {
    stroke((1+in.left.get(i))*50,100,100);
    line(i, 50 + in.left.get(i)*50, i, 50 + in.left.get(i+1)*50);
  }
}


void stop()
{
  in.close();
  minim.stop();
  super.stop();
}
</code></pre>

<p>My question is: how do I make the waveform trigger/stabilize like an actual oscilloscope? I want the waveform to stay locked at a certain spot instead of jumping around, i.e. keep it stable at all times.</p>
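
<p>One standard approach, sketched as a plain helper (not minim API): instead of always drawing from buffer index 0, find the first rising zero crossing each frame and start drawing from there, so successive frames line up, like a scope's edge trigger.</p>

<pre><code>// Find the first rising zero crossing in the buffer.
// Drawing the waveform starting at this index keeps successive
// frames aligned, like an oscilloscope's edge trigger.
int findTrigger(float[] samples) {
  for (int i = 1; i &lt; samples.length; i++) {
    if (samples[i-1] &lt; 0 &amp;&amp; samples[i] &gt;= 0) {
      return i;
    }
  }
  return 0; // no crossing found: fall back to the buffer start
}
</code></pre>

<p>In draw() you would copy the current samples (e.g. float[] buf = in.left.toArray();), then start the line loop at findTrigger(buf) and draw buf.length - offset - 1 segments.</p>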

<p>Thanks for your help.</p>
]]></description>
   </item>
   <item>
      <title>FFT to identify bird's singing</title>
      <link>https://forum.processing.org/two/discussion/27271/fft-to-identify-bird-s-singing</link>
      <pubDate>Sun, 25 Mar 2018 11:49:29 +0000</pubDate>
      <dc:creator>zoulou</dc:creator>
      <guid isPermaLink="false">27271@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I'm sorry if I'm not posting in the right category; I'm new to the forum.</p>

<p>I'm working on a hiking-helper project for Android, and I thought of a feature that could be fun: recognizing birdsong. The idea is that you record a bird you pass by and a database tells you which species it is.</p>

<p>I'm no sound-processing pro, but I thought implementing an FFT could help. I was planning to use the FFT to get the max/min amplitude and compare it with the database's pre-processed information; of course I don't plan on using only the min/max indicators.</p>

<p>I based my code on this: <a href="https://github.com/blanche/shayam/blob/master/java/at.lw.shayam/src/at/lw/shayam/AudioAnalysis.java" target="_blank" rel="nofollow">https://github.com/blanche/shayam/blob/master/java/at.lw.shayam/src/at/lw/shayam/AudioAnalysis.java</a>,
and as much as I understand the math behind the Fourier transform, I don't get everything in that code.</p>

<p>So here are my questions:</p>

<ol>
<li><p>Are the chunks used to speed up the FFT computation? If we have 2^n-sample chunks, do we get 2^n smaller FTs to process?</p></li>
<li><p>The results[][] 2D array contains... complex numbers. But I don't understand what x and y are in results[x][y], or how you find the frequency and the amplitude. (Of course I'll have to convert the complex values to doubles.)</p></li>
<li><p>Do you think this approach is enough? The project isn't professional, so I'm not aiming for a 100% recognition rate!</p></li>
</ol>
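
<p>On question 2, a sketch of the usual convention (worth verifying against that repository's code): in results[x][y], x is typically the chunk (time-window) index and y the frequency-bin index, and each entry is the complex coefficient for that frequency in that window. The frequency and amplitude of a bin then come out as:</p>

<pre><code>// For an N-point FFT at sample rate sr, bin i corresponds to
// frequency i * sr / N (meaningful up to i = N/2).
float binToFreq(int i, float sampleRate, int fftSize) {
  return i * sampleRate / fftSize;
}

// Amplitude of one bin, taken as the complex magnitude.
double magnitude(double re, double im) {
  return Math.sqrt(re*re + im*im);
}
</code></pre>

<p>So a large magnitude at results[x][y] means that around time x * chunkSize / sampleRate seconds, frequency binToFreq(y, sampleRate, chunkSize) was strong. (And on question 1: chunking is less about speed than about time resolution; each chunk yields one spectrum, so a sequence of chunks gives you a spectrogram over time.)</p>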

<p>Thank you for your answers.</p>
]]></description>
   </item>
   <item>
      <title>Turn audio input sound into color</title>
      <link>https://forum.processing.org/two/discussion/19461/turn-audio-input-sound-into-color</link>
      <pubDate>Fri, 02 Dec 2016 00:49:30 +0000</pubDate>
      <dc:creator>kmooney</dc:creator>
      <guid isPermaLink="false">19461@/two/discussions</guid>
      <description><![CDATA[<p>Hello all,</p>

<p>I am currently having some major trouble attempting to turn input sound into color (using minim fft).</p>

<p>I am hoping to use bandpass filters to set three frequency ranges and then use the numbers that come out of the analysis of the audio input waves to determine the r,g,b fill values of a square.</p>

<p>(Also I am open to any other suggestions for ways to turn input audio into colors.)</p>
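
<p>A sketch of one way to do the mapping without three separate bandpass filters: run minim's FFT on the input, average three slices of the spectrum (low/mid/high; the bin boundaries are placeholders you'd tune), and use each average as a 0-255 channel:</p>

<pre><code>// Average a slice of the FFT spectrum (bins lo..hi-1) and scale it
// to a 0..255 color channel; gain is a fudge factor tuned by ear,
// since raw band averages are usually small.
float bandToChannel(float[] spectrum, int lo, int hi, float gain) {
  float sum = 0;
  for (int i = lo; i &lt; hi; i++) sum += spectrum[i];
  float avg = sum / (hi - lo);
  return Math.min(255f, avg * gain);
}
</code></pre>

<p>With a spectrum filled from fft.getBand(i) after fft.forward(in.mix), something like fill(bandToChannel(spec, 0, 10, gain), bandToChannel(spec, 10, 60, gain), bandToChannel(spec, 60, 200, gain)) in RGB colorMode would color the square; the ranges and gain are guesses to adjust by ear.</p>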

<p>Any help is MUCH appreciated! Thank you so much in advance! Cheers!</p>

<p>Katherine</p>
]]></description>
   </item>
   <item>
      <title>gpu fft/histogram with shader</title>
      <link>https://forum.processing.org/two/discussion/24948/gpu-fft-histogram-with-shader</link>
      <pubDate>Fri, 10 Nov 2017 10:57:39 +0000</pubDate>
      <dc:creator>pietroLama</dc:creator>
      <guid isPermaLink="false">24948@/two/discussions</guid>
      <description><![CDATA[<p>Hi guys, I'm working on this shader. It generates a real-time FFT/histogram of an audio (or image/video) input.</p>

<p>Now the problem: it works, but I created it on Shadertoy, and while porting it to Processing I've hit a little problem.
I have no idea how Shadertoy passes the audio/image input to the GLSL texture function (iChannel0, ...).
In other words:
how do I create the appropriate input to reproduce the effect I get here:
<a href="https://www.shadertoy.com/" target="_blank" rel="nofollow">https://www.shadertoy.com/</a></p>

<p>To try it, copy and paste the following code into any of the examples the site displays and press the play button at the bottom left,
then click the iChannel0 box, select Music, and choose one of the proposed tracks.</p>

<pre><code>void mainImage(out vec4 fragColor, in vec2 fragCoord) {

    vec2 uv = fragCoord.xy / iResolution.xy;

    vec2 res = floor(400.0*vec2(10.15, iResolution.y/iResolution.x));

    vec3 col = vec3(0.);

    vec2 iuv = floor( uv * res )/res;

    float fft = texture(iChannel0, vec2(iuv.x, 0.1)).x; 
    fft *= fft;

    if(iuv.y&lt;fft) {
        col = vec3(255.,255.,255.-iuv.y*255.);
    }

    fragColor = vec4(col/255.0, 1.0);
}
</code></pre>

<p>below the code for the implementation in processing:</p>

<pre><code>import ddf.minim.*;
import com.thomasdiewald.pixelflow.java.DwPixelFlow;
import com.thomasdiewald.pixelflow.java.imageprocessing.DwShadertoy;

Minim minim;
AudioInput in;
DwPixelFlow context;
DwShadertoy toy;
PGraphics2D pg_src;

void settings() {
  size(1024, 820, P2D);
  smooth(0);
}
void setup() {
  surface.setResizable(true);

  minim = new Minim(this);
  in = minim.getLineIn();

  context = new DwPixelFlow(this);
  context.print();
  context.printGL();

  toy = new DwShadertoy(context, "fft.frag");
  pg_src = (PGraphics2D) createGraphics(width, height, P2D);

  pg_src.smooth(0);

  println(PGraphicsOpenGL.OPENGL_VENDOR);
  println(PGraphicsOpenGL.OPENGL_RENDERER);
}

void draw() {
  pg_src.beginDraw();
  pg_src.background(0);
  pg_src.stroke(255);
  //code to convert audio input to a correct input for the function :toy.set_iChannel(0, pg_src);
  pg_src.endDraw();

  toy.set_iChannel(0, pg_src);
  toy.apply(this.g);

  String txt_fps = String.format(getClass().getSimpleName()+ "   [size %d/%d]   [frame %d]   [fps %6.2f]", width, height, frameCount, frameRate);
  surface.setTitle(txt_fps);
}
</code></pre>
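
<p>For the missing conversion step: Shadertoy feeds music in as a small texture whose first row holds FFT magnitudes (and the next row the waveform), which the shader samples at vec2(iuv.x, ...). A sketch of the equivalent here, under the assumption you add a minim FFT to the sketch: call fft.forward(in.mix) each frame, then write normalized band values as grayscale pixels across pg_src (with loadPixels()/updatePixels() around the loop). Packing one normalized value into the ARGB int Processing keeps in pixels[] looks like:</p>

<pre><code>// Pack a normalized value (0..1) into the opaque grayscale ARGB int
// that Processing stores in pixels[].
int grayPixel(float v) {
  int g = (int) (Math.max(0f, Math.min(1f, v)) * 255);
  return 0xFF000000 | (g &lt;&lt; 16) | (g &lt;&lt; 8) | g;
}
</code></pre>

<p>Then something like pg_src.pixels[x] = grayPixel(fft.getBand(x * fft.specSize() / pg_src.width) / 50.0f) for each x; the 50.0f normalization is a guess to tune, and filling every row of a column with the same value is fine since the shader samples a fixed y.</p>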

<p>And the GLSL code for the fft.frag file (the same as before, except I added the uniforms that Shadertoy declares automatically, plus the bits the PixelFlow library needs to communicate with fft.frag):</p>

<pre><code>#version 150

#define SAMPLER0 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER1 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER2 sampler2D // sampler2D, sampler3D, samplerCube
#define SAMPLER3 sampler2D // sampler2D, sampler3D, samplerCube

uniform SAMPLER0 iChannel0; // image/buffer/sound    Sampler for input textures 0
uniform SAMPLER1 iChannel1; // image/buffer/sound    Sampler for input textures 1
uniform SAMPLER2 iChannel2; // image/buffer/sound    Sampler for input textures 2
uniform SAMPLER3 iChannel3; // image/buffer/sound    Sampler for input textures 3

uniform vec3  iResolution;           // image/buffer          The viewport resolution (z is pixel aspect ratio, usually 1.0)
uniform float iTime;                 // image/sound/buffer    Current time in seconds
uniform float iTimeDelta;            // image/buffer          Time it takes to render a frame, in seconds
uniform int   iFrame;                // image/buffer          Current frame
uniform float iFrameRate;            // image/buffer          Number of frames rendered per second
uniform vec4  iMouse;                // image/buffer          xy = current pixel coords (if LMB is down). zw = click pixel
uniform vec4  iDate;                 // image/buffer/sound    Year, month, day, time in seconds in .xyzw
uniform float iSampleRate;           // image/buffer/sound    The sound sample rate (typically 44100)
uniform float iChannelTime[4];       // image/buffer          Time for channel (if video or sound), in seconds
uniform vec3  iChannelResolution[4]; // image/buffer/sound    Input texture resolution for each channel

void mainImage(out vec4 fragColor, in vec2 fragCoord) {
    vec2 uv = fragCoord.xy / iResolution.xy;
    vec2 res = floor( 1000.0*vec2(1.0, iResolution.y/iResolution.x) );

    vec3 col = vec3(0.);

    vec2 iuv = floor( uv * res )/res;

    float f = 1.11-abs(fract(uv.x * res.x));
    float g = 1.11-abs(fract(uv.y * res.y));

    float fft = texture(iChannel0, vec2(iuv.x, 0.2)).x; 
    fft = 1.*fft*fft;

    if(iuv.y&lt;fft) {
        col = vec3(74.0,82.0,4.0);
    }

    fragColor = vec4(col/255.0, 1.0);
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Saving the results of inverse FFT as an audio file</title>
      <link>https://forum.processing.org/two/discussion/24655/saving-the-results-of-inverse-fft-as-an-audio-file</link>
      <pubDate>Fri, 20 Oct 2017 15:12:05 +0000</pubDate>
      <dc:creator>LokiBear</dc:creator>
      <guid isPermaLink="false">24655@/two/discussions</guid>
      <description><![CDATA[<p>Hi there,
I have just recently (last night, really) started using Processing and the Minim library, intending to do some sound processing. The idea is to load an audio file (for both live and offline analysis), apply an FFT, adjust some frequencies, apply the inverse FFT, and produce an audio file both for immediate playback and for writing to disk.
I'm really struggling with saving the contents of the buffer that the IFFT populates! I can see in the spectrum visualization that the right frequencies are adjusted and all seems fine, but this is the part where I'm not exactly sure what to do.
Currently I am mostly looking at the offlineAnalysis example, because doing these things offline matters more for now than doing them live.</p>

<p>Edit: the issue with live playback still stands. What I have done is create a huge float array containing the IFFT output. I have a MultiChannelBuffer, and using setSample I replace the original samples with the ones from the float array and then play it back using a Sampler. I know this works because the audio sounds fine, but the moment I modify something using scaleFreq or the like, I hear a lot of clicking. I'm not exactly sure what I am doing wrong; I'm sure there is a common practice for how people approach this?</p>
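
<p>For the save-to-disk part, a sketch that sidesteps minim entirely: convert the float samples (assumed mono, in -1..1) to 16-bit PCM and let javax.sound.sampled write the WAV. (The clicking after scaleFreq is typically the discontinuity you get at chunk boundaries when blocks are modified independently; windowed overlap-add is the usual remedy, though it's beyond this sketch.)</p>

<pre><code>import javax.sound.sampled.*;
import java.io.*;

// Convert -1..1 float samples to little-endian 16-bit PCM bytes.
byte[] toPcm16(float[] samples) {
  byte[] out = new byte[samples.length * 2];
  for (int i = 0; i &lt; samples.length; i++) {
    float v = Math.max(-1f, Math.min(1f, samples[i]));
    int s = (int) (v * 32767);
    out[2*i]     = (byte) (s &amp; 0xFF);
    out[2*i + 1] = (byte) ((s &gt;&gt; 8) &amp; 0xFF);
  }
  return out;
}

// Wrap the PCM bytes in a mono 16-bit WAV file.
void writeWav(float[] samples, float sampleRate, File f) throws IOException {
  byte[] pcm = toPcm16(samples);
  AudioFormat fmt = new AudioFormat(sampleRate, 16, 1, true, false);
  AudioInputStream ais = new AudioInputStream(
      new ByteArrayInputStream(pcm), fmt, samples.length);
  AudioSystem.write(ais, AudioFileFormat.Type.WAVE, f);
}
</code></pre>

<p>Call writeWav(mySamples, 44100, new File(sketchPath("out.wav"))) once the array is filled; the sample rate should match the source file's, and sketchPath is the Processing helper for a path inside the sketch folder.</p>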
]]></description>
   </item>
   <item>
      <title>Help with Audio Visualizer</title>
      <link>https://forum.processing.org/two/discussion/24080/help-with-audio-visualizer</link>
      <pubDate>Sat, 09 Sep 2017 21:31:06 +0000</pubDate>
      <dc:creator>blenda_splenda</dc:creator>
      <guid isPermaLink="false">24080@/two/discussions</guid>
      <description><![CDATA[<p>Hello! I am trying to create an audio visualizer using live sound. The problem I keep running into is a NullPointerException. Please take a look at this code and give me some tips on how to fix it! Thank you.</p>

<pre><code>import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
AudioPlayer jingle;
AudioInput input;
FFT fft;
int[][] colo = new int[100][10];

float color1 = 35;
float color2 = 45;
float color3 = 65;
float color4 = 75;

void setup() {
  size(512, 400);
  background(0);
  colorMode(HSB,100,100,100,100);

  minim = new Minim(this);

  input = minim.getLineIn();

  fft = new FFT(input.bufferSize(), input.sampleRate());
}

void draw() {
  background(0);
  noStroke();
  fill(0, 5);
  rect(0, 0, width, height);
  pushMatrix();
  translate(width/2, height/2);
  rotate(radians(frameCount % 360 * 2));

  fft.forward(input.mix);

  for(int i = 0; i &lt; fft.specSize(); i++) {

    if(jingle.mix.get(i)*200 &gt; fft.specSize()) {
      stroke(color1,100,100);
    }
    else {
      stroke(color2,100,100);
    }

    line(cos(i)*50, sin(i)*50, cos(i)*abs(jingle.left.get(i))*200 + cos(i)*50, sin(i)*abs(jingle.right.get(i))*200 + sin(i)*50);
  }

  for(int j = fft.specSize(); j &gt; 0; j--) {

    if(jingle.mix.get(j)*200 &gt; fft.specSize()) {
      stroke(color3,100,100);
    }
    else {
      stroke(color4,100,100);
    }

    line(cos(j)*50, sin(j)*50, cos(j)*abs(jingle.right.get(j))*200 + cos(j)*50, sin(j)*abs(jingle.left.get(j))*200 + sin(j)*50);
  }

  popMatrix();
}

void keyPressed() {

  if(key == 'r') {
    color1 = 0;
    color2 = 5;
    color3 = 90;
    color4 = 95;
  }

  if(key == 'g') {
    color1 = 35;
    color2 = 45;
    color3 = 65;
    color4 = 75;
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Generate a drawing from a sound file?</title>
      <link>https://forum.processing.org/two/discussion/23312/generate-a-drawing-from-a-sound-file</link>
      <pubDate>Tue, 04 Jul 2017 18:22:44 +0000</pubDate>
      <dc:creator>FleetingImages</dc:creator>
      <guid isPermaLink="false">23312@/two/discussions</guid>
      <description><![CDATA[<p>Hi All,</p>

<p>I want to use an audio file (specifically its pitch, tone, volume, etc.) to generate a line drawing. I'm not looking to make a music visualizer. I've looked at and modified a few sketches I've found, but nothing is quite what I want, so I figured I should do it myself.</p>

<p>I'm hoping for an output like <a rel="nofollow" href="http://imgur.com/EWUsVfN">this image</a>, which I made using a modified sketch.</p>

<p>Where should / do I begin?</p>

<p>Thanks.</p>
]]></description>
   </item>
   <item>
      <title>FFT visualization in Python Mode</title>
      <link>https://forum.processing.org/two/discussion/23263/fft-visualization-in-python-mode</link>
      <pubDate>Fri, 30 Jun 2017 08:24:01 +0000</pubDate>
      <dc:creator>noahbuddy</dc:creator>
      <guid isPermaLink="false">23263@/two/discussions</guid>
      <description><![CDATA[<p>I created this to get more familiar with FFT. Like the Fortran example at the <a rel="nofollow" href="http://www.dspguide.com/">DSP Guide</a>, Python supports complex numbers directly.</p>

<p>Things to note: The forward and inverse FFT are very similar.</p>

<p>Pay close attention to how the sample sets ('signal' and 'wave' arrays) are displayed versus how they were created.</p>

<p>Included comments are my interpretation of the algorithm.</p>

<pre><code># Length of data to run through FFT
N = 256 # must be a power of 2 for FFT
M = int(log(N) / log(2)) + 1 # grab number of bit levels in N, plus one for range()

"""
Fast Fourier Transform
  Modifies data to represent the Complex Fourier Transform of the input
  Note: The inverse transform will also overwrite data passed to it

Reference: <a href="http://www.dspguide.com/ch12/3.htm" target="_blank" rel="nofollow">http://www.dspguide.com/ch12/3.htm</a>
"""
def FFT( data ):
  # Sort by bit reversed index
  nd2 = N/2
  j = nd2
  k = 0
  for i in xrange( 1, N-2 ):
    if i &lt; j:
      data[j], data[i] = data[i], data[j] # Pythonic swap
    k = nd2
    while not (k&gt;j):
      j = j - k
      k = k / 2
    j = j + k


  # Bulk of the algorithm from here:
  # For each bit level...
  for L in xrange( 1, M ):

    # Calculate which frequencies to work on this round,
    le = 1&lt;&lt;L
    le2 = le / 2
    # Phase step size at this bit level, AKA: the frequency
    # A complex number
    s = cos(PI/le2) - sin(PI/le2)*1j

    # Init our complex multiplier
    u = 1+0j

    for j in xrange( 1, le2+1 ):
      jm1 = j - 1

      for i in xrange( jm1, N, le ):
        ip = i + le2 # where in data? i and ip

        # Complex multiplication
        # This is what creates constructive or destructive interference
        #   (if sample data is similar to selected frequency, they combine)
        #   (if sample data is _not_ similar to selected frequency, they cancel)
        t = data[ip] * u

        # Positive and Negative frequency bins
        # The FFT is symmetric
        data[ip] = data[i] - t
        data[i] = data[i] + t

      # With each step, rotate multiplier by the frequency step
      # Multiplying complex numbers is easier if you convert them to polar representation first
      #     In polar coordinates add lengths and angles
      #     Convert back to rectangular (complex)
      u = u * s


#
# Inverse FFT
#
def IFFT( data ):
  for i in xrange( len(data) ): # Mirror imaginary values of data
    data[i] = data[i].conjugate()

  FFT( data ); # FFT of mirrored data

  for i in xrange( len(data) ): # Mirror again and scale
    data[i] = data[i].conjugate() / N


# Init samples with complex numbers, length of N
signal = [ (0+0j) ]*N
wave = [ (0+0j) ]*N

# Build a signal in frequency domain
F = 7
signal[F] = 8j
signal[N-F] = -8j

# Create a sine wave in time domain
for i in xrange( N ):
    wave[i] = 0.5*sin( 4*(i*PI)/N )

# Arbitrary drawing size
scl = 1

def setup():
    size(512,512,P3D)
    noFill()
    FFT(signal)
    FFT(wave)
    noLoop()

def mouseDragged():
    redraw()

def draw():
    background(0)
    translate( (width/2), (height/2) )
    rotateY( (TWO_PI*mouseX) / width )
    rotateX( (TWO_PI*mouseY) / height )

    # Draw FFT of frequency domain signal in green
    x = 0
    lastx = 0
    lasty = 0
    lastz = 0
    stroke(0,255,0)
    for n in signal:
        x += 1
        y = n.real * scl
        z = n.imag * scl
        line(lastx-(N/2),lasty,lastz, x-(N/2),y,z)
        lastx = x
        lasty = y
        lastz = z

    # Draw FFT of time domain wave in blue
    x = 0
    lastx = 0
    lasty = 0
    lastz = 0
    stroke(0,0,255)
    for n in wave:
        x += 1
        y = n.real * scl
        z = n.imag * scl
        line(lastx-(N/2),lasty,lastz, x-(N/2),y,z)
        lastx = x
        lasty = y
        lastz = z
</code></pre>
]]></description>
   </item>
   <item>
      <title>Defining Frequency Bands w/ Minim</title>
      <link>https://forum.processing.org/two/discussion/14610/defining-frequency-bands-w-minim</link>
      <pubDate>Mon, 25 Jan 2016 02:53:50 +0000</pubDate>
      <dc:creator>ProJammin</dc:creator>
      <guid isPermaLink="false">14610@/two/discussions</guid>
      <description><![CDATA[<p>I'm using minim's BeatDetect object to analyze the incoming microphone signal; BeatDetect uses FFT. In the BeatDetect class there are four functions of interest: isHat(), isKick(), isSnare(), and isRange(int, int, int). The first three are customized versions of isRange(). What I'm trying to do is recognize more than just hat, kick, and snare drums. To do this, I need to understand the math in isHat(), isKick(), and isSnare(). I'm hoping someone here can help me. Here is the code for the four functions.</p>

<pre><code>    public boolean isKick()
{
    if (algorithm == SOUND_ENERGY)
    {
        return false;
    }
    int upper = 6 &gt;= fft.avgSize() ? fft.avgSize() : 6;
    return isRange(1, upper, 2);
}

public boolean isSnare()
{
    if (algorithm == SOUND_ENERGY)
    {
        return false;
    }
    int lower = 8 &gt;= fft.avgSize() ? fft.avgSize() : 8;
    int upper = fft.avgSize() - 1;
    int thresh = (upper - lower) / 3 + 1;
    return isRange(lower, upper, thresh);
}

public boolean isHat()
{
    if (algorithm == SOUND_ENERGY)
    {
        return false;
    }
    int lower = fft.avgSize() - 7 &lt; 0 ? 0 : fft.avgSize() - 7;
    int upper = fft.avgSize() - 1;
    return isRange(lower, upper, 1);
}

public boolean isRange(int low, int high, int threshold)
{
    if (algorithm == SOUND_ENERGY)
    {
        return false;
    }
    int num = 0;
    for (int i = low; i &lt; high + 1; i++)
    {
        if (isOnset(i))
        {
            num++;
        }
    }
    return num &gt;= threshold;
}
</code></pre>

<p>I want to be able to recognize beats accurately across a range of instruments by manipulating the methods above. Can anybody help teach me what I need to know? Currently, I understand that the functions return true if a beat is detected within a specified range of frequency bands. What I don't understand is why the values of the parameters [low, high, threshold] correlate to specific instruments. Thanks for reading, and please respond.</p>
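
<p>A tentative reading of the numbers (worth checking against how BeatDetect actually configures its FFT averages): isRange(low, high, threshold) is true only if at least threshold of the bands low..high report an onset at the same instant, and the instrument mapping comes from where each drum's energy lives. Kick checks the lowest bands 1..6, snare a broad middle-to-top range with a proportional threshold, hat only the top seven bands. If the averages split 0..sampleRate/2 evenly (an assumption; log-spaced averages would change this), the band-to-frequency math is:</p>

<pre><code>// Rough center frequency of average band b, assuming avgSize linear
// bands splitting 0 .. sampleRate/2 evenly (an assumption!). Kick
// checks bands 1..6 (low), snare 8..top, hat the top seven bands.
float bandCenterHz(int b, float sampleRate, int avgSize) {
  float bandWidth = (sampleRate / 2) / avgSize;
  return bandWidth * b + bandWidth / 2;
}
</code></pre>

<p>E.g. bandCenterHz(1, 44100, 32) is about 1034 Hz; the fact that kick drums live far lower is itself a hint that the averages are probably log-spaced in practice, so treat the linear version purely as a starting point for your own isRange experiments.</p>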
]]></description>
   </item>
   <item>
      <title>Why does it work this way? Need help for a schoolproject</title>
      <link>https://forum.processing.org/two/discussion/22919/why-does-it-work-this-way-need-help-for-a-schoolproject</link>
      <pubDate>Sun, 04 Jun 2017 14:43:51 +0000</pubDate>
      <dc:creator>beladrum</dc:creator>
      <guid isPermaLink="false">22919@/two/discussions</guid>
      <description><![CDATA[<p>Hey guys,</p>

<p>First of all, sorry if my English isn't perfect; I'm from Germany, but I'll try my best. :D</p>

<p>I'm working on a school project and need to create a music visualizer in 3D. I've written code that works well and I like how it looks, but I don't know why it works this way. The code creates many spheres when I use the translate command to place a sphere in the middle of the screen. In my opinion it looks very cool with more of these spheres, but I don't understand why there are so many. If I just write the sphere command without the translate, it creates only one sphere, which moves with the analyzed float from the FFT.</p>

<p>Can someone maybe tell me why this happens? I need to present my code and explain how it works in front of my teacher. I hope you can help me. Thanks and greetings!</p>

<hr />

<p>CODE :</p>

<pre><code>import shapes3d.*;
import shapes3d.animation.*;
import shapes3d.utils.*;

import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;  



import peasy.*;
int i;
PeasyCam cam;
Minim minim; 
AudioPlayer player;
FFT fft;
void setup() {
  size(800, 800, P3D);
  cam = new PeasyCam(this, 0, -100, 200, 1000);
  cam.setMinimumDistance(20);
  cam.setMaximumDistance(5000);

  minim= new Minim(this);
  player= minim.loadFile("bmth.mp3");
  player.play();

  fft = new FFT(player.bufferSize(), player.sampleRate());
}

void draw() {
  rotateX(-.5);
  rotateY(-.5);
  background(0);
  //fill(255,0,0);
  //sphere(i);
  keyPressed();

  fft.forward(player.mix);// used to analyze the frequency coming from the mix 
  for (int i = 0; i &lt; fft.specSize(); i += 50)// specSize is changing the range of analysis
  {
    noFill();

    ellipse(width/2, height/2, 500, fft.getFreq(i/2)*2.5);

    ellipse(width/2, height/2, 200+fft.getFreq(i/2)*2.5, 300+fft.getFreq(i/2)*2.5);

    ellipse(width/2, height/2, fft.getFreq(i/2)*2.5, 400);

    fill(255, 0, 0);

    stroke(255, 255, 255);

    pushMatrix();
    translate(400, 400, 5);

    sphereDetail(20, 3);

    sphere(50+ fft.getFreq(i/2)*0.2);

    popMatrix();

    sphereDetail(3, 20);

    translate(400, 50, 5);

    sphere(50+ fft.getFreq(i/2)*0.2);

    pushMatrix();

    translate(0, 750, 5);
    sphere(50+ fft.getFreq(i/2)*0.2);
    popMatrix();
  }
}

void keyPressed () {

  if ( key=='s') { 
    player.pause();
  }
  if ( key=='d') { 
    player.play();
  }
}
</code></pre>
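
<p>A short explanation of the many-spheres effect, in case it helps for the presentation: transforms in Processing compose. pushMatrix()/popMatrix() save and restore the coordinate system, so the translates around the first and third spheres are undone; but the bare translate(400, 50, 5) in the middle of the loop body is never popped, so every iteration of the for loop shifts the drawing origin by a further (400, 50, 5) and draws the spheres somewhere new: one set of spheres per iteration. A toy model of the matrix stack (tracking just the x/y origin) shows the drift:</p>

<pre><code>import java.util.ArrayDeque;

// A toy model of Processing's matrix stack that tracks only the
// x/y origin, to show why the un-popped translate() accumulates.
float ox = 0, oy = 0;
ArrayDeque&lt;float[]&gt; stack = new ArrayDeque&lt;float[]&gt;();

void push() { stack.push(new float[]{ox, oy}); }                // pushMatrix()
void pop()  { float[] m = stack.pop(); ox = m[0]; oy = m[1]; }  // popMatrix()
void shift(float dx, float dy) { ox += dx; oy += dy; }          // translate()
</code></pre>

<p>After two simulated loop iterations (push, shift, pop, then a bare shift(400, 50)) the origin sits at (800, 100); with a pushMatrix()/popMatrix() pair around that middle translate you would get a single sphere again.</p>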
]]></description>
   </item>
   <item>
      <title>Max FTOM equivalent in Processing? Minim/FFT</title>
      <link>https://forum.processing.org/two/discussion/22648/max-ftom-equivalent-in-processing-minim-fft</link>
      <pubDate>Thu, 18 May 2017 00:43:19 +0000</pubDate>
      <dc:creator>jameswest</dc:creator>
      <guid isPermaLink="false">22648@/two/discussions</guid>
      <description><![CDATA[<p>Hi, I'm trying to find the dominant frequency in an audio signal using FFT/minim and convert that frequency to the equivalent MIDI note.</p>

<p>While googling around I came across ftom for Max. I was wondering if something similar exists for Processing that someone might have come across? Or would it be easier to port this out to Max to find the dominant frequency and MIDI note and pipe the result back into Processing?</p>
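
<p>No need to round-trip through Max for the conversion itself: ftom is just the standard equal-temperament formula, m = 69 + 12 * log2(f / 440), which is a one-liner:</p>

<pre><code>// Max's ftom: frequency in Hz to (fractional) MIDI note number.
// 440 Hz -&gt; 69 (A4); round the result for the nearest note.
float ftom(float freqHz) {
  return 69 + 12 * (float) (Math.log(freqHz / 440.0) / Math.log(2));
}
</code></pre>

<p>Feed it the frequency of the loudest FFT band (the index of the max fft.getBand(i), times sampleRate / timeSize) and round to the nearest integer for the MIDI note; note that the FFT peak bin is a fairly coarse pitch estimate for real instruments.</p>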
]]></description>
   </item>
   <item>
      <title>How do I make a fading in/out transition when switching gameStates/images?</title>
      <link>https://forum.processing.org/two/discussion/22327/how-do-i-make-a-fading-in-out-transition-when-switching-gamestates-images</link>
      <pubDate>Mon, 01 May 2017 17:56:01 +0000</pubDate>
      <dc:creator>kurtsnafu</dc:creator>
      <guid isPermaLink="false">22327@/two/discussions</guid>
      <description><![CDATA[<p>I'm currently making a rhythm game, and I want a fade-to-black/fade-out animation for switching between the menu and the song-selection screen. I'd also like to use the animation when picking between songs on the song-selection screen.</p>

<p>I know it's not strictly necessary and the game will work without it, but I'd like to make it look prettier. Some ideas/code would be nice!</p>

<p>The animation would kinda look like this:</p>

<p>-fade to black</p>

<p>-change image behind it (hidden from user)</p>

<p>-start playing music of the song selection screen (hidden from user)</p>

<p>-fades out from black with entirely new background/music playing</p>
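
<p>One common pattern for the steps above: draw the current screen normally, then draw a full-screen black rect whose alpha you animate, and do the swap (image, music, gameState) on the exact frame the overlay reaches full black. A sketch of the alpha bookkeeping (variable names and step sizes are made up):</p>

<pre><code>// Fade overlay: alpha ramps 0-&gt;255 (fade out), the swap happens
// at full black, then alpha ramps 255-&gt;0 (fade in).
float fadeAlpha = 0;
int fadeDir = 0;          // 0 idle, 1 fading out, -1 fading in
boolean swapPending = false;

void startFade() { fadeDir = 1; swapPending = true; }

// Call once per frame; returns true on the exact frame the screen
// reaches full black, i.e. the moment to swap gameState/image/music.
boolean updateFade(float step) {
  fadeAlpha += fadeDir * step;
  if (fadeAlpha &gt;= 255) {
    fadeAlpha = 255;
    fadeDir = -1;
    if (swapPending) { swapPending = false; return true; }
  }
  if (fadeAlpha &lt;= 0) {
    fadeAlpha = 0;
    if (fadeDir == -1) fadeDir = 0;
  }
  return false;
}
</code></pre>

<p>In draw(), after drawing the scene: fill(0, fadeAlpha); rect(0, 0, width, height); and when updateFade(8) returns true, switch the gameState, swap the background image, and start the new music while the screen is still black.</p>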
]]></description>
   </item>
   <item>
      <title>Make particles synchronize with sound</title>
      <link>https://forum.processing.org/two/discussion/22173/make-particles-synchronize-with-sound</link>
      <pubDate>Mon, 24 Apr 2017 18:31:55 +0000</pubDate>
      <dc:creator>rockie</dc:creator>
      <guid isPermaLink="false">22173@/two/discussions</guid>
      <description><![CDATA[<p>Hello there. I'm trying to make the rain particles fall in sync with the sound using the minim library, but I know nothing about this and have no idea how to make it happen. If someone could explain how to do it, that would be great. This is what I have right now. Thank you, and sorry for the slightly dumb question!</p>

<pre><code>import ddf.minim.*;

Minim minim;
AudioPlayer player;
int numero = 200; // how many rain drops on your screen??
Rain[] rains = new Rain[numero]; 

void setup()
{
  size(512, 200, P3D);
  frameRate(40);
  noStroke();
  for(int i=0; i&lt;numero ;i=i+1){
     rains[i]=new Rain(); 
  }
  // we pass this to Minim so that it can load files from the data directory
  minim = new Minim(this);

  // loadFile will look in all the same places as loadImage does.
  // this means you can find files that are in the data folder and the 
  // sketch folder. you can also pass an absolute path, or a URL.
  player = minim.loadFile("sound.mp3");

  // play the file from start to finish.
  // if you want to play the file again, 
  // you need to call rewind() first.
  player.play();
}

void draw()
{
  background(0);
  stroke(255);
  for(int i=0 ; i&lt;numero ; i=i+1){
    rains[i].update();
    }
  // draw the waveforms
  // the values returned by left.get() and right.get() will be between -1 and 1,
  // so we need to scale them up to see the waveform
  // note that if the file is MONO, left.get() and right.get() will return the same value
  for(int i = 0; i &lt; player.bufferSize() - 1; i++)
  {
    float x1 = map( i, 0, player.bufferSize(), 0, width );
    float x2 = map( i+1, 0, player.bufferSize(), 0, width );
   line( x1, 50 + player.left.get(i)*50, x2, 50 + player.left.get(i+1)*50 );
    line( x1, 150 + player.right.get(i)*50, x2, 150 + player.right.get(i+1)*50 );
  }
}
class Rain {  //this class setups the shape and movement of raindrop.
 float x = random(0,600);
 float y = random(-1000,0);
 float size = random(3,7); // size of raindrop 
 float speed = random(20,80); // speed range
  void update() 
  { 
    y += speed; 
    fill(185,197,209);
    ellipse(x, y-20, size, size*6); // tail of raindrop
    fill(255-(100-speed));
    ellipse(x, y, size, size*6); //head of raindrop 

    if (y&gt; height) //initialize raindrop which arrives bottom.
     { 
       x = random(0,600);
       y = random(-1200,0);
     } 

  } 
}
</code></pre>
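
<p>A minimal way to couple the rain to the sound (a sketch, assuming minim's AudioBuffer.level(), which returns the buffer's current RMS level, roughly 0..1): read the loudness once per frame and scale every drop's fall speed by a remapped factor, with a floor so the rain never freezes during silence:</p>

<pre><code>// Remap an audio level (roughly 0..1, e.g. from level()) to a fall
// speed factor; minFactor keeps the rain moving during silence.
float levelToSpeed(float level, float minFactor, float maxFactor) {
  float t = Math.max(0f, Math.min(1f, level));
  return minFactor + t * (maxFactor - minFactor);
}
</code></pre>

<p>In draw(): float k = levelToSpeed(player.mix.level(), 0.3f, 3.0f); then pass k into each rains[i].update() and use y += speed * k, so louder passages visibly speed the rain up.</p>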
]]></description>
   </item>
   <item>
      <title>Audio and frame delay timing using fft and screen position. X-Post from reddit</title>
      <link>https://forum.processing.org/two/discussion/22193/audio-and-frame-delay-timing-using-fft-and-screen-position-x-post-from-reddit</link>
      <pubDate>Tue, 25 Apr 2017 09:41:20 +0000</pubDate>
      <dc:creator>SnailPropulsionLabs</dc:creator>
      <guid isPermaLink="false">22193@/two/discussions</guid>
      <description><![CDATA[<p>X-posted from <a rel="nofollow" href="https://www.reddit.com/r/processing/comments/66v1ya/audio_and_frame_delay_timing_using_fft_and_screen/">here</a></p>

<p>Hi all, I'm using the spectrum data from an FFT i'm performing to control the speed of the audio file as well as the delay between frames being drawn. This is what creates the avg. 
The issue is that i'm not sure how to make this work for a range of different files. Sometimes there'll be 1440 frames to play, othertimes there's 1700 and the audio is similarly varied.</p>

<p>How can a build this so that it'll always playback in the right amount of time, which is currently 1 minute but could be 2 or 3.</p>

<p>Thanks.</p>

<p>Apologies for the state of the code.</p>

<p>Full project <a rel="nofollow" href="https://drive.google.com/file/d/0B2rUs5HqKNRyR09Sd2tFMW9vbWs/view">here</a><br />
A folder that the program needs to process <a rel="nofollow" href="https://drive.google.com/file/d/0B2rUs5HqKNRyOWR2Mkg0NXBZSjg/view">here</a><br />
FFT function was taken from <a rel="nofollow" href="http://stackoverflow.com/questions/20408388/how-to-filter-fft-data-for-audio-visualisation">here</a></p>

<p>code snippet that i'm using to control the playback and delay:</p>

<pre><code>if (tConnectSound) {
  if (avg &gt; height/6+height/6*4 &amp;&amp; avg &lt; height) {
    //rateControl.value.setLastValue(frameSkipLoad/32);
    cp5.getController("sDelay").setValue(frameSkipLoad/32);
  }//
  else if (avg &gt; height/6+height/6*3 &amp;&amp; avg &lt;  height/6+height/6*4) {
    //rateControl.value.setLastValue(frameSkipLoad/16);
    cp5.getController("sDelay").setValue(frameSkipLoad/16);
  }//
  else if (avg &gt; height/6+height/6*2 &amp;&amp; avg &lt; height/6+height/6*3) {
    //rateControl.value.setLastValue(frameSkipLoad/8);
    cp5.getController("sDelay").setValue(frameSkipLoad/8);
  }//
  else if (avg &gt; height/6+height/6 &amp;&amp; avg &lt; height/6+height/6*2) {
    //rateControl.value.setLastValue(frameSkipLoad/4);
    cp5.getController("sDelay").setValue(frameSkipLoad/4);
  }//
  else if (avg &gt; height/6 &amp;&amp; avg &lt; height/6+height/6) {
    //rateControl.value.setLastValue(frameSkipLoad/2);
    cp5.getController("sDelay").setValue(frameSkipLoad/2);
  }//
  else if (avg &gt; 0 &amp;&amp; avg &lt; height/6) {
    //rateControl.value.setLastValue(frameSkipLoad);
    cp5.getController("sDelay").setValue(frameSkipLoad);
  }
  delay = cp5.getController("sDelay").getValue();
}
</code></pre>
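<p>One way to frame the timing constraint (an illustrative sketch with made-up names, not code from the project): whatever the FFT does to individual frames, the <em>average</em> per-frame delay must equal the target duration divided by the frame count, so a base value can be derived per file and the FFT-driven delays scaled around it.</p>

```python
def base_delay_ms(target_seconds, frame_count):
    # Average delay each frame must contribute so that frame_count
    # frames finish in exactly target_seconds.
    return target_seconds * 1000.0 / frame_count

# Different files, same 60-second target:
d_1440 = base_delay_ms(60, 1440)  # ~41.7 ms per frame
d_1700 = base_delay_ms(60, 1700)  # ~35.3 ms per frame
```

Scaling the six delay steps so they average out to this base value keeps the total runtime fixed regardless of how many frames a given folder contains.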
]]></description>
   </item>
   <item>
      <title>How to make a generative visual react to sound</title>
      <link>https://forum.processing.org/two/discussion/21501/how-to-make-a-generative-visual-react-to-sound</link>
      <pubDate>Mon, 20 Mar 2017 14:27:19 +0000</pubDate>
      <dc:creator>Mxrx</dc:creator>
      <guid isPermaLink="false">21501@/two/discussions</guid>
      <description><![CDATA[<p>Hello, I am kind of new to Processing. What I am trying to do is to make this generative visual react to sound. For now the visual is reacting to the mouse movements, but I would like it to take input from the sound captured by the microphone and generate the visuals. The code for now is this. I guess I will have to use the minim library and the fft audio analyzer but I don't really know how to integrate it to this sketch. Thank you in advance for any help</p>

<pre><code>float[] x;
float[] y;
color[] col;
float s = 0.001;
float depth = 0.5;
PImage img; 

void setup() {
  size(1000, 1000);
  background(255);
  int n = 1000;
  x = new float[n];
  y = new float[n];
  col = new color[n];
  img = loadImage("imageName0.jpg");
  img.resize(width, height);
  img.loadPixels();
  for (int i = 0; i &lt; x.length; i++) {
    x[i]= random(0, width);
    y[i]= random(0, height);
    int loc = int(x[i]) + int(y[i])*width;
    col[i] = img.pixels[loc];
  }
}

void draw() {
  noStroke();
  depth = map(mouseY, 0, height, 0.5, 1.5);
  //fill(255, 4); //Uncomment if you don't want to use an image;
  for (int i = 0; i &lt; x.length; i++) {
    float alpha = customNoise(x[i] * s, y[i] * s)*2*PI;
    x[i]+= depth * cos(alpha); // + random(-0.4, 0.4);
    y[i]+= depth * sin(alpha); // + random(-0.4, 0.4);
    if (y[i] &gt; height) {
      y[i] = 0;
      x[i] = random(0, width);
    }
    x[i]= x[i]%width;
    fill(col[i], 4); //Comment if you don't want to use an image;
    ellipse(x[i], y[i], 2, 2);
  }
}


float customNoise(float x, float y) {
  return pow(sin(0.9*x + noise(x, y)*map(mouseX, 0, width, 0, 5)*y), 3);
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>Is it possible to perform FFT with FilePlayer object? [Minim]</title>
      <link>https://forum.processing.org/two/discussion/21842/is-it-possible-to-perform-fft-with-fileplayer-object-minim</link>
      <pubDate>Wed, 05 Apr 2017 22:19:19 +0000</pubDate>
      <dc:creator>SnailPropulsionLabs</dc:creator>
      <guid isPermaLink="false">21842@/two/discussions</guid>
      <description><![CDATA[<p>I'm trying to affect the playback speed of an mp3 based on the positions of the bars.</p>

<p>The "vanilla" code (from the analyzeSound.pde example) below works with an AudioPlayer object but when I try to combine it with the <a rel="nofollow" href="http://code.compartmental.net/minim/tickrate_class_tickrate.html">tickRate example</a>, it says "the function 'bufferSize()' does not exist" and "the global variable 'mix' does not exist".</p>

<p>I don't understand Minim enough to know how to remedy this.</p>

<p>Is it possible to do it this way?</p>

<p>Thanks.</p>

<p>Vanilla:</p>

<pre><code>import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

float spectrumAvg;

void setup() {
  fullScreen();
  //size(512, 200);
  minim = new Minim(this);
  selectInput("Select an audio file:", "fileSelected");
}

void fileSelected(File selection) {
  String audioFileName = selection.getAbsolutePath();
  player = minim.loadFile(audioFileName);
  fft = new FFT(player.bufferSize(), player.sampleRate());
  player.play();
}

void draw()
{
  background(0);
  stroke(255);

  if (player != null) {
    if (fft != null) {
      fft.forward(player.mix);

      for (int i = 0; i &lt; fft.specSize(); i++) {
        float lineStrength = height - fft.getBand(i)*height/2;
        spectrumAvg += lineStrength;
        line(i, height, i, lineStrength);
      }
      spectrumAvg = spectrumAvg / fft.specSize();
      println(spectrumAvg);
    }
  }
}
</code></pre>

<p>Combined with tickRate:</p>

<pre><code>import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.spi.*; // for AudioRecordingStream
import ddf.minim.ugens.*;


Minim minim;
FilePlayer player;
FFT fft;

float spectrumAvg;

void setup() {
  fullScreen();
  //size(512, 200);
  minim = new Minim(this);
  selectInput("Select an audio file:", "fileSelected");
}

void fileSelected(File selection) {
  String audioFileName = selection.getAbsolutePath();
  player = new FilePlayer(minim.loadFileStream(audioFileName));
  fft = new FFT(player.bufferSize(), player.sampleRate()); //error
  player.play();
}

void draw()
{
  background(0);
  stroke(255);

  if (player != null) {
    if (fft != null) {
      fft.forward(player.mix); //error

      for (int i = 0; i &lt; fft.specSize(); i++) {
        float lineStrength = height - fft.getBand(i)*height/2;
        spectrumAvg += lineStrength;
        line(i, height, i, lineStrength);
      }
      spectrumAvg = spectrumAvg / fft.specSize();
      println(spectrumAvg);
    }
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>music visualization</title>
      <link>https://forum.processing.org/two/discussion/21684/music-visualization</link>
      <pubDate>Wed, 29 Mar 2017 18:35:19 +0000</pubDate>
      <dc:creator>vfxstudent</dc:creator>
      <guid isPermaLink="false">21684@/two/discussions</guid>
<description><![CDATA[<p>Hello, I want to add controlP5 buttons to a music visualizer code, but I am having trouble with loading the mp3 file. I am using the selectInput() method, but I am getting a NullPointerException at fft.forward(song.mix). Please help me  :|</p>
]]></description>
   </item>
   <item>
      <title>Analyse a Sound File Without Playback</title>
      <link>https://forum.processing.org/two/discussion/20723/analyse-a-sound-file-without-playback</link>
      <pubDate>Wed, 08 Feb 2017 20:50:22 +0000</pubDate>
      <dc:creator>cygig</dc:creator>
      <guid isPermaLink="false">20723@/two/discussions</guid>
<description><![CDATA[<p>I read up on examples of how to use FFT functions to create an audio visualiser as the audio is being played back. Just wondering, is there a way to analyse a sound file without playing it back? For example, read through the sound file and store the FFT analysis in a 2D array of spectrum[frame][band].</p>
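<p>The general shape of what's being asked (a language-agnostic sketch using a naive plain-Python DFT, not the Minim API): step through the samples one fixed-size frame at a time, transform each frame, and store the magnitudes row by row into spectrum[frame][band].</p>

```python
import cmath, math

def spectrum_rows(samples, frame_size):
    # Split samples into consecutive frames and return one
    # magnitude spectrum (bins up to Nyquist) per frame.
    rows = []
    for start in range(0, len(samples) - frame_size + 1, frame_size):
        frame = samples[start:start + frame_size]
        n = len(frame)
        row = [abs(sum(frame[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n)))
               for k in range(n // 2)]
        rows.append(row)  # rows[frame][band]
    return rows

# 96 samples of a sine that completes 8 cycles per 32-sample frame:
samples = [math.sin(2 * math.pi * 8 * t / 32) for t in range(96)]
grid = spectrum_rows(samples, 32)  # 3 frames x 16 bands
```

No playback is involved anywhere; the loop runs as fast as the file can be read.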
]]></description>
   </item>
   <item>
      <title>How to hide sphere overlap point</title>
      <link>https://forum.processing.org/two/discussion/20435/how-to-hide-sphere-overlap-point</link>
      <pubDate>Mon, 23 Jan 2017 22:41:35 +0000</pubDate>
      <dc:creator>Moeemoee</dc:creator>
      <guid isPermaLink="false">20435@/two/discussions</guid>
<description><![CDATA[<p>Hey everyone! I'm working on making a sphere out of individual points and calculating the radius of each point from the volume of the music. If you load my code you can see my problem: there is a big mark that indicates where the sphere starts and stops. Is there a way to make these two edges connect so it doesn't look as ugly? Any help is appreciated!</p>

<pre><code>import peasy.*;
import peasy.org.apache.commons.math.*;
import peasy.org.apache.commons.math.geometry.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

PeasyCam cam;
Minim minim;
AudioPlayer player;
AudioMetaData meta;
BeatDetect beat;
FFT fft;

PVector[][] globe;
color[][] colormap;
float[][] r;
int total = 100;
float rad = 25;
float nI=0;
float nJ=0;
float hu;
float minRad = 30;
float maxRad = 100;
boolean calculated = false;
boolean up=true;
void setup() {
  cam = new PeasyCam(this, 50);
  minim = new Minim(this);
  //player = minim.loadFile("D:/programeer gedoe/Programs/Music visualiser/PROCESSING VISUALIZER/Audio/SAIL - AWOLNATION.mp3");
  //player = minim.loadFile("D:/programeer gedoe/Programs/Music visualiser/PROCESSING VISUALIZER/Audio/Ganja White Night - Mr Nice.mp3");
  player = minim.loadFile("D:/programeer gedoe/Programs/Music visualiser/PROCESSING VISUALIZER/Audio/Imagine Dragons, Radioactive HD.mp3");
  size(1000, 1000, P3D);
  r = new float[total+1][total+1];
  colormap = new color[total+1][total+1];
  globe = new PVector[total+1][total+1];
  colorMode(HSB);
  smooth();
  frameRate(30);
  meta = player.getMetaData();
  beat = new BeatDetect();
  fft = new FFT(player.bufferSize(), player.sampleRate());
  player.play(35000);
}

void draw() {
  background(0);
  lights();
  noStroke();
  // if (!calculated) {
  //float nJ=0;
  println("nI: " + nI);
  println("nJ: " + nJ);
  for (int i = 0; i &lt; total+1; i++) {
    float lat = map(i, 0, total, 0, PI);
    for (int j = 0; j &lt; total+1; j++) {
      r[i][j] = map(player.left.get(i)*player.left.get(j), -1, 1, minRad, maxRad);
      //r[i+1][j] = map(player.left.get(i)*player.left.get(j), 0, 1, minRad, maxRad);
      //if (j == total &amp;&amp; i == total) {
      //  r[i][j] = map(player.left.get(0)*player.left.get(0), 0, 3.5, minRad, maxRad);
      //}
      if (nI &gt; 15 &amp;&amp; up) {
        up = false;
      }
      if (nI &gt;= 15) {
        up = false;
      } else if (nI &lt;= 0) {
        up = true;
      }
      if (up) {
        nI+=0.0000002;
        nJ+=0.0000002;
      } else {
        nI-=0.0000005;
        nJ-=0.0000005;
      }
      hu = map(r[i][j], minRad, maxRad, 0, 255);
      colormap[i][j] = color(hu, 255, 255);
      //fill(hu, 255, 255);
      float lon = map(j, 0, total, 0, TWO_PI);
      //float x = rad*cos(lon)*sin(lat);
      //float y = rad*sin(lat)*sin(lon);
      //float z = rad*cos(lat);
      float x = r[i][j]*cos(lon)*sin(lat);
      float y = r[i][j]*sin(lat)*sin(lon);
      float z = r[i][j]*cos(lat);
      globe[i][j] = new PVector(x, y, z);
    }
  }
  calculated = true;
  //}

  for (int i =0; i &lt; total; i++) {
    beginShape(TRIANGLE_STRIP);
    for (int j = 0; j &lt; total + 1; j++) {
      fill(colormap[i][j]);
      PVector v = globe[i][j];
      PVector v2 = globe[i+1][j];
      vertex(v.x, v.y, v.z);
      fill(colormap[i+1][j]);
      vertex(v2.x, v2.y, v2.z);
    }
    endShape();
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>What range does getAvg() return in Minim?</title>
      <link>https://forum.processing.org/two/discussion/20491/what-range-does-getavg-return-in-minim</link>
      <pubDate>Fri, 27 Jan 2017 01:37:33 +0000</pubDate>
      <dc:creator>Apothem</dc:creator>
      <guid isPermaLink="false">20491@/two/discussions</guid>
      <description><![CDATA[<p>I am trying to create a music visualizer using the FFT in Minim. For my calculations, I need getAvg() to return a number between 0 and 1, which I can then convert to dBA. However, it seems to give me numbers between 0 and some other number like 27 when I play a file with full amplitude noise created in Audacity. So what is getAvg() actually returning and how can I get it to be between 0 and 1?</p>

<p>I have this in my setup function:</p>

<pre><code>fft = new FFT(song.bufferSize(), song.sampleRate());
fft.logAverages(22, bandsPerOctave);
</code></pre>

<p>and this in my draw function:</p>

<pre><code>fft.forward(song.mix);
for(int i = 0; i &lt; fft.avgSize(); i++)
{
  float amplitude = fft.getAvg(i);
  println(amplitude);
}
</code></pre>
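<p>For intuition about the range (a generic DFT sketch in plain Python, not Minim internals): a full-scale sine of N samples produces a peak bin magnitude of about N/2, not 1, which is why raw band values can reach numbers like 27. Dividing by timeSize()/2 is one way to bring single bins toward 0..1; note that logAverages() additionally combines several bins per average.</p>

```python
import cmath, math

def dft_magnitudes(samples):
    # Naive DFT: magnitude of each frequency bin up to Nyquist.
    n = len(samples)
    return [abs(sum(samples[t] * cmath.exp(-2j * math.pi * k * t / n)
                    for t in range(n)))
            for k in range(n // 2)]

n = 64
full_scale_sine = [math.sin(2 * math.pi * 4 * t / n) for t in range(n)]
mags = dft_magnitudes(full_scale_sine)
peak = max(mags)             # ~ n/2 = 32 for a full-scale input
normalized = peak / (n / 2)  # ~ 1.0
```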
]]></description>
   </item>
   <item>
      <title>How/where do I alter output (minim, reverb)</title>
      <link>https://forum.processing.org/two/discussion/20355/how-where-do-i-alter-output-minim-reverb</link>
      <pubDate>Thu, 19 Jan 2017 15:55:35 +0000</pubDate>
      <dc:creator>adambass15</dc:creator>
      <guid isPermaLink="false">20355@/two/discussions</guid>
      <description><![CDATA[<p>looking to add reverb to the keyboard, do not know how to add this effect can anybody help</p>

<pre><code>import ddf.minim.analysis.*;
import ddf.minim.*;
import ddf.minim.signals.*;

Minim minim;
AudioOutput out;

void setup()
{
  size(512, 200, P3D);

  minim = new Minim(this);
  out = minim.getLineOut(Minim.STEREO);
}

void draw()
{
  background(0);
  stroke(255);
  for(int i = 0; i &lt; out.bufferSize() - 1; i++)
  {
    float x1 = map(i, 0, out.bufferSize(), 0, width);
    float x2 = map(i+1, 0, out.bufferSize(), 0, width);
    line(x1, 50 + out.left.get(i)*50, x2, 50 + out.left.get(i+1)*50);
    line(x1, 150 + out.right.get(i)*50, x2, 150 + out.right.get(i+1)*50);
  }
}

void keyPressed()
{
  SineWave mySine;
  MyNote newNote;

 float pitch = 0;
  switch(key) {
    case 'z': pitch = 262; break;
    case 's': pitch = 277; break;
    case 'x': pitch = 294; break;
    case 'd': pitch = 311; break;
    case 'c': pitch = 330; break;
    case 'v': pitch = 349; break;
    case 'g': pitch = 370; break;
    case 'b': pitch = 392; break;
    case 'h': pitch = 415; break;
    case 'n': pitch = 440; break;
    case 'j': pitch = 466; break;
    case 'm': pitch = 494; break;
    case ',': pitch = 523; break;
    case 'l': pitch = 554; break;
    case '.': pitch = 587; break;
    case ';': pitch = 622; break;
    case '/': pitch = 659; break;
  }

   if (pitch &gt; 0) {
      newNote = new MyNote(pitch, 0.2);
   }
}

void stop()
{
  out.close();
  minim.stop();

  super.stop();
}

class MyNote implements AudioSignal
{
     private float freq;
     private float level;
     private float alph;
     private SineWave sine;

     MyNote(float pitch, float amplitude)
     {
         freq = pitch;
         level = amplitude;
         sine = new SineWave(freq, level, out.sampleRate());
         alph = 0.9;
         out.addSignal(this);
     }

     void updateLevel()
     {
         level = level * alph;
         sine.setAmp(level);

         if (level &lt; 0.01) {
            out.removeSignal(this);
         }
     }

     void generate(float [] samp)
     {
         sine.generate(samp);
         updateLevel();
     }

    void generate(float [] sampL, float [] sampR)
    {
        sine.generate(sampL, sampR);
        updateLevel();
    }

}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to get an octave based frequency spectrum from the FFT in Minim</title>
      <link>https://forum.processing.org/two/discussion/19936/how-to-get-an-octave-based-frequency-spectrum-from-the-fft-in-minim</link>
      <pubDate>Tue, 27 Dec 2016 01:52:46 +0000</pubDate>
      <dc:creator>Apothem</dc:creator>
      <guid isPermaLink="false">19936@/two/discussions</guid>
      <description><![CDATA[<p>I want to create audio visuals like the bars visual that is commonly used in music videos. Minim has an FFT that gets the amplitude of each frequency band, but each band is evenly spaced. So the result is that the bars are mostly showing only the high frequencies, and the mid and low frequencies are bunched up on the left. I tried this, but I'm not good with logarithms and have no clue what the correct math to calculate this should be, and it didn't really work:</p>

<pre><code>import ddf.minim.analysis.*;
import ddf.minim.*;

Minim minim;
AudioPlayer song;
FFT fft;

// the number of lines/bands to draw
int bands = 512;

float a = (log(20000 - 20) / log(bands));

void setup()
{
  size(512, 200);

  minim = new Minim(this);

  song = minim.loadFile("test.mp3", 2048);
  song.loop();

  fft = new FFT(song.bufferSize(), song.sampleRate());
}

void draw()
{
  background(0);
  stroke(255);

  fft.forward(song.mix);

  for(int i = 0; i &lt; bands; i++)
  {
    // calculate the frequency for the current band on a logarithmic scale
    float freq = pow(i, a) + 20;

    // get the amplitude at that frequency
    float amplitude = fft.getFreq(freq);

    // convert the amplitude to a DB value. 
    // this means values will range roughly from 0 for the loudest
    // bands to some negative value.
    float bandDB = 20 * log(2 * amplitude / fft.timeSize());
    // so then we want to map our DB value to the height of the window
    // given some reasonable range
    float bandHeight = map( bandDB, 0, -150, 0, height );

    // draw the band as a line
    line(i, height, i, bandHeight);
  }
}
</code></pre>

<p>I don't even know if it can be done using the FFT in Minim because there might not be enough bands calculated for the lower frequencies. If not, what can I use to accomplish this?</p>
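<p>The spacing math itself (a sketch of the arithmetic only, independent of Minim, with illustrative names): octave-style bands keep a constant frequency <em>ratio</em> per band rather than a constant width, so the band edges grow geometrically from the lowest to the highest frequency.</p>

```python
def log_band_edges(f_min, f_max, bands):
    # bands+1 edges such that every band spans the same frequency
    # ratio, giving low frequencies as many bands as high ones.
    ratio = (f_max / f_min) ** (1.0 / bands)
    return [f_min * ratio ** i for i in range(bands + 1)]

edges = log_band_edges(20.0, 20000.0, 30)
# Each visual bar then sums or averages the FFT bins whose centre
# frequency falls between edges[i] and edges[i+1].
```

Minim's own fft.logAverages(minBandwidth, bandsPerOctave) plus getAvg(i) performs the same kind of grouping internally.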
]]></description>
   </item>
   <item>
      <title>Processing Minim BPM analysis serial.write error</title>
      <link>https://forum.processing.org/two/discussion/19814/processing-minim-bpm-analysis-serial-write-error</link>
      <pubDate>Sun, 18 Dec 2016 22:44:17 +0000</pubDate>
      <dc:creator>johnnyUtah05</dc:creator>
      <guid isPermaLink="false">19814@/two/discussions</guid>
<description><![CDATA[<p>Hi.
The code I am using to find the BPM of a song in Processing with Minim works fine when I println the 'bpm'. Unfortunately, when I attempt to myPort.write(bpm), I get the JavaSound Minim error: Don't know the ID3 code TXXX and TSSE.</p>

<p>I know this is related to the mp3 tag embedded in the file, and I have removed it; however, I still get a NullPointerException, which freezes my sketch.</p>

<p>Any help would be appreciated.</p>

<pre><code>import ddf.minim.analysis.*;
import ddf.minim.*;
import controlP5.*; 
import processing.serial.*;
//----------------------------------------------------------------------------------------------------
Serial myPort;
Minim minim;
AudioPlayer player;
FFT fft;
ControlP5 controlP5;

int colmax = 100;
int rowmax = 25;
int[][] sgram = new int[30][colmax];
float[] sgram2 =new float[256];

int col;
int leftedge;
int b;
float currMaxVal;
int maxValPos;
int mydelay;
int milidif;
int time  ;
int runtime;
int timeold;
int Speed=0;


void setup()
{
  frameRate(61);
  size(510, 125);
  time = 0;
  controlP5 = new ControlP5(this);
  controlP5.addSlider("rowmax", 0, 30, 25, 500, 0, 10, 125);
  // set colormode
  colorMode(HSB, 255, 255, 255, 255);

  // get the audio
  minim = new Minim(this);
  player = minim.loadFile("Major Lazer - Watch Out For This (Bumaye) (instrumental).mp3", 2048);
  player.loop();
  fft = new FFT(player.bufferSize(), player.sampleRate());
  fft.window(FFT.HAMMING);
}

void draw()
{
  background(0);
  time = millis();
  milidif=time-mydelay;
  // perform a forward FFT on the samples in the input buffer
  fft.forward(player.mix);
  // fill array with spectrum
  for (int i = 0; i&lt;256; i++)
  {
    sgram2[i] = (fft.getBand(i));
    if ((fft.getBand(i)) &gt; (currMaxVal)) {
      currMaxVal=((fft.getBand(i)));
      maxValPos = i;
    }
  }
  //set the color by: highest value and the freq-band
  color a = color(maxValPos, 255, currMaxVal );
  // set values to zero
  currMaxVal =0;
  maxValPos = 0;
  //fill the last column of the array with the color
  sgram[b][99] = (a);
  // count up the rows
  b=b+1;
  // when row is full, shift array 1 step left
  if (b==rowmax) {
    b=0;
    mydelay=time;

    //-----------------------------------------------------------------------------------------------
    //ERROR HERE//
    int bpm = 60000/milidif;
    println(bpm);
    myPort.write(bpm);
    //-----------------------------------------------------------------------------------------------

    for (int z = 0; z &lt; 99; z++) {
      for (int y = 0; y &lt; 25; y++) {
        sgram[y][z]=sgram[y][z+1];
      }
    }
  }
  for (int i = 0; i &lt; colmax; i++) {
    for (int j = 0; j &lt; 25; j++) {
      stroke(0);
      fill(sgram[j][i]);
      rect(i*5, height-(j*5), 5, 5);
      noStroke();
    }
  }
}


void stop()
{
  // always close Minim audio classes when you finish with them
  player.close();
  minim.stop();

  super.stop();
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>How to make text disappear when blob intersects words using Kinect V1?</title>
      <link>https://forum.processing.org/two/discussion/19730/how-to-make-text-disappear-when-blob-intersects-words-using-kinect-v1</link>
      <pubDate>Tue, 13 Dec 2016 18:59:46 +0000</pubDate>
      <dc:creator>cstroud</dc:creator>
      <guid isPermaLink="false">19730@/two/discussions</guid>
      <description><![CDATA[<p>Hi guys</p>

<p>I'm using version 1 kinect to make a blob interact with sound-reactive raining text. I was wondering how to make the text disappear once the blob intersects with one of the words? Please help! Thank you!</p>

<pre><code>//Make text individual words disappear (change position off page) if blob position is equal word position.

import org.openkinect.freenect.*;
import org.openkinect.processing.*;
import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.effects.*;
import ddf.minim.signals.*;
import ddf.minim.spi.*;
import ddf.minim.ugens.*;

Minim minim;
AudioInput in;
FFT fft;
KinectTracker tracker;
Kinect kinect;

String [] data = {"327-64-4367", "448-01-4857","553-15-8880","016-10-8387","222-07-1302","574-68-5578","104-42-0570","237-40-7110","212-21-5143","559-30-1997", "574-76-8818", "007-64-7860", "145-42-9696", "574-24-6501", "477-49-7241", "517-62-2683", "315-52-7169", "750-12-8784", "678-20-3681","5895 13th Street, Webster, NY 14580","7776 Grove Street, Romulus, MI 48174","2294 Brook Lane, Paramus, NJ 07652","2424 Route 64, Maineville, OH 45039","9645 Valley View Drive, Tewksbury, MA 01876","153 Cross Street, The Villages, FL 32162","920 Route 30, Loxahatchee, FL 33470","238 Lincoln Street, Jamestown, NY 14701","729 Meadow Street, Dubuque, IA 52001","133 Franklin Court, South Portland, ME 04106","994 Elm Avenue, Sun Prairie, WI 53590","392 Woodland Drive, Liverpool, NY 13090","901 Willow Lane, Bridgeport, CT 06606","92236","07731","45420","10701","08094","07030","19047","41017","17543","80302","80021","60134","32082","(494) 133-5459","(249) 186-5356","(818) 402-5177","(424) 926-1653","(201) 672-5146","(245) 199-2255","(147) 520-8998","(349) 179-5381","(758) 621-4845","(988) 984-0554","(786) 693-1798","(291) 731-8164","(345) 209-6039","(537) 175-2683","(918) 990-4772","(820) 906-9228","(916) 565-9637","(633) 293-3501","(881) 991-5264","(193) 382-1818","(329) 603-4359","(160) 347-3446","(870) 293-5829","5165 1130 7400 9060","6011 5436 3995 4943","3455 419447 60971","3486 888573 38316","6011 5910 4628 7894","6011 7865 3449 2309","4929 2788 4706 8154","5427 8194 1616 5774","5446 9823 4141 6972","6011 2940 9543 3242","4532 8600 5212 9117","6011 9832 4310 8231","3784 431486 81659","5520 6129 5998 3075","4485 5114 2769 9519","4716 7962 0326 5780","3418 877718 33322","4539 1378 7971 4063","3730 394465 12151","4539 4187 1183 6953","5295 2968 4751 7338","3755 105752 
68078","157.95.65.166","107.165.28.12","75.62.96.132","249.88.95.147","187.244.83.79","134.131.159.96","49.206.109.182","95.33.230.69","211.162.26.78","186.58.2.250","PcCX5mQhf","s58@ubFUM","n1IPf4zJo","ShSUN0pPc","d-2eug@Qj","zy*@fYOaG","npujcoBVE","Kenzie Flores","Shamar Fox","Nadia Hatfield","Brodie Hood","Nathanial James","Leanna Woodard","Wilson Murillo","Lillie Sullivan","Lillianna Mcintyre","Edward Chan","Jay Chambers","Shirley Vaughn","Jaylah Daniel","Eden Gordon","Joanna Freeman","Jazmyn Hamilton","Molly Costa","Julie Snow","Kole Wu","Raul Santos","Jadiel Mercado","Anna Silva","Mckinley Wyatt","Lance Odom","Nylah Whitney","Melanie Berger","Colin Strong","Irvin Kent","Donna Levine","Tess Wilkinson","Micheal Shields","Helen Rocha","Destinee Bowers","Keyon Graves","Bradley Knight","Kaitlynn Santos","Natalya Hernandez","Humberto Knapp","Camryn Farrell","Pablo Aguilar"};  //40

String note;
color c;
int n;
int noteNumber;
int sampleRate= 44100;

float [] max= new float [sampleRate/2];
float maximum;
float frequency;
float hertz;
float midi;
float deg;
float brightnessThresh;

boolean ir = false;
boolean colorDepth = false;
boolean mirror = false;

Drop[] drops = new Drop[150];

void setup() {
  size(1280, 520); //size of kinect screen
  textSize(8);
  kinect = new Kinect(this);
  tracker = new KinectTracker();
  kinect.initDepth();
  kinect.initVideo();
  //kinect.enableIR(ir);
  kinect.enableColorDepth(colorDepth);

  brightnessThresh = 0;

  //mirror Image
  mirror = !mirror;
  kinect.enableMirror(mirror);

  deg = kinect.getTilt();
  // kinect.tilt(deg);

  minim = new Minim(this);
  minim.debugOn();
  in = minim.getLineIn(Minim.MONO, 4096, sampleRate);
  fft = new FFT(in.left.size(), sampleRate);

  textAlign(CENTER);
  for (int i = 0; i &lt; drops.length; i++) {
    drops[i] = new Drop();
  }
}

void draw() {
  background(0);
  //smooth();
  findNote();
  image(kinect.getVideoImage(), 0, 0);
  image(kinect.getDepthImage(), 640, 0); //

  // Run the tracking analysis
  tracker.track();
  // Show the image
  tracker.display();

  int t = tracker.getThreshold();

  for (int i = 0; i &lt; drops.length; i++) {
    drops[i].fall();
    drops[i].show();
  }

  fill(255);
  /*text(
    "Press 'i' to enable/disable between video image and IR image,  " +
    "Press 'c' to enable/disable between color depth and gray scale depth,  " +
    "Press 'm' to enable/diable mirror mode, "+
    "UP and DOWN to tilt camera   " +
    "Framerate: " + int(frameRate), 10, 515);
  text("threshold: " + t + "    " +  "framerate: " + int(frameRate) + "    " + 
    "UP increase threshold, DOWN decrease threshold", 10, 500); */

  }

void keyPressed() {
  int t = tracker.getThreshold();
  if (key == 'i') {
    //ir = !ir;
    //kinect.enableIR(ir);
  } else if (key == 'c') {
    colorDepth = !colorDepth;
    kinect.enableColorDepth(colorDepth);
  } else if (key == CODED) {
    if (keyCode == UP) {
      deg++;
    } else if (keyCode == DOWN) {
      deg--;
    } else if (key == CODED) {
      if (keyCode == RIGHT) {
        t+=5;
      tracker.setThreshold(t);
      }
    } else if (key == CODED) {
      if (keyCode == LEFT) {
        t-=5;
      tracker.setThreshold(t);
      }
    }
    deg = constrain(deg, 0, 30);
    kinect.setTilt(deg);
  }
}

class KinectTracker {
  int threshold = 1000; //depth threshold
  int[] depth;
  PImage display;

  KinectTracker() {
    kinect.initDepth();
    kinect.enableMirror(true);
    // Make a blank image
    display = createImage(kinect.width, kinect.height, RGB); 

  }

  void track() {
    depth = kinect.getRawDepth(); //Get raw depth as array of integers

    if (depth == null) return;

    float sumX = 0;
    float sumY = 0;
    float count = 0;

    for (int x = 0; x &lt; kinect.width; x++) {
      for (int y = 0; y &lt; kinect.height; y++) {

        int offset =  x + y*kinect.width;
        // Grabbing the raw depth
        int rawDepth = depth[offset];

        // Testing against threshold
        if (rawDepth &lt; threshold) {
          sumX += x;
          sumY += y;
          count++;
        }
      }
    }
  }

  void display() {
    PImage img = kinect.getDepthImage();

    if (depth == null || img == null) return;

    // Going to rewrite the depth image to show which pixels are in threshold
    display.loadPixels();
    for (int x = 0; x &lt; kinect.width; x++) {
      for (int y = 0; y &lt; kinect.height; y++) {

        int offset = x + y * kinect.width;
        // Raw depth
        int rawDepth = depth[offset];
        int pix = x + y * display.width;
        if (rawDepth &lt; threshold) {
          display.pixels[pix] = color(c, 150); //cream color 252,251,227
        } else {
          display.pixels[pix] = color(0);
        }
      }
    }
    display.updatePixels();


    // Draw blob image
    image(display, 0, 0);


  }

  int getThreshold() {
    return threshold;
  }
  void setThreshold(int t) {
    threshold =  t;
  }
}


//NOTES With Sounds
void findNote() {
  fft.forward(in.left);
  for (int f=0;f&lt;sampleRate/2;f++) { //analyses the amplitude of each frequency analysed, between 0 and 22050 hertz
    max[f]=fft.getFreq(float(f)); //each index is correspondent to a frequency and contains the amplitude value 
  }
  maximum=max(max);//get the maximum value of the max array in order to find the peak of volume

  for (int i=0; i&lt;max.length; i++) {
    if (max[i] == maximum) {
      frequency= i;
    }
  }

  midi= 69+12*(log((frequency-6)/440));// formula that transform frequency to midi numbers
  n= int (midi);

//the octave has 12 tones and semitones. 
if (n%12==9)
  {
    note = ("a");
    c = color (255, 99, 0);
  }

  if (n%12==10)
  {
    note = ("a#");
    c = color (255, 236, 0);
  }

  if (n%12==11)
  {
    note = ("b");
    c = color (153, 255, 0);
  }

  if (n%12==0)
  {
    note = ("c");
    c = color (40, 255, 0);
  }

  if (n%12==1)
  {
    note = ("c#");
    c = color (0, 255, 232);
  }

  if (n%12==2)
  {
    note = ("d");
    c = color (0, 124, 255);
  }

  if (n%12==3)
  {
    note = ("d#");
    c = color (5, 0, 255);
  }

  if (n%12==4)
  {
    note = ("e");
    c = color (69, 0, 234);
  }

  if (n%12==5)
  {
    note = ("f");
    c = color (85, 0, 79);
  }

  if (n%12==6)
  {
    note = ("f#");
    c = color (116, 0, 0);
  }

  if (n%12==7)
  {
    note = ("g");
    c = color (179, 0, 0);
  }

  if (n%12==8)
  {
    note = ("g#");
    c = color (238, 0, 0);
  }
}

void stop()
{
  in.close();
  minim.stop();

  super.stop();
}

class Drop {
  float x;
  float y;
  float z;
  float len;
  float yspeed;
  String textHolder = "text";
  float word;

  Drop() {
    x  = random(width); //rain drops at random width on x-axis
    y  = random(0, 700); //sections of rain
    z  = random(0, 1); //general speed
    len = map(z, 0, 20, 10, 20);
    yspeed  = map(z, 0, 20, 5, 2); //Speeds of raindrops, 4 is variation

    textHolder = data[int(random(data.length))];

  }

  void fall() { //speed of rain
    y = y + yspeed;
    float grav = map(z, 0, 20, 0, 0.2); //(z, 0, 20, 0, 0.2)
    yspeed = yspeed - grav; //+ grav;


    if (y &gt; height) {
      y = random(-500, 40);
      yspeed = map(z, 0, 20, 4, 20);  //(z, 0, 20, 4, 1000);
    }
  }

  void show() {
    float thick = map(z, 0, 20, 1, 3);

    fill(c);
    text(textHolder, x, y, y+len); 
  }
}
</code></pre>
]]></description>
   </item>
   <item>
      <title>trying to use minim to trigger animation when input sound level exeeds a certain height</title>
      <link>https://forum.processing.org/two/discussion/19549/trying-to-use-minim-to-trigger-animation-when-input-sound-level-exeeds-a-certain-height</link>
      <pubDate>Tue, 06 Dec 2016 05:44:39 +0000</pubDate>
      <dc:creator>berona</dc:creator>
      <guid isPermaLink="false">19549@/two/discussions</guid>
<description><![CDATA[<p>I've just started a short class for Processing and have a simple script for an array of ellipses moving around the screen.
What I want is for the ellipses to flash when the sound exceeds a certain level, and to go back to white when the input comes below this level.
So far I've not been able to stitch together what I've grasped. Please help. It's already 7 am and after hours of tutorials I'm no further. Should an if statement go in, or should I be using true/false to switch this reaction on or off?</p>

<p>all advice welcome</p>
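<p>The shape of the logic being asked about (an illustrative sketch with made-up names, not tied to any particular sketch's variables) is indeed a per-frame if statement comparing the current level to a threshold:</p>

```python
def ellipse_color(level, threshold=0.5):
    # "Flash" red while the level is at or above the threshold,
    # white again as soon as it drops back below.
    return (255, 0, 0) if level >= threshold else (255, 255, 255)
```

In Minim the per-frame level would come from something like in.mix.level(); checking it inside draw() means no extra on/off state is needed, since the colour follows the sound every frame.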
]]></description>
   </item>
   <item>
      <title>Change color of line based on song frequency.</title>
      <link>https://forum.processing.org/two/discussion/19512/change-color-of-line-based-on-song-frequency</link>
      <pubDate>Sun, 04 Dec 2016 19:51:34 +0000</pubDate>
      <dc:creator>jpnielsen</dc:creator>
      <guid isPermaLink="false">19512@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>I am trying to write a code that will use a song's frequency to change the color of a line that moves across the canvas to create some sort of abstract picture. Currently, I am stumped on how to achieve this effect. If anyone has any advice on how to do this, it would be much appreciated.</p>

<p>This is my code currently (which probably has a great amount of errors in it; I'm still new to processing):</p>

<pre><code>import ddf.minim.*;
import ddf.minim.analysis.*;

Minim minim;
AudioPlayer player;
FFT fft;

void setup()
{
  size( 800, 600 );
  background(0);

  minim = new Minim( this );
  player = minim.loadFile("Little Secrets.mp3");
  player.play();
  fft = new FFT( player.bufferSize(), player.sampleRate() );
}

void draw()
{
  fft.forward( player.mix );   // analyze the buffer currently playing

  // map the playback position to an x coordinate across the canvas
  float x = map( player.position(), 0, player.length(), 0, width );

  // sum the spectrum into one energy value
  float e = 0;
  for ( int i = 0; i &lt; fft.specSize(); i++ ) {
    e += fft.getBand( i ) / 1000;
  }

  // derive the stroke color from the energy, clamped to 0..255
  float r = constrain( 255 - e * 255, 0, 255 );
  float g = constrain( e * 128, 0, 255 );
  float b = constrain( e * 255, 0, 255 );

  stroke( color( r, g, b ) );
  strokeWeight( 5 );
  line( x, 0, x, height );
}

void stop()
{
  player.close();
  minim.stop();
  super.stop();
}
</code></pre>
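<p>For the color mapping itself, one clamped variant (plain Java sketch; the scale factors are arbitrary illustrations, and the band array would come from fft.getBand(i) after calling fft.forward()):</p>

```java
// Illustrative sketch: sum the spectrum into one energy value, then map
// that energy to an RGB triple clamped to the 0..255 range.
public class EnergyColor {
    // Total energy across all FFT bands (band magnitudes are non-negative).
    public static float energy(float[] bands) {
        float e = 0;
        for (float b : bands) e += b / 1000f;
        return e;
    }

    // Energy to RGB: red fades out and blue fades in as energy rises.
    public static int[] rgb(float e) {
        int r = (int) Math.max(0, Math.min(255, 255 - e * 255));
        int g = (int) Math.max(0, Math.min(255, e * 128));
        int b = (int) Math.max(0, Math.min(255, e * 255));
        return new int[]{r, g, b};
    }

    public static void main(String[] args) {
        float e = energy(new float[]{500f, 500f}); // 0.5 + 0.5 = 1.0
        int[] c = rgb(e);
        System.out.println(c[0] + "," + c[1] + "," + c[2]); // prints 0,128,255
    }
}
```

<p>The returned triple would feed the stroke() call before drawing the line each frame.</p>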
]]></description>
   </item>
   <item>
      <title>Audio Visualization - color issue</title>
      <link>https://forum.processing.org/two/discussion/19486/audio-visualization-color-issue</link>
      <pubDate>Sat, 03 Dec 2016 15:12:58 +0000</pubDate>
      <dc:creator>kmooney</dc:creator>
      <guid isPermaLink="false">19486@/two/discussions</guid>
      <description><![CDATA[<p>Hi guys,
I'm working on a project that visualizes input audio and I'm having difficulty making the color change according to the input sound's frequency.  Right now I'm just multiplying the band and frequency values by different numbers to mimic the effect I'm going for.  Ideally, the color would turn yellow when the input sounds are high pitched, orange when they are medium pitched, and purple when they are low pitched.</p>

<p>Here's what I have so far:</p>

<pre><code>import ddf.minim.analysis.*;
import ddf.minim.*;

//global variables
Minim minim;          //minim object
AudioInput input;     //object for realtime audio
FFT fft;

void setup() 
{
  size(displayWidth,displayHeight, P3D);         //scene is in 3D space  
  minim = new Minim(this);                       //instantiate minim object
  input = minim.getLineIn(Minim.STEREO, 2048, 192000.0);
  fft = new FFT( input.bufferSize(), input.sampleRate());
  angle = new float[fft.specSize()]; 
  frameRate(240);

}

void draw() 
{
  fft.forward(input.mix);       //use sound from microphone
  arcs();                       //visuals
}

float[] angle;
float[] y, x;

int rings = 7;         //variable for number of sets of rings
int ringDensity = 4;   //variable for number of rings within ring sets
int number = 24;       //variable for number of arcs that make up ring

void arcs()
{
  translate(width/2, height/2);     //move to center of window
  noFill();

  for (int h=1; h&lt;(ringDensity*2); h=h+2) { 
    for (int i=h; i&lt;(rings); i=i+10) {
        for (int k = 0; k &lt; fft.specSize() ; k++) {

        angle[k] = angle[k] + fft.getFreq(k)/3000;

        rotateX(sin(angle[k]/20));    //control 3D x rotation
        rotateY(cos(angle[k]/10));    //control 3D y rotation
        rotateZ(tan(angle[k]/30));    //control 3D z rotation


        //color stuff
        int count = 0;
        int lowTot = 0;
        int midTot = 0;
        int highTot = 0;
        for (int l = 0; l &lt; input.left.size()/10; l+=5) 
        {

          lowTot+=  (abs(fft.getBand(k)));
          midTot+=  (abs(fft.getAvg(k)));
          highTot+= (abs(fft.getFreq(k)));
          count++;
          }

        //set color and weight before drawing so they apply to this arc
        stroke( map( lowTot, 0, count * 10, 0, 255 ), map( midTot, 0, count * 10, 0, 255 ), map( highTot, 0, count * 10, 0, 255 ));
        strokeWeight(map(fft.getBand(h), 0, count, 1, 200));

        float diameter = map(40 * i * angle[k], 0, count, 50, 90);    //movement doesn't grow
        arc(0, 0, diameter, diameter, radians(k*(360/number)), radians((k+1)*(360/number)));
        }
       }
      }
    }

void stop()
{
  //closes minim classes
  input.close();
  minim.stop();
  super.stop();
}
</code></pre>
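<p>One way to get the yellow/orange/purple behaviour is to compare the energy in the lower, middle and upper parts of the spectrum instead of multiplying single bands. A minimal sketch (plain Java; splitting into thirds and the exact colors are illustrative choices, and the band array would come from fft.getBand(i)):</p>

```java
// Illustrative sketch: classify a spectrum as low/mid/high pitched by
// comparing energy in the lower, middle and upper thirds of the bands,
// then pick purple, orange or yellow accordingly.
public class PitchColor {
    // Returns 0 = low, 1 = mid, 2 = high, by dominant third of the spectrum.
    public static int dominantRegion(float[] bands) {
        int third = bands.length / 3;
        float low = 0, mid = 0, high = 0;
        for (int i = 0; i < bands.length; i++) {
            if (i < third) low += bands[i];
            else if (i < 2 * third) mid += bands[i];
            else high += bands[i];
        }
        if (low >= mid && low >= high) return 0;
        if (mid >= high) return 1;
        return 2;
    }

    // RGB for purple (low), orange (mid), yellow (high).
    public static int[] rgbFor(int region) {
        switch (region) {
            case 0:  return new int[]{128, 0, 128};   // purple
            case 1:  return new int[]{255, 165, 0};   // orange
            default: return new int[]{255, 255, 0};   // yellow
        }
    }

    public static void main(String[] args) {
        float[] bassHeavy = {9, 9, 9, 1, 1, 1, 0, 0, 0};
        System.out.println(dominantRegion(bassHeavy)); // prints 0 (low)
    }
}
```

<p>The returned RGB triple would feed the stroke() call each frame in place of the three map() expressions.</p>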
]]></description>
   </item>
   <item>
      <title>Visualizing FFT values on a 3D object using microphone input from Android Device.</title>
      <link>https://forum.processing.org/two/discussion/19348/visualizing-fft-values-on-a-3d-object-using-microphone-input-from-android-device</link>
      <pubDate>Mon, 28 Nov 2016 08:39:27 +0000</pubDate>
      <dc:creator>TimothyThomasson</dc:creator>
      <guid isPermaLink="false">19348@/two/discussions</guid>
      <description><![CDATA[<p>Hello,</p>

<p>I am having trouble with an audio reactive project I am working on. So far I have a 3D environment that is using the Minim library to analyze FFT values drawn from the microphone of my PC to change the size of the objects in the environment. I am trying to get this effect to work on my Android phone with the "Google Cardboard" library (<a href="http://android.processing.org/tutorials/cardboard_intro/index.html" target="_blank" rel="nofollow">http://android.processing.org/tutorials/cardboard_intro/index.html</a>). I am able to display the environment and look around perfectly in VR, however I am unable to analyze the microphone input from my phone. I understand this is not possible with Minim, and I have spent hours looking for, and trying various alternative solutions. I am relatively new to programming and many explanations I have found are very advanced. Is there a simple way to achieve this?</p>

<p>Thank you so much.</p>
]]></description>
   </item>
   <item>
      <title>FFT Landscape generator</title>
      <link>https://forum.processing.org/two/discussion/19314/fft-landscape-generator</link>
      <pubDate>Sat, 26 Nov 2016 18:21:43 +0000</pubDate>
      <dc:creator>chanof</dc:creator>
      <guid isPermaLink="false">19314@/two/discussions</guid>
      <description><![CDATA[<p>Hi there, I followed the Terrain Generator by Perlin Noise tutorial on YouTube, and I tried to replace the Perlin noise with an FFT of the mic audio input.
The x axis is fine: on the horizon I can clearly see my FFT effect. On the y axis, though, I can't get the progressive generation, i.e. building the landscape from the sequence of FFT moments/frames
so that the observer appears to walk toward a horizon self-generated by the FFT.
Below is where I am:</p>

<p>Could someone help me?
Thanks a lot</p>
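<p>The usual way to get the progressive generation is to keep a 2D grid of heights, shift every row one step toward the horizon each frame, and write the newest FFT frame into the front row. A minimal sketch of just the scrolling part (plain Java; the grid size and names are illustrative):</p>

```java
// Illustrative sketch of the scrolling part: keep a 2D grid of heights,
// shift every row back by one each frame, and store the newest FFT frame
// in row 0. Drawing the grid as triangle strips stays as in the tutorial.
public class TerrainScroll {
    // Shifts rows toward the horizon and stores the new spectrum in row 0.
    public static void push(float[][] grid, float[] spectrum) {
        for (int row = grid.length - 1; row > 0; row--) {
            grid[row] = grid[row - 1];
        }
        grid[0] = spectrum.clone();   // copy so later frames can't mutate it
    }

    public static void main(String[] args) {
        float[][] grid = new float[3][2];
        push(grid, new float[]{1f, 2f});
        push(grid, new float[]{3f, 4f});
        System.out.println(grid[0][0] + " " + grid[1][0]); // prints 3.0 1.0
    }
}
```

<p>Only the source of the heights changes; the rendering loop from the Perlin-noise version can read the grid unchanged.</p>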
]]></description>
   </item>
   <item>
      <title>how do i use processing to control a string of LEDs to match sound frequencies?</title>
      <link>https://forum.processing.org/two/discussion/19270/how-do-i-use-processing-to-control-a-string-of-leds-to-match-sound-frequencies</link>
      <pubDate>Thu, 24 Nov 2016 15:24:15 +0000</pubDate>
      <dc:creator>ninjabutcher</dc:creator>
      <guid isPermaLink="false">19270@/two/discussions</guid>
      <description><![CDATA[<p>I have begun a project for my first electronics class and need help.  My plan is to read audio input, calculate the frequencies in it, and have LEDs light up for certain frequencies.  I have the circuitry I need, but I am new to Processing and not familiar with it.  I appreciate any help.</p>
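<p>As a starting point, the analysis side can be separated from the hardware side. A minimal sketch of mapping FFT bands onto a strip of LEDs (plain Java; the class name, band counts and threshold are illustrative, and the actual LED writes, e.g. over serial to an Arduino, are not shown):</p>

```java
// Illustrative sketch: divide the spectrum across a strip of LEDs and
// decide which LEDs to light, given per-band amplitudes.
public class LedMapper {
    // Returns one boolean per LED: true if the average amplitude of the
    // bands assigned to that LED exceeds the threshold.
    public static boolean[] lit(float[] bands, int numLeds, float threshold) {
        boolean[] leds = new boolean[numLeds];
        int per = bands.length / numLeds;   // bands per LED (assumes divisible)
        for (int led = 0; led < numLeds; led++) {
            float sum = 0;
            for (int i = led * per; i < (led + 1) * per; i++) sum += bands[i];
            leds[led] = (sum / per) > threshold;
        }
        return leds;
    }

    public static void main(String[] args) {
        float[] bands = {5, 5, 0, 0};       // energy only in the low bands
        boolean[] leds = lit(bands, 2, 1f);
        System.out.println(leds[0] + " " + leds[1]); // prints true false
    }
}
```

<p>In a Processing sketch the band array would come from Minim's FFT on the line-in, and the boolean array would drive the LED pins.</p>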
]]></description>
   </item>
   </channel>
</rss>