How to separate audio frequencies for an audio visualizer
in Core Library Questions • 3 months ago
Hello,
I've managed to patch together a Minim example to listen to the audio playing on the PC. Now I need to figure out what values to watch to cause different events.
So I want, let's say, the bass (low freq) to produce a circle, and that circle to grow whenever the bass hits. And then I want the hi-hats (high freq) to make a square, and the square to grow every time that hi-hat frequency is hit.
My problem is I don't know where or how the frequency values are stored, or how I would determine which one is currently on a beat.
So basically my code would work something like this:

// bass is on a beat
if (volume in the bass frequency band > its previous volume) {
  grow the circle by 5
}

// hi-hat is on a beat
if (volume in the hi-hat frequency band > its previous volume) {
  grow the square by 5
}
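To make that a bit more concrete, here is roughly what I picture the check looking like, using the FFT object from my sketch further down and the calcAvg() call I found in the Minim FFT reference. The frequency ranges, the 1.5 threshold, and the growth amounts are just guesses on my part, not tested code:

// rough sketch, not tested -- assumes the fft and in objects from the sketch below
float prevBass = 0, prevHat = 0;
float circleSize = 10, squareSize = 10;

// called from draw(), after fft.forward(in.mix)
void checkBeats() {
  // average energy over a low range (bass) and a high range (hi-hat);
  // the 20-200 Hz and 8000-16000 Hz ranges are my guesses
  float bass = fft.calcAvg(20, 200);
  float hat  = fft.calcAvg(8000, 16000);

  // "on a beat" = the energy jumped noticeably since the last frame
  if (bass > prevBass * 1.5) circleSize += 5;  // bass hit -> grow the circle
  if (hat  > prevHat  * 1.5) squareSize += 5;  // hi-hat hit -> grow the square

  prevBass = bass;
  prevHat  = hat;

  ellipse(width/4, height/2, circleSize, circleSize);
  rect(3*width/4 - squareSize/2, height/2 - squareSize/2, squareSize, squareSize);
}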
So:
- How do I determine which frequency is being hit on the beat?
- How do I determine whether or not the volume on that frequency has increased (for a beat) or decreased since its last beat?
I've tried println on a few different variables just to try to understand what values need to be evaluated (like which one holds the frequency band or the decibel level), but I'm not making much progress.
Any help is much appreciated.
Thanks
/**
* This sketch demonstrates how to use an FFT to analyze
* the audio being generated by an AudioPlayer.
* <p>
* FFT stands for Fast Fourier Transform, which is a
* method of analyzing audio that allows you to visualize
* the frequency content of a signal. You've seen
* visualizations like this before in music players
* and car stereos.
*/
import ddf.minim.analysis.*;
import ddf.minim.*;
Minim minim;
AudioInput in;
FFT fft;
void setup()
{
  size(512, 200, P3D);

  minim = new Minim(this);

  // ask for 1024-sample buffers from the line-in: the FFT needs a
  // power-of-two buffer size, and 1024 is a good size for this.
  in = minim.getLineIn(Minim.STEREO, 1024);

  // create an FFT object with a time-domain buffer the same size as the
  // line-in's sample buffer. Note that this needs to be a power of two,
  // and that it means the size of the spectrum will be half as large.
  fft = new FFT( in.bufferSize(), in.sampleRate() );
}
void draw()
{
  background(0);
  stroke(255);

  // perform a forward FFT on the samples in the line-in's mix buffer,
  // which contains the mix of both the left and right input channels
  fft.forward( in.mix );

  for(int i = 0; i < fft.specSize(); i++)
  {
    // draw the line for frequency band i, scaling it up a bit so we can see it
    line( i, height, i, height - fft.getBand(i)*10 );
    println("band " + i + " amplitude: " + fft.getBand(i));
  }
}
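For reference, a variation of the loop above that labels each band with its centre frequency might be a better way to see what I'm looking at. indexToFreq() is listed in the Minim FFT reference, though I haven't confirmed I'm reading it correctly:

for (int i = 0; i < fft.specSize(); i++)
{
  line( i, height, i, height - fft.getBand(i)*10 );
  // indexToFreq(i) gives the centre frequency (in Hz) of band i;
  // getBand(i) gives that band's amplitude for the current frame
  println("band " + i + " (" + fft.indexToFreq(i) + " Hz): " + fft.getBand(i));
}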