
# Defining Frequency Bands w/ Minim

edited January 2016

I'm using Minim's BeatDetect object to analyze the incoming microphone signal. BeatDetect uses an FFT. In the BeatDetect class there are four methods of interest: isHat(), isKick(), isSnare() and isRange(int, int, int). The first three are specialized versions of isRange(). What I'm trying to do is recognize more than just hi-hat, kick and snare drums, and to do that I need to understand the math in isHat(), isKick() and isSnare(). I'm hoping someone here can help me. Here is the code for the four methods.

```java
public boolean isKick()
{
  if (algorithm == SOUND_ENERGY)
  {
    return false;
  }
  int upper = 6 >= fft.avgSize() ? fft.avgSize() : 6;
  return isRange(1, upper, 2);
}

public boolean isSnare()
{
  if (algorithm == SOUND_ENERGY)
  {
    return false;
  }
  int lower = 8 >= fft.avgSize() ? fft.avgSize() : 8;
  int upper = fft.avgSize() - 1;
  int thresh = (upper - lower) / 3 + 1;
  return isRange(lower, upper, thresh);
}

public boolean isHat()
{
  if (algorithm == SOUND_ENERGY)
  {
    return false;
  }
  int lower = fft.avgSize() - 7 < 0 ? 0 : fft.avgSize() - 7;
  int upper = fft.avgSize() - 1;
  return isRange(lower, upper, 1);
}

public boolean isRange(int low, int high, int threshold)
{
  if (algorithm == SOUND_ENERGY)
  {
    return false;
  }
  int num = 0;
  for (int i = low; i < high + 1; i++)
  {
    if (isOnset(i))
    {
      num++;
    }
  }
  return num >= threshold;
}
```

I want to be able to recognize beats accurately across a range of instruments by adapting the methods above. Can anybody help me understand what I need to know? Currently, I understand that the methods return true if a beat is detected within a specified range of frequency bands. What I don't understand is why the [low, high, threshold] values in each method correspond to a specific instrument. Thanks for reading, and please respond.
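To make those parameters concrete, here is a minimal, self-contained Java sketch of the same counting logic. The boolean array and the onset positions are hypothetical stand-ins for Minim's per-band isOnset(i) results, made up purely for illustration:

```java
// Standalone illustration of BeatDetect.isRange(low, high, threshold):
// count how many bands in [low, high] report an onset and compare the
// count against the threshold.
public class RangeDemo {
    // Same loop as the posted isRange(), with the onset flags injected.
    static boolean isRange(boolean[] onset, int low, int high, int threshold) {
        int num = 0;
        for (int i = low; i <= high; i++) {
            if (onset[i]) {
                num++;
            }
        }
        return num >= threshold;
    }

    public static void main(String[] args) {
        // Pretend there are 27 averaged bands, with onsets in bands 1, 3 and 9.
        boolean[] onset = new boolean[27];
        onset[1] = onset[3] = onset[9] = true;

        // Kick-style check: at least 2 onsets somewhere in bands 1..6.
        System.out.println(isRange(onset, 1, 6, 2));   // true (bands 1 and 3)

        // Hat-style check: at least 1 onset in the top 7 bands (20..26).
        System.out.println(isRange(onset, 20, 26, 1)); // false (no onsets up there)
    }
}
```

So [low, high] selects which slice of the spectrum to watch, and threshold sets how many bands in that slice must fire simultaneously before the method reports a beat.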

• While I don't use Minim yet, I do compose electronic music and use percussion samples. Every instrument has a particular sound "profile": an envelope and a frequency domain. The envelope is commonly known as an ADSR envelope (Attack, Decay, Sustain, Release), and each instrument also has a set of dominant frequencies. A hi-hat's envelope and frequencies are different from a snare's, a kick's, etc. Seeing as the first three are already profiled, I'm assuming the software is tuned to recognize an instrument based on its FFT analysis and a set of predetermined parameters. To recognize more instruments using isRange(), you'll probably need to isolate a waveform of the percussive instrument you want to identify and adjust the Minim parameters accordingly. Audacity (http://audacityteam.org/) has a wide range of tools to record, play back and analyze sounds and waveforms. It's also free and well recommended. You'll have to map the results you get back to Minim. Hope this makes sense. Cheers.

• Hey Sirius, thanks for the quick reply. The first three are profiled, yes, but I'm not sure how the values relate to each profile. The method fft.avgSize() is called often and returns the number of averaged bands in the signal; that number is then used to define the range being monitored. All of this happens dynamically based on what fft.avgSize() returns, so I have no idea how it all fits together.
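For what it's worth, the band arithmetic in those three methods can be pulled out and run on its own. Here is a self-contained Java sketch of it (the avgSize value of 27 is just an example; yours depends on your FFT configuration):

```java
// Reproduces the band-range arithmetic from the posted isKick(),
// isSnare() and isHat(), with avgSize passed in instead of read
// from an FFT object.
public class BandRanges {
    static int[] kickRange(int avgSize) {
        int upper = 6 >= avgSize ? avgSize : 6;
        return new int[] { 1, upper };        // threshold is fixed at 2
    }

    static int[] snareRange(int avgSize) {
        int lower = 8 >= avgSize ? avgSize : 8;
        int upper = avgSize - 1;
        int thresh = (upper - lower) / 3 + 1; // one third of the bands must fire
        return new int[] { lower, upper, thresh };
    }

    static int[] hatRange(int avgSize) {
        int lower = avgSize - 7 < 0 ? 0 : avgSize - 7;
        int upper = avgSize - 1;
        return new int[] { lower, upper };    // threshold is fixed at 1
    }

    public static void main(String[] args) {
        int avgSize = 27; // example value only
        System.out.println(java.util.Arrays.toString(kickRange(avgSize)));  // [1, 6]
        System.out.println(java.util.Arrays.toString(snareRange(avgSize))); // [8, 26, 7]
        System.out.println(java.util.Arrays.toString(hatRange(avgSize)));   // [20, 26]
    }
}
```

The pattern that falls out: isKick() watches only the lowest bands, isSnare() watches a wide middle-to-upper stretch with a threshold proportional to its width, and isHat() watches just the top seven bands, which matches where those instruments carry most of their energy.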

• edited January 2016 Answer ✓

This may be more helpful: http://code.compartmental.net/minim/index_analysis.html It includes a number of short Processing sketches that demonstrate what you're looking for. The code appears well commented, so it could be a good starting point. I think you could modify the parameters a bit and work out the information needed to detect your other instruments. Trial and error, but with some visual output as well. :)

• To some degree this has helped me move forward. I still would like to know how a particular frequency band relates to a particular sound. With that link, however, I'm able to put some things together. Thanks.

• edited January 2016

You're welcome. One last suggestion: you might try analyzing some drum sounds in Audacity, which has a decent spectrum-analysis tool. You should be able to discern the shifts in spectrum between different percussive instruments that way. The spectrogram uses a log scale, so you can identify the upper and lower frequency limits. Copy and modify the snare function, which I think would cover most of the toms, congas, etc., and vary the parameters in your code. I think you'll get close. It's then just a matter of naming the new function isTom() or whatever you're working on. Cheers and good luck.
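Once you've read frequency limits off Audacity's spectrogram, you still need to translate them into Minim band indices. Here is a sketch of that mapping under the assumption of linearly spaced averages (fft.linAverages()); if your FFT uses logarithmic averages, check fft.getAverageCenterFrequency(i) for each band instead. The sample rate, band count and example frequencies below are assumptions, not values from the thread:

```java
// Map a frequency in Hz to an averaged-band index, ASSUMING linearly
// spaced averages, i.e. each band covers (sampleRate / 2) / avgSize Hz.
public class FreqToBand {
    static int bandFor(float freqHz, float sampleRate, int avgSize) {
        float bandWidth = (sampleRate / 2.0f) / avgSize; // Hz per band
        int band = (int) (freqHz / bandWidth);
        return Math.min(band, avgSize - 1);              // clamp to the top band
    }

    public static void main(String[] args) {
        // Example: 44.1 kHz audio averaged into 32 bands -> ~689 Hz per band.
        System.out.println(bandFor(200f, 44100f, 32));  // ~200 Hz tom fundamental -> band 0
        System.out.println(bandFor(5000f, 44100f, 32)); // ~5 kHz snare "crack" -> band 7
    }
}
```

With the low and high band indices in hand, you could even skip writing a new method entirely: isRange() is public, so your sketch can call something like beat.isRange(lowBand, highBand, threshold) directly.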

• Hello ProJammin,

I am having the same issue and am struggling to find an answer. I need to detect when my drum has been hit. With Audacity I have recorded my drum, and now I am trying to analyze the sample in Processing to determine the lower and upper frequencies, so that I can later test in real time whether it is the drum that has been hit. As I understand it, that is what I should do, right?

However, avgSize() returns zero on my sample, so I can't go any further.

Moreover, using this sketch : code.compartmental.net/minim/fft_method_getaveragecenterfrequency.html

I tried to analyze my drum sample and then a sample of me hitting my desk, and I can't see any difference. I am completely lost and don't really know what to do. Any advice?

Here are my two samples if you need them: https://we.tl/flVrid5mhl