We closed this forum 18 June 2010. It has served us well since 2005, as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up for a new user account. We're sorry about the inconvenience, but we think it's better in the long run. The content on this forum will remain online.
"merging" two audio samples (Read 892 times)
"merging" two audio samples
Feb 24th, 2008, 12:20am
 
From a separate topic:

Quote:
The Convolver works in the time domain and requires a "kernel" to convolve the signal with. FFT is a way to convert audio from the time domain to the frequency domain. If you want to do convolution, it's actually much better to do it in the frequency domain by simply multiplying the spectra of the two signals you want to convolve and then doing the inverse transform.


So I think you're saying that I could take two sound clips, convert them both to the frequency domain, multiply their spectra at each band, and do an inverse transform? Could I then play the resulting audio without playing the two original clips? What if I pre-processed the two original clips and stored their spectra in some data structure?

Taking a step back, what I'm trying to do is take two sound clips, somehow "merge" them together in an "interesting" way, and play the resulting audio. I was thinking the above method would be a likely good approach. Can you think of another method I could maybe try?  On a purely abstract/conceptual level, I'd like to have one sound (like a musical track) be a kind of "mask" for another sound (like a voice track).
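To pin down what I mean by "mask": per frequency band, something like the following (plain Java, no audio library, and the band magnitudes are completely made up just to show the idea):

```java
public class SpectralMask {
  // Scale each band of spectrum A by the normalized magnitude of spectrum B,
  // so B acts as a per-band "mask" over A. Inputs are band magnitudes.
  static float[] mask(float[] magA, float[] magB) {
    float max = 0;
    for (float m : magB) max = Math.max(max, m);
    float[] out = new float[magA.length];
    for (int i = 0; i < magA.length; i++) {
      out[i] = magA[i] * (max > 0 ? magB[i] / max : 0);
    }
    return out;
  }

  public static void main(String[] args) {
    float[] voice = { 1f, 4f, 2f, 0.5f };  // pretend band magnitudes of a voice clip
    float[] music = { 0f, 2f, 4f, 1f };    // pretend band magnitudes of a music clip
    float[] merged = mask(voice, music);
    for (float v : merged) System.out.printf("%.3f ", v);
    System.out.println();
  }
}
```

So where the music is silent the voice would be silenced too, and where the music is loud the voice would come through.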

Any info would be much appreciated. Thanks!

R
Re: "merging" two audio samples
Reply #1 - Feb 24th, 2008, 4:05am
 
Yes, that is basically how frequency domain convolution works. But I have not read enough about it to say that that is exactly how you should go about it. I do know that the resulting signal winds up being longer than the two you convolve together, so you'd have to handle that somehow. I'm pretty sure you could accomplish what you want using the existing stuff in Minim. But you should do some internet research on the topic.
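To make the length issue and the spectrum multiply concrete, here's a small self-contained Java example with no Minim and a naive O(n²) DFT (a real implementation would use an FFT, this is just to show the math): zero-pad both signals to length N1+N2-1, multiply the spectra bin by bin as complex numbers, and inverse transform.

```java
public class FreqConvolve {
  // Naive O(n^2) DFT, enough for a demo. sign = -1 for forward, +1 for inverse.
  // Returns {realPart, imagPart}; the inverse is left unscaled (divide by n later).
  static double[][] dft(double[] re, double[] im, int sign) {
    int n = re.length;
    double[] outRe = new double[n], outIm = new double[n];
    for (int k = 0; k < n; k++) {
      for (int t = 0; t < n; t++) {
        double a = sign * 2.0 * Math.PI * k * t / n;
        outRe[k] += re[t] * Math.cos(a) - im[t] * Math.sin(a);
        outIm[k] += re[t] * Math.sin(a) + im[t] * Math.cos(a);
      }
    }
    return new double[][] { outRe, outIm };
  }

  // Linear convolution via the frequency domain: zero-pad to N1+N2-1,
  // transform both, multiply bin by bin, inverse transform.
  static double[] convolve(double[] a, double[] b) {
    int n = a.length + b.length - 1;   // the result is longer than either input
    double[] aPad = new double[n], bPad = new double[n];
    System.arraycopy(a, 0, aPad, 0, a.length);
    System.arraycopy(b, 0, bPad, 0, b.length);
    double[][] A = dft(aPad, new double[n], -1);
    double[][] B = dft(bPad, new double[n], -1);
    double[] prodRe = new double[n], prodIm = new double[n];
    for (int k = 0; k < n; k++) {      // complex multiply of the two spectra
      prodRe[k] = A[0][k] * B[0][k] - A[1][k] * B[1][k];
      prodIm[k] = A[0][k] * B[1][k] + A[1][k] * B[0][k];
    }
    double[][] out = dft(prodRe, prodIm, 1);
    double[] y = new double[n];
    for (int k = 0; k < n; k++) y[k] = out[0][k] / n;  // inverse-DFT scaling
    return y;
  }

  public static void main(String[] args) {
    // convolving a 3-sample signal with a 2-sample one gives a 4-sample result
    double[] y = convolve(new double[] {1, 2, 3}, new double[] {0.5, 0.5});
    for (double v : y) System.out.printf("%.3f ", v);
    System.out.println();
  }
}
```

The zero-padding is why the result "winds up being longer": without it you get circular convolution, which wraps the tail of the result back onto its start.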
Re: "merging" two audio samples
Reply #2 - Mar 29th, 2008, 4:26pm
 
Hi ddf,

Could you maybe help me out a bit more with this? i.e., walk me through the pseudocode needed. Basically, I'm trying to load an audio file, modify it somehow in the frequency domain, and play it back. Is that possible with Minim? I guess I'd have to continually update the buffer and do forward FFTs on it? And if I could do all this in an effect, in generate() I could do the inverse transform and stuff that back into a buffer for playback.

I tried modifying the ForwardFFT example in the following way, thinking that TestSignal should hopefully be generating a signal that sounds the same or similar to the AudioPlayer being looped ... but the results are noise.

any advice?

thanks so much
R

import ddf.minim.*;
import ddf.minim.analysis.*;
import ddf.minim.signals.*;

AudioPlayer jingle;
FFT fft;
String windowName;
TestSignal test;
AudioOutput out;

void setup()
{
 size(512, 200);
 // always start Minim before you do anything with it
 Minim.start(this);
 jingle = Minim.loadFile("jingle.mp3");
 jingle.loop();
 fft = new FFT(jingle.left.size(), 44100);
 textFont(createFont("Arial", 16));
 windowName = "None";

 out = Minim.getLineOut(Minim.STEREO, 512);
 test = new TestSignal();
 out.addSignal(test);
}

void draw()
{
 background(0);
 stroke(255);
 fft.forward(jingle.left);
 for(int i = 0; i < fft.specSize(); i++)
 {
   line(i, height, i, height - fft.getBand(i)*4);
 }
 fill(255);
 // keep us informed about the window being used
 text("The window being used is: " + windowName, 5, 20);
}

void keyReleased()
{
 if ( key == 'w' )
 {
   fft.window(FFT.HAMMING);
   windowName = "Hamming";
 }

 if ( key == 'e' )
 {
   fft.window(FourierTransform.NONE);
   windowName = "None";
 }
}

void stop()
{
 // always close Minim audio classes when you finish with them
 out.close();
 jingle.close();
 super.stop();
}

class TestSignal implements AudioSignal
{
 void generate(float[] samp)
 {
   fft.inverse(samp);
 }

 void generate(float[] left, float[] right)
 {
   generate(left);
   generate(right);
 }
}

Re: "merging" two audio samples
Reply #3 - Mar 30th, 2008, 6:46am
 
The whole point of implementing AudioEffect to make your own effect is that you can modify the samples being read by an AudioPlayer before they get sent out to the speakers. So what you want to do is write a class that implements AudioEffect; in the process function, take the forward transform of the buffer you are given, modify that spectrum somehow, and then take the inverse transform of the modified spectrum, putting the result back into the float array passed to the function.
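Here's an untested sketch of that shape, against the pre-2.0 Minim API used in this thread. The spectral tweak (zeroing the top half of the bands) is just a placeholder for whatever modification you want, and note that transforming each buffer independently like this can produce clicks at buffer boundaries; it only shows the structure.

```java
import ddf.minim.*;
import ddf.minim.analysis.*;

AudioPlayer jingle;

void setup()
{
  size(512, 200);
  Minim.start(this);
  jingle = Minim.loadFile("jingle.mp3", 1024);
  // the FFT's timeSize must match the size of the buffers the effect will process
  jingle.addEffect(new SpectralEffect(jingle.bufferSize(), 44100));
  jingle.loop();
}

void draw()
{
  background(0);
}

void stop()
{
  jingle.close();
  super.stop();
}

class SpectralEffect implements AudioEffect
{
  FFT fft;

  SpectralEffect(int timeSize, float sampleRate)
  {
    fft = new FFT(timeSize, sampleRate);
  }

  void process(float[] signal)
  {
    fft.forward(signal);  // time -> frequency
    // placeholder modification: zero out the top half of the spectrum
    for (int i = fft.specSize() / 2; i < fft.specSize(); i++)
    {
      fft.setBand(i, 0);
    }
    fft.inverse(signal);  // frequency -> time, back into the same buffer
  }

  void process(float[] left, float[] right)
  {
    process(left);
    process(right);
  }
}
```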