Synchronizing generative video + audio
Mar 17th, 2007, 10:15pm
 
Hi, folks. So I've written a Flight404 knockoff sketch that uses the Traer Physics particle system and the ESS audio library to generate a particle cloud that "flashes" when the audio input (either from the mic or from an MP3) goes over a threshold. Pretty basic stuff for Processing, I gather, but this is only my first week messing with it.

So I fired up Air's "Redhead Girl" from their new album and tried rendering out the animation generated from it (with plans to sync it up in iMovie or After Effects or whatever afterwards).

This is where I run into a problem: when I use saveFrame(), the animation bogs down and drops below 30fps, so the frames I'm writing out are no longer in sync with the audio.

This seems wrong to me, as I'm running an Intel MacBook Pro with 1GB of RAM and using OpenGL as my renderer.

Can anybody suggest how I could optimize my code to make this work? Code is below:

---------------

//import processing.opengl.*;
import traer.physics.*;
import krister.Ess.*;


AudioFile myInput;
AudioStream myStream;
FFT myFFT;
float volume;
float avgVol;
ParticleSystem ps;
Particle[] particles;
Particle audioController;
Attraction[] attract;
int numParticles;
int i;
int offset;
PImage particlePic;
PImage starfield;
int bgColorChange;

int maxpal = 512;
int numpal = 0;

void setup() {
 size(720,480,P3D);
 frameRate(30);
 particlePic = loadImage("star.gif");
 particlePic.mask(particlePic);
 starfield = loadImage("bg.gif");

 Ess.start(this);

 myInput = new AudioFile("redhead_girl.mp3",0,Ess.READ);
 myStream = new AudioStream(256*1024);
 myStream.sampleRate(myInput.sampleRate);

 myFFT = new FFT(8);

 myStream.start();

 numParticles = 100;
 ps = new ParticleSystem(0,1.5);

 particles = new Particle[numParticles];
 attract = new Attraction[numParticles];

 // a fixed particle at the center that everything else is attracted to;
 // draw() drags it around with the mouse
 audioController = ps.makeParticle(100, width/2, height/2, 0);
 audioController.makeFixed();
 audioController.moveTo(width/2, height/2, 0);

 for (i = 0; i < numParticles; i++) {
   particles[i] = ps.makeParticle(300, random(0,width) - (particlePic.width/2), random(0,height) - (particlePic.height/2), 0);
   attract[i] = ps.makeAttraction(audioController, particles[i], 50, 500);
 }

 noStroke();
 offset = 8;
 background(0);
}

void draw() {
 ps.tick();
 myFFT.smooth = true;
 //myFFT.damp(0.001);

 noCursor();
 background(0);
 image(starfield,0,0);

 avgVol = myFFT.getLevel(myStream);
 volume = round(avgVol * 10);

 // the attractor follows the mouse
 audioController.moveTo(mouseX,mouseY,0);
 tint(255);
 image(particlePic, audioController.position().x() - (particlePic.width/2), audioController.position().y() - (particlePic.height/2));
 noStroke();

 for (i = 0; i < numParticles; i++) {
   tint(255, 100);
   float sizeRange = max(20, volume * 40);
   // flip to repulsion (and grow the sprites) on a click, keypress, or loud moment
   if (mousePressed || keyPressed || avgVol > 0.5) {
     attract[i].setStrength(-500);
     sizeRange = sizeRange + 25;
   }
   else {
     attract[i].setStrength(400);
   }
   image(particlePic, particles[i].position().x() - (sizeRange/2), particles[i].position().y() - (sizeRange/2), sizeRange, sizeRange);
 }

 //saveFrame("redheadgirl-#####.jpg");
}

void audioStreamWrite(AudioStream theStream) {
 // read the next chunk
 int samplesRead = myInput.read(myStream);
 if (samplesRead == 0) {
   // start over: close and reopen the file, then read from the top
   myInput.close();
   myInput = new AudioFile("redhead_girl.mp3",0,Ess.READ);
   samplesRead = myInput.read(myStream);
 }
}


public void stop() {
 Ess.stop();
 super.stop();
}

---------------
Re: Synchronizing generative video + audio
Reply #1 - Mar 17th, 2007, 10:34pm
 
Update: this problem seems relatively solved with the moviemaker library, though I haven't done a frame-by-frame comparison.

Note to self: monkey boy ought to test all libraries before needlessly posting to forums.
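
For anyone finding this later: a minimal capture sketch, assuming the MovieMaker(this, width, height, file, codec, quality, fps) constructor and the addFrame(pixels, width, height) call that appear in the code further down this thread (the file name and 300-frame cutoff are just placeholders):

Quote:


import moviemaker.*;

MovieMaker mm;

void setup() {
 size(320,240);
 // uncompressed frames at 30fps; compress once at the end, in post
 mm = new MovieMaker(this,width,height,"capture.mov",MovieMaker.RAW,MovieMaker.HIGH,30);
}

void draw() {
 background(0);
 ellipse(frameCount % width, height/2, 20, 20);
 loadPixels();
 mm.addFrame(pixels,width,height);  // append the frame we just drew
 if (frameCount >= 300) {           // ~10 seconds at 30fps, then wrap up
   mm.finishMovie();
   exit();
 }
}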
Re: Synchronizing generative video + audio
Reply #2 - Apr 13th, 2007, 12:17am
 
Or record to another device, like a DVD recorder, then rip it back to AVI. (This has been the best way to avoid sync problems for me.)
Re: Synchronizing generative video + audio
Reply #3 - Apr 15th, 2007, 4:30pm
 

In your sketch, instead of using live sound input, read data from a file. Your movie output can't keep up with live audio input.

Make a sketch that listens to the audio and outputs your analysis data to a text file. Then, in your first sketch, read this data in for each frame instead of using live audio. Does that make sense?

Robert said in one of his comments...
"I have been filming a projection for a lot of these videos, but this one is an actual render in that I am saving a image for each frame i show and piecing those images together in Quicktime and pasting back in the audio. I should probably blog that process as well because I get a lot of questions about it. So this one is not live. But, it was developed as a live project… the live version runs smoothly with about 150 particles but for this render, I cranked it up to 500 particles so it ended up dropping to about 2 to 5 frames per second."
Re: Synchronizing generative video + audio
Reply #4 - Apr 26th, 2007, 7:38am
 
hey chris.. I'm interested in that idea of yours..
what do you mean by exporting data to a text file?

if I understood it correctly, you mean saving spectrum data to a file and reading from it later when saving frames?

vic
Re: Synchronizing generative video + audio
Reply #5 - May 9th, 2007, 3:23am
 
Also interested in Chris's comments re: Robert's method...

Conceptually I get it.  You have an "analyzer" sketch that reads the audio, does the FFT, and writes out the spectrum each frame.  That's all.  (And for my uses, all I really need is the raw spectrum[] values; everything else (averages, etc.) could be recreated offline, so a really easy FFT dump.)

Then you have a "renderer" sketch that reads the stored fft data frame by frame and renders the output images.  I get that too.

What I'm having a bit of conceptual trouble with is how to position the stream in the "analyzer" sketch to properly grab the *right* bit of fft data per frame....

Let's say I've got a 44100 sample rate and want exactly 30 frames per second of output.  Some simple math suggests that I should advance the stream by exactly 1470 samples between each analyzed frame.  (I don't want to trust just playing the stream live with P5 running at 30 fps - not accurate enough.)
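
(In code, the arithmetic I'm assuming:)

Quote:


int sampleRate = 44100;
int fps = 30;
int samplesPerFrame = sampleRate / fps;  // 44100 / 30 = 1470, with no remainder
// so analysis frame n starts exactly at sample n * 1470:
// frame 0 -> sample 0, frame 1 -> sample 1470, frame 30 -> sample 44100 (1 second in)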

So, what's the "right" way to do that?  Let's say I'm using ESS (I am) which doesn't appear to have positioning methods on the stream.  So just set the buffer size to 1470 and each time I need a new buffer do an FFT, and call that a frame?  Any reason that wouldn't be exact?

And if that's the right approach, exactly "where" in the frame would I want to be to get perfect sync?  (um, think of it as a phase offset)  In other words, would I perhaps ideally want to render the first frame at offset 0 from the stream?  Or should I be offsetting by half a frame (1470/2 samples) early so that the rendering frame occurs at the halfway point where audio would be playing through that frame?  (given 1/30th of a second of audio, do you FFT at 0/30ths? or at -15/30ths? or elsewhere?)

Yikes, what gibberish!  That probably doesn't make any sense at all except to someone who's already done it!!  :-D

Anyway, cheers to any who can provide tips to save me some headbanging, I'll share if I come up with anything useful.
Re: Synchronizing generative video + audio
Reply #6 - May 9th, 2007, 9:47am
 

Timestamps.

Store the raw FFT values with a timestamp (milliseconds since the recording started). Then, on playback, find the stored entry whose timestamp is nearest the current playback time in milliseconds.

I've yet to test the accuracy of this, so I may be wrong.

I will upload some code in the next few days you can use.
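
Until then, here's a rough sketch of the lookup side (my naming, untested) - given stored lines of the form "milliseconds,value0,value1,...", find the row nearest the current playback time:

Quote:


float[] nearestRow(String[] log, int nowMs) {
 int best = 0;
 int bestDiff = Integer.MAX_VALUE;
 for (int i = 0; i < log.length; i++) {
   int t = int(split(log[i], ',')[0]);  // timestamp is the first field
   int diff = abs(t - nowMs);
   if (diff < bestDiff) { bestDiff = diff; best = i; }
 }
 float[] row = float(split(log[best], ','));
 return subset(row, 1);                // drop the timestamp, keep the fft values
}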



Re: Synchronizing generative video + audio
Reply #7 - May 9th, 2007, 6:50pm
 
Thanks Chris.  Timestamp -- clever approach.

However, I've just re-RTFM'd and realized that (for my purposes anyway) I'm making this wayyy more difficult than it has to be.  With ESS, the FFT's getSpectrum() method will take a float array (the samples) and an offset (effectively a timecode) - how did I miss that?!  IOW, offline analysis is already built in; you don't even need to play the audio at all.  Demo usage for others who might've missed it too:

Quote:


import krister.Ess.*;

AudioChannel myChannel;
FFT fft;
int frameNumber = 0;
int framesPerSecond = 30;

void setup() {
 size(320,240,P3D);
 Ess.start(this);
 myChannel = new AudioChannel(dataPath("music.wav"));
 fft = new FFT(512);
 fft.limits();
}

public void stop() {
 Ess.stop();
 super.stop();
}

void draw() {
 analyze();
 render();
 advance();
}

void analyze() {
 // jump the FFT straight to the block of samples this video frame covers
 fft.getSpectrum(myChannel.samples, (int)(frameNumber * myChannel.sampleRate / framesPerSecond));
}

void render() {
 background(255);
 fill(0);
 noStroke();
 for (int i=0; i<256; i++) {
   float sp = fft.spectrum[i];
   rect(32+i,230,1,-sp*220);
 }
 // pretend the rendering took a long time just as proof of concept...
 try { Thread.sleep(100); } catch(InterruptedException e) {}
 // save it
 saveFrame("out\\img_"+nf(frameNumber,6)+".jpg");
}

void advance() {
 frameNumber++;
}



Now just assemble the frames, overlay the audio, and voilà! Perfect sync.  Smiley
Re: Synchronizing generative video + audio
Reply #8 - May 9th, 2007, 10:28pm
 
Will this work with Mr. Shiffman's movie lib, instead of saving every frame as a jpg?
Re: Synchronizing generative video + audio
Reply #9 - May 9th, 2007, 10:58pm
 
I myself prefer individual non-lossy tif or tga frames so I haven't tested moviemaker, but it ought to work since you're just running a plain old non-real-time-audio sketch at this point.  Of course you'll still have to overlay the audio in post.
Re: Synchronizing generative video + audio
Reply #10 - May 9th, 2007, 11:00pm
 
Moviemaker does have an uncompressed frame option (MovieMaker.RAW), so you can then add music and only compress it once to produce a finalised version.
Re: Synchronizing generative video + audio
Reply #11 - May 10th, 2007, 12:09am
 
Ahh, thanks for the heads up John.  Tested, and all the pieces seem to get along OK, though I lost sync when just dropping the audio over the .mov, whereas it was perfect with the individual frames.  Perhaps I'm just not using the mm class correctly (it's all new to me); here's the updated demo framework if anyone can suggest further improvements:

Quote:


/**
FFTOfflineRenderer.pde
May 2007 Dave Bollinger
to demonstrate use of non-real-time fft analysis and rendering
putting ESS, OpenGL and MovieMaker all together to make sure they all get along
*/

import processing.opengl.*;
import krister.Ess.*;
import moviemaker.*;

static final int OUTPUT_TYPE_IMAGE = 0;
static final int OUTPUT_TYPE_MOVIE = 1;
int outputType = OUTPUT_TYPE_IMAGE;  // change this as desired

MovieMaker mm;
AudioChannel chn;
FFT fft;
int frameNumber = 0;
int framesPerSecond = 30;
boolean done = false;  // set once we run out of audio samples

void setup() {
 size(320,240,OPENGL);
 Ess.start(this);
 if (outputType==OUTPUT_TYPE_MOVIE)
   mm = new MovieMaker(this,width,height,"output.mov",MovieMaker.RAW,MovieMaker.HIGH,framesPerSecond);
 chn = new AudioChannel(dataPath("music.wav"));
 fft = new FFT(512);
 fft.limits();
}

public void stop() {
 Ess.stop();
 super.stop();
}

void draw() {
 progress();
 analyze();
 if (done) return;  // out of audio; analyze() already wrapped things up
 render();
 store();
 advance();
}

void progress() {
 if ((frameNumber%100) == 0) {
   println("Working on frame number " + frameNumber);
 }
}

void analyze() {
 int pos = (int)(frameNumber * chn.sampleRate / framesPerSecond);
 if (pos + 512 > chn.size) {  // not enough samples left for a full 512-point window
   if (outputType==OUTPUT_TYPE_MOVIE)
     mm.finishMovie();
   done = true;
   exit();  // exit() doesn't stop mid-frame, hence the done flag above
   return;
 }
 fft.getSpectrum(chn.samples, pos);
}

void render() {
 background(255);
 fill(0);
 noStroke();
 for (int i=0; i<256; i++) {
   float sp = fft.spectrum[i];
   rect(32+i,230,1,-sp*220);
 }
 // (optional) pretend the rendering took a long time just as proof of concept...
 //try { Thread.sleep(100); } catch(InterruptedException e) {}
}

void store() {
 if (outputType==OUTPUT_TYPE_MOVIE) {
   loadPixels();
   mm.addFrame(pixels,width,height);
 } else {
   saveFrame("out\\img_"+nf(frameNumber,6)+".jpg");
 }
}

void advance() {
 frameNumber++;
}


Re: Synchronizing generative video + audio
Reply #12 - May 10th, 2007, 7:50pm
 
hi davbol, what program are you using to create the video from all the sequenced images?
Re: Synchronizing generative video + audio
Reply #13 - May 10th, 2007, 8:42pm
 
Thanks dave, that's what I've been looking for for a while.
Re: Synchronizing generative video + audio
Reply #14 - May 11th, 2007, 4:38am
 
@crmx:  Since you're asking, I'll read between the lines and assume you don't have anything like Premiere etc., almost all of which can do it (e.g. in Premiere it's File > Import, then check "Numbered Frames"), and that you're really asking "what's free?"  If on Windows, VirtualDub (www.virtualdub.org) is perfect.  If on Mac, maybe someone else can suggest something?  (And of course if .mov works for you, then just use the moviemaker lib.)

@eskimoblood: Bitte

P.S. Someone is sure to ask why I added frameNumber instead of just using frameCount... I intentionally decoupled the audio frame from the display frame to allow use of something like Marius Watz's tilesaver to render hi-res output (like HD 1920x1080) even if your screen isn't that big - hopefully it'll make sense architecturally if/when you ever need it. (Yep, it works.)