We closed this forum 18 June 2010. It has served us well since 2005 as the ALPHA forum did before it from 2002 to 2005. New discussions are ongoing at the new URL http://forum.processing.org. You'll need to sign up and get a new user account. We're sorry about that inconvenience, but we think it's better in the long run. The content on this forum will remain online.
Synchronizing generative video + audio (Read 29941 times)
Out of sync, or doing something wrong???
Reply #45 - Nov 25th, 2008, 3:43pm
 
Hi there! I've just started doing some reactive visuals and have been struggling to get the audio and video in sync. Something funny happens, and I don't know whether it's a sync problem or I'm doing something wrong visualizing the FFT.

I think my video flickers a lot when rendered and then assembled. I was thinking that maybe I'm using the FFT data too raw; even though I (think I) have applied some easing to the movement of my images, they are still very flickery. I don't care about the MovieMaker library because I like the uncompressed images, so that's what I'm using.

I know they're flickering a lot because when I run similar stuff in real time, they don't flicker like that... =( What could it be? If it's a problem with my code, how could I make it run smoother? Thanks!

Here's my code!
Code:


import damkjer.ocd.*;
import processing.opengl.*;
import traer.physics.*;
import javax.media.opengl.*;
import krister.Ess.*;

int promedio = 100;   // number of averaged FFT bands
int bandas = 512;     // FFT size
float easing = 0.9;   // note: with the formula below, values closer to 0 smooth more
float[] numeros = new float[promedio];

int inten = 1;
int numEstrellas = 100;
int numCubes = 100;

Cube[] cubes = new Cube[numCubes];
PGraphicsOpenGL pgl;
GL gl;
PImage star;
Estrellas[] estrellas = new Estrellas[numEstrellas];
int x, y, z;
Camera camera1;
AudioChannel myChannel;
FFT fft;
int frameNumber = 0;
float fpsFudgeFactor = 30.0 / 29.97; // normally 1.0
float framesPerSecond = 30.0 * fpsFudgeFactor;

void setup() {
  size(720, 480, OPENGL);
  hint(ENABLE_OPENGL_4X_SMOOTH);
  imageMode(CENTER);
  background(0);
  Ess.start(this);
  myChannel = new AudioChannel(dataPath("samskeyti.mp3"));
  fft = new FFT(bandas);
  fft.limits();
  noStroke();
  star = loadImage("reflection.png");
  for (int i = 0; i < numEstrellas; i++) {
    x = int(random(-width, width));
    y = int(random(-height, height));
    z = int(random(-1000, 1000));
    estrellas[i] = new Estrellas(10, x, y, z, 1, star);
  }
  camera1 = new Camera(this, width/2.0, height/2.0,
                       (height/2.0) / tan(PI*60.0 / 360.0),
                       width/2.0, height/2.0, 0, 0, 1, 0);
  fft.averages(promedio);

  for (int i = 0; i < promedio; i++) {
    numeros[i] = 0;
  }
}

public void stop() {
  Ess.stop();
  super.stop();
}

void draw() {
  analyze();
  render();
  advance();
}

void analyze() {
  // Offline analysis: index into the sample array by frame number,
  // so the FFT stays in sync with the audio regardless of render speed.
  fft.getSpectrum(myChannel.samples, (int)(frameNumber * myChannel.sampleRate / framesPerSecond));
}

void render() {
  camera1.feed();
  pgl = (PGraphicsOpenGL) g;
  gl = pgl.gl;
  pgl.beginGL();
  gl.glDisable(GL.GL_DEPTH_TEST);
  gl.glEnable(GL.GL_BLEND);
  gl.glBlendFunc(GL.GL_SRC_ALPHA, GL.GL_ONE);
  pgl.endGL();
  for (int i = 1; i < promedio; i++) {  // band 0 (DC) is skipped
    float sp = fft.averages[i] * 200;
    numeros[i] += (sp - numeros[i]) * easing;
    estrellas[i].setSize(numeros[i]);   // use the eased value, not the raw sp, to reduce flicker
    estrellas[i].render();
  }

  spotLight(180, 150, 200, 0, 0, 0, 0, 0, -1, PI/2, 1.5);
  fill(200);
  sphere(1000);

  // not needed for offline rendering; remove to finish faster
  try { Thread.sleep(100); } catch (InterruptedException e) {}

  // save the frame for later assembly into a movie
  saveFrame("out/img_" + nf(frameNumber, 6) + ".jpg");
}

void advance() {
  frameNumber++;
}

void mouseDragged() {
  camera1.truck(mouseX - pmouseX);
  camera1.boom(mouseY - pmouseY);
}
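[Editorial aside: the easing step in the sketch above can be isolated as a tiny per-band exponential smoother. This is a plain-Java sketch, not part of the original code; the class and variable names are hypothetical, and `numeros[i]` in the sketch plays the role of `smoothed[i]` here.]

```java
// Minimal exponential-moving-average smoother for per-band FFT values.
// Lower alpha = smoother motion (more lag); alpha near 1 tracks the raw value.
class BandSmoother {
    private final float[] smoothed;
    private final float alpha;

    BandSmoother(int bands, float alpha) {
        this.smoothed = new float[bands]; // starts at zero
        this.alpha = alpha;
    }

    // Blend each new raw value toward the stored value and return the result.
    float[] update(float[] raw) {
        for (int i = 0; i < smoothed.length; i++) {
            smoothed[i] += (raw[i] - smoothed[i]) * alpha;
        }
        return smoothed;
    }
}
```

Driving sizes from the smoothed array instead of the raw FFT magnitudes is usually what kills the frame-to-frame flicker.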
Re: Synchronizing generative video + audio
Reply #46 - Nov 25th, 2008, 5:49pm
 
DUH!!!! Forget it... it's my code that isn't moving well. Apparently it is synchronized, but I need to make it move more smoothly =( Any ideas?
Re: Synchronizing generative video + audio
Reply #47 - Mar 30th, 2009, 12:21pm
 
moka wrote on Jan 10th, 2008, 4:15pm:
Okay, anyway, I was wondering if there is also a way to grab the levels for each frame in ESS without actually playing the sound (for rendering purposes).

You can do it with the spectrum through this function:

myFFT.getSpectrum(myChannel.samples, (int)(frameNumber * myChannel.sampleRate / framesPerSecond));  

Is there something similar in ESS for the levels?



Hi gang,

I have exactly the same question (but perhaps someone has found the answer since!):
when we do the offline analysis to get perfect audio/video sync, how can we get access to the level?
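[Editorial aside: one way to approximate a per-frame level offline is to take the RMS of the sample window belonging to that frame, directly from the decoded samples. This is a plain-Java sketch, not an ESS API; the names are illustrative, and `samples` is assumed to be the decoded mono sample array (e.g. `myChannel.samples` in ESS).]

```java
// Hypothetical offline per-frame level: RMS over the frame's sample window.
class OfflineLevel {
    static float levelForFrame(float[] samples, int frameNumber,
                               float sampleRate, float fps, int window) {
        // Same frame-to-sample mapping used for the offline FFT.
        int start = (int) (frameNumber * sampleRate / fps);
        double sum = 0;
        int n = 0;
        // Sum squares over the window, stopping safely at the end of the array.
        for (int i = start; i < Math.min(start + window, samples.length); i++) {
            sum += samples[i] * samples[i];
            n++;
        }
        return n == 0 ? 0f : (float) Math.sqrt(sum / n);
    }
}
```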

Re: Synchronizing generative video + audio
Reply #48 - May 31st, 2009, 9:15pm
 
I'm just bumping this thread because it seems to be something more or less essential for many of us.
It could go in the FAQ or somewhere like that, if it isn't already...

Also, I noticed that no other possibilities have appeared in two years...
It would be nice to see Minim offer a method equivalent to ESS's.

Indeed, I like Minim, which is still evolving and offering new perspectives, but I found myself using ESS because such things are only available there...
Re: Synchronizing generative video + audio
Reply #49 - Jun 2nd, 2009, 7:48am
 
I made my first video following dave's method and generated my 30 fps frames for 5m05s (around 10k JPEGs).
Since I'm on a Mac, I had to create my movie from the JPEGs.

Here is a way I found:
http://www.carbonsilk.com/development/timelapse-video-mac/

First you'll have to get the software to do the compression (here, FFmpeg).
So open the Terminal (you can press Command+SPACE for Spotlight and type "terminal" to find it quickly),

then:
mkdir ~/Documents/temp
cd ~/Documents/temp/
svn checkout svn://svn.ffmpeg.org/ffmpeg/trunk ffmpeg
cd ffmpeg
./configure --enable-shared --disable-mmx
sudo make
sudo make install

This will create a temp folder in your Documents folder,
then download the source into an ffmpeg folder
and install it (you'll need to type your admin password).

Then create a folder for your JPEGs (or modify the following):
here we'll create a jpeg folder inside the previous temp folder.

mkdir ~/Documents/temp/jpeg/


Put your JPEGs (a copy may be preferable, in case of doubt) into the jpeg folder (located in Documents/temp).


mkdir ~/Documents/temp/jpegprocessed/
mkdir ~/Documents/temp/videos/
mkdir ~/Documents/temp/scripts/
cd ~/Documents/temp/scripts/
vi make_complete_sequence.sh


This creates:
- a jpegprocessed folder, where ffmpeg-friendly copies of the JPEGs will go
- a videos folder, where your final video will be created
- a scripts folder, where the script you'll launch will live
It also opens the vi editor.
In it, copy and paste the following:


COUNTER=0;
for i in `find ~/Documents/temp/jpeg -name '*.jpg'` ;
do
  # Rename each file to match ffmpeg's sequential filename input
  FILENAME=`printf '%05d.jpg' $COUNTER`
  cp $i ~/Documents/temp/jpegprocessed/$FILENAME
  let COUNTER=COUNTER+1;
done
nice ffmpeg -i ~/Documents/temp/jpegprocessed/%5d.jpg -r 30 -b 5000k ~/Documents/temp/videos/movie_complete.mp4


This will find the JPEGs in your jpeg folder and rename/copy them as 00000.jpg, 00001.jpg, ... into the jpegprocessed folder (if you have more than 99999, change the %5d to %6d or more digits).
Then it will make your movie with this command:

nice ffmpeg -i ~/Documents/temp/jpegprocessed/%5d.jpg -r 30 -b 5000k ~/Documents/temp/videos/movie_complete.mp4


-i  # input
-r  # fps (here 30)
-b  # bitrate
(5000k for HD video, 2500k for DV, 1800k for 4:3; cf. http://www.vimeo.com/help/compression)

Then you'll have to change the permissions to be able to run the script:

chmod a+x make_complete_sequence.sh

finally you can run it:

./make_complete_sequence.sh

Then you just have to wait for your video to appear in the videos folder!
To put the music on it, you'll have to use iMovie.

Export settings depend on the quality you want, so I'll let you explore that; ask if you have any questions...
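[Editorial aside: ffmpeg itself can also mux an audio track onto the finished video, as an alternative to iMovie. This is a typical invocation in modern ffmpeg syntax, not from the original post; the file names assume the folders created above and the MP3 from the earlier sketch, so adjust to taste.]

```shell
# Mux the rendered video with the original audio track.
# -c:v copy keeps the video stream as-is (no re-encode);
# -shortest ends the output when the shorter stream runs out.
ffmpeg -i ~/Documents/temp/videos/movie_complete.mp4 -i samskeyti.mp3 \
       -c:v copy -c:a aac -shortest \
       ~/Documents/temp/videos/movie_with_audio.mp4
```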
Re: Synchronizing generative video + audio
Reply #50 - Jun 10th, 2009, 1:02pm
 
Mukei wrote on Jun 2nd, 2009, 7:48am:
I made my first video following dave's method and generated my 30 fps frames for 5m05s (around 10k JPEGs).



Did you use getSpectrum to access the FFT, or did you manage to access the level offline?



FFT
Re: Synchronizing generative video + audio
Reply #51 - Jun 11th, 2009, 6:04pm
 
hi,

I'm new to Processing, and so far I've been having a blast messing around with audio visualizations; now of course I want to share this stuff as a video.  Looking through this thread I see people talking about different audio libraries and different methods for rendering movies, so it's hard for me to pick a good starting point.  So far I've been using Minim for audio and playing around with the MovieMaker library to render things, but video/audio sync completely eludes me at this point.  So I guess my question is: what is the best path to start on?  Are there libraries that are better suited to this than others?  Are there any tutorials I've overlooked?  Mainly I want to get off to a good start and hopefully not spend hours running in the opposite direction of where I want to end up.

thanks!
Re: Synchronizing generative video + audio
Reply #52 - Jun 16th, 2009, 3:31pm
 
I would like to use davbol's approach with Minim, since my whole program heavily relies on that sound library now. The only thing Minim doesn't seem to know is the size of the track, as used in
Code:
if (pos >= chn.size) { 



What could I use here for minim?
Re: Synchronizing generative video + audio
Reply #53 - Jun 19th, 2009, 10:22am
 
@ff10
Minim does know that. Check http://code.compartmental.net/tools/minim/manual-playable/ - see the "Position and Length" section near the end of the page. It works like this:
Code:

if( chn.position() >= chn.length() ){
}
Re: Synchronizing generative video + audio
Reply #54 - Jun 20th, 2009, 12:26pm
 
ff10 wrote on Jun 16th, 2009, 3:31pm:
The only thing Minim doesn't seem to know is the size of the track, as used in
Code:
if (pos >= chn.size) { 



What could I use here for minim?


mog's response is correct if you're using an AudioPlayer.  However, you can't access the raw samples with an AudioPlayer, so it's not useful in the context of offline (non-real-time) analysis/rendering.

For offline analysis, with Minim, you'll need an AudioSample and getChannel(), something like: "float [] sampleBuffer = sample.getChannel(BufferedAudio.LEFT);"

But the FFT details get a bit messy; this is still easier with ESS...

What you have to do is manually copy a buffer-sized portion of the samples into a smaller float buffer that you can pass to fft.forward() (since it doesn't take an index parameter into the entire sample array the way ESS does).  To determine when you're "done", just compare your pointer against the length of the entire sample buffer, something like:  "if (pos >= sampleBuffer.length)"

(and watch out on the very last partial frame that you properly copy only what's left and zero out the remainder of the buffer before passing it to the fft)
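[Editorial aside: the copy-and-zero-pad step described above might look like this. This is a plain-Java sketch, not Minim API; `sampleBuffer`, `pos`, and `fftSize` are illustrative names.]

```java
// Copy one FFT-buffer-sized window out of the full sample array,
// zero-padding past the end so the last partial frame is handled safely.
class WindowCopy {
    static float[] window(float[] sampleBuffer, int pos, int fftSize) {
        float[] out = new float[fftSize]; // Java arrays start zero-filled
        // How many real samples remain from pos (may be zero past the end).
        int avail = Math.max(0, Math.min(fftSize, sampleBuffer.length - pos));
        if (avail > 0) System.arraycopy(sampleBuffer, pos, out, 0, avail);
        return out; // pass this to fft.forward(out)
    }
}
```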

I have a Minim version of that framework around somewhere, but don't have access to it right now - if I can find time I'll try to remember to dig it up and post it here if anyone's interested.
Re: Synchronizing generative video + audio
Reply #55 - Jun 26th, 2009, 3:07pm
 
Quote:
I have a Minim version of that framework around somewhere, but don't have access to it right now - if I can find time I'll try to remember to dig it up and post it here if anyone's interested.


Definitely. Thanks a lot!
Re: Synchronizing generative video + audio
Reply #56 - Jun 29th, 2009, 8:27am
 
here: http://www.davebollinger.com/works/p5/FFTOfflineRendererMinim.zip
See the included notes to (hopefully) make sense of it.