Hi there,
Yesterday evening I wrote some code that allows rendering at any frame rate with FFT analysis. The question came up here somewhere.
The problem is that the Minim FFT analyser can only be used in realtime (I tried to use cue(), but without playback the audio buffer seems to be empty, so the FFT returns nothing).
The solution is to create an FFT analysis file in a first step. This is done in realtime, while the AudioPlayer is playing the sound file; the FFT mode runs in "RecordFile" mode.
In the second step, the frames are rendered using the FFT analysis file, while the FFT mode is "PlayRecordedFile". Rendering the frames is not done in realtime but frame by frame, so you get x frames per second independently of how fast your computer is. I call that "bounce time".
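The two-pass idea can be sketched in plain Java (all names here are hypothetical, not the actual code from the Dropbox archive): pass one writes one FFT spectrum per analysis frame to a file while the audio plays; pass two loads the file so the offline renderer can look up the spectrum for any frame index.

```java
import java.io.*;

// Hypothetical sketch of the record/playback file format:
// a small header, then one float[] spectrum per analysis frame.
class SpectrumFile {

    // Pass 1 ("RecordFile" mode): append each realtime FFT result to a file.
    static void record(File f, float[][] spectra) throws IOException {
        try (DataOutputStream out = new DataOutputStream(
                new BufferedOutputStream(new FileOutputStream(f)))) {
            out.writeInt(spectra[0].length);  // bands per spectrum
            out.writeInt(spectra.length);     // number of recorded frames
            for (float[] frame : spectra)
                for (float band : frame)
                    out.writeFloat(band);
        }
    }

    // Pass 2 ("PlayRecordedFile" mode): load everything once, then the
    // renderer reads the spectrum of any frame, independent of wall-clock time.
    static float[][] load(File f) throws IOException {
        try (DataInputStream in = new DataInputStream(
                new BufferedInputStream(new FileInputStream(f)))) {
            int bands = in.readInt();
            int frames = in.readInt();
            float[][] spectra = new float[frames][bands];
            for (int i = 0; i < frames; i++)
                for (int b = 0; b < bands; b++)
                    spectra[i][b] = in.readFloat();
            return spectra;
        }
    }
}
```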
To deal with realtime/bounce time, your animations need to use a timing master which has two implementations: one that returns the realtime clock (just System.nanoTime()), and one that returns a clock derived from the currently bounced frame (this implementation of the TimeSource interface lives in the movie renderer).
Depending on whether the realtime strategy or the bounce-time strategy is used, I instantiate two different TimeSource objects.
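The timing-master idea might look roughly like this in plain Java (a hedged sketch with hypothetical names, not the actual TimeSource from the archive): animations read time only through the interface, so the realtime preview and the offline "bounce" render can swap clocks.

```java
// Hypothetical sketch of the timing master: one interface, two strategies.
interface TimeSource {
    long nanos();  // current animation time in nanoseconds
}

// Realtime strategy: just the system clock.
class RealtimeSource implements TimeSource {
    public long nanos() { return System.nanoTime(); }
}

// Bounce-time strategy: time is derived from the frame currently being
// rendered, so every frame represents exactly 1/fps seconds no matter how
// long it actually takes to render.
class BounceTimeSource implements TimeSource {
    private final double fps;
    private long frame = 0;

    BounceTimeSource(double fps) { this.fps = fps; }

    void nextFrame() { frame++; }  // called by the renderer per output frame

    public long nanos() {
        return (long) (frame * 1_000_000_000L / fps);
    }
}
```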
I put the source code in my public Dropbox so you can have a look if you like. [1]
Here's the movie I rendered. [2]
[1] https://dl.dropboxusercontent.com/u/45560813/Charly Beck - Indeed (Starsigns Visual v2).zip?dl
Comments
Hello! The "fake directional light" effect you added is great! I love it! :) I'm still thinking you should add an element that is a little more "static", something with motion but always on the screen, so the eyes can track it. (Well, I don't know; sometimes I think your stuff is really good, but sometimes I think something's missing.)
" i wrote some code that allows to render any frame rate with fft analysis" I'm sorry but I don't understand... The "AnalyseSound" example from Minim already works at any frame rate with FFT.
It's not the FFT that slows things down, it's the graphical rendering. And given that you have to do that offline (and that each frame takes an unknown time to render), you need to precalculate and store the FFT values.
@koogs: That's exactly what it's all about. @fanthomas: Thanks. I still have some little ideas for the next version: there'll be some more static elements that move only slowly, and I'll also add some very small stars. It's not yet finished ;-)
I agree with koogs, but not entirely, because GL shaders imply a GPU-based rendering pipeline, and thanks to the GPU, rendering is not the main framerate issue anymore. It could be part of the problem if the fragment shader were complex, but that's not your case. The rendering pipeline is slow when you use the CPU to do the work (when you loop over the color pixels of an image, for example).
I think most speed problems come from a bad structure: allocating a lot of objects instead of reusing existing ones.
Really, if you need to precalculate your movie to reach 24 frames per second, there is something wrong in your code.
Well, I do reuse objects, except for some PImages (at least two per frame) that I must allocate dynamically, because that is what the Processing API requires.
However, if there were a way to blend images without creating a PImage, that would be a way to optimize. But I don't think it's possible to work with different rendering buffers. Double buffering is commonly used, but I don't see a way to do it with the Processing API.
I will have a look at it and check where the main bottleneck in my code is...
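For what it's worth, one common pattern for avoiding per-frame allocation is to preallocate the destination buffer once and blend into it every frame. A hedged sketch in plain Java on raw ARGB int arrays (the format PImage.pixels exposes), with hypothetical names, not the Processing API itself:

```java
// Hypothetical sketch: 50/50 averaging blend into a reused buffer,
// working on raw 32-bit ARGB pixels. No allocation happens per frame.
class Blender {
    private final int[] dest;  // allocated once, reused every frame

    Blender(int pixelCount) { dest = new int[pixelCount]; }

    // Blend b over a by averaging each color channel; alpha forced opaque.
    int[] blend(int[] a, int[] b) {
        for (int i = 0; i < dest.length; i++) {
            int ar = (a[i] >> 16) & 0xFF, br = (b[i] >> 16) & 0xFF;
            int ag = (a[i] >>  8) & 0xFF, bg = (b[i] >>  8) & 0xFF;
            int ab =  a[i]        & 0xFF, bb =  b[i]        & 0xFF;
            dest[i] = 0xFF000000
                    | (((ar + br) / 2) << 16)
                    | (((ag + bg) / 2) <<  8)
                    |  ((ab + bb) / 2);
        }
        return dest;
    }
}
```

This only removes the allocation cost; as discussed below, the per-pixel work itself still runs on the CPU.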
Well, I checked it. The bottleneck is PImage, especially PImage.blend(). That's the "fake directional light" effect fanthomas talked about; it's done by blending.
And because the blending is only done for the "LaserScene" and not for the "Star" scene, I also need to use PApplet.get() and PApplet.set(PImage), which are also quite expensive.
I first thought PImage was a wrapper for the GPU pixel buffer, but apparently it isn't. It seems PApplet.get() reads all pixels back from the GPU, PApplet.set() sends them to the GPU, and PImage.blend() is processed on the CPU, so it's no big wonder that it slows things down so much.
So since I see no way to work with different buffers on the GPU, and the blending is done on the CPU, I guess I have to live with this slow fps rate.
But no big deal; that's why I wrote the movie renderer.
Hello! Look at this lib! I think it contains everything you need: http://glgraphics.sourceforge.net/
Cool (y)