Music Video with Particle Systems, Rendered by Processing Libraries
When I was searching for examples of music videos made with Processing a couple of months ago, I didn't find many. Most of the best work I found was by folks like Memo Akten and Quayola, who roll their own code in C and/or use dedicated video effects programs. Because of constraints on my time, money, and skill set, I needed to limit my tools to Processing, Java, QuickTime Pro, and the free trial of FCP. Here's the video:
Jazari -- Squelch from Jazari on Vimeo.
The video uses footage of a recent show as seed material for particle systems that pixelate and distort that material. When I was experimenting, I found that Processing's video libraries were too imprecise to use directly on the source footage: the API lacks a method for advancing frame by frame through a movie, which is what I needed because the particle system is too CPU-intensive for real-time processing. You can jump to a specific time, and by knowing the frame rate you can *kind of* step frame by frame, but errors do occur. So instead of the video libraries, I took the source footage, exported the individual frames as images with QT Pro, and then loaded those images into memory for processing. After the particle system rendered a new frame of output, that frame was saved as an image into an output directory. When all of the frames for a clip were rendered, I imported the image sequence into QT Pro and exported it as a normal QT movie. Those clips were then edited in FCP.
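To make the workflow concrete, here is a minimal sketch of that offline loop, not my actual code: it assumes the QT Pro export produced numbered PNGs, and the directory names, file naming scheme, frame count, and window size are all made up for illustration.

int totalFrames = 1440;   // number of exported source frames (assumed)
int currentFrame = 1;

void setup() {
  size(1280, 720);
}

void draw() {
  if (currentFrame > totalFrames) {
    exit();               // done: every source frame has been processed
    return;
  }
  // load one exported source frame (hypothetical path and naming scheme)
  PImage src = loadImage("input/frame_" + nf(currentFrame, 4) + ".png");
  image(src, 0, 0);

  // ...update and render the particle system here, using src as seed material...

  // save the processed frame; the numbered outputs get reassembled in QT Pro
  save("output/frame_" + nf(currentFrame, 4) + ".png");
  currentFrame++;
}

Because each draw() call reads one image, renders, and writes one image, there is no real-time constraint: a frame can take as long as it needs.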
Most of the code was written in Eclipse because, for a project of this size, I needed a real IDE to keep it organized. It's fairly sloppy, with a lot of hard-coded paths and the like, and I doubt sharing it would be very useful to anyone, so I'll offer an overview and highlight the important parts. There's a main class that launches the display window, instantiates the other classes, and runs the basic loop that renders each frame; there's a particle system class with about seven methods, each implementing a different set of rules for how the particles behave; and there are JSON files that define parameter values for the particle system. The JSON files describe how parameter values change over time with reference to the tempo of the original track, which is what allowed me to sync effects to the beat. For example, in one file, I have JSON arrays that define breakpoint functions for the parameters mouseX and mouseY like so:
{ "mouseX" :[[0,0],[1,30],[2,15],[3,50],[4,0]]},
{ "mouseY" :[[0,60],[1,25],[2,45],[3,0],[4,60]]},
The first value in each pair is the time in beats, and the second is the value of the parameter at that time. In the method that renders individual particles, the mouseX and mouseY parameters influence the stroke weight of particles rendered as diagonal lines:
if (random.nextFloat() > 0.5) {
  pg.stroke(ptc.color, ptc.alpha);
  pg.strokeWeight(mouseX);
  pg.line(tranX, tranY, tranX + ptc.size, tranY + ptc.size);
} else {
  pg.stroke(ptc.color2, ptc.alpha2);
  pg.strokeWeight(mouseY);
  pg.line(tranX, tranY + ptc.size, tranX + ptc.size, tranY);
}
The result is that the thickness of the two sets of diagonal lines varies with the beat. That moment occurs around 0:52 in the video. This particular manipulation, like most of what I did in the video, was inspired by the algorithms described in Generative Design, a book mostly about creating images with Processing. What I did, basically, was take those algorithms and use them to operate on video.
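For anyone curious how breakpoint arrays like the ones above can drive a parameter, here is a hedged sketch rather than my actual code: it linearly interpolates between breakpoints and converts a frame index to a beat position using the track's tempo. The 120 bpm tempo, 30 fps frame rate, and the helper names (valueAtBeat, beatForFrame) are assumptions for illustration.

float bpm = 120;   // tempo of the track (assumed)
float fps = 30;    // frame rate of the rendered clip (assumed)

// Linearly interpolate a breakpoint function at a given position in beats.
// points is a JSONArray of [beat, value] pairs, e.g. [[0,0],[1,30],[2,15],...]
float valueAtBeat(JSONArray points, float beat) {
  for (int i = 0; i < points.size() - 1; i++) {
    JSONArray a = points.getJSONArray(i);
    JSONArray b = points.getJSONArray(i + 1);
    float t0 = a.getFloat(0), v0 = a.getFloat(1);
    float t1 = b.getFloat(0), v1 = b.getFloat(1);
    if (beat >= t0 && beat <= t1) {
      return map(beat, t0, t1, v0, v1);   // interpolate between breakpoints
    }
  }
  // outside the defined range: hold the last value
  return points.getJSONArray(points.size() - 1).getFloat(1);
}

// Convert an output frame index to a position in beats, so effect
// parameters stay locked to the tempo of the track.
float beatForFrame(int frameIndex) {
  float seconds = frameIndex / fps;
  return seconds * bpm / 60.0;
}

// Example use inside the render loop, assuming mouseXPoints was parsed
// from the JSON file (e.g. with parseJSONArray); the pattern above spans
// 4 beats, so the beat position is wrapped with a modulo:
// float beat = beatForFrame(currentFrame) % 4;
// pg.strokeWeight(valueAtBeat(mouseXPoints, beat));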
I hope this is entertaining and/or useful. Apologies for badly formatted code examples. Cheers,
-- Patrick