How to make a music video with GSVideo
So after nearly a full day of searching for how to save a Processing sketch as a music video (audio + video), I've settled on GSVideo as the only way to do it. I was actually pretty surprised to discover there isn't a cookie-cutter solution out there already (I know the MovieMaker library can do video alone, and I know it can be done after the fact with other tools/software). I now feel compelled to figure this out.
Here's the GSVideo library, which is based on the GStreamer framework. It's going into Processing 2.0, so it's legit.
The examples are great and work just fine. After a lot of Googling to understand how the GStreamer framework handles encoding/decoding, I was able to use GSPipeline to mux the test video source with the groove.mp3 that comes with the library. However, I'm still not understanding how to "mux" a video stream and an audio stream together in a general way.
/**
 * Audio pipeline.
 * By Andres Colubri
 * mcpantsface pipeline test
 */
import codeanticode.gsvideo.*;

GSPipeline pipeline;

void setup() {
  size(100, 100);
  // Set up a pipeline that muxes videotestsrc and groove.mp3 into the
  // audio/video file vid+audio.avi.
  // groove.mp3 needs to be in the main Processing folder, right next to
  // Processing.exe; the output file is saved to this same directory.
  pipeline = new GSPipeline(this, "videotestsrc num-buffers=250 ! video/x-raw-yuv,format=(fourcc)I420,width=320,height=240,framerate=(fraction)25/1 ! xvidenc ! queue ! mux. filesrc location=groove.mp3 ! mad ! audioconvert ! audioresample ! queue ! mux. avimux name=mux ! filesink location=vid+audio.avi");
  // The pipeline starts in the paused state, so a call to the play()
  // method is needed to get things rolling.
  pipeline.play();
}

void draw() {
  // No need to draw anything on the screen. The audio gets
  // automatically directed to the sound card.
}
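To unpack that pipeline string as far as I understand it (so, grain of salt): a GStreamer pipeline is a chain of elements joined by "!", and ending a branch with "mux." feeds that branch into whatever element was declared with name=mux. Here's the exact same pipeline split apart with comments:

// The same pipeline as above, split into its three parts for readability.
// "mux." at the end of a branch links it to the element declared with
// name=mux (here, avimux).
String videoBranch =
  "videotestsrc num-buffers=250 ! "                  // synthetic test video
  + "video/x-raw-yuv,format=(fourcc)I420,"           // caps filter: raw YUV frames
  + "width=320,height=240,framerate=(fraction)25/1 ! "
  + "xvidenc ! queue ! mux. ";                       // encode to Xvid, feed the muxer

String audioBranch =
  "filesrc location=groove.mp3 ! mad ! "             // read and decode the mp3
  + "audioconvert ! audioresample ! queue ! mux. ";  // condition the audio, feed the muxer

String muxer =
  "avimux name=mux ! filesink location=vid+audio.avi"; // interleave both into an AVI

pipeline = new GSPipeline(this, videoBranch + audioBranch + muxer);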
So how can I transform this example into something that takes the video frames from a sketch (say, via the GSMovieMaker class) and combines them with an mp3 audio source? I don't know! I've tried tweaking the pipeline parameters, and there are occasions where I see some sparks, but in general it feels like a deep dark Linux forest... My best guess so far is sketched right below.
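Here's the rough two-step idea I've been poking at, completely unverified: first record the sketch's frames to a video-only file with GSMovieMaker (that part is lifted straight from the library's MovieMaker example), then run a second GSPipeline that remuxes the recorded file together with the mp3. The second pipeline's element names (oggdemux, vorbisenc, oggmux) are my guesses for a Theora/Ogg workflow, not something I've confirmed works:

import codeanticode.gsvideo.*;

GSMovieMaker mm;

void setup() {
  size(320, 240);
  // Step 1: record the sketch's frames (no audio yet) to drawing.ogg
  // at 30 fps with the Theora codec, as in the library's MovieMaker example.
  mm = new GSMovieMaker(this, width, height, "drawing.ogg",
                        GSMovieMaker.THEORA, GSMovieMaker.MEDIUM, 30);
  mm.start();
}

void draw() {
  // ... draw the visuals here ...
  loadPixels();
  mm.addFrame(pixels); // push the current frame into the recorder
}

void keyPressed() {
  if (key == ' ') {
    mm.finish(); // close the video file cleanly
    // Step 2 (guesswork): remux the recorded Theora video with the mp3,
    // re-encoding the audio to Vorbis so it fits in an Ogg container.
    GSPipeline mux = new GSPipeline(this,
      "filesrc location=drawing.ogg ! oggdemux ! queue ! mux. " +
      "filesrc location=groove.mp3 ! mad ! audioconvert ! vorbisenc ! queue ! mux. " +
      "oggmux name=mux ! filesink location=final.ogg");
    mux.play();
  }
}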
So here's the basic project I'm working on. It's a bare-bones music visualizer using the Minim library that draws out the waveform of a song as it's played. Here's a YouTube video of what it looks like. However, I was only able to make that video by jury-rigging the audio and video together with CamStudio and Windows Movie Maker after the fact.
Here's the music visualizer sketch:
//music waveform sketcher - mcpantsface
import ddf.minim.*;
import processing.video.*;

AudioPlayer player;
Minim minim;

int SAMPLES = 512;
float[] wavform = new float[SAMPLES];
float x;
float y;
float prev_x;

void setup()
{
  size(SAMPLES, 800, P2D);
  minim = new Minim(this);
  x = 0;
  y = 0;
  prev_x = 1;
  loadFile(); // pick an mp3 file
  background(0);
}

void draw()
{
  int wraps = 8; // sets the number of sections/scroll speed
  int mag_scale = height/wraps;
  float average = 0;
  float sum = 0;
  // maps song position to spatial position
  x = map(player.position(), 0, player.length()/wraps, 0, width);
  // takes care of wrap-arounds... a little glitchy
  if (x > width) {
    x = x%SAMPLES;
    if (x < prev_x) y++;
    prev_x = x;
  }
  // keep the index inside the array (x can hit SAMPLES exactly at a wrap)
  x = constrain(x, 0, SAMPLES-1);
  // calculates an average value based on the buffer
  for (int j = 0; j < player.left.size()-1; j++)
  {
    if (player.left.get(j) > 0) sum += player.left.get(j);
  }
  average = sum/player.left.size();
  println(x + " " + y + " " + average);
  // scaled to roughly 0-1
  wavform[int(x)] = average/0.5;
  // draws lines representing amplitude of the audio waveform
  stroke(255);
  float offset = height/wraps*(y+0.5);
  line(0, offset - mag_scale/2, width, offset - mag_scale/2);
  line(0, offset + mag_scale/2, width, offset + mag_scale/2);
  line(int(x), offset - mag_scale/2*wavform[int(x)], int(x), offset + mag_scale/2*wavform[int(x)]);
}

void stop()
{
  player.close();
  minim.stop();
  super.stop();
}

void loadFile() {
  String loadPath = selectInput(); // opens a file chooser
  if (loadPath == null) {
    println("No file was selected...");
  }
  else {
    println(loadPath);
    player = minim.loadFile(loadPath, SAMPLES);
    player.play();
  }
}
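And here's where I imagine the recording would hook into this sketch, though this is exactly the part I haven't gotten working: grab the pixels at the end of draw() and hand them to the recorder, then close the file in stop(). Same GSMovieMaker calls as in the snippet above, so all the same caveats apply, and whether the audio stays in sync this way is the open question:

import codeanticode.gsvideo.*;

// Hypothetical additions to the visualizer. Assumes mm was created and
// started in setup(), exactly as in the earlier GSMovieMaker snippet.
GSMovieMaker mm;

void draw()
{
  // ... all of the existing waveform drawing ...

  // After the frame is fully drawn, copy it into the recording.
  loadPixels();
  mm.addFrame(pixels);
}

void stop()
{
  mm.finish(); // close the video file before shutting down
  player.close();
  minim.stop();
  super.stop();
}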
So I'm reaching out to see if anyone has input on how to plug the GSVideo library into this sketch to record the music video in realtime. I think if we can nail this down, it would be a really nice feature for the Processing community: basically, recording sketches with synced audio and video. I can't even tell you how many posts I've sifted through that are after this basic feature.
Cheers.