I've been working on a local web application to dynamically generate animations in real time. This video is a live recording of me running and playing the software to music, Panoramic by Lusine. As a result it is a single shot of my computer's desktop. The application interfaces through the MIDI controller Monome 128. Because of this, the application works well alongside a DJ, musician, or band to accompany audio, like in the video shown here. When you hit a button, an animation begins; when you hit three buttons, a chord of animations. I used Processing extensively for the initial sketches and to help with some of the communication with the Monome.
Over the last year I've been working on an application where sound triggers animations. The prototype was built with Processing, ddf's Minim library, and the animation library Ani. Today I'm happy to share with you all a sample of it in poster form. All shapes were generated from the app, and even the type is warped based on sound input. It comes in many sizes and formats. You can also purchase a poster over at Society6: http://society6.com/jonobr1/Synesthesia-Zbs_Print
Been messing around with Ani lately and it's a fantastic little library. Inspecting the example Ani_Callback, adding a delay to the instantiation of diameterAni doesn't delay the invocation of the onStart callback. I expected that it would. Is there a way to account for this?
e.g.:
/**
* shows how to use the callback features onStart and onEnd
*
* MOUSE
* click : set end of animation and trigger a new one
*/
import de.looksgood.ani.*;
float x = 256;
float y = 256;
int diameter = 50;
Ani diameterAni;
void setup() {
size(512,512);
smooth();
noStroke();
// Ani.init() must be called always first!
Ani.init(this);
// define an Ani with callbacks, specify the method name after the keywords: onStart and onEnd
// there are two possibilities to declare the methods for the callbacks:
// without any parameters or with one parameter of type Ani
// the 2.0 here is the delay I added to the stock example
diameterAni = new Ani(this, 1.5, 2.0, "diameter", 20, Ani.ELASTIC_OUT, "onStart:itsStarted, onEnd:newRandomDiameter");
}
// called onStart of diameterAni animation
void itsStarted() {
println("diameterAni started");
}
// called onEnd of diameterAni animation
void newRandomDiameter(Ani theAni) {
float end = theAni.getEnd();
float newEnd = end + random(-20,20);
theAni.setEnd(newEnd);
println("diameterAni finished. current end: "+end+" -> "+"new end: "+newEnd);
}
The onStart method fires when ani.start() is called. This is great, but when trying to sync animations I believe it should fire when the animation actually starts, not when the start method is called. The only other callback method seems to be onEnd.
Not sure what a good workaround would be. Polling ani.isDelaying() until it's false?
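For what it's worth, here's the polling idea sketched in plain Java: a small helper (my own invention, not part of the Ani API) that you'd check once per frame, analogous to watching ani.isDelaying() flip to false, so you can fire your own "really started" callback exactly once.

```java
// Sketch of the polling workaround in plain Java. DelayedStart is a
// hypothetical helper, not part of Ani; it just tracks when the delay
// elapses so a callback can fire at the true animation start.
public class DelayedStart {
    private final long startMillis;  // when start() was called
    private final long delayMillis;  // the delay passed to the Ani
    private boolean fired = false;

    public DelayedStart(long startMillis, long delayMillis) {
        this.startMillis = startMillis;
        this.delayMillis = delayMillis;
    }

    // Call every frame (e.g. from draw()); returns true exactly once,
    // on the first check after the delay has elapsed.
    public boolean justStarted(long nowMillis) {
        if (!fired && nowMillis - startMillis >= delayMillis) {
            fired = true;
            return true;
        }
        return false;
    }

    public static void main(String[] args) {
        DelayedStart ds = new DelayedStart(0, 500);
        System.out.println(ds.justStarted(100)); // false: still delaying
        System.out.println(ds.justStarted(600)); // true: delay elapsed, fires once
        System.out.println(ds.justStarted(700)); // false: already fired
    }
}
```

In a sketch you'd construct this alongside the Ani with `millis()` and the same delay value, then call `justStarted(millis())` in draw().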
I went looking through the Minim documentation, but couldn't find a clear answer on this.
I'm running an FFT on an AudioInput buffer and would like to know the max amplitude I could receive for any given band, so that I can draw a line that stretches the height of the screen. e.g.:
/* ... update loop ... */
fft.forward(in);
for (int i = 0; i < depth; i++) {
float f = fft.getBand(i);
float normal = f / n;
float ypos = normal * height;
line(i, 0, i, ypos);
}
/* what could n be ??? */
Could this be computer-specific, or is it a fixed range like 0 - 255? Finally, does BeatDetect.setSensitivity() share the same value space as this?
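One common workaround when the true ceiling isn't documented is adaptive normalization: track the largest band value seen so far and divide by that, so the scale adapts to whatever your sound card actually delivers. A minimal sketch in plain Java (the class name is mine, not Minim's):

```java
// Adaptive normalization: n becomes "the biggest value seen so far".
// The ceiling grows as louder bands arrive, so output stays in 0..1.
public class RunningMaxNormalizer {
    private float max = 1e-6f; // tiny floor avoids division by zero

    // Feed each fft.getBand(i) value through; returns it scaled to 0..1.
    public float normalize(float bandAmplitude) {
        if (bandAmplitude > max) max = bandAmplitude;
        return bandAmplitude / max;
    }

    public float currentMax() { return max; }

    public static void main(String[] args) {
        RunningMaxNormalizer norm = new RunningMaxNormalizer();
        System.out.println(norm.normalize(10f)); // 1.0: first peak sets the ceiling
        System.out.println(norm.normalize(5f));  // 0.5
        System.out.println(norm.normalize(20f)); // 1.0: ceiling grows to 20
    }
}
```

In the draw loop you'd replace `f / n` with `norm.normalize(f)`; the first few frames will look jumpy while the max settles.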
I'm using Processing 2.0 alpha and had this idea to make a video slideshow of sorts. I have a bank of 30 videos, but I'd rather not load them all at setup. I was thinking that, similar to a slideshow, I could keep Movie references for just the currently visible video and the next video in line. I briefly browsed through the examples and looked on the forum, but to no avail.
movie = new Movie(this, filenames[current_movie]);
movie.pause();
movie.goToBeginning();
movie.loop();
}
...proves that it doesn't get rid of the previous videos. I remember from Life In A Day there was a way of deleting these references, but I've since forgotten, and since it's part of Processing core the syntax seems buried. How do you think I should delete the previous video, and what's the easiest way to play videos in parallel (at the same time)?
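To sketch the two-slot idea in plain Java, here's the reference bookkeeping with a hypothetical Clip class standing in for Movie (in real code you'd also call the movie's stop/dispose before dropping it, if the library provides one). Only two references are ever held, so the old video becomes collectable when you advance:

```java
// Two-slot "slideshow" pattern: hold only the current and next clips.
// Clip is a stand-in for Movie; the pattern, not the API, is the point.
import java.util.Arrays;
import java.util.List;

public class TwoSlotPlayer {
    public static class Clip {
        public final String filename;
        Clip(String filename) { this.filename = filename; }
    }

    private final List<String> filenames;
    private int index = 0;
    public Clip current, next;

    public TwoSlotPlayer(List<String> filenames) {
        this.filenames = filenames;
        current = new Clip(filenames.get(0));
        next = new Clip(filenames.get(1 % filenames.size()));
    }

    // Call when the current clip finishes: promote next, preload the
    // one after. The old current loses its last reference here.
    public void advance() {
        index = (index + 1) % filenames.size();
        current = next;
        next = new Clip(filenames.get((index + 1) % filenames.size()));
    }

    public static void main(String[] args) {
        TwoSlotPlayer p = new TwoSlotPlayer(Arrays.asList("a.mov", "b.mov", "c.mov"));
        System.out.println(p.current.filename); // a.mov
        p.advance();
        System.out.println(p.current.filename); // b.mov
        System.out.println(p.next.filename);    // c.mov
    }
}
```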
For me (OS X 10.6.6) this example pushes audio out, effectively creating an audio feedback loop. Is there a way to disable the audio output? I would like to visualize the mic's feed without pushing its audio back out. I looked through the source and didn't see anything that could resolve this. Here's the example:
/**
* Get Line In
* by Damien Di Fede.
*
* This sketch demonstrates how to use the <code>getLineIn</code> method of
* <code>Minim</code>. This method returns an <code>AudioInput</code> object.
* An <code>AudioInput</code> represents a connection to the computer's current
* record source (usually the line-in) and is used to monitor audio coming
* from an external source. There are five versions of <code>getLineIn</code>:
* <pre>
* getLineIn()
* getLineIn(int type)
* getLineIn(int type, int bufferSize)
* getLineIn(int type, int bufferSize, float sampleRate)
* getLineIn(int type, int bufferSize, float sampleRate, int bitDepth)
* </pre>
* The value you can use for <code>type</code> is either <code>Minim.MONO</code>
* or <code>Minim.STEREO</code>. <code>bufferSize</code> specifies how large
* you want the sample buffer to be, <code>sampleRate</code> specifies the
* sample rate you want to monitor at, and <code>bitDepth</code> specifies what
* bit depth you want to monitor at. <code>type</code> defaults to <code>Minim.STEREO</code>,
* <code>bufferSize</code> defaults to 1024, <code>sampleRate</code> defaults to
* 44100, and <code>bitDepth</code> defaults to 16. If an <code>AudioInput</code>
* cannot be created with the properties you request, <code>Minim</code> will report
* an error and return <code>null</code>.
*
* When you run your sketch as an applet you will need to sign it in order to get an input.
*
* Before you exit your sketch make sure you call the <code>close</code> method
* of any <code>AudioInput</code>'s you have received from <code>getLineIn</code>.
*/
import ddf.minim.*;
Minim minim;
AudioInput in;
void setup()
{
size(512, 200, P2D);
minim = new Minim(this);
minim.debugOn();
// get a line in from Minim, default bit depth is 16
in = minim.getLineIn(Minim.STEREO, 512);
}
This particular example is using toxiclibs, but could just as easily be done using PVector, so I wasn't sure whether to place this in Programming Questions or Contributed Library Questions. Anyway, on to the question:
Below is code to generate a path from (0, height) to an unknown destination. I was trying to use the highlighted line to steer the path towards the target Vector2D t, but it doesn't prove to be reliable. Is there a way to get the path to hit the target 100% of the time? Perhaps I don't fully understand interpolateToSelf() and interpolateTo(). More precisely, I don't fully understand the difference between those two methods. Maybe the problem lies there?
t = noiseStrength * noise(x / noiseScale, y / noiseScale);
x += cos(t) * ss;
y += sin(t) * ss;
if(x < -r || x > width + r ||
y < -r || y > height + r) {
oob = true;
}
if(oob) {
place();
}
render();
}
void place() {
int d = int(random(4));
switch(d) {
case 0:
x = -r;
y = random(height);
break;
case 1:
x = width + r;
y = random(height);
break;
case 2:
x = random(width);
y = -r;
break;
case 3:
x = random(width);
y = height + r;
break;
default:
x = random(width);
y = height + r;
break;
}
oob = false;
}
void render() {
pushMatrix();
translate(x, y);
rotate(TWO_PI * t);
arc(0, 0, r, r, PI / 64, TWO_PI - PI / 64);
popMatrix();
}
}
This sketch works great as a Processing applet, but I was recently asked to create something similar as a seamless, loopable video of around 5 seconds. I was thinking that maybe I could save the initial position of the Particle and then set that as its destination after it has been placed a certain number of times. I thought I'd confer with the community first: has anyone dealt with something like this before?
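For what it's worth, a classic trick for seamless loops is to drive the animation by sampling a continuous 2D field (noise, in the Processing case) along a circle: one trip around the circle is one loop, and the value at t = 1 is exactly the value at t = 0. A sketch with a stand-in field function in place of noise():

```java
// Seamless looping by sampling a continuous 2D field along a circle.
// field() stands in for 2D Perlin noise; any continuous f(x, y) works.
public class SeamlessLoop {
    static double field(double x, double y) {
        return Math.sin(1.7 * x + 0.3) * Math.cos(2.3 * y - 1.1);
    }

    // t in [0, 1) maps to one full trip around the circle, so
    // sample(0) == sample(1) and the animation closes perfectly.
    public static double sample(double t, double radius) {
        double angle = 2 * Math.PI * t;
        return field(radius * Math.cos(angle), radius * Math.sin(angle));
    }

    public static void main(String[] args) {
        double a = sample(0.0, 2.0);
        double b = sample(1.0, 2.0);
        System.out.println(Math.abs(a - b) < 1e-9); // true: the loop is seamless
    }
}
```

Applied to the sketch, the noise lookup that drives the heading would take its coordinates from this circular path plus each particle's offset, instead of marching forward in time.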
If you run this code you'll notice that the 3rd string from the top of the viewport has no holes. Is there any way to retain the holes when splitting up an RShape into its respective RPaths?
OR
Is there a way to know which letter fires .contains() in the RShape form?
If you need the font to run this you can find DroidSans over at Google.
I need a way to know the particular (in this case) letter that the mouse is over.
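Underneath, a per-letter hit test is just point-in-polygon over each letter's outline. A sketch of even-odd ray casting over plain coordinate arrays (with Geomerative you'd pull the points from each letter's path and call this per letter):

```java
// Classic even-odd ray casting: cast a ray to the right from the test
// point and count how many polygon edges it crosses. Odd count = inside.
public class LetterHitTest {
    public static boolean contains(float[] xs, float[] ys, float px, float py) {
        boolean inside = false;
        for (int i = 0, j = xs.length - 1; i < xs.length; j = i++) {
            boolean crosses = (ys[i] > py) != (ys[j] > py)
                && px < (xs[j] - xs[i]) * (py - ys[i]) / (ys[j] - ys[i]) + xs[i];
            if (crosses) inside = !inside;
        }
        return inside;
    }

    public static void main(String[] args) {
        // A unit square standing in for one letter's outline.
        float[] xs = {0, 10, 10, 0};
        float[] ys = {0, 0, 10, 10};
        System.out.println(contains(xs, ys, 5, 5));  // true
        System.out.println(contains(xs, ys, 15, 5)); // false
    }
}
```

A nice property of the even-odd rule is that it handles holes for free: a point inside a counter's hole crosses the outer and inner contours, toggling back to outside.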
// create a new positive attraction force field around the mouse position (radius=250px)
mouseAttractor = new AttractionBehavior(mousePos, 250, 0.9f);
physics.addBehavior(mouseAttractor);
}
void mouseDragged() {
// update mouse attraction focal point
mousePos.set(mouseX, mouseY);
}
void mouseReleased() {
// remove the mouse attraction when button has been released
physics.removeBehavior(mouseAttractor);
}
This is for the most part just the Attraction2D demo; the highlighted part is what I changed. The .heading() method isn't doing exactly what I was expecting (there is rotation, just not very much). Am I using it incorrectly?
This is kind of a long time coming, but here goes anyways.
Narcissus-app is a light desktop application for creating visual connections between you and your actions. It uses a rather simple form of frame differencing to draw shapes over a live feed. While the proof-of-concept application is written in Cinder, there are a myriad of videos built with Processing, including but not limited to the HD videos found on the site!
This project has been a two-year love affair with programming and learning to program, namely in Processing, so it's dear to my heart and I want to share it, especially with this community.
In addition to the Mac application there is a gallery section of the website that pulls the most recent videos on Vimeo or YouTube tagged with narcissus-app. Currently those videos are just of me.
Example of what the image process looks like.
For those interested, there is also a version of the application that takes a .mov as input and re-renders an image sequence with the original image and the shapes drawn on top (this one is in Processing!). This is meant to be used in conjunction with HD footage. Finally, there's a tumblr with very infrequent updates.
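For anyone curious, frame differencing in its simplest form is a per-pixel absolute difference between consecutive frames, thresholded into a motion mask. A generic sketch over grayscale int arrays (not Narcissus-app's actual code, just the technique):

```java
// Frame differencing: a pixel counts as "motion" when it changed by
// more than threshold between the previous and current frames.
public class FrameDiff {
    public static boolean[] motionMask(int[] prev, int[] curr, int threshold) {
        boolean[] mask = new boolean[curr.length];
        for (int i = 0; i < curr.length; i++) {
            mask[i] = Math.abs(curr[i] - prev[i]) > threshold;
        }
        return mask;
    }

    public static void main(String[] args) {
        int[] prev = {10, 200, 50, 50};
        int[] curr = {12, 100, 50, 200};
        boolean[] mask = motionMask(prev, curr, 20);
        for (boolean m : mask) System.out.print(m + " "); // false true false true
    }
}
```

Drawing shapes over the live feed then amounts to rendering something wherever the mask is true; the threshold trades sensitivity against camera noise.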
GSVideo works like a dream in the PDE, but as soon as I try to bring it over to Eclipse I get this linking error. I have imported all the .jar files (jna, gstreamer, and gsvideo) as well as the gstreamer folder with all the .dll files into my Eclipse project. The error I get spat back at me:
can't load library SDL (SDL|libSDL|libSDL-0) with -Djna.library.path=C:/Users/jonobr1/workspace/GSMovieTest\gstreamer\win. Last error: java.lang.UnsatisfiedLinkError: Unable to load library 'SDL': The specified module could not be found.
Two things in this error strike a chord with me. The first is that the folder C:\Users\jonobr1\workspace\GSMovieTest\gstreamer\win exists, and the other is that the slashes are a different direction after GSMovieTest. Could the jna.library.path be set incorrectly? I tried changing it with System.setProperty("jna.library.path", "C:\\Users\\jonobr1\\workspace\\GSMovieTest\\gstreamer\\win"); in setup(), but no luck.
I've been playing around a little bit with GSVideo and the GSMovie object to play back .mp4 and .mov files. It's fantastic! I noticed in the reference that there is a setEventHandlerObject(). I'm curious how this works, but didn't see any examples packaged with the library as to what purpose it could serve.
If it does anything like what I think the name implies, it would be great for setting events like MovieLoaded, MovieDisplayed, MovieClicked, these sorts of things.
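A guess at the mechanism, based on the name alone: libraries with this kind of method often store an arbitrary handler object and invoke a method on it by name via reflection (this is how Processing's own movieEvent callback works). A generic sketch of that pattern, not GSVideo's actual implementation:

```java
// Reflection-based event dispatch: the library holds a handler object
// and looks up a callback method on it by name.
import java.lang.reflect.Method;

public class EventDispatcher {
    private Object handler;
    private Method callback;

    // Analogous in spirit to setEventHandlerObject(): remember who to call.
    public void setEventHandlerObject(Object obj, String methodName)
            throws NoSuchMethodException {
        handler = obj;
        callback = obj.getClass().getMethod(methodName, String.class);
    }

    public void fire(String event) throws Exception {
        if (callback != null) callback.invoke(handler, event);
    }

    // Example handler; the method is found by its name at runtime.
    public static class Sketch {
        public String last = "";
        public void movieEvent(String what) { last = what; }
    }

    public static void main(String[] args) throws Exception {
        EventDispatcher d = new EventDispatcher();
        Sketch s = new Sketch();
        d.setEventHandlerObject(s, "movieEvent");
        d.fire("MovieLoaded");
        System.out.println(s.last); // MovieLoaded
    }
}
```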
The Threading Example on the Processing wiki is a great resource. I'm curious: where does the run() method get fired? I'd like to pass arguments to it to change variables within the class. Below is the Threading example modified to do something that I'd like.
SimpleThread thread1;
SimpleThread thread2;
void setup() {
size(200,200);
}
void initThreads() {
if(thread1 == null) thread1 = new SimpleThread(1000,"a");
thread1.start();
if(thread2 == null) thread2 = new SimpleThread(1200,"b");
thread2.start();
// Execute some code here to:
// pass variables to run() to change the id and count
}
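Since run() takes no arguments by design (the thread machinery calls it for you after start()), one way to get values in is to pass them through the constructor or a setter and read them inside run(). A sketch with plain java.lang.Thread:

```java
// Parameters go in via the constructor or configure(); run() reads them.
// volatile makes updates from the calling thread visible to the worker.
public class SimpleThread extends Thread {
    private volatile int wait;
    private volatile String id;

    public SimpleThread(int wait, String id) {
        this.wait = wait;
        this.id = id;
    }

    // Re-parameterize from outside; run() picks the new values up.
    public void configure(int wait, String id) {
        this.wait = wait;
        this.id = id;
    }

    public String describe() { return id + ":" + wait; }

    public void run() {
        System.out.println("thread " + describe() + " running");
    }

    public static void main(String[] args) throws InterruptedException {
        SimpleThread t = new SimpleThread(1000, "a");
        t.configure(500, "c"); // change id and count before starting
        t.start();
        t.join();
    }
}
```

run() itself is fired by the JVM on the new thread as a consequence of calling start(); calling run() directly would just execute it on the current thread.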
I've pushed myself into a corner where I have to scale an entire sketch. scale() works well; however, I have these mouse-sensitive areas that don't scale with the render. Now I could use a multiplier on the radius to make r in the if statement reflect the new radius, 50, but I was wondering if there was a way to do it without having to actually rewrite the number value. I'm totally stumped...
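One way around rewriting the radius: map the mouse back into the sketch's unscaled coordinate space instead. If everything is drawn under scale(s), dividing mouseX and mouseY by s lets the original hit test, r included, work unchanged. A sketch of the math (function and names are mine):

```java
// Hit-testing a circle drawn under scale(s): undo the transform on the
// mouse position rather than multiplying every radius in the sketch.
public class ScaledHitTest {
    public static boolean over(float mouseX, float mouseY,
                               float cx, float cy, float r, float s) {
        float mx = mouseX / s; // mouse mapped into unscaled coordinates
        float my = mouseY / s;
        float dx = mx - cx, dy = my - cy;
        return dx * dx + dy * dy < r * r; // original test, untouched r
    }

    public static void main(String[] args) {
        // Circle at (100, 100) with r = 25, sketch drawn at scale(2):
        // on screen it sits at (200, 200) with an apparent radius of 50.
        System.out.println(over(200, 200, 100, 100, 25, 2)); // true
        System.out.println(over(260, 200, 100, 100, 25, 2)); // false
    }
}
```

The same idea generalizes: whatever transforms the render goes through, apply the inverse to the mouse and all the existing interaction code keeps working.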
Okay, I didn't totally understand this from the documentation, but here it goes. I noticed that all the object constraints for toxiclibs Verlet Physics (i.e. SoftBoxConstraint, CylinderConstraint, etc.) interface with ParticleConstraint.
Is this how all these objects apply the constraints?
Is it possible to interface our own custom shapes, say for instance an array of Vec3D points, with ParticleConstraint, or would we have to build a custom shape out of the existing constraints?
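If ParticleConstraint boils down to a single per-timestep method that clamps a particle, which is my assumption here (check the actual toxiclibs signature; the stand-in types below are not toxiclibs'), then a custom constraint for an arbitrary shape can be quite small:

```java
// Sketch of implementing a custom particle constraint. The interface and
// Vec3D below are stand-ins, not toxiclibs' real API; the shape of the
// idea is: one method, called each timestep, that moves the particle back
// into the legal region.
public class CustomConstraintDemo {
    public static class Vec3D {
        public float x, y, z;
        public Vec3D(float x, float y, float z) { this.x = x; this.y = y; this.z = z; }
    }

    public interface ParticleConstraint {
        void apply(Vec3D particle); // stand-in for apply(VerletParticle)
    }

    // A custom constraint: clamp the particle above a floor plane.
    public static class FloorConstraint implements ParticleConstraint {
        final float floorY;
        public FloorConstraint(float floorY) { this.floorY = floorY; }
        public void apply(Vec3D p) {
            if (p.y > floorY) p.y = floorY; // screen coords: larger y is lower
        }
    }

    public static void main(String[] args) {
        Vec3D p = new Vec3D(10, 500, 0);
        new FloorConstraint(400).apply(p);
        System.out.println(p.y); // 400.0
    }
}
```

A constraint over an array of Vec3D points would follow the same shape: find the nearest legal position implied by the point set and move the particle there in apply().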
Geomerative is a great library; I've been using it quite a bit. But I've been stumped on this one for the longest time... Basically, there are many forms into which you can import a .ttf file. The base RFont can go to many different items: RPoint, RShape, RMesh. It's fantastic! The annoying thing about RFont to RPoint is that in your array list it doesn't separate the letters, let alone words. As a result you get images like this, where a line jumps from one letter to the next.
Preliminarily (is that a word?) a friend and I figured out a way around it, but, goddamn, it kills the processor. Anyone else have this issue / know a workaround?
So, I've got my hands on what I think could be a very interesting library to use. However, there are no examples of how the code works, and as a result I don't know how to import the library. Is there a way to figure out how to write the import statement for a library?
e.g.:
import processing.opengl.*;
It doesn't seem like it has anything to do with the folder structure... is it just traversing classes from the source file?
Hey, so I went digging in the previous forum today, looking at Monome connectivity via Processing. There's one library in the library section, Monomic, and then another, Momonome, in an earlier state (although worked on more recently via GitHub). Has anyone used these recently?
I can't say that I've been able to get either to work. Frankly, though, I only started using Eclipse and Proclipsing yesterday, so Momonome is a bit of a struggle; I haven't done the proper research into utilizing external classes in Eclipse. With Monomic I can't seem to get any of the demos to work. It does seem odd that the main method of communication by the creators of Monome is OSC, while Monomic is heavily reliant on Serial (even the OSC example is mis-imported as a Serial one).
I'd really like to update these to better serve the community. Any preference as to which?