Was just trying to get procontroll going and I'm stuck on a java.lang.NoSuchMethodError. Anyone have any experience with this? Is it not supported in 2.0?
Processing 2.09b & 2.1
on OS X 10.8.2 (12C60)
Any help would be greatly appreciated as I'm trying to get Logitech support back into my new visuals software! =)
Just did a quick write-up of how I used Processing to generate packing labels for my recent relocation. I used Processing's native PDF rendering capabilities to generate a printable PDF from a parsed text file.
Fun project and useful script for general organization also.
Perfect to extend to make more artistic labels too, I was just wasting time!
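For anyone curious about the parsing half, here's a minimal plain-Java sketch of splitting a text file into label fields. The pipe-separated format and the `LabelParser` name are my own assumptions for illustration; the original post doesn't specify a format.

```java
// Hypothetical label format, one label per line: "box id | room | contents".
// The format and class name are assumptions for illustration only.
class LabelParser {
    static String[][] parse(String text) {
        String[] lines = text.trim().split("\n");
        String[][] labels = new String[lines.length][];
        for (int i = 0; i < lines.length; i++) {
            String[] fields = lines[i].split("\\|");
            for (int j = 0; j < fields.length; j++) {
                fields[j] = fields[j].trim();
            }
            labels[i] = fields;
        }
        return labels;
    }
}
```

Each row could then be drawn to a page with Processing's PDF library and a few text() calls.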
So, let me start by explaining the high-level problem first...
I work for a company that wants a 3D model viewer they can demo from Safari on an iPad, embedded in their website. The second requirement is that this same 3D model viewer be embedded in their native iPad app.
...
Pretty hard problem, but the sticking point is how much they're pushing to be able to view the web portion of their tech in the iPad's browser.
Anyway, at first inspection Processing.js looked perfect (it's even embeddable), except there's no word yet from Apple on when iOS will support WebGL, so it's stuck in 2D.
I've gotten a prototype working that builds a mesh from a string which could be sent over an AJAX request (for testing, the mesh is a quad), and I've tested loading this from within a UIWebView in a native app on the device, where it works perfectly.
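A minimal sketch of that mesh-string parsing step in plain Java. The wire format here (vertices separated by ';', coordinates by ',') is an assumption for illustration; the post doesn't say what the actual string looks like.

```java
// Sketch of parsing a mesh string sent over AJAX. Hypothetical format:
// vertices separated by ';', coordinates by ','.
// e.g. "0,0,0;1,0,0;1,1,0;0,1,0" would be a unit quad in the z=0 plane.
class MeshParser {
    static float[][] parse(String data) {
        String[] verts = data.trim().split(";");
        float[][] mesh = new float[verts.length][3];
        for (int i = 0; i < verts.length; i++) {
            String[] xyz = verts[i].split(",");
            for (int j = 0; j < 3; j++) {
                mesh[i][j] = Float.parseFloat(xyz[j].trim());
            }
        }
        return mesh;
    }
}
```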
The only conclusion I could come to was to implement the 3D rendering portion of the functionality myself until WebGL is officially supported by Apple. *crosses fingers*
Since I don't need to handle textures, and I can pre-triangulate the mesh, writing the renderer could be a feasible amount of work. I believe I just have to perform frustum and backface culling, 3D transforms, depth buffering, and polygon clipping.
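Of those steps, backface culling is cheap once triangles are projected. A sketch, assuming counter-clockwise winding for front faces in a y-up screen space (the names and convention are mine, not from any particular renderer):

```java
// Backface culling via the signed area of a projected triangle.
// With counter-clockwise winding for front faces (y-up screen space),
// a non-positive signed area means the triangle faces away from the
// camera and can be skipped. In a y-down coordinate system the sign flips.
class BackfaceCull {
    // Twice the signed area of triangle (ax,ay)-(bx,by)-(cx,cy).
    static float signedArea2(float ax, float ay, float bx, float by,
                             float cx, float cy) {
        return (bx - ax) * (cy - ay) - (cx - ax) * (by - ay);
    }

    static boolean isBackface(float ax, float ay, float bx, float by,
                              float cx, float cy) {
        return signedArea2(ax, ay, bx, by, cx, cy) <= 0;
    }
}
```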
Would it be saner to render the data with the 2D renderer from a top-down view, using a preprocessed mesh, to avoid writing the more complicated portions of a 3D renderer until WebGL is supported? I imagine this might also be better from a performance standpoint, because you can make stronger assumptions about what portion of the model is visible on screen.
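If the 2D route wins out, the depth buffer could be replaced with a per-frame painter's-algorithm sort. A sketch in plain Java, with illustrative names and the assumption that larger z means farther from the camera:

```java
import java.util.Arrays;
import java.util.Comparator;

// Painter's-algorithm sketch for the 2D-renderer route: instead of a
// depth buffer, sort triangles on their summed vertex depth (proportional
// to average depth) and draw back-to-front. Assumes larger z = farther
// from the camera; class and method names are illustrative.
class DepthSort {
    // triDepths[i] holds the three vertex depths of triangle i.
    // Returns triangle indices in draw order, farthest first.
    static Integer[] drawOrder(float[][] triDepths) {
        Integer[] order = new Integer[triDepths.length];
        for (int i = 0; i < order.length; i++) order[i] = i;
        Arrays.sort(order, Comparator.comparingDouble(
                (Integer i) -> -(triDepths[i][0] + triDepths[i][1] + triDepths[i][2])));
        return order;
    }
}
```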
I would love any advice or thoughts on this at all, from where to get started working it into Processing.js's code, to links to completely different technologies that could potentially solve this problem. Maybe I'm stuck looking at it the wrong way and someone can enlighten me!
So I wanted to start a discussion to get a feel for whether many people here use MIDI keyboards or joysticks to adjust parameters in their sketches.
I have just been using themidibus library and a gamepad library to create a simple controller class for my M-Audio Oxygen 8 keyboard & Logitech Wireless Rumblepad 2, but this seems to be an area that might be better suited for a specialized library that could support several different types of keyboards. Is there something like this that already exists?
I've read a little about Open Sound Control, but it doesn't sound like this is what I'm talking about.
If there isn't a good alternative out there right now, I'd consider writing the library as it seems like it could be really useful. I'd need some help figuring out the correct mappings for various keyboards and controllers I don't own, though this would probably just be saved in a series of XML files so it's easy to update.
I'd also be interested in hearing others' use cases for a library like this. One situation I can think of would be VJing: using something like the Mother library and writing simple sketches that bind their dynamic parameters and events to key presses and knobs, a VJ artist could cue up sketches and control them in real time with the keyboard or joypad.
In this case, however, it would be important for the variables not to be bound to any particular key, knob, or slider, and it would be helpful if they could be dynamically assigned to their physical counterparts based on which two sketches are currently active.
This is just a snippet, but it could possibly work something like this:

VarController controllerLib;

void setup()
{
  controllerLib = new VarController(this);
}

void controlChange()
{
  spotLoc.set( controllerLib.getFloat( "Spot location on X axis", -1.0, 1.0 ),
               controllerLib.getFloat( "Spot location on Y axis", -1.0, 1.0 ) );

  if ( controllerLib.isEvent( "Next Color Palette" ) ) nextPalette();
  else if ( controllerLib.isEvent( "Previous Color Palette" ) ) prevPalette();
}
It could be fast enough to use strings if they're mapped in a special way: on a first registration pass I would call controlEvent(), returning false from every isEvent() callback while recording each event name and mapping it to its call index. Because the calls always happen in the same order, when a real event comes in I could call controlEvent() again, incrementing a counter at each isEvent() and only returning true when that counter equals the call index registered for that event. I would do the same for controlChange(), with getFloat() and getInt() registering the variable names and the min/max values passed in against their call indices. This way, I don't have to do any string comparisons at run time, and the library still knows what each variable is called by an easily understandable description that could be passed to another connecting tool.
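To make the counting scheme concrete, here's a plain-Java sketch of the event half. The class and method names are mine, not an existing API; the one hard requirement is that the callback invokes isEvent() in the same order every time.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the call-index idea for events; names are hypothetical,
// not an existing library API. The sketch's callback must invoke
// isEvent() in the same order on every call so the counting lines up.
class EventRegistry {
    private final List<String> names = new ArrayList<>();
    private boolean registering = true;
    private int cursor;          // counts isEvent() calls during dispatch
    private int firedIndex = -1; // call index of the event that fired

    boolean isEvent(String name) {
        if (registering) {
            names.add(name);     // record name -> call index, fire nothing
            return false;
        }
        return cursor++ == firedIndex;  // no string comparison at run time
    }

    void endRegistration() { registering = false; }

    // String lookup happens once, e.g. when a key is mapped to an event.
    int indexOf(String name) { return names.indexOf(name); }

    // Re-run the callback; exactly one isEvent() returns true.
    void fire(int index, Runnable callback) {
        cursor = 0;
        firedIndex = index;
        callback.run();
    }
}
```

getFloat() and getInt() could register the same way, additionally storing the min/max range against each call index.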
The only remaining question is how one would dynamically map the events and variables to knobs, keys, and joypads. What I'm picturing here is a separate tool you can open, kind of like the Serial Monitor in Arduino, that would list all the available events and variables on one side and the controllers on the other. The user could then drag arrows to map keyboard notes to events and sliders/knobs to variables.