I've done quite a bit of searching both here and on Google, but still can't find the answer.
I've just purchased a Toshiba Thrive tablet running Android 3.2.1. I have been able to use MSAFluid (MSAFluid site) with Processing on Windows 7 with no problem, but now I'd like to be able to use the touch screen on my tablet with MSAFluid at full screen. Is this possible? Will I need to somehow create a Flash program (or whatever will run on Android) from Processing? I know I can't run Processing on Android itself (or at least I don't know how), so I figure I'm going to have to build something in Windows for Android, but I have no idea where to begin.
*EDIT*
I've since found app development made easy, which helped me figure out how to develop new apps using the Android selection in Processing. I downloaded the USB drivers as well, and that part works fine. But when I load MSAFluidDemo.pde and try to run it in Android mode I get this:
Processing 1.5.1 "Error from inside the Android tools, check the console."
Processing 2.0.1 "package javax.media.opengl does not exist."
The MSAFluidDemo runs fine in Standard mode in Processing 1.5.1.
I don't really care which Processing version is used, I just want to be able to run MSAFluidDemo on my tablet.
Javier (jacracar) has been extremely helpful in directing me toward the ability to implement a particle generator emitting particles from interactive movement captured via Kinect. I have tried to contact him with more questions, but I don't want to bother him as I have a feeling he is busy and may not be able to offer any more help.
I would like to be able to create similar visual effects as these:
I believe the Midas Project is using VVVV and the Christian Mio video is using Processing with MSAFluid.
I've been experimenting with various sketches trying to figure out how to incorporate *any* particle generator with moving humans using the Kinect. As a non-programmer, I'm still struggling with some things.
The sketch that Javier was so gracious to post in the forums here is this one, which works fine, but Javier suggested that I "use the array limits[] to position your particles", and I am at a loss as to what that entails.
Say, for instance, I wanted Memo's MSAFluid (http://www.memo.tv/msafluid/) particles and wanted the human form to be the particle generator (instead of the cursor); would I be able to somehow reference the MSAFluid sketch from this sketch that Javier has offered? Or even something much simpler, like *any* kind of particles coming off the limits[] array to replace the red outlined echo effect?
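My best guess so far is that limits[] holds the screen positions of the user's outline, so "positioning the particles" with it would mean spawning each new particle at one of those outline points instead of at the mouse. This is guesswork on my part (I'm assuming limits[] is an array of PVectors, and the helper name emitFromLimits is just mine), but something along these lines:

// Guess only: spawn a few particles per frame at random points of the outline.
void emitFromLimits(PVector[] limits, ArrayList particles) {
  if (limits == null || limits.length == 0) return;
  for (int i = 0; i < 10; i++) {
    PVector edgePoint = limits[int(random(limits.length))];
    particles.add(new PVector(edgePoint.x, edgePoint.y));
  }
}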
I am using Windows 7 with Processing 1.5.1 and the Kinect for Windows; would it be better to use the latest Processing version?
I have been fumbling around with TUIO, trying to figure out how to use it in Processing to do hand tracking with the Kinect. I have tried many different things but I still can't figure out how to get it to track a user's hands. One example I was hoping to use can be seen here:
I've messaged Vlad asking how I start TUIO before running the sample from Processing, but he might be too busy to answer; it's been a few days and I have no reply.
Does anybody know how I "start" TUIO so I can use it in Processing to track a user's hands via the Kinect for Windows?
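From what I've pieced together so far, TUIO isn't something you start inside Processing at all: a separate tracker application has to be running and sending TUIO messages (OSC over UDP, port 3333 by default), and the sketch just listens for them with the TuioProcessing library. This is the bare listener I have on the Processing side; as I understand it, it does nothing by itself until some tracker is broadcasting cursors:

import TUIO.*;
import java.util.Vector;

TuioProcessing tuioClient;

void setup() {
  size(640, 480);
  tuioClient = new TuioProcessing(this);   // listens on UDP port 3333 by default
}

void draw() {
  background(0);
  fill(0, 255, 0);
  // draw a dot for every cursor (e.g. a tracked hand) the tracker reports
  Vector cursors = tuioClient.getTuioCursors();
  for (int i = 0; i < cursors.size(); i++) {
    TuioCursor tc = (TuioCursor) cursors.get(i);
    ellipse(tc.getScreenX(width), tc.getScreenY(height), 20, 20);
  }
}

// TuioProcessing expects these callbacks to exist in the sketch
void addTuioCursor(TuioCursor tcur) {}
void updateTuioCursor(TuioCursor tcur) {}
void removeTuioCursor(TuioCursor tcur) {}
void addTuioObject(TuioObject tobj) {}
void updateTuioObject(TuioObject tobj) {}
void removeTuioObject(TuioObject tobj) {}
void refresh(TuioTime bundleTime) {}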
Forgive me if there is already an easier way to do this, but I didn't find one.
I was doing a search tonight trying to figure out how to get a Processing sketch into a video file. There were bits and pieces here and there, but not one post with all the steps (at least that I could find), so I thought I'd make a post for anybody else who is new to this on how to get your sketch into a video file that can be burned to a DVD.
1. To implement this in any sketch, just add the following line at the end of draw() (a complete minimal sketch is shown after this list)...
saveFrame("output/seq-####.tga");
This will create a series of .tga files in a directory called "output" inside your sketch's folder.
2. If you don't have After Effects or another paid video editor, I suggest using VirtualDub (http://www.virtualdub.org/) to turn the .tga files into an .avi, or any other format VirtualDub offers.
3. Once you've processed the .tga files into a video, I'll suggest another video editor (not free) called AVS (http://www.avs4you.com/AVS-Video-Editor.aspx), which will let you take your new .avi file or files and create a nice video with fades, etc. The result can be saved to whatever video format you'd like, or burned straight to a DVD that can be read by any DVD player.
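Here is the whole thing in context as a minimal sketch (the drawing in draw() is just a placeholder; the saveFrame() line is the only part that matters):

// Minimal example: a sketch that writes every frame it draws to disk.
void setup() {
  size(640, 480);
  background(0);
}

void draw() {
  // placeholder drawing: faint circles that follow the mouse
  noStroke();
  fill(255, 40);
  ellipse(mouseX, mouseY, 30, 30);

  // writes sketchFolder/output/seq-0001.tga, seq-0002.tga, ... one per frame
  saveFrame("output/seq-####.tga");
}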
This wasn't an in-depth tutorial, so if you have any questions, post them here and I'll answer the best I can.
I have run particle emitters and many other interactive things with Processing, but I can't figure out how to attach particles to a dancer's hand or body. I have a deadline of a couple of weeks to use something similar to the link above for a dance performance. I am not getting paid to do it; I'm doing it to gain the experience and to learn to code in Processing. The problem is I am running out of time, and once I understand how to attach particle emitters to dancers in front of a Kinect, I can move on and advance myself further. I just need to get past this one hurdle.
I do not know how much to pay for a .pde file that will do what is being done in the link above, but I will pay you if I can afford it.
I am using the Kinect for Windows on Windows 7. I have a projector.
I have been struggling with trying to figure out how to put a particle emitter onto the hand of someone in front of a Kinect. I have come to understand, somewhat, how particle emitters work, but I still don't know how to attach one to a hand.
As a first step toward getting a particle generator connected to a user's hand, I am wondering how I would mesh two already-created sketches to achieve this effect. Just for a basic beginning, I am trying to take Daniel Shiffman's Example 23-2 (Simple Particle System with ArrayList) from his "Learning Processing", and would like to understand what I would need to do to replace the icosahedrons in Jason Stephens' hand-tracking sketch (http://www.openprocessing.org/sketch/43513) with that particle system. I have all the necessary libraries installed and working fine, as I can open and run both of those sketches individually.
I don't really need to start with Jason's sketch; it could be something much simpler, like a silhouette or skeleton. All I really want to do is figure out the first step of connecting particle generators to the user depth information provided by the Kinect sensor, so I can move on from there.
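To make the question concrete, this is the sort of thing I imagine is needed, using SimpleOpenNI's hand tracking (the same approach as the hand-dot tutorial in "Making Things See"). The method names are from the SimpleOpenNI builds around Processing 1.5.1 and may differ in other versions, and the particles here are deliberately dumb; what I really want to know is whether feeding the tracked hand position in as the emitter origin, like this, is the right idea:

import SimpleOpenNI.*;

SimpleOpenNI context;
PVector handPos = null;                  // screen-space hand position, null until a hand is tracked
ArrayList particles = new ArrayList();   // each entry is a PVector

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  context.enableGesture();
  context.enableHands();
  context.addGesture("Wave");            // wave at the Kinect to start hand tracking
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  // emit a few particles from the tracked hand each frame
  if (handPos != null) {
    for (int i = 0; i < 5; i++) {
      particles.add(new PVector(handPos.x + random(-5, 5), handPos.y + random(-5, 5)));
    }
  }

  // placeholder particle behaviour: drift upward and disappear at the top
  noStroke();
  fill(255, 150, 0, 120);
  for (int i = particles.size() - 1; i >= 0; i--) {
    PVector p = (PVector) particles.get(i);
    p.y -= random(0.5, 2);
    ellipse(p.x, p.y, 6, 6);
    if (p.y < 0) particles.remove(i);
  }
}

// SimpleOpenNI hand callbacks: convert the real-world position to screen coordinates
void onCreateHands(int handId, PVector pos, float time) {
  handPos = new PVector();
  context.convertRealWorldToProjective(pos, handPos);
}

void onUpdateHands(int handId, PVector pos, float time) {
  if (handPos == null) handPos = new PVector();
  context.convertRealWorldToProjective(pos, handPos);
}

void onDestroyHands(int handId, float time) {
  handPos = null;
  context.addGesture("Wave");            // allow tracking to restart with another wave
}

void onRecognizeGesture(String strGesture, PVector idPosition, PVector endPosition) {
  context.removeGesture(strGesture);
  context.startTrackingHands(endPosition);
}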
I was under the assumption that the Microsoft Kinect sensor could be used with third-party frameworks besides the Microsoft SDK (like OpenNI) for commercial purposes. But I've recently come across a company designing commercial programs for the Kinect using the SDK, and they've suggested that they haven't created anything commercial with any other framework because, they say, the Kinect for Windows sensor is restricted to the Microsoft SDK for commercial applications.
Is this true? I've visited the Kinect for Windows forums, but it seems they are not certain either. I tried to figure out the right person at Microsoft to contact, but it is too confusing to know exactly who to email my question to. I couldn't even find a support channel for the Kinect for Windows, only for the Xbox.
Does anybody have any experience with using a projector on moving subjects to create effects like flames on a person, or physics interaction with a live person on stage? I've been able to run the examples on Amnon's site (http://www.creativeapplications.net/processing/kinect-physics-tutorial-for-processing/), but how would I go about calibrating the camera and projector so the subject can integrate with the scene? For example:
I have a Kinect, and I have the OpenNI, NITE, and SimpleOpenNI libraries working just fine. I have nearly figured out Christian Mio Loclair's calibration tool, but there are a few final steps I need to work out to get it working. Mostly I'm interested in doing what he has done here:
I am looking to purchase a projector that will project images and effects from Processing onto dancers on a stage, using the Kinect sensor. The stage and house lights will be down and only the projector will be on. For a distance of about 20 feet and 1 to 3 dancers, what would be an effective lumen rating for adequate brightness? How about from a distance of 40 or 50 feet with 10 to 20 dancers?
Thanks to anyone who has experience with this. If I get no replies, I will probably update this post with what I discover through my own trial and error.
I've just noticed that there is a new 1.6 version of the Microsoft SDK for the Kinect (New SDK) that extends the depth data beyond 4 meters. I'm wondering how this might affect the depth data that Processing can get. Does Processing already have a way to gather data beyond 4 meters with the Kinect, or does Processing need the Microsoft SDK's depth information?
I would like to use Processing to project images on dancers on a stage, which will mean the Kinect will need to be more than 10 feet from the stage; can this already be done with Processing and the Kinect sensor?
I am planning on purchasing a projector this week to start testing sketches I've been experimenting with this last month. I suppose I'll figure out how far the subject can be from the Kinect with Processing, but if anybody has more info on how depth is determined in Processing, I'd be grateful.
I've done a forum search on Kinect, distance, and depth data, but the threads I could find were problems people were having that haven't been answered yet.
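In case it helps frame the question, this is the small test I've been using to see what depth values Processing actually reports: with SimpleOpenNI, depthMap() gives one distance in millimeters per depth pixel, so you can read the reported range straight off the sensor (method names are from the SimpleOpenNI build I have and may differ in other versions):

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();
  textFont(createFont("Arial", 16));
}

void draw() {
  context.update();
  image(context.depthImage(), 0, 0);

  // look up the millimeter value under the mouse cursor
  int[] depthMap = context.depthMap();
  int index = mouseX + mouseY * context.depthWidth();
  if (index >= 0 && index < depthMap.length) {
    fill(255, 0, 0);
    text(depthMap[index] + " mm", mouseX + 10, mouseY);
  }
}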
I love the work Memo Akten has done with MSAFluid. I would like to be able to use the Kinect and have the particles emit from a person's hands as they move around (in place of using the mouse). I have purchased "Making Things See" and was able to use one of the tutorials to attach a dot to a person's hand with the Kinect, as well as draw a line by moving the hand around. But I do not know how to integrate something like MSAFluid with the drawing sketch so that the line is replaced with a particle emitter.
Can anybody refer me to something that has been done like this?
I did "search" the forums but couldn't find anything doing specifically what I want to do.
I have searched the forum with "particles, text, animated text, text particles" and couldn't find anything that refers to a program that combines particles with text.
Are there any examples of using particles to create text? I am impressed with Memo Akten's MSAFluid and would like to have the fluid particles follow text, spelling out a word with particles emitted from the letters.
Would anybody have a direction for me to begin? I have been experimenting with Processing for a month but I know I have so much more to learn.
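To show what I mean, this is the kind of starting point I have in mind (no MSAFluid involved yet): render a word into an offscreen PGraphics, sample its bright pixels, and use those positions as targets that simple particles drift toward. Swapping these particles for fluid-driven ones would be the next step:

// Render a word offscreen, then move one particle toward each bright text pixel.
ArrayList targets = new ArrayList();    // PVector positions sampled from the text
ArrayList particles = new ArrayList();  // PVector particle positions

void setup() {
  size(800, 300);

  // draw the word into an offscreen buffer so its pixels can be read
  PGraphics pg = createGraphics(width, height, JAVA2D);
  pg.beginDraw();
  pg.background(0);
  pg.fill(255);
  pg.textFont(createFont("Arial", 200));
  pg.text("FLUID", 50, 220);
  pg.endDraw();
  pg.loadPixels();

  // keep every 4th bright pixel as a target and start its particle at a random spot
  for (int y = 0; y < pg.height; y += 4) {
    for (int x = 0; x < pg.width; x += 4) {
      if (brightness(pg.pixels[x + y * pg.width]) > 128) {
        targets.add(new PVector(x, y));
        particles.add(new PVector(random(width), random(height)));
      }
    }
  }
}

void draw() {
  background(0);
  noStroke();
  fill(0, 180, 255);
  for (int i = 0; i < particles.size(); i++) {
    PVector p = (PVector) particles.get(i);
    PVector t = (PVector) targets.get(i);
    // ease each particle toward its target pixel, with a little jitter
    p.x += (t.x - p.x) * 0.05 + random(-1, 1);
    p.y += (t.y - p.y) * 0.05 + random(-1, 1);
    ellipse(p.x, p.y, 3, 3);
  }
}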