Good afternoon!
I am working on an interactive forest dancefloor: visuals that react to people entering the space.
Using a Kinect, OpenTSPS, Processing 2.0, and an adapted version of MSAFluid, I have gotten this far: http://youtu.be/z2nUw-pIsf8
Dancing in this space, we realized it would be great if you could shoot off flow forces with your arms or legs (think of a Tai Chi energy warrior). However, my current software only tracks blob centroids, which makes it impossible to shoot things off in several directions at once (two arms, one leg = three!). And because the scene is captured from above, the skeleton-tracking features of Synapse etc. won't work.
So I thought: since I can import the OpenTSPS contours, I could track changes in those contours by intersecting the new and previous contour of a given blob, selecting the largest resulting fragments (perhaps using internal buffers to filter out fragments that are just edge noise rather than actual changes in position), and then using their centroids as an indication of direction. I admit I have no idea how to work with contours, intersections, and so on; this is all based on how I imagine it could work, drawing on my (admittedly distant) knowledge of vector applications (Corel Draw, Illustrator, ArcGIS).
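To make that concrete, here is a rough, untested Processing sketch of what I have in mind. All names and threshold values are invented, and instead of a true polygon intersection I simply flag points of the new contour that lie far from every point of the previous contour, then cluster consecutive flagged points into fragments:

float CHANGE_THRESH = 8.0; // pixels; invented value, needs tuning
int MIN_FRAGMENT = 5;      // invented value: shorter runs count as edge noise

ArrayList<PVector> changeDirections(ArrayList<PVector> prev,
                                    ArrayList<PVector> curr,
                                    PVector blobCentroid) {
  ArrayList<PVector> directions = new ArrayList<PVector>();
  ArrayList<PVector> fragment = new ArrayList<PVector>();
  for (PVector p : curr) {
    // distance from p to the nearest point of the previous contour
    float nearest = Float.MAX_VALUE;
    for (PVector q : prev) {
      nearest = min(nearest, PVector.dist(p, q));
    }
    if (nearest > CHANGE_THRESH) {
      fragment.add(p); // p moved: part of a changed region
    } else {
      flushFragment(fragment, blobCentroid, directions);
    }
  }
  flushFragment(fragment, blobCentroid, directions); // close the last run
  return directions;
}

// Turn a run of changed points into one direction vector, then reset the run.
void flushFragment(ArrayList<PVector> frag, PVector blobCentroid,
                   ArrayList<PVector> out) {
  if (frag.size() >= MIN_FRAGMENT) {
    PVector c = new PVector();
    for (PVector p : frag) c.add(p);
    c.div(frag.size()); // centroid of the fragment
    out.add(PVector.sub(c, blobCentroid)); // direction of the "shot"
  }
  frag.clear();
}

Each returned vector points from the blob centroid towards a changed fragment, so (if the idea holds) an extended arm or leg should yield one vector per limb that I could then feed into the fluid solver as a force, the same way my current centroid-based forces go in.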
Is there an easier way to get there? Do related sketches already exist? Would you be able to point me in a direction that would speed up this learning process considerably? I'll invite you to our guerrilla forest circus parties :)
Christoph