Hi, I'm working with a couple of guys on the Kinect. We're trying to create a project that resembles the old Duck Hunt game from the original NES. Instead of a light gun, we'll use the Kinect to capture the motion of our arms as we wave the "gun" around, and an IR light to detect whether the gun was fired.
Right now our motion tracking uses the very primitive "closest point to the Kinect" approach we learned from Daniel Shiffman's OpenKinect library examples: basically taking the average of two points. However, there are several issues with it:
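To make the discussion concrete, here is a minimal sketch of the closest-point idea in Python. It assumes the depth frame arrives as a list of rows of millimetre values; the depth window values and the averaging weight are illustrative guesses, not the actual Processing/Java code from Shiffman's examples.

```python
def closest_point(depth, min_mm=500, max_mm=1500):
    """Return (x, y) of the nearest pixel whose depth falls inside a window.

    depth is a 2D grid (list of rows) of millimetre readings; values
    outside [min_mm, max_mm] are ignored so the floor and back wall
    don't win. Returns None if nothing is in range.
    """
    best = None
    for y, row in enumerate(depth):
        for x, d in enumerate(row):
            if min_mm <= d <= max_mm and (best is None or d < best[0]):
                best = (d, x, y)
    return None if best is None else (best[1], best[2])

def averaged_cursor(prev, current, weight=0.5):
    """Blend the new closest point with the previous one (the 'average
    of two points' mentioned above). prev may be None on the first frame."""
    if prev is None:
        return current
    return (prev[0] * (1 - weight) + current[0] * weight,
            prev[1] * (1 - weight) + current[1] * weight)
```

With only two-point averaging like this, a single bad depth reading still moves the cursor halfway toward the noise, which is consistent with the symptoms below.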
The response time of the Kinect is not as good as we'd like, and every now and then the cursor/reticle flies off in a random direction we did not intend.
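The fly-off behaviour is typical of raw sensor noise reaching the cursor directly. One common fix (my suggestion, not something from the original setup) is exponential smoothing plus rejection of implausible single-frame jumps; the `alpha` and `max_jump_px` values here are tuning guesses.

```python
import math

class SmoothedCursor:
    """Exponential smoothing with a simple jump filter.

    alpha controls responsiveness (higher = snappier but noisier);
    max_jump_px discards single-frame teleports, which show up as the
    reticle "flying off" described above. Both constants are
    illustrative and would need tuning against the real sensor.
    """
    def __init__(self, alpha=0.3, max_jump_px=150):
        self.alpha = alpha
        self.max_jump_px = max_jump_px
        self.pos = None

    def update(self, raw):
        if self.pos is None:
            self.pos = raw
            return self.pos
        dx = raw[0] - self.pos[0]
        dy = raw[1] - self.pos[1]
        if math.hypot(dx, dy) > self.max_jump_px:
            return self.pos  # ignore an implausible single-frame jump
        self.pos = (self.pos[0] + self.alpha * dx,
                    self.pos[1] + self.alpha * dy)
        return self.pos
```

The trade-off is a slight lag: a real fast flick looks like a jump for one frame, so the jump threshold has to stay above the fastest motion you expect from an arm swing.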
The tracker requires very exaggerated movements. To move the cursor from one end of the screen to the other, we have to stretch our arms out toward the Kinect and swing the entire arm from the shoulder rather than just the forearm.
The cursor struggles to reach the edges of the game screen. As we get closer to an edge, our hand stops being the point the Kinect picks up, so the cursor can't match the targets on the screen. Shiffman mentions this issue: because the Kinect measures distance in 3D, when we swing our arms toward the edge of the camera's view the hand also moves farther from the sensor. So we would almost have to move the arm along a 2D line parallel to the Kinect itself rather than in a natural arc.
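The last two issues are often handled together by mapping a smaller physical "interaction box" in front of the player onto the full screen, clamping at the borders. That way small forearm motions cover the whole display and the cursor can actually pin to the edges. This is a generic technique, sketched here with made-up box corners, not something from the OpenKinect examples.

```python
def map_to_screen(hand, box_min, box_max, screen_w, screen_h):
    """Map a hand position inside a smaller tracking box to screen pixels.

    hand, box_min and box_max are (x, y) in depth-image coordinates.
    The box corners are assumptions to calibrate per setup; positions
    outside the box clamp to the nearest screen edge, so the reticle
    can always reach the border targets.
    """
    def norm(value, lo, hi):
        t = (value - lo) / float(hi - lo)
        return min(max(t, 0.0), 1.0)  # clamp to [0, 1]

    nx = norm(hand[0], box_min[0], box_max[0])
    ny = norm(hand[1], box_min[1], box_max[1])
    return (nx * screen_w, ny * screen_h)
```

Shrinking the box amplifies motion (less arm travel per screen pixel), which also reduces the need for the exaggerated shoulder movements described above; the cost is that sensor noise gets amplified by the same factor, so this pairs naturally with smoothing.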
We're wondering how to improve on closest-point tracking. Some members think we should enable skeleton tracking and track the hand exclusively, while others want to keep the physical gun prop and trigger system we currently have. What would be the best tracking method for our situation?