I've already posted a question (which I answered myself) about this issue on Stack Overflow.
Basically the problem is this:
I was trying to implement a simple 3D test where you can use the keyboard to move the camera along the three axes and rotate the view around the Y axis.
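Here's a minimal sketch of the kind of test I mean (the key bindings, speeds, and variable names are just illustrative):

float camX = 0, camY = 0, camZ = 0, rotY = 0;

void setup() {
  size(640, 480, P3D);
}

void draw() {
  background(0);
  // resetMatrix();  // things only behave as I expect when I add this
  translate(camX, camY, camZ);
  rotateY(rotY);
  box(100);
}

void keyPressed() {
  if (key == 'a') camX -= 2;   // move along X
  if (key == 'd') camX += 2;
  if (key == 'w') camZ += 2;   // move along Z
  if (key == 's') camZ -= 2;
  if (key == 'q') rotY -= 0.05;  // rotate around Y
  if (key == 'e') rotY += 0.05;
}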
And I found three strange things (maybe related):
The default position of the camera is not (0, 0, 0), but (width/2, height/2, (height/2)/tan(PI/6)) (see the snippet after this list).
Without calling resetMatrix(), rotation seems to happen around the wrong centre.
Another strange thing is that even though I wasn't resetting the matrix, the objects in the scene didn't fly off at the speed of light
as I would have expected. In OpenGL, if you don't reinitialise the transformation matrix at each loop, it is reused (multiplied) as it was left by the previous loop; so, for example, a matrix with a translation of 2 units on the X axis keeps translating by 2 more units every loop (at 60 FPS your objects should run away at 120 units/second!).
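For reference, the camera() documentation says the default camera is equivalent to this call, which matches the position I measured above:

void setup() {
  size(640, 480, P3D);
  // Default camera, per the Processing camera() reference;
  // tan(PI*30.0/180.0) is the same as tan(PI/6), i.e. tan(30 degrees).
  camera(width/2.0, height/2.0, (height/2.0) / tan(PI*30.0/180.0),  // eye
         width/2.0, height/2.0, 0,                                  // centre
         0, 1, 0);                                                  // up
}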
So... could anyone explain the reason for this strange behaviour?
It seems to me that nobody uses resetMatrix() in the examples and tutorials I've seen, and I don't understand how things can work!
I would like to use Processing and the Kinect to
record the depth map/point cloud of a scene and then import it into Blender as an animation.
The idea is to use the Kinect as a 3D camera, capture a scene a few minutes long, import the point cloud data into Blender, and then render it however I prefer.
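Here's a rough sketch of what I have in mind, assuming the SimpleOpenNI library; the one-file-per-frame "x y z" text format is just my guess at something a small Blender Python script could read back in as an animated point cloud:

import SimpleOpenNI.*;

SimpleOpenNI context;

void setup() {
  size(640, 480, P3D);
  context = new SimpleOpenNI(this);
  context.enableDepth();
}

void draw() {
  context.update();
  // Real-world 3D points for the current depth frame
  PVector[] points = context.depthMapRealWorld();

  // Dump one plain-text "x y z" file per frame
  PrintWriter out = createWriter("frame_" + nf(frameCount, 5) + ".xyz");
  for (PVector p : points) {
    if (p.z > 0) {  // skip pixels with no depth reading
      out.println(p.x + " " + p.y + " " + p.z);
    }
  }
  out.flush();
  out.close();
}

Does this sound like a reasonable approach, or is there a better way to get a few minutes of point cloud data into Blender?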