Hello guys,
I'm trying to adjust a detail of an animation that uses the Kinect, but I'm not getting the expected result. I'll try to explain; apologies in advance, my native language is Portuguese ...
Below is a screenshot of the animation (based on examples obtained from OpenProcessing; I won't post the links), where I am making some adjustments.
It works like this:
The mouse moves the bird. When the Kinect is enabled, the mouse movement is replaced by the captured Kinect coordinates through the Robot class ...
When the Kinect captures the image at a distance it works fine: the Kinect captures the coordinates, and a rescale is applied so they move across the whole screen.
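As a minimal sketch of that part (the Robot class is the standard java.awt one; pX and pY stand for the rescaled Kinect coordinates computed in the code further down):

import java.awt.Robot;
import java.awt.AWTException;

Robot robot;
int pX, pY; // filled in from the Kinect coordinates

void setup() {
  size(640, 480);
  try {
    robot = new Robot(); // the constructor can throw AWTException
  } catch (AWTException e) {
    e.printStackTrace();
  }
}

void draw() {
  if (robot != null) {
    robot.mouseMove(pX, pY); // move the real OS cursor to the Kinect point
  }
}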
My problem is that I need to track the hand while the user is sitting, because the objective of the project is to benefit people who are in a wheelchair, for example, with little mobility.
The Kinect seems to give us a good chance of being usable, but I ran into this detail: the model I was using only captures the coordinates while standing.
But I found this in other code (I can post the links if necessary):
The question then is: when I increase the screen size (beyond the resolution of the Kinect), an error occurs; from what I saw, the loop cannot cover an area larger than the Kinect image.
Below is the sample code ...
I do not know if I got the idea across; in any case, I thank you for your attention.
Thank you,
Code snippet:
void atualizaKinect() {
  kinect.update(); // update the camera

  if (rastreamentoIMG) {
    // Let's be honest, this is kind of janky: we just seed closestValue
    // with a huge(ish) initial value. It totally works, but it's not
    // very elegant.
    closestValue = 8000;

    // Place the values from the Kinect depth image into our
    // single-dimensional array.
    int[] depthValues = kinect.depthMap();

    // Note: the loop must stay within the depth map's bounds. A condition
    // of j <= depthValues.length reads one element past the end and throws
    // ArrayIndexOutOfBoundsException: 307200 (640 * 480 entries, valid
    // indices 0..307199).
    /*for (int j = 0; j < depthValues.length; j++) {
      depthValues[j] = int(map(depthValues[j], 0, w, 0, 640));
    }*/

    // Break the array down into rows...
    for (int y = 0; y < 480; y++) {
      // ...and each row down into specific pixels.
      for (int x = 0; x < 640; x++) {
        // Pull out the specific array position.
        int i = x + y * 640;
        int current = depthValues[i];
        // Now compare: keep the closest non-zero depth seen so far.
        if (current > 0 && current < closestValue) {
          closestValue = current;
          closestX = x;
          closestY = y;
        }
      }
    }

    // Draw the depth image on the screen (once per frame, after the scan;
    // in the original this sat inside the row loop and ran 480 times).
    image(kinect.depthImage(), 0, 0);

    //if (desenhaEsqueleto) {
    // Draw that swanky red circle identifying the closest point.
    fill(255, 0, 0); // set the fill colour to red
    ellipse(closestX, closestY, 25, 25);

    // Rescale pX and pY from the point captured by the Kinect (640 x 480)
    // to the screen size, for adjustment.
    pX = int(map(closestX, 0, 640, 0, w));
    pY = int(map(closestY, 0, 480, 0, h));
    //}
  } else {
    kinectDepth = kinect.depthImage(); // get Kinect data
    userID = kinect.getUsers();        // get all user IDs of tracked users

    // Loop through each user to see if it is being tracked.
    for (int i = 0; i < userID.length; i++) {
      // If the Kinect is tracking this user, get the joint vectors.
      if (kinect.isTrackingSkeleton(userID[i])) {
        // Get the confidence level with which the Kinect is tracking the head.
        confidence = kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_HEAD, confidenceVector);

        // If the tracking confidence is beyond the threshold, track the user
        // (compare against the captured "calibration" value).
        if (confidence > confidenceLevel) {
          // Get the 3D positions of head, hands, and feet.
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_HEAD, headPosition);
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_RIGHT_HAND, handPositionRight);
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_LEFT_HAND, handPositionLeft);
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_RIGHT_FOOT, footPositionRight);
          kinect.getJointPositionSkeleton(userID[i], SimpleOpenNI.SKEL_LEFT_FOOT, footPositionLeft);

          // Convert the real-world points to projective space.
          kinect.convertRealWorldToProjective(headPosition, headPosition);
          kinect.convertRealWorldToProjective(handPositionRight, handPositionRight);
          kinect.convertRealWorldToProjective(handPositionLeft, handPositionLeft);
          kinect.convertRealWorldToProjective(footPositionRight, footPositionRight);
          kinect.convertRealWorldToProjective(footPositionLeft, footPositionLeft);

          // Create a distance scalar related to the depth (z dimension).
          distanceScalarHead = 525 / headPosition.z;
          distanceScalarHandRight = 525 / handPositionRight.z;
          distanceScalarHandLeft = 525 / handPositionLeft.z;
          distanceScalarFootRight = 525 / footPositionRight.z;
          distanceScalarFootLeft = 525 / footPositionLeft.z;

          // Update the global variables pX and pY, which replace the mouse
          // movement with the Kinect.
          //pX = int(handPositionRight.x);
          //pY = int(handPositionRight.y);

          // Rescale pX and pY from the point captured by the Kinect (640 x 480)
          // to the screen size, for adjustment.
          pX = int(map(int(handPositionRight.x), 0, 640, 0, w));
          pY = int(map(int(handPositionRight.y), 0, 480, 0, h));

          distanciaHandRight = dist(posX, posY, handPositionRight.x, handPositionRight.y);
          distanciaHead = dist(posX, posY, headPosition.x, headPosition.y);

          stroke(userColor[i]); // change draw colour based on user id
          fill(userColor[i]);   // fill the ellipse with the same colour

          //println("pX/pY - 1 : " + pX + " - " + pY);
          if (desenhaEsqueleto) drawSkeleton(userID[i]); // draw the rest of the body
        }
      }
    }
  }
}
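For context, atualizaKinect() is meant to run once per frame. A minimal, hypothetical wiring (assuming SimpleOpenNI 1.96, where enableUser() takes no arguments; in older versions it takes a skeleton-profile constant. The global joint and colour variables the function uses are omitted here):

import SimpleOpenNI.*;

SimpleOpenNI kinect;
boolean rastreamentoIMG = true; // true = closest-point mode, false = skeleton mode

void setup() {
  size(640, 480);
  kinect = new SimpleOpenNI(this);
  kinect.enableDepth(); // needed for depthMap() / depthImage()
  kinect.enableUser();  // needed for skeleton tracking (SimpleOpenNI 1.96)
}

void draw() {
  background(0);
  atualizaKinect();
}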
Answers
Hello again,
I think the problem I need to solve was not very clear in the previous post, so I will try to explain it better.
This code works:
But this other code does not work; that is, it is the version where I changed the parameter related to the screen size.
Changed code:
As I understand it, the Kinect returns a 640 x 480 resolution. In the working version I got around this by using the map() function to rescale the values to the screen size, but in this case I do not know how to do it.
The error that occurs is: ArrayIndexOutOfBoundsException: 307200
The question then is:
Would it be possible to work with a screen larger than the Kinect's resolution in this case? How could I do that?
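From the error message, the cause appears to be an off-by-one: the depth map has 640 * 480 = 307200 entries (valid indices 0 to 307199), so a loop condition like j <= depthValues.length reads one element past the end. One way to support a screen larger than the Kinect is to always scan the depth map at the sensor's native 640 x 480 and rescale only the resulting coordinates, as a sketch (w and h are assumed to hold the target screen size):

int[] depthValues = kinect.depthMap();
// scan at the sensor's native resolution, never at the screen size
for (int y = 0; y < 480; y++) {
  for (int x = 0; x < 640; x++) {
    int i = x + y * 640; // always within 0..307199
    int current = depthValues[i];
    if (current > 0 && current < closestValue) {
      closestValue = current;
      closestX = x;
      closestY = y;
    }
  }
}
// rescale only the result to the (possibly larger) screen
pX = int(map(closestX, 0, 640, 0, w));
pY = int(map(closestY, 0, 480, 0, h));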
Again many thanks to all.
José, I'm from Brazil too! I don't know how to help you, but add me on Facebook, **Paulo Purcino**, and we can exchange experiences!
Hello Paulo,
Thanks, I will add your contact...
I am putting up a post about the subject I mentioned, to see if we can reach some solution.
Thanks,
Hello again,
I think I still have not managed to explain very well what needs to be done. These links give the most accurate idea of what I need, i.e., tracking the "skeleton" while sitting.
Youtube links:
These links relate to this article: http://www.codeproject.com/Articles/260741/Sitting-posture-recognition-with-Kinnect-sensor
In it, they explain some principles of how to track a sitting user; the problem is that the development there follows another line, in C#.
Has anyone seen anything similar in Processing? That is, as mentioned before, being able to do the tracking while sitting.
There is also a similar discussion at this link: http://forum.processing.org/two/discussion/3726/skeleton-alive
Well, I will keep researching the subject; any information would be helpful.
Thank you all
I made a mess of our post in Portuguese, sorry Paulo, a few words were missing ... hehe ...
thanks
Hello again,
Another thing I do not think I made very clear is the reason for wanting to do the tracking while sitting: the project is intended to be used mostly by people who use a wheelchair and have little mobility. I still believe that with the Kinect we will have good possibilities for interaction.
Thank you all for now.
Hello,
I will leave a post here, as it may be useful for others, about the solution I found for tracking while sitting. It is actually what was already in this link:
http://forum.processing.org/two/discussion/3726/skeleton-alive
It is not a perfect solution yet, because it needs a certain movement to start, but a crucial point was the tip about the Kinect's position: it should be at chest height so that you can track while sitting. The information is at this link (in Portuguese):
https://support.xbox.com/pt-BR/xbox-360/kinect/accessibility-kinect#831aff2be3aa4736a8db1197d8c01c8e
Now we can do some testing, and we will see what we achieve as we start using it ...
Thanks,