How to tell where the user is pointing with a Kinect

edited August 2017 in Kinect

I've been using the Kinect 2 and it works pretty well in Processing 3, but now I want to develop some interactivity.

What I want to do is get the data from the 3D skeleton and tell which object the user is pointing at.

Like point at the lamp and the lamp turns on.

Point at the PC and a video starts playing.

Controlling the objects is easy, but I'm having a lot of trouble conceptualizing the 3D picking.

How do I detect what the 3D skeleton is pointing at? I imagine I should cast a ray out of the user's hand, along the finger, and check for intersections with collision-detection spheres, but I have no clue how to implement that. I've tried with some vectors, but I just can't check collisions with them. I've read some of the 3D picking threads, but they all talk about picking objects with the mouse.

Can anyone help me with this? I'm using the latest Processing, 3.3.5.

Answers

  • edited August 2017

    Thank you for the links. I've read through them, but I'm having a conceptual problem. As I said above, the examples of 3D picking in the forum deal with clicking from the camera's point of view.

    What I need is more like shooting a bullet out of a "gun" in 3d space. The gun is in the user's hand.

  • So the index finger is your gun

    In what form do you have its orientation?

    Two angles? Spherical angles?

  • I'm creating a line between the hand point and the index finger point. It seems to be a good approximation of where you are pointing.

  • So you don't have angles yet, but two 3D points?

    Correct?

  • Yes. With those two 3D points I can basically draw a 3D line.

  • And now we need to make the line longer and longer until it hits a target, e.g. your lamp.

    This is a vector calculation.

    But is the position of the lamp known as a 3D position?

  • First of all, we need to define the position of the lamp.

    The way I'm doing it now: I walk up to the lamp and touch it. Where my hand is, I create a sphere (X, Y, Z and a radius).

    Now how do I know I'm pointing at that virtual sphere, regardless of where I am in the room?

    The way I draw my line right now:

    I have a PVector A that points to my fingerXYZ, and a PVector B that points to my handXYZ.

    PVector L = PVector.sub(A, B);

    This gives me a PVector L that points from my hand to my finger.

    Now what? If I just set the magnitude of L arbitrarily, I won't know exactly when the tip hits the sphere; it might just pass outside it.

  • It has to do with map

    Some of the gurus like quark are needed here....

  • @Chrisir If you want quark's attention, you should tag him: @quark :-B

    So your Kinect unit is fixed in space, right? You teach your unit where the lamp is. Can you get an XYZ position using the Kinect data?

    Now, you are saying that you get points for your hand and your finger. Is that true? Can you distinguish between your hand and your fingers? If you can, then half of the problem is solved... However, I don't think it is as trivial as it sounds.

    There is another challenge, related to the resolution of your depth data. You need high-resolution depth data to calculate the L vector as you defined it above: if the resolution is coarse, then when you project along the L vector, any error in the measurement is amplified proportionally with distance. If you are working within a standard-size room, you can fight the error by making the sphere that defines the lamp big.

    Lastly, the solution to your problem (finger pointing at the lamp) is based on the perpendicular distance from a point to a line in 3D: https://www.google.ca/search?q=perpendicular+distance+between+a+point+and+a+line+in+3d&rlz=1C1CHWA_enCA688CA688&oq=perpendicular+distance+between+a+point+and+a+&aqs=chrome.2.0j69i57j0l4.15166j0j8&sourceid=chrome&ie=UTF-8
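
    In vector form, with A any point on the ray, v the ray direction, and P the lamp position, that perpendicular distance works out to |(P - A) × v| / |v|, i.e. the magnitude of a cross product divided by the magnitude of the direction vector.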

    Kf

  • Answer ✓

    In a 'traditional' 3D game, every object surface, no matter what it looks like, is constructed from triangles in 3D space. To detect collisions you project a ray through 3D space from a known position (end of the gun, eye position, etc.), find the nearest triangle that it intersects, and from that the actual object (e.g. the lamp).

    Your situation is different, so the following approach is likely to suit your needs.

    If you represent the target (lamp or whatever) as a single point in space, then you can simply calculate the closest distance by which the ray passes the point and compare it to the radius of some collision sphere.

    As @kfrager points out, the accuracy of the closest-distance calculation very much depends on the accuracy of the ray direction and on the distance to the target. An alternative approach is to measure the angle between the ray and the line from the ray's start to the target, or even some combination of these two measurements.

    The sketch below shows how you can calculate these two values.

    PVector gun = new PVector(1, 1, 2);
    PVector dir = new PVector(5, 0, 1);
    PVector target = new PVector(51, 1, 15);
    
    void setup() {
      float distance = getClosestApproach(target, gun, dir);
      float angle = getAngleOffset(target, gun, dir);
      println(distance, angle);
    }
    
    /**
     Calculates the closest approach a ray makes with a point in 3D space.
     p = the point in space
     pol = any point on the line
     dir = the direction the ray travels in 3D space
    
     dir must not be a zero length vector!
     returns the distance
     */
    float getClosestApproach(PVector p, PVector pol, PVector dir) {
      PVector d0 = PVector.sub(pol, p);
      PVector cp = d0.cross(dir);
      return cp.mag()/dir.mag();
    }
    
    /**
     Calculates the angle between a ray starting from a known position
     and the direction to a point in 3D space.
    
     p = the point in space
     sp = ray start position
     dir = the direction the ray travels in 3D space
    
     p and sp must not be the same 3D position
     dir must not be a zero length vector!
     returns the angle offset (radians)
     */
    float getAngleOffset(PVector p, PVector sp, PVector dir) {
      PVector d0 = PVector.sub(p, sp);
      return PVector.angleBetween(d0, dir);
    }
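
    If you also want the exact point where the ray first enters the sphere, rather than just a hit/miss test, a standard ray-sphere intersection gives it. Below is a minimal sketch in the same style; hitSphere and its parameters are illustrative names, not from the Processing API:

    /**
     Returns the first point where a ray hits a sphere, or null if it misses.
     sp = ray start position
     dir = the direction the ray travels (must not be zero length)
     centre = the sphere centre
     radius = the sphere radius
     */
    PVector hitSphere(PVector sp, PVector dir, PVector centre, float radius) {
      PVector d = dir.copy().normalize();
      PVector toCentre = PVector.sub(centre, sp);
      float t = toCentre.dot(d);                 // distance along the ray to the closest point
      if (t < 0) return null;                    // sphere is behind the ray start
      float dist2 = toCentre.magSq() - t*t;      // squared closest-approach distance
      if (dist2 > radius*radius) return null;    // ray passes outside the sphere
      float half = sqrt(radius*radius - dist2);  // half the chord length inside the sphere
      return PVector.add(sp, PVector.mult(d, t - half)); // first intersection point
    }

    For the lamp problem a plain hit/miss is enough: getClosestApproach(target, gun, dir) < radius, combined with a check like the t < 0 test above so you don't trigger targets behind the hand.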
    
  • Thanks for helping out!

    As always.....

  • It worked! Thank you very much!

  • Wow!

    This is an innovation!

    Can you describe your setup hardware-wise a little more? How does it work?

    Do you use it in your everyday work?

    A small tutorial?

  • You mean me or someone else in this thread?

  • I mean you - you are the expert now

    And I think it's interesting: if you have a computer with a Kinect running all the time, you can have gestures for your room.

    I am just curious

    ;-)

  • :)) If I'm the expert, everyone is screwed.

    I don't run it all the time. I just connected some Arduinos and I'm sending them a simple serial command to turn the light on and off. The code is a mess, but it boils down to:

    Phase 1: detect the skeleton. If a mouse click is detected, spawn a sphere at the location of the right hand.

    Switch to phase 2 when 'K' is pressed.

    Phase 2: assign an Arduino to each sphere.

    Switch to phase 3 when 'K' is pressed.

    Phase 3: detect where the skeleton is pointing using the techniques above. Check the distances to all spheres; if distance < radius for sphere N, send a trigger to Arduino N (roughly as in the sketch below).

    Right now I have it connected to two Arduinos. One beeps, the other turns on an LED. It's fairly unimpressive.
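
    Roughly, and only as a sketch, phase 3 could look like the code below. The names, the ArrayList/FloatList storage and the one-byte serial command are all made up for illustration; it assumes the spheres and serial ports were stored during phases 1 and 2.

    import processing.serial.*;

    // Filled in during phases 1 and 2 (illustrative names).
    ArrayList<PVector> centres = new ArrayList<PVector>();  // sphere centres
    FloatList radii = new FloatList();                      // sphere radii
    Serial[] arduinos;                                      // one serial port per sphere

    // Phase 3: call every frame with the current hand and index-finger joints.
    void checkPointing(PVector hand, PVector finger) {
      PVector dir = PVector.sub(finger, hand);  // ray from hand through finger
      for (int i = 0; i < centres.size(); i++) {
        // closest distance between the ray and sphere i (the cross-product trick above)
        PVector d0 = PVector.sub(hand, centres.get(i));
        float dist = d0.cross(dir).mag() / dir.mag();
        if (dist < radii.get(i)) {
          arduinos[i].write('1');  // trigger Arduino i
        }
      }
    }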

  • edited September 2017

    No, seriously, this is a startup idea.

    You know the Sonos room speakers, or the Amazon Echo / Alexa, etc.

    But now instead imagine you had your Arduino and Kinect as a nice slim box in your living room.

    It could detect your gestures and you could switch on lights, television or music just by pointing. Amazing! I mean, I would buy it. It's more natural than speaking to the Amazon Echo.

    It would have a teach mode where you teach it lamp locations once and for all and then use it throughout (always on).

  • No way. Firstly, I'm heavily dependent on Microsoft and all the software the Kinect comes boxed with.

    Secondly, it's fucking twitchy as hell and the setup is really complicated.

    Thirdly:

    A loud clatter of gunk music flooded through the Heart of Gold cabin as Zaphod searched the sub-etha radio wave bands for news of himself. The machine was rather difficult to operate. For years radios had been operated by means of pressing buttons and turning dials; then as the technology became more sophisticated the controls were made touch-sensitive--you merely had to brush the panels with your fingers; now all you had to do was wave your hand in the general direction of the components and hope. It saved a lot of muscular expenditure, of course, but meant that you had to sit infuriatingly still if you wanted to keep listening to the same program.

    If you want to develop this as a startup, be my guest.

  • Douglas Adams is the best!

    ;-)
