Kinect for Windows V2 Library for Processing

Hey.

I just started developing a Kinect One library for Processing. It uses the Kinect One SDK beta (K2W2), so it only works on Windows ):

You can get the current version here; it is still in beta:

https://github.com/ThomasLengeling/KinectPV2

I have only tested it on my machine, so please send me your comments and suggestions.

It currently supports only color, depth, and infrared image capture. In the coming weeks I'll be adding features like skeleton tracking, point clouds, and user tracking. Also, the K2W2 SDK is still in beta form, so I will be updating the library over the next couple of weeks.
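
Basic usage looks roughly like this (a minimal sketch; the library is still in beta, so the enable/get method names may change):

    import KinectPV2.*;

    KinectPV2 kinect;

    void setup() {
      size(1280, 720);
      kinect = new KinectPV2(this);
      kinect.enableColorImg(true);   // likewise enableDepthImg(true) / enableInfraredImg(true)
      kinect.init();
    }

    void draw() {
      background(0);
      // draw the latest color frame, scaled to the window;
      // getDepthImage() / getInfraredImage() would give the other streams
      image(kinect.getColorImage(), 0, 0, width, height);
    }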

Thomas

Comments

  • That's awesome that you are doing this. It doesn't seem to work for me, though; it won't let me get past kinect = new KinectPV2(this);

    It says "A library relies on native code that's not available."

    Any suggestions?

  • Hmm, weird. Do you have a 64-bit machine with USB 3.0? Have you tried the examples from the SDK?

  • I just updated the Git repository.

  • Yes, it's a Surface Pro. The Kinect Studio examples work, but not the Processing sketches. I've also tried Cinder and OF, but in those cases it's probably just incompetence on my part, since it was my first time trying to use them with the Kinect. I'm going to try reinstalling the Kinect SDK.

    Is there anything else you think I should try? I'm really bummed that I can't do anything with it.

  • Nope, still not working. :/

    The full error looks like this: it highlights the kinect = new KinectPV2(this); line.

    The summary error says "A library used by this sketch is not installed properly." But how many ways are there to install a library in Processing? You just have to put it in the libraries folder, right? Edit: I checked, and the library folder is called KinectPV2, just like the .jar in its library folder.

    Then it says in the console:

    64 windows 8 A library relies on native code that's not available. Or only works properly when the sketch is run as a 32-bit application.

  • Hmm, could you confirm that you are using a 64-bit Processing version?

    Yeah, you should just copy the KinectPV2 folder into your Processing libraries folder. The library prints out the bit version of your computer and the OS: "64 windows 8".

  • Yeah, I am. I've tried both versions; they alternate between messages saying it only works properly when the sketch is run as a 64-bit or a 32-bit application.

  • Also, there is an error in the import line in the point cloud example. It says:

     import KinectPV2*.;

    but I think it should be:

     import KinectPV2.*;
    

    Another error message that comes up with the point cloud example:

    64 windows 8
    Exception in thread "AWT-EventQueue-0" java.lang.UnsatisfiedLinkError: C:\Users\psanches\Documents\Processing\libraries\KinectPV2\library\KinectPV2.dll: Can't find dependent libraries
        at java.lang.ClassLoader$NativeLibrary.load(Native Method)
        at java.lang.ClassLoader.loadLibrary1(Unknown Source)
        at java.lang.ClassLoader.loadLibrary0(Unknown Source)
        at java.lang.ClassLoader.loadLibrary(Unknown Source)
        at java.lang.Runtime.loadLibrary0(Unknown Source)
        at java.lang.System.loadLibrary(Unknown Source)
        at KinectPV2.Device.<clinit>(Device.java:18)
        at PointCloudOGL.setup(PointCloudOGL.java:37)
        at processing.core.PApplet.handleDraw(PApplet.java:2361)
        at processing.opengl.PJOGL$PGLListener.display(PJOGL.java:862)
        at jogamp.opengl.GLDrawableHelper.displayImpl(GLDrawableHelper.java:665)
        at jogamp.opengl.GLDrawableHelper.display(GLDrawableHelper.java:649)
        at javax.media.opengl.awt.GLCanvas$10.run(GLCanvas.java:1289)
        at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1119)
        at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:994)
        at javax.media.opengl.awt.GLCanvas$11.run(GLCanvas.java:1300)
        at javax.media.opengl.Threading.invoke(Threading.java:193)
        at javax.media.opengl.awt.GLCanvas.display(GLCanvas.java:541)
        at javax.media.opengl.awt.GLCanvas.paint(GLCanvas.java:595)
        at sun.awt.RepaintArea.paintComponent(Unknown Source)
        at sun.awt.RepaintArea.paint(Unknown Source)
        at sun.awt.windows.WComponentPeer.handleEvent(Unknown Source)
        at java.awt.Component.dispatchEventImpl(Unknown Source)
        at java.awt.Component.dispatchEvent(Unknown Source)
        at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
        at java.awt.EventQueue.access$200(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.awt.EventQueue$4.run(Unknown Source)
        at java.awt.EventQueue$4.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.awt.EventQueue.dispatchEvent(Unknown Source)
        at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.run(Unknown Source)
    java.lang.RuntimeException: java.lang.NoClassDefFoundError: Could not initialize class KinectPV2.KinectPV2
        at com.jogamp.common.util.awt.AWTEDTExecutor.invoke(AWTEDTExecutor.java:58)
        at jogamp.opengl.awt.AWTThreadingPlugin.invokeOnOpenGLThread(AWTThreadingPlugin.java:103)
        at jogamp.opengl.ThreadingImpl.invokeOnOpenGLThread(ThreadingImpl.java:206)
        at javax.media.opengl.Threading.invokeOnOpenGLThread(Threading.java:172)
        at javax.media.opengl.Threading.invoke(Threading.java:191)
        at javax.media.opengl.awt.GLCanvas.display(GLCanvas.java:541)
        at processing.opengl.PJOGL.requestDraw(PJOGL.java:688)
        at processing.opengl.PGraphicsOpenGL.requestDraw(PGraphicsOpenGL.java:1651)
        at processing.core.PApplet.run(PApplet.java:2256)
        at java.lang.Thread.run(Unknown Source)
    Caused by: java.lang.NoClassDefFoundError: Could not initialize class KinectPV2.KinectPV2
        at PointCloudOGL.setup(PointCloudOGL.java:37)
        at processing.core.PApplet.handleDraw(PApplet.java:2361)
        at processing.opengl.PJOGL$PGLListener.display(PJOGL.java:862)
        at jogamp.opengl.GLDrawableHelper.displayImpl(GLDrawableHelper.java:665)
        at jogamp.opengl.GLDrawableHelper.display(GLDrawableHelper.java:649)
        at javax.media.opengl.awt.GLCanvas$10.run(GLCanvas.java:1289)
        at jogamp.opengl.GLDrawableHelper.invokeGLImpl(GLDrawableHelper.java:1119)
        at jogamp.opengl.GLDrawableHelper.invokeGL(GLDrawableHelper.java:994)
        at javax.media.opengl.awt.GLCanvas$11.run(GLCanvas.java:1300)
        at java.awt.event.InvocationEvent.dispatch(Unknown Source)
        at java.awt.EventQueue.dispatchEventImpl(Unknown Source)
        at java.awt.EventQueue.access$200(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.awt.EventQueue$3.run(Unknown Source)
        at java.security.AccessController.doPrivileged(Native Method)
        at java.security.ProtectionDomain$1.doIntersectionPrivilege(Unknown Source)
        at java.awt.EventQueue.dispatchEvent(Unknown Source)
        at java.awt.EventDispatchThread.pumpOneEventForFilters(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForFilter(Unknown Source)
        at java.awt.EventDispatchThread.pumpEventsForHierarchy(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.pumpEvents(Unknown Source)
        at java.awt.EventDispatchThread.run(Unknown Source)
    
  • It means that the library has not been installed properly. Try installing it again: http://wiki.processing.org/w/How_to_Install_a_Contributed_Library

    I just made some changes to the repository.

    thomas

  • I built the library using Java JDK 1.8.0_05; that could be the source of the problem.

  • Ahh, still not working. :( Thanks a lot anyway; if I figure it out, I'll post the solution.

  • Hey, I just solved the issue; you can download the new version on GitHub.

    There was an issue with the .dlls for Windows 8.1. I think I solved it; I tried a fresh install on a Windows 8.1 machine, and it worked.

    Thomas

  • Ah man, you're a genius! It works!

  • Hey, to get individual joints I'm trying this, but it says the field x is not visible:

     for (int i = 0; i < skeleton.length; i++) {
       if (skeleton[i].isTracked()) {
         Joint[] joints = skeleton[i].getJoints();
         ellipse(joints[skeleton[i].JointType_Head].x, 50, 50, 50);
         skeleton[i].drawBody();
         skeleton[i].drawHandStates();
       }
     }
    
  • Ah, I forgot to add the getters for the positions (x, y, z), type, and states. Sorry about that.

    You can try the repository again.

    So now you can do something like this, using getX(), getY(), getZ(), getState(), and getType():

      for (int i = 0; i < skeleton.length; i++) {
        if (skeleton[i].isTracked()) {
          Joint[] joints = skeleton[i].getJoints();
          float x =  joints[kinect.JointType_Head].getX();
          float y =  joints[kinect.JointType_Head].getY();
          ellipse(x, y, 50, 50);
          skeleton[i].drawBody();
          skeleton[i].drawHandStates();
        }
      }
    

    thomas

  • Hey, I uploaded a video of the point cloud using the library. I don't know why, but the video render reduced the point cloud quality; live it looks good.

    I still need to fix things and add more features.

    thomas

  • Hey, just posting the issues I'm getting in case you want to hear them; feel free to ignore any of it.

    I'm using the Kinect library together with Box2D, and the Joint class clashes between the two because I think they both have a class with that name. Is there an easy workaround? Maybe you could call it KJoint or something.

  • No problem, I can change the class to another name like KJoint. I'll update it soon.

  • OK, I changed the class name to KJoint, so it should be working now. Thanks!
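
    The skeleton loop from above should only need the type name swapped (a minimal sketch, assuming the getters are unchanged after the rename):

      for (int i = 0; i < skeleton.length; i++) {
        if (skeleton[i].isTracked()) {
          // the joint class is now KJoint, so it no longer collides with Box2D's Joint
          KJoint[] joints = skeleton[i].getJoints();
          float x = joints[kinect.JointType_Head].getX();
          float y = joints[kinect.JointType_Head].getY();
          ellipse(x, y, 50, 50);
          skeleton[i].drawBody();
          skeleton[i].drawHandStates();
        }
      }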

  • Hi Thomas, thank you a lot for your work. I have two questions: 1) Is the old getCom(), the center of mass, implemented? 2) What about the z coordinate in the skeleton? joints[kinect.JointType_Head].getZ() is always 0.0, according to the value in the library's skeleton.java; maybe it isn't implemented yet. Thank you, Enzo

  • Hey, thanks for asking. I haven't implemented that yet; right now it's just a simple skeleton. I'll implement it in the next week, ok!

  • Great! Thank you very much Thomas.

  • Haha, I was just about to ask about getZ(). Thanks a lot, Thomas. If you ever need help with anything, let me know; I don't know much, but yep, just let me know.

  • Hey Thomas, I was wondering: in the skeleton example, how come it sometimes sees more than one person and sometimes not, even though in the top left you can always see them?

  • Hello! I've been using your library every day since I received my Kinect. Thanks a lot for your work! It works perfectly!

  • Actually, I have an issue right now... I would like to know whether the mouth is open or not (I'm trying to build a tool for a tetraplegic person). I know the Kinect v2 supports this feature, but when I process KJoint.getState() on the head joint, I always get 2.

    Any idea?

    Thanks in advance!

  • Hey, I updated the skeleton tracking, fixed some bugs (and added new ones), so the skeleton is now working. I have only tried it with one user.

    For face tracking, I'll implement that in the next couple of days; the new SDK just updated that feature.

    Thomas

  • Added a new example: SkeletonMask.

  • You are a boss! Thank you for your time!

  • I tried your last example! It works perfectly! Much better than before (and I was already impressed). I only work with the head and neck joints, and there were sometimes some bugs in the previous version (nothing dramatic, but...), and now it's perfectly stable (the head/neck at least).

    Thank you again!

  • Sorry for the basic question, but I am just looking at this library and am wondering what device(s) it supports (or will support). Is it only the Kinect One that comes with the Xbox One, or also the Kinect for Windows v2, or the forthcoming standalone Kinect One (or all of those)? I'm considering purchasing one of them but would not like to get something that is not supported....

    Also, can the new SDK and this library coexist with the SimpleOpenNI and Xbox 360 setup that I do have working?

    Cheers.

  • Hello!

    I don't know about the "Kinect for Xbox One", but it works with the "Kinect v2 for Windows":

    http://www.microsoft.com/en-us/kinectforwindows/

  • Yeah, the Kinect One and the Kinect for Windows v2 are the same; if you have the Kinect One you need some adapters, which they don't sell...

    I haven't tried it with SimpleOpenNI, only with the Kinect SDK for v1 (Xbox), and they work together. If you have a chance to try SimpleOpenNI with this lib, please tell me.

    Just check that your chipset supports USB 3.0 and that you have a video card.

    Thomas

  • Hello Thomas, I too thank you for the important work you are doing. I was wondering whether the getZ() of the joints has made progress in the new version that you have uploaded. Using your last example (SkeletonMask.pde) I noticed that the value of getZ() seems to always be 0... maybe it's still in development? Thanks a lot, Paolo

  • No it doesn't.

  • It doesn't, but you can actually extrapolate the z value from the skeleton data without the z positions of the joints.

    You only have to compare the distance between two joints (the head and neck, for example). The first value you compute becomes your reference value; then, if the distance between the two joints increases, it means the user is coming closer to the Kinect, and if the distance decreases, the user is moving away from it.

    It's not a perfect solution, but it works.
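
    A rough sketch of the idea, using only the getX()/getY() getters from earlier in the thread (it assumes a kinect.JointType_Neck constant exists alongside kinect.JointType_Head):

      float refDist = -1;  // head-to-neck distance on the first tracked frame

      void estimateZ(KJoint[] joints) {
        float dx = joints[kinect.JointType_Head].getX() - joints[kinect.JointType_Neck].getX();
        float dy = joints[kinect.JointType_Head].getY() - joints[kinect.JointType_Neck].getY();
        float d  = sqrt(dx*dx + dy*dy);
        if (refDist < 0) refDist = d;  // first measurement becomes the reference
        else if (d > refDist) println("user is closer than at the start");
        else if (d < refDist) println("user is farther away than at the start");
      }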

  • hey,

    Well, I forgot to mention that the skeleton is mapped to the depth image, so there are only two coordinates; a point in the skeleton matches the 2D point in the depth image, just like in the picture above, so the depth image should provide the z value.
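
    In the meantime, something like this could read an approximate z under a joint (a sketch only: it assumes a raw-depth accessor such as getRawDepthData(), returning the 512x424 frame in millimeters, which may not be in the beta yet):

      int[] rawDepth = kinect.getRawDepthData();            // assumed accessor, 512 x 424 values in mm
      int hx = (int) joints[kinect.JointType_Head].getX();  // joint coordinates are in depth-image space
      int hy = (int) joints[kinect.JointType_Head].getY();
      if (hx >= 0 && hx < 512 && hy >= 0 && hy < 424) {
        println("head depth: " + rawDepth[hy * 512 + hx] + " mm");
      }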

    In the next update I'll provide a 3D skeleton with 3D points and orientations.

    Thomas

  • Thank you, fanthomas (in effect that is the solution I have already adopted), and thanks to Thomas for the coming updates. Best regards

  • Sure, I shall try it with SimpleOpenNI and report back. But which adapters do you mean? I have just bought a Kinect One (it has not arrived yet), and now it sounds like it will not work without some additional hardware (which some quick Googling failed to locate)? Thanks.

  • You "only" need an USB3.0 input and windows 8 (or greater). You don't need very good graphic card. I only have windows 8 on my laptop and every examples (including pointcloud) runs at 59-60 FPS. I think all the computation are made inside the device (it's a very heavy object compared to kinect v1 - it has almost the same weight of my laptop actually - ), then you don't need a very expensive computer to work with it, but you need USB 3.0 and you need Win8 - because Microsoft need to sell it I bet -

  • @fanthomas Not sure if you were answering my question (thanks anyway). USB 3 and Win 8 are OK, but from my research it appears that I will not be able to connect a "Kinect One" (that is, the Xbox version) to my PC without a "Kinect for Windows v2" breakout box (which I don't see for sale anywhere) or a physically hacked cable (which sounds too involved without tools or experience). Is that right?

  • Ahhhh, OK!

    Yes, I confirm. Unlike the Kinect v1 from the Xbox, which could be used with a computer, it's not possible with the Kinect v2, because the cable from the Kinect carries (I think) both USB 3.0 and power in the same wire, so you need an adapter that breaks out a plain USB 3.0 connector.

    I have no Xbox One, so I'm not sure, but I suppose you can plug the Kinect directly into it (with the Xbox One version). MS probably added a special connector on the Xbox One dedicated to the Kinect, to make sure coders will buy the Windows version (they act exactly as Apple does with their devices...).

  • Right, as I feared. Then I suggest removing references to Kinect One here and on GitHub and calling it "Kinect for Windows v2" instead, given that the Kinect One cannot be used with a PC out of the box. Here is how the adapters compare, for those interested:

    http://blogs.msdn.com/b/kinectforwindows/archive/2014/03/27/revealing-kinect-for-windows-v2-hardware.aspx

    People confirm that the rest of the hardware is exactly the same and that both Kinects will work with the right adapter.

  • I found an article in French. It confirms that the Kinect for Xbox One is not and will never be compatible with a computer. MS will not sell adapters; you need to buy a Kinect v2 for Windows... :(

  • Hey.

    I changed the GitHub page to avoid that confusion. The Kinect One and the K4W2 are the same hardware; it's just that the Kinect One is missing the adapters, and I don't know if Microsoft is going to sell them any time soon.

  • Hi

    Thanks for sharing this nice library; on my laptop it is working fine.

    I have a question: is it possible to mask the color image with the body and skeleton?

    Thanks!

  • Hey!

    Right now only the depth. In the next couple of days I'll update the lib with the color image mapped to the skeleton; I don't know if I can map the mask to the color image, I'm going to check that out.

    thomas

  • Hello Thomas,

    Thank you again for all your work!

    I'm sorry to insist a bit, but can you confirm it's possible to add the "head states"?

    I'm still working on an app for tetraplegic people, and it would be very helpful to know if the mouth is open or not (to create a "click event" based on it). There is no hurry, but I need to know whether you're sure it will be available in Processing or whether I should consider (trying to) do the work in Visual Basic.

  • Hey,

    Well, it is possible to get the face information.

    You could try the dev branch; I just updated an example with simple face tracking. It's almost implemented; over the weekend I should be able to finish implementing the face information in Processing, just like the example in the SDK:

    https://github.com/ThomasLengeling/KinectPV2/tree/dev
