Comments
Is it possible yet to link point clouds with RGB color? What about the audio recognition area, and speech?
Sorry for all these questions, but I need them for designing a new project.
The point cloud with RGB is almost finished; you can check it out in the dev branch on GitHub. Audio recognition will come in the near future; it is the next implementation step.
Thomas
"The audio recognition will come in a near future, is the next implementation step."
Great!!!
@tlecoz I do not understand. Do you know how to take the heartbeat in real time (with the Kinect v2 or not)?
@thomas, nice to hear about all your improvements. I hope they come as soon as possible, of course :) I think once you improve the RGB depth, the audio, and eventually the heartbeat, it will be a great library for anything.
So muscular pressure isn't implemented yet either?
Great job man, the visual artists of Processing will be so happy!!!
" I do not understand. Do you know how take the heartbeat in realtime (with kinect2 or not)?"
I do not know, but then I never know how to do something I have never done before... So no, I'm not 100% sure how to proceed, but yes, I have an idea. It is based on analyzing a small area of the HD video, following a specific part of the skeleton to keep the focus on that specific area.
It works perfectly with head tracking; it's even possible to track the eyes from the HD video by following the head of the skeleton. So I'm almost sure it will work if you target the neck (for example) and track the small variations of brightness in those pixels...
But maybe it will not work, it's just an idea....
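A minimal sketch of that brightness-sampling idea, assuming a KinectPV2 color frame: average the green channel (the channel that varies most with blood flow) over a small patch each frame and collect one sample per frame for later periodicity analysis. The fixed region below is a placeholder; in practice it would follow a skeleton joint such as the neck.

import KinectPV2.*;

KinectPV2 kinect;
ArrayList<Float> samples = new ArrayList<Float>();

void setup() {
  size(1920, 1080);
  kinect = new KinectPV2(this);
  kinect.enableColorImg(true);
  kinect.init();
}

void draw() {
  PImage img = kinect.getColorImage();
  image(img, 0, 0);
  // placeholder region; a real sketch would anchor this to a skeleton joint
  int rx = 900, ry = 400, rw = 40, rh = 40;
  img.loadPixels();
  float sum = 0;
  for (int y = ry; y < ry + rh; y++) {
    for (int x = rx; x < rx + rw; x++) {
      sum += green(img.pixels[y * img.width + x]);
    }
  }
  samples.add(sum / (rw * rh)); // one brightness sample per frame; look for ~1 Hz variation
}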
hey.
I added a couple of examples to the library.
One example uses the OpenCV library to find contours; in the sketch you can choose either to extract contours from the bodyIndex frame or from the depth image with a threshold.
There is also an example on how to map the depth to the color frame; there is a weird grid on top of the capture, which I'm going to try to fix soon.
And the last one is Point Cloud Color.
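For the contour example, here is a minimal sketch of that approach, assuming the OpenCV library in question is Greg Borenstein's OpenCV for Processing and that the body-track calls follow the library's usual enable/get pattern; the threshold value is an arbitrary assumption.

import KinectPV2.*;
import gab.opencv.*;

KinectPV2 kinect;
OpenCV opencv;

void setup() {
  size(512, 424);
  kinect = new KinectPV2(this);
  kinect.enableBodyTrackImg(true);
  kinect.init();
  opencv = new OpenCV(this, 512, 424);
}

void draw() {
  background(0);
  opencv.loadImage(kinect.getBodyTrackImage()); // or a thresholded depth image
  opencv.threshold(128);                        // binarize before finding contours
  noFill();
  stroke(0, 255, 0);
  for (Contour c : opencv.findContours()) {
    c.draw();                                   // outline each detected body blob
  }
}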
I'll be adding methods to be able to pass a custom depth image to the mapping (point cloud, coordinate mapper, depthToColor), so in the future you will be able to perform something like this:

PImage depthImg = kinect.getDepthImage();
//do something with the image, e.g. a shader pass
PImage depthToColorImg = kinect.getMapDepthToColor(depthImg);
Thomas
Greaaat Thomas!!!
I still haven't tried your library because I am waiting for my Kinect, but it sure looks good!!! Thank you for all the work you are doing!! What do you think would be the best way to work with shaders on point clouds?
Just received my Kinect (it's an Xbox Kinect with the adapter). I use a dual-boot Mac mini with OS X and Win8.1 on it, and installed everything on the Win8.1 side. Everything runs without problems so far. Great job Thomas! Thnx a lot! knut
Wonder if it works with Linux...
No chance. You need Win8/8.1.
Hey Thomas, great job on the depth-color mapping... it appears that the calibration between the cameras (the coefficients inside the Kinect SDK) is not as good as in the Kinect v1... I'll mention your work if I use it for a markerless gait analysis algorithm publication... hope to achieve it! Thanks a lot. Best regards
I've been looking at the HD Face Vertex example.
I'm wondering what the best way would be to pull that area from the RGB image, i.e. to extract the face?
So far I have increased the size of the dots and made them white to create a solid white area, which I can multiply by the RGB image.
It's fairly inaccurate though; I'm thinking I will try to improve on it by using depth information.
But I'd be happy for any suggestions!
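One way to tighten up that mask-and-multiply approach is to render the vertices into an offscreen PGraphics and apply it with PImage.mask(). This is only a sketch: faceVertices is a hypothetical list standing in for whatever the HD Face Vertex example exposes, and kinect is assumed to be initialized with the color frame enabled as in the other examples.

ArrayList<PVector> faceVertices = new ArrayList<PVector>(); // fill from the HD face example

PGraphics maskG = createGraphics(1920, 1080);
maskG.beginDraw();
maskG.background(0);
maskG.noStroke();
maskG.fill(255);
for (PVector v : faceVertices) {    // vertex positions in color-image coordinates
  maskG.ellipse(v.x, v.y, 12, 12);  // larger dots close the gaps between vertices
}
maskG.filter(BLUR, 4);              // soften the mask edge
maskG.endDraw();

PImage face = kinect.getColorImage().get(); // copy so masking doesn't touch the source
face.mask(maskG);                           // keep only the face region
image(face, 0, 0);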
Hi Thomas -
Great library ... thank you. I had a question about the body index masking: I'm looking to use the bodyIndexMasks that you demonstrate in your MaskTest example, but I'm hoping to use the masks of each detected body independently. In your example you display kinect.getBodyTrackImage(), which composites all the bodies. I'm wondering if these are exposed as individual masks anywhere in the library. I could definitely brute-force it from the composited image, but I'm hoping to get them at a lower level.
Thank you
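Until individual masks are exposed, one brute-force sketch is to rebuild them from the raw body-track values. This assumes getRawBodyTrack() follows the Kinect SDK's body-index convention (0-5 identifies a body, 255 means no body) and that kinect is already initialized; the encoding in this library may differ.

int[] raw = kinect.getRawBodyTrack(); // one value per depth pixel, 512 * 424
PImage[] masks = new PImage[6];       // the Kinect v2 tracks up to 6 bodies
for (int b = 0; b < 6; b++) {
  masks[b] = createImage(512, 424, ARGB);
  masks[b].loadPixels();
}
for (int i = 0; i < raw.length; i++) {
  if (raw[i] >= 0 && raw[i] < 6) {    // assumption: 0-5 = body index, 255 = background
    masks[raw[i]].pixels[i] = color(255);
  }
}
for (PImage m : masks) {
  m.updatePixels();                   // each mask now isolates one tracked body
}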
Hey Thomas, first just wanted to thank you for this wonderful library. You're a savior.
I also have a question; someone actually asked this last year, but I assume the code has changed since then, as I can't figure it out.
I'm looking to get the raw x, y, z point cloud data for the purpose of saving each frame out as an .obj file, sequentially. Is this still possible given the changes to the code? Right now I seem to only be able to get the z coordinates. Thanks!
Steve
hey! Yeah, you can obtain the raw x, y, z point cloud from the depth frame by using:
FloatBuffer pointCloudBuffer = kinect.getPointCloudDepthPos();
You can extract the values as an array:
float[] values = pointCloudBuffer.array();
Check out the example "PointCloudDepth".
Then you can just use an .obj library to save the data. I did something similar a couple of days ago: I recorded timed events of around 3 mins, saving the point cloud of each frame into a temporary vector, and when the recording is done, all the .objs are processed and saved (around 180 * 30 frames). The point clouds are saved after the recording because writing the .objs in real time can really slow things down.
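A sketch of that record-then-write pattern, assuming the bulk buffer copy works on your build (see the .array() discussion below); the file names and the key handler are illustrative choices, not part of the library.

import KinectPV2.*;
import java.nio.FloatBuffer;

KinectPV2 kinect;
ArrayList<float[]> frames = new ArrayList<float[]>();
boolean recording = true;

void setup() {
  size(512, 424, P3D);
  kinect = new KinectPV2(this);
  kinect.enablePointCloud(true);
  kinect.init();
}

void draw() {
  if (recording) {
    FloatBuffer buf = kinect.getPointCloudDepthPos();
    buf.rewind();
    float[] pts = new float[buf.remaining()];
    buf.get(pts);            // copy the frame out; the library reuses the buffer
    frames.add(pts);
  }
}

void keyPressed() {          // press any key to stop recording and write the sequence
  recording = false;
  for (int f = 0; f < frames.size(); f++) {
    PrintWriter obj = createWriter("frame_" + nf(f, 4) + ".obj");
    float[] pts = frames.get(f);
    for (int i = 0; i < pts.length; i += 3) {
      obj.println("v " + pts[i] + " " + pts[i + 1] + " " + pts[i + 2]);
    }
    obj.flush();
    obj.close();
  }
}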
Tell me how it goes, if you need more help.
Thomas
Wow, that code sounds perfect. Would it be possible to get a peek at it, or to get it as an example in the repository?
I'm just stumbling my way through Processing to get to my end result, which is more for the purposes of video. I'm using data from 3 Kinects to create live-action 3D models that I hope to insert into Maya animated scenes. Here's an example using Kinect v1s; as you can see, the resolution was much too low:
I want to use your library to make a capture app to get .obj and possibly color data (if I can figure out how to import colored .objs into Maya) from the v2, but I'm pretty new to Processing, so I need as much help as I can get.
Also, I'm encountering an "UnsupportedOperationException" when trying to convert the FloatBuffer to a float[]. Investigating a little got me to the hasArray() function, which returns false on pointCloudBuffer, meaning its backing array is inaccessible or read-only.
Any idea how to get around this?
Hey Thomas, still can't get the .array() method working. I also tried .get() to move the floats into a float array, but it crashes Java.
Am I missing something? I can't seem to get past the FloatBuffer.
Hey, yeah, I've also run into that problem, but I haven't had time to fix it or to figure out what to change. Try using a try/catch around the code. It's strange that the .array() method didn't work. Unfortunately I don't have access to a Kinect v2 at the moment, so I can't check it; I'll get back to you when I have one.
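For what it's worth: when hasArray() is false, the buffer is most likely a direct buffer with no accessible backing array, so .array() will always throw UnsupportedOperationException. A bulk get() into your own array is the standard java.nio workaround; a sketch, wrapped in try/catch as suggested above, assuming kinect is initialized:

import java.nio.FloatBuffer;

FloatBuffer buf = kinect.getPointCloudDepthPos();
try {
  buf.rewind();                                // read from the start of the buffer
  float[] values = new float[buf.remaining()];
  buf.get(values);                             // bulk copy works even when array() does not
  println("copied " + values.length + " floats");
} catch (RuntimeException e) {
  println("copy failed: " + e);                // guard, since one report says get() can crash too
}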
Awesome! Looking forward to hearing from you.
I have a problem: I have Windows 8.1 64-bit and Processing 2.2.1 64-bit, and I dropped the KinectPV2 folder into the libraries folder, but it doesn't recognize the library. Am I doing something wrong?
Hi,
thank you Thomas for your awesome work!
When I try to run examples from the KinectPV2 library, Processing throws this error:
This version of Processing only supports libraries and JAR files compiled for Java 1.6 or earlier. A library used by this sketch was compiled for Java 1.7 or later, and needs to be recompiled to be compatible with Java 1.6.
I'm running Processing 2.2.1 on Win8.1 64-bit with Kinect SDK v2 and Java 1.8.0_45-b14 (all freshly installed today). The Kinect is a 1520 (Xbox One model) with the adapter for Windows. Kinect Studio v2.0 works as it should, as I get the depth image and body detection.
I'm clueless when it comes to compiling libraries... Any help or insight would be greatly appreciated. Thanks!
Found the same issue: https://github.com/ThomasLengeling/KinectPV2/issues/15
This branch works like a charm: https://github.com/ThomasLengeling/KinectPV2/tree/revert-14-kirk-fix-rawdepthstream
I get this error on my Windows 8.1 64-bit machine with simple face-tracking code:
64 windows 8
Loading KinectV2
Creating Kinect object ...
ENABLE COLOR FRAME
ENABLE INFRARED FRAME
ENABLE SKELETON
SETTING FACE TRACKING
Done init Kinect v2
Version: 0.7.2
EXIT
Clossing kinect V2

# A fatal error has been detected by the Java Runtime Environment:
#
#  EXCEPTION_ACCESS_VIOLATION (0xc0000005) at pc=0x00007ffa7cd86009, pid=30700, tid=28272
#
# JRE version: Java(TM) SE Runtime Environment (7.0_40-b43) (build 1.7.0_40-b43)
# Java VM: Java HotSpot(TM) 64-Bit Server VM (24.0-b56 mixed mode windows-amd64 compressed oops)
# Problematic frame:
# C  [KinectPV2.dll+0x6009]
#
# Failed to write core dump. Minidumps are not enabled by default on client versions of Windows
#
# An error report file with more information is saved as:
# D:\Design\Assignments\Sem 6\TUI\India HCI\processing-2.2.1-windows64\processing-2.2.1\hs_err_pid30700.log
#
# If you would like to submit a bug report, please visit:
#   http://bugreport.sun.com/bugreport/crash.jsp
# The crash happened outside the Java Virtual Machine in native code.
# See problematic frame for where to report the bug.

Could not run the sketch (Target VM failed to initialize). For more information, read revisions.txt and Help > Troubleshooting.
please help @thomaslengeling
Hey Thomas, any progress on fixing the issue with the .array() function? I'm still hung up in the same spot.
cheers
hey Steve
I'll try to fix those issues in the following week. I am updating the library for Google Summer of Code; I just finished the implementation for Mac using libfreenect2, and after a couple of tests I'll get back to updating the KinectPV2 library. If you have a chance to test out the Kinect 2 for Mac, it would be really helpful.
https://github.com/shiffman/OpenKinect-for-Processing
Thomas
hey SteveCutler.
Try out this branch:
https://github.com/ThomasLengeling/KinectPV2/tree/0.7.3
The point cloud seems to work with Processing 3.0.
I'll merge it soon; just two examples are not working.
thomas
@thomaslengeling Great news! I'll test both out this weekend and let you know how it goes.
@thomaslengeling Which version of Processing 3.0 are you using? I tried it on 3.0a11 and it's throwing errors for all the multiplication signs in the size() function.
Also, I'm still getting an "UnsupportedOperationException" using .array() on the FloatBuffer.
I haven't tried the library with the new 3.0a11, only with 3.0a10, so try it with that one.
I'm going to test it on the new Processing.
thomas
hey, I just tested the library with Processing 3.0a11; there are a couple of examples that don't work with the new size() method. In some examples the window size is calculated with a multiplication, size(512*2, 424), so just change it to size(1024, 424). I'll fix this soon.
If you want to print out the values from the point cloud, you should be able to do this:

//get the points in 3D space
//depth point to camera position
FloatBuffer pointCloudBuffer = kinect.getPointCloudDepthPos();
for (int i = 0; i < kinect.WIDTHDepth * kinect.HEIGHTDepth; i++) {
  float valueX = pointCloudBuffer.get(i*3 + 0);
  float valueY = pointCloudBuffer.get(i*3 + 1);
  float valueZ = pointCloudBuffer.get(i*3 + 2);
}
If you want to obtain the depth values/distance in millimeters (0-4500), you need to access the raw depth data; see the depthTest example:

void setup() {
  size(512, 424);
  kinect = new KinectPV2(this);
  kinect.enableDepthImg(true);
  kinect.init();
}

void draw() {
  background(0);
  image(kinect.getDepthImage(), 0, 0);
  //raw data, int values from [0 - 4500]
  int[] rawData = kinect.getRawDepthData();
}
Edit: I'll upload a record point cloud example this week
thomas
There is a new Record Point Cloud example in the library, which records the depth point cloud and saves each frame in .OBJ format.
thomas
Hi Thomas, I'm new to your library [and not an expert with Processing] and there are some things I don't understand, compared to Kinect 1, about accessing the X & Y positions of the "usermap".
I wrote a simple program I needed 6 months ago with K1:
- calculate the "usermap"
- draw particles in the scene
- if the distance between the silhouette [userMapX & userMapY] and a particle is less than 10, the particle's color is green, else it is red.

if (tracking) {
  userMap = kinect.userMap();
  for (int i = 0; i < userMap.length; i += 8) {
    // if the pixel is part of the user
    if (userMap[i] >= 1) {
      float userMapX = (i % width) + 1;
      float userMapY = int(i / width); // note: divide by the frame width, not the height
      for (int j = 0; j < particleSystem.size(); j++) {
        distance2 = dist(particleSystem.get(j).x, particleSystem.get(j).y, userMapX, userMapY);
        if (distance2 <= myDist) {
          fill(0, 255, 0);
        }
      }
    }
  }
}
The thing is, I don't know where to start to achieve this with Kinect 2 and the new features we have thanks to your work, and I don't know which one would be the easiest to use while removing the background.
My first approach was to get the int[] rawData = kinect.getRawBodyTrack(); which I consider to be the "usermap", but I don't understand how to get the X & Y positions from this, with something like "getX". And I don't know if it's a good solution, because these are values from 0-255, so they don't give me clear information on position.
My second idea was to use the point cloud and calculate the distance between the particles of the scene and the points. But, once again, I don't get how to obtain the X & Y of each point of the user and compare the distances. I understood it better with K1 and the usermap.
Do you have any tips on this?
Thanks, Charles -
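For the first approach, a minimal sketch of how x and y can be recovered from the raw body-track array index, mirroring the K1 code above (particleSystem and myDist come from that sketch). It assumes getRawBodyTrack() returns one value per depth pixel in row-major order on the 512 x 424 frame, with 255 meaning background; that encoding is an assumption.

int[] rawData = kinect.getRawBodyTrack();
for (int i = 0; i < rawData.length; i += 8) {  // sample every 8th pixel, as before
  if (rawData[i] != 255) {                     // assumption: 255 = no body at this pixel
    float userMapX = i % 512;                  // the depth frame is 512 pixels wide
    float userMapY = i / 512;
    for (int j = 0; j < particleSystem.size(); j++) {
      float d = dist(particleSystem.get(j).x, particleSystem.get(j).y, userMapX, userMapY);
      if (d <= myDist) {
        fill(0, 255, 0);
      }
    }
  }
}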
Hello Thomas, thanks for writing this wonderful library. I was trying to import it, but it shows as incompatible on Processing 3.0. Please help.
Regards, Munish
Hi, I am having the same trouble that psanches had at the beginning. I seem to be doing everything right; the only difference is that I'm running on Windows 10.
Hi guys! Is this library working on Windows 10 with a Kinect v1? Thanks!!
In case anyone else is struggling: to get the MapDepthToColor example working, put
kinect.enablePointCloud(true);
before
kinect.init();
Could using DirectX 12 be a problem for the library?
I'm having similar point cloud issues as the people above. In 0.7.5, none of the point cloud examples would run at all. Now, with 0.7.7, they run, but they all use OGL shaders to render the point cloud, and any attempt to access the raw X, Y, Z data of the cloud, like using .array() on a buffer created with getPointCloudDepthPos(), just crashes Java.
Hi guys,
Does anybody know how to get joint / bone orientation?? I've been struggling with this one for a while. I can get the 2D rotation by comparing two joint positions, but having 3D rotation information would be great.
Regards, Sebastian
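For the 2D case described above, atan2 over the difference of two joint positions gives the bone angle in the image plane. A minimal sketch; the two PVectors are hypothetical values standing in for positions read from the skeleton API:

// suppose these came from the skeleton API (hypothetical values)
PVector shoulder = new PVector(200, 150);
PVector elbow    = new PVector(260, 210);
float angle = atan2(elbow.y - shoulder.y, elbow.x - shoulder.x); // bone rotation in radians

// true 3D orientation would need the per-joint orientation quaternions that the
// Kinect v2 SDK computes (JointOrientation); whether this library exposes them
// is something to check in the source.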
I'm trying to get this lib working in Eclipse, but I get the following error. Here's what I've done:
- Added the KinectPV2.jar to libraries
- Referenced the lib folder as a native library location
- Added the lib folder and everything within it to my project folder

64 windows 10
Loading KinectV2
java.lang.NoClassDefFoundError: com/jogamp/common/nio/Buffers
	at KinectPV2.Device.<init>(Device.java:130)
	at KinectPV2.KinectPV2.<init>(KinectPV2.java:38)
	at UsingProcessing.setup(UsingProcessing.java:18)
	at processing.core.PApplet.handleDraw(PApplet.java:2393)
	at processing.awt.PSurfaceAWT$12.callDraw(PSurfaceAWT.java:1540)
	at processing.core.PSurfaceNone$AnimationThread.run(PSurfaceNone.java:316)
Caused by: java.lang.ClassNotFoundException: com.jogamp.common.nio.Buffers
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:335)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 6 more

Any ideas?