Reduce PointCloud Resolution in KinectPV2

edited August 2017 in Kinect

Does anyone have a solution for reducing the number of points read and rendered in Lengeling's fantastic KinectPV2 library/sketches? I'd love to have 10% of the native resolution/points. I'm using a Kinect v2 in a sketch based on his PointCloudOGL example. Ideally without having to convert the data to x,y,z points, for efficiency, but whatever it takes. Losing my mind but not the points. Thanks!



  • It is hard to suggest anything without seeing any code. Anyway, if you have a container of points, you can randomly select 10% of them. This assumes all the points in your storage/container are relevant.
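
    A minimal sketch of that random approach, assuming the points live in an interleaved float[] of x,y,z triples (the class and method names here are made up for illustration, not part of KinectPV2):

    ```java
    import java.util.Random;

    // Randomly keep roughly keepFraction of the points in an interleaved
    // x,y,z float buffer. A fixed seed makes the selection repeatable.
    public class PointSampler {
        public static float[] randomSubsample(float[] xyz, double keepFraction, long seed) {
            Random rng = new Random(seed);
            float[] out = new float[xyz.length];
            int n = 0;
            for (int i = 0; i + 2 < xyz.length; i += 3) {
                if (rng.nextDouble() < keepFraction) {
                    out[n++] = xyz[i];
                    out[n++] = xyz[i + 1];
                    out[n++] = xyz[i + 2];
                }
            }
            float[] trimmed = new float[n];
            System.arraycopy(out, 0, trimmed, 0, n);
            return trimmed;
        }
    }
    ```

    Note the selection is re-rolled per point, so without a fixed seed the surviving points would change every frame.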


  • Have a look at the documentation for glVertexAttribPointer, which is what that code uses...

    I think you can change the stride so that it hops over points in the buffer, i.e. given

    x0 y0 z0 x1 y1 z1 x2 y2 z2 x3 y3 z3...

    normally that'll use x0, y0, z0 as the first point, x1, y1, z1 as the second, etc. But by increasing the stride it can use x0, y0, z0, skip the next couple, use x3, y3, z3, and so on, like the STEP clause in a BASIC FOR loop.

    The stride is specified in bytes, so for tightly packed floats skipping to every Nth point means a stride of N * 3 * sizeof(float). You'll have to experiment with it (I can't, I don't have the hardware). You may also need to change the size variable in the sketch appropriately.
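
    A CPU-side sketch of that every-Nth idea, assuming an interleaved x,y,z float[] (the helper name is illustrative; with glVertexAttribPointer itself the equivalent would be a stride of N * 3 * 4 bytes for tightly packed floats):

    ```java
    // Keep every Nth point of an interleaved x,y,z buffer -- the CPU-side
    // equivalent of widening glVertexAttribPointer's stride.
    public class StrideDecimator {
        public static float[] everyNth(float[] xyz, int n) {
            int points = xyz.length / 3;
            int kept = (points + n - 1) / n;   // ceil(points / n)
            float[] out = new float[kept * 3];
            int w = 0;
            for (int p = 0; p < points; p += n) {
                out[w++] = xyz[3 * p];
                out[w++] = xyz[3 * p + 1];
                out[w++] = xyz[3 * p + 2];
            }
            return out;
        }
    }
    ```

    Because the Kinect buffer is laid out in scan order, an every-Nth pick tends to produce a striped pattern on screen rather than a truly even spatial spread.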

  • Also, I've no idea how the points in that buffer are arranged. It could be that by skipping 9 points out of 10 you end up with the closest 10% of points, or the lowest, or whatever. Or it might be completely random.

  • Thank you for the feedback and contributions. For clarity: I'm looking for a uniform decimation of the points. Digging through the Kinect SDK forum gave a deflating answer. The question: 'Is anyone aware of a way to dynamically scale the number of points in the point cloud that Kinect...?' The answer from MSFT staff: 'There is not support for this since the depth information is generated by the IR data that is acquired from the sensor. Depth is generated by the runtime and to ensure minimal latency, this requires a GPU to process that amount of IR data. For that type of embedded system you may want to look into a structured light sensor(Occipital Structure, Kinect v1) or some other depth sensor for that type of system.' Shaders are far above my abilities, so I'll have to keep digging or hope someone smarter solves this issue. Thanks again!

  • Is the goal increased speed/performance, or a low-resolution screen effect? If a resolution effect, you could coerce the full point set onto a grid, e.g. by rounding. Multiple points may then draw to the same location.

    We'd really need to know a lot more to understand your problem and what you mean by "uniform" -- evenly distributed in space? Does frame-to-frame point removal matter -- can the points jump around? For large collections of points, the random approach suggested by @kfrajer should work quite well.

    BTW, strictly speaking "decimation" means removing one point in ten -- keeping 90%, not 10%.
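
    One way to realize that grid idea, sketched under the assumption that points arrive as an interleaved x,y,z float[] (the class name, key packing, and cell size are all illustrative):

    ```java
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Snap each point to a voxel grid of the given cell size and keep one
    // representative per occupied cell. The reduction is spatially even,
    // and the kept points are stable frame to frame (no random jumping).
    public class GridThinner {
        public static float[] snapToGrid(float[] xyz, float cell) {
            Map<Long, float[]> cells = new LinkedHashMap<>();
            for (int i = 0; i + 2 < xyz.length; i += 3) {
                long gx = Math.round(xyz[i] / cell);
                long gy = Math.round(xyz[i + 1] / cell);
                long gz = Math.round(xyz[i + 2] / cell);
                // pack the three cell indices into one key (fine for |index| < 2^20)
                long key = (gx & 0x1FFFFF) | ((gy & 0x1FFFFF) << 21) | ((gz & 0x1FFFFF) << 42);
                cells.putIfAbsent(key, new float[] { gx * cell, gy * cell, gz * cell });
            }
            float[] out = new float[cells.size() * 3];
            int w = 0;
            for (float[] p : cells.values()) {
                out[w++] = p[0]; out[w++] = p[1]; out[w++] = p[2];
            }
            return out;
        }
    }
    ```

    A larger cell size means fewer surviving points; tuning it trades negative space against how recognizable the figure stays.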

  • The shaders in that code are the defaults. They don't do anything special -- barely anything at all.

    And, thinking about it, another way of reducing the amount of work is simply to reduce the size variable in that call. Try it; it might do what you want.

  • (And, if they do what I think they do, shaders won't help you reduce the set in any coherent way -- the shaders will be called in parallel, with one point each. By default, they won't even know about the other points in the set.)

  • Thanks, generous ones. The goal is fewer points per unit of area, for aesthetic reasons only: more negative space between points for a more ethereal look, and allowance for blurs, trails, etc. But they need to be selected so the actor's form stays identifiable. (The sketches/shaders are amazingly efficient with the native 300k points.) I'll noodle around with the shader settings and report any findings. Obtaining x,y,z and arraying with rounding sounds promising if I can keep the performance. Thanks again.
