Hello! I'm trying to visualise and animate massive point clouds (scanned LiDAR data) - hopefully I'll be able to manipulate 60 million points. I did some research (on the old Processing forum) and it seems OpenGL is the way to go:
http://forum.processing.org/one/topic/point-cloud-library.html
https://discussions.zoho.com/processing/topic/heatmaps-point-clouds-and-big-data-in-processing
https://discussions.zoho.com/processing/topic/getting-started-with-glgraphics
I try to check every example I find, but these use GLGraphics or an older way of accessing OpenGL, which is no longer needed in Processing 2.1 [I need guidance here badly; even the beginGL >> beginPGL change was surprising :) ]. And it seems that 2.1 would be faster and allow larger datasets…
Here is a sample point cloud (a modest ~56,000 points for a start, *.xyz file format):
https://www.dropbox.com/s/cy2zdcioggerjxs/hts_cc_crop.xyz
Any advice on how to tackle this in 2.1 would be appreciated (mr. jimthree, mr. andres?)
many thanks, chris
Answers
You could start by trying to use a PShape object to store all your points. Internally, this uses OpenGL's vertex buffer objects, which are pretty fast when dealing with static geometry, but you don't need to use any low-level OpenGL calls. The example in Demos|Performance|StaticParticlesRetained should be useful.
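For reference (this block is not from the original thread), a minimal sketch along the lines of the StaticParticlesRetained demo: all the points live in a single retained PShape of type POINTS, with random coordinates standing in for the .xyz data.

```java
// A minimal retained-geometry sketch; point count and coordinates are placeholders.
int numPoints = 1000000;
PShape cloud;

void setup() {
  size(800, 600, P3D);
  cloud = createShape();
  cloud.beginShape(POINTS);
  cloud.strokeWeight(2);
  cloud.stroke(255);
  for (int i = 0; i < numPoints; i++) {
    cloud.vertex(random(-500, 500), random(-500, 500), random(-500, 500));
  }
  cloud.endShape();
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);
  shape(cloud);  // the geometry stays on the GPU, only the transform changes
}
```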
I checked that example, changed the PShape to POINTS and managed to squeeze 1,000,000 particles rotating at 18 fps. This method takes almost 10 GB of RAM for the process, so it's not usable - I'm aiming at around 12 million (lighter scans) to 60 million (heavy scans) points. Any ideas where to save RAM and gain fps?
many thanks!
The memory footprint of PShape should be smaller. I have plans to refactor the PShapeOpenGL class for the 2.2 release, and hopefully this will also involve some resource optimization.
For the time being, one way to reduce the memory usage and increase performance would be to use square points:
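The snippet that followed isn't preserved in this archive; assuming square points are selected by switching the stroke cap on the shape, the change to the PShape sketch above would be something like:

```java
// Assumption: with the P3D renderer, a non-round stroke cap makes each point
// a 4-vertex quad instead of a tessellated circle.
cloud.beginShape(POINTS);
cloud.strokeCap(SQUARE);   // square points
cloud.strokeWeight(2);
cloud.stroke(255);
// ... add vertices as before ...
cloud.endShape();
```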
Square points require 4 vertices per point. Otherwise, the points are created as little circles, consisting of potentially many more than 4 vertices. If the size of the points is small, there is no visible difference between square and round points. However, if you want them to look circular, you could use the trick from StaticParticlesRetained: the points are quads, but textured with a circular sprite.
Still, even with these tricks you might not be able to get the memory usage low enough...
You could avoid the PShape overhead altogether by implementing your own VBO handling code using low-level OpenGL. If you are OK with using shaders, you could do something like the sketch below:
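The original listing from this post isn't preserved in this archive, so here is a rough reconstruction of the idea: upload the positions and colours once into two VBOs through PGL, then draw them as POINTS with a small custom shader. Random points stand in for the .xyz data, and the attribute names ("vertex", "color") and the use of the sh.glProgram handle are assumptions that have to match the shaders below.

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import java.nio.IntBuffer;

int numPoints = 1000000;
PShader sh;
IntBuffer vboIds;          // [0] = positions, [1] = colours
boolean vbosReady = false;

void setup() {
  size(800, 600, P3D);
  sh = loadShader("frag.glsl", "vert.glsl");
}

void draw() {
  background(0);
  translate(width/2, height/2);
  rotateY(frameCount * 0.01);

  PGL pgl = beginPGL();
  if (!vbosReady) {
    initVBOs(pgl);         // needs a live GL context, so done here rather than in setup()
    vbosReady = true;
  }

  sh.bind();
  // glProgram is the handle of the shader's underlying GL program object.
  int vertLoc  = pgl.getAttribLocation(sh.glProgram, "vertex");
  int colorLoc = pgl.getAttribLocation(sh.glProgram, "color");

  pgl.enableVertexAttribArray(vertLoc);
  pgl.enableVertexAttribArray(colorLoc);

  pgl.bindBuffer(PGL.ARRAY_BUFFER, vboIds.get(0));
  pgl.vertexAttribPointer(vertLoc, 3, PGL.FLOAT, false, 0, 0);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, vboIds.get(1));
  pgl.vertexAttribPointer(colorLoc, 4, PGL.FLOAT, false, 0, 0);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, 0);

  pgl.drawArrays(PGL.POINTS, 0, numPoints);

  pgl.disableVertexAttribArray(colorLoc);
  pgl.disableVertexAttribArray(vertLoc);
  sh.unbind();
  endPGL();
}

void initVBOs(PGL pgl) {
  // Fill direct buffers with xyz positions and rgba colours, then upload them
  // once as STATIC_DRAW vertex buffer objects.
  FloatBuffer verts  = allocateDirectFloats(3 * numPoints);
  FloatBuffer colors = allocateDirectFloats(4 * numPoints);
  for (int i = 0; i < numPoints; i++) {
    verts.put(random(-500, 500)).put(random(-500, 500)).put(random(-500, 500));
    colors.put(1).put(1).put(1).put(1);
  }
  verts.rewind();
  colors.rewind();

  vboIds = ByteBuffer.allocateDirect(2 * 4).order(ByteOrder.nativeOrder()).asIntBuffer();
  pgl.genBuffers(2, vboIds);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, vboIds.get(0));
  pgl.bufferData(PGL.ARRAY_BUFFER, 4 * verts.capacity(), verts, PGL.STATIC_DRAW);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, vboIds.get(1));
  pgl.bufferData(PGL.ARRAY_BUFFER, 4 * colors.capacity(), colors, PGL.STATIC_DRAW);
  pgl.bindBuffer(PGL.ARRAY_BUFFER, 0);
}

FloatBuffer allocateDirectFloats(int n) {
  return ByteBuffer.allocateDirect(4 * n).order(ByteOrder.nativeOrder()).asFloatBuffer();
}
```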
with the following GLSL shaders:
vert.glsl:
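(The original shader isn't preserved here; a minimal reconstruction, assuming Processing fills the transform uniform when the shader is bound, and using the same vertex/color attribute names as the sketch above.)

```glsl
uniform mat4 transform;   // model-view-projection, set by Processing when the shader is bound

attribute vec4 vertex;    // per-point position (w defaults to 1.0 when only xyz is supplied)
attribute vec4 color;     // per-point colour

varying vec4 vertColor;

void main() {
  gl_Position = transform * vertex;
  vertColor = color;
}
```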
frag.glsl:
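(Likewise a minimal reconstruction: just pass the interpolated colour through.)

```glsl
#ifdef GL_ES
precision mediump float;
#endif

varying vec4 vertColor;

void main() {
  gl_FragColor = vertColor;
}
```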
You can notice that we didn't need any JOGL calls, since PGL exposes all the functions in the OpenGL ES 2.0 spec, which are enough to render using the programmable pipeline. You can still use the older, fixed-function (no shaders) pipeline in Processing, since by default the P2D/P3D renderers create an OpenGL 2 context on the desktop...
An equivalent, fixed-function pipeline version of the previous code would be (2.1+ only):
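Again, the original listing isn't preserved, so this is only a sketch of the approach. It assumes the Processing 2.x package name javax.media.opengl (in Processing 3 JOGL moved to com.jogamp.opengl, as discussed further down), and exactly how you reach the JOGL GL2 object varies a bit across Processing releases. Since Processing itself never touches the fixed-function matrix stacks, the coordinates below are given directly in normalized device coordinates (-1..1):

```java
import java.nio.ByteBuffer;
import java.nio.ByteOrder;
import java.nio.FloatBuffer;
import javax.media.opengl.GL2;  // com.jogamp.opengl.GL2 in Processing 3

int numPoints = 1000000;
FloatBuffer vertData;
FloatBuffer colorData;

void setup() {
  size(800, 600, P3D);
  vertData  = allocateDirectFloats(3 * numPoints);
  colorData = allocateDirectFloats(4 * numPoints);
  for (int i = 0; i < numPoints; i++) {
    // With identity modelview/projection these are clip-space coordinates.
    vertData.put(random(-1, 1)).put(random(-1, 1)).put(random(-1, 1));
    colorData.put(1).put(1).put(1).put(1);
  }
  vertData.rewind();
  colorData.rewind();
}

void draw() {
  background(0);

  // PJOGL exposes the full desktop GL2 interface, which still has the
  // fixed-function client-state calls.
  PJOGL pgl = (PJOGL) beginPGL();
  GL2 gl2 = pgl.gl.getGL2();

  gl2.glUseProgram(0);  // make sure no shader program is bound

  gl2.glEnableClientState(GL2.GL_VERTEX_ARRAY);
  gl2.glEnableClientState(GL2.GL_COLOR_ARRAY);
  gl2.glVertexPointer(3, GL2.GL_FLOAT, 0, vertData);
  gl2.glColorPointer(4, GL2.GL_FLOAT, 0, colorData);
  gl2.glDrawArrays(GL2.GL_POINTS, 0, numPoints);
  gl2.glDisableClientState(GL2.GL_COLOR_ARRAY);
  gl2.glDisableClientState(GL2.GL_VERTEX_ARRAY);

  endPGL();
}

FloatBuffer allocateDirectFloats(int n) {
  return ByteBuffer.allocateDirect(4 * n).order(ByteOrder.nativeOrder()).asFloatBuffer();
}
```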
Notice that here you need to obtain the GL2 interface from JOGL, because the vertex and color attributes need to be enabled and configured with glEnableClientState(), glVertexPointer() and glColorPointer(), which are not part of the GLES 2.0 spec exposed by PGL.
thank you, really helpful, going to test it out and report back!
after doing some tests, I can see this method is far more memory efficient (compared to PShape). also some interesting performance clues:
is OpenGL "less accelerated" on OSX? (10.8.5)
also, on OSX I can't go over 18 million -> this happens: https://www.dropbox.com/s/0om0izf98a5op7r/openGL_20m.png
There are some known performance issues on OSX (see these issues: 1776, 1089), which were partially taken care of in recent releases of Processing 2. However, very large sketch resolutions might still cause unusually low fps. I'm working on a solution for that, but it is not ready yet. What resolution are you using in your renderings?
it's only 1440x800 (retina LCD resolution 1920x1200), so nothing serious.
and what's happening here: https://www.dropbox.com/s/0om0izf98a5op7r/openGL_20m.png
cheers!
not sure, as I don't know how it is supposed to look. The problem is that some points look cyan, or that some appear to be contained in a plane? Do you get any gl error in the console?
that screenshot shows what happens when I go above 18 million points > the particles change from randomly scattered white to cyan, a yellowish line appears, and the whole array flattens into planes (no console errors). [that happens on OSX only; Win7 is OK to go over 18 million]
that's weird, the GTX 650 in your Mac is more capable than the GTX 260 in the PC. It seems more like a driver issue. Unfortunately, the OpenGL drivers cannot be updated manually on Mac, you have to wait for Apple to do it. What version of OSX are you on?
Osx 10.8.5
Has anybody been able to get this to work on Processing 3.X?
The second example from codeanticode works well using Processing 3 on a macbook pro, 10.11.5. 10 million points with decent performance.
Is there any chance someone has gotten something like the VBO example above to work on a raspberry pi? It's working for me in processing 3.2.1 on my mac and linux boxes, but all I get on the raspberry pi is a black window.
Thanks!
That 3rd post is using a PShape child for every point, which I guess is what uses all the memory. They could all be in one single PShape.
That might be good enough for an rpi
:/ I can't get that one to work at all. It complains about not being able to find javax.media. The 3rd post doesn't mention any imports...
You should just be able to use the pde menu to include the correct imports.
Will have a look later when the TV isn't being used as a TV...
The one I'm talking about is here: https://forum.processing.org/two/discussion/comment/3171/#Comment_3171 . At the very top it has two import lines.
so when i said "that 3rd post is using pshape" you started counting from post #4? 8)
anyway, have spent the last two hours trying to get my rpi working with 2 different tvs using 2 different linuxes and getting none of them to work. "No Signal" or "No Support". which is especially annoying as i know they both used to work. so i can't help there.
as for the imports, my notes say
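(The snippet from the notes isn't preserved in this archive; going by the next line, it presumably listed something like:)

```java
import java.nio.*;

import javax.media.opengl.*;  // Processing 2.x
import com.jogamp.opengl.*;   // Processing 3.x
```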
so use the top line and the relevant one from the bottom two. JOGL moved company with the new version, hence the new package name (javax.media.opengl became com.jogamp.opengl).
It's entirely possible that I can't count ;)
Oh wow, that's really annoying. Is the "No Signal" or "No Support" message showing on your TV? You might need to edit your config.txt ( https://www.raspberrypi.org/documentation/configuration/config-txt.md ) to get the video output to match what your TV can do.
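(Not from the original thread, just a hint in case it helps: a typical set of config.txt overrides for forcing a known-good HDMI mode looks something like the lines below; the exact group/mode values depend on the TV, see the docs linked above.)

```
hdmi_force_hotplug=1   # drive HDMI even if no display is detected at boot
hdmi_group=1           # 1 = CEA (TV) timings, 2 = DMT (monitor) timings
hdmi_mode=16           # CEA mode 16 = 1080p @ 60 Hz
```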
I'm at work right now, so I can't try switching that out right now, but I will when I get a chance. Thanks!