You might want to look at the HE_Mesh library to build a custom STL or OBJ from an array of vertices.
See the documentation about the mesh structure:
- it stores vertices
- it stores edges: an edge is a connection between two vertices
- each edge stores half-edges
So if we organize vertices into groups we build faces: a group of three vertices makes a triangle face, a group of four vertices makes a quad face, and so on.
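As a tiny illustration (plain Java, values made up), a face is just an ordered group of vertex indices:

float[][] vertices = {
  {0, 0, 0},  // vertex index 0
  {1, 0, 0},  // vertex index 1
  {1, 1, 0},  // vertex index 2
  {0, 1, 0}   // vertex index 3
};
int[] triFace  = {0, 1, 2};     // three indices -> a triangle face
int[] quadFace = {0, 1, 2, 3};  // four indices  -> a quad face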
You build a vertex array of x, y, z coordinates, i.e. a 2D array for n vertices:
float[][] vertices = new float[n][3];
vertices[i][0] = x; // x coordinate of vertex i
vertices[i][1] = y; // y coordinate of vertex i
vertices[i][2] = z; // z coordinate of vertex i
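For instance, filled in a loop (dummy grid coordinates, just to show the indexing; n is assumed to be a square number here):

int n = 100;
float[][] vertices = new float[n][3];
for (int i = 0; i < n; i++) {
  vertices[i][0] = i % 10;  // x
  vertices[i][1] = i / 10;  // y
  vertices[i][2] = 0;       // z, e.g. a depth value later
}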
After the array is built with your vertex locations, you need to organize them into face groups:
int[][] faces = new int[numQuads][4];     // for quad faces (four vertex indices each)
int[][] faces = new int[numQuads * 2][3]; // or, for tri faces (two triangles per quad)
For a fully filled sqrt(n) x sqrt(n) grid, numQuads would be (sqrt(n) - 1) * (sqrt(n) - 1); for a tracked user it depends on which neighbours actually exist (see below).
The only real complexity is how you index your vertex array: between which vertices do you put the connections (edges)?
Take the quad example: a quad is possible if a vertex v has
- a width neighbour, a height neighbour and a width-height neighbour.
That means: for a vertex v located at grid position (x, y), if we can also find a vertex at (x+1, y), a vertex at (x+1, y+1) and a vertex at (x, y+1), we get a face composed of those four vertices.
So you walk over your vertices, check which ones have a W neighbour, an H neighbour and a WH neighbour, and put them into the face array: faces[f] stores the indices of the four vertices building face f. A sketch of this indexing over a regular grid follows below.
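A minimal sketch, assuming a fully filled w x h grid (w, h and the loop variables are mine, just to show the index arithmetic):

int w = 10, h = 10; // grid of w * h vertices, assumed fully filled
int[][] faces = new int[(w - 1) * (h - 1)][4];
int f = 0;
for (int y = 0; y < h - 1; y++) {
  for (int x = 0; x < w - 1; x++) {
    int i = x + y * w;        // index of the vertex at (x, y)
    faces[f][0] = i;          // (x,   y)
    faces[f][1] = i + 1;      // (x+1, y)   -> width neighbour
    faces[f][2] = i + 1 + w;  // (x+1, y+1) -> width-height neighbour
    faces[f][3] = i + w;      // (x,   y+1) -> height neighbour
    f++;
  }
}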
You also need to determine the 2D boundaries of your surface.
sceneMap() is a SimpleOpenNI method of the context object used to track user pixels:
int[] smap = context.sceneMap();
smap is an array of width * height values: a value > 0 marks a user pixel, 0 marks a background pixel.
int[] dmap = context.depthMap();
dmap is an array of depth values, indexed by the 1D location of a pixel (index = x + y * width, so dmap also has width * height entries).
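For context, a minimal setup sketch showing where these calls live, assuming a SimpleOpenNI version that still exposes enableScene()/sceneMap() (newer builds moved to userMap()):

import SimpleOpenNI.*;

SimpleOpenNI context;
int[] smap, dmap;

void setup() {
  size(640, 480);
  context = new SimpleOpenNI(this);
  context.enableDepth();  // needed for depthMap()
  context.enableScene();  // needed for sceneMap()
}

void draw() {
  context.update();
  smap = context.sceneMap();
  dmap = context.depthMap();
}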
First, count how many user pixels there are (that is how many real vertices you will end up with):
int userLength = 0;
for (int i = 0; i < width * height; i++) {
  if (smap[i] > 0) { userLength++; }
}
Then fill the vertices, indexed by pixel location, and count the width neighbours (checking the scene map directly is simpler than comparing the stored x coordinates):
float[][] vertices = new float[width * height][3]; // here sized by pixel count, one slot per pixel
int hasWnext = 0;
for (int x = 0; x < width; x++) {
  for (int y = 0; y < height; y++) {
    int i = x + y * width;
    if (smap[i] > 0) {
      vertices[i][0] = x;
      vertices[i][1] = y;
      vertices[i][2] = dmap[i];
    }
    // do we have a width neighbour? i.e. is the pixel at (x+1, y) also a user pixel?
    if (x + 1 < width && smap[i] > 0 && smap[i + 1] > 0) { hasWnext++; }
  }
}
And now you know how many connections to x + 1 you will get between your vertices.
But getting the height neighbour is a bit more of a pain. You need to check, line by line, how many vertices you actually have. Only in the case where every pixel in x and y is recognized as a user pixel, like a completely filled grid, would it be easy: the height neighbour would be at a fixed offset, vertices[(x + y * width) + width]. Here we don't know how big the index gap between two vertices will be! The tracked space (a human silhouette) is not regular like a grid or a basic mesh. So one solution is to also count the empty (background) pixels and, for each line of the height, work out which indices are actually filled, and so on; a sketch of that idea follows below.
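One way around it, sketched here with assumed names (indexMap, verts, quads are mine): give each user pixel a compact vertex index first, then build a quad only where the whole 2 x 2 block of pixels belongs to the user.

int[] indexMap = new int[width * height];
java.util.Arrays.fill(indexMap, -1);          // -1 = background pixel, no vertex
int count = 0;
for (int i = 0; i < width * height; i++) {
  if (smap[i] > 0) { indexMap[i] = count++; } // compact index for user pixels only
}

float[][] verts = new float[count][3];
for (int y = 0; y < height; y++) {
  for (int x = 0; x < width; x++) {
    int i = x + y * width;
    if (indexMap[i] >= 0) {
      verts[indexMap[i]][0] = x;
      verts[indexMap[i]][1] = y;
      verts[indexMap[i]][2] = dmap[i];
    }
  }
}

ArrayList<int[]> quads = new ArrayList<int[]>();
for (int y = 0; y < height - 1; y++) {
  for (int x = 0; x < width - 1; x++) {
    int i = x + y * width;
    // all four corners of the 2 x 2 block must be user pixels
    if (indexMap[i] >= 0 && indexMap[i + 1] >= 0 &&
        indexMap[i + width] >= 0 && indexMap[i + width + 1] >= 0) {
      quads.add(new int[] { indexMap[i], indexMap[i + 1],
                            indexMap[i + width + 1], indexMap[i + width] });
    }
  }
}
int[][] faces = quads.toArray(new int[quads.size()][]);

This sidesteps the irregular index gaps: background pixels never get a vertex index, so neighbours are found simply by looking them up in indexMap.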
I know it's a laborious way to solve the problem; more elegant and cleaner methods surely exist. But the first point is that you need to index your mesh seriously and precisely with a mesh data structure: edges linked by vertices, vertices linked into faces, half-edges inside the edges to determine whether a face is filled or empty, and so on, which is exactly what HE_Mesh gives you (see the sketch at the end).
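Once you have verts and faces, you can hand them to HE_Mesh. A rough sketch, assuming the HEC_FromFacelist creator described in the HE_Mesh docs (check the exact setVertices()/setFaces() overloads in your HE_Mesh version):

import wblut.hemesh.*;

HE_Mesh buildMesh(float[][] verts, int[][] faces) {
  // convert to double[][]; the accepted vertex types differ between HE_Mesh versions
  double[][] dverts = new double[verts.length][3];
  for (int i = 0; i < verts.length; i++) {
    dverts[i][0] = verts[i][0];
    dverts[i][1] = verts[i][1];
    dverts[i][2] = verts[i][2];
  }
  HEC_FromFacelist creator = new HEC_FromFacelist();
  creator.setVertices(dverts);
  creator.setFaces(faces);
  return new HE_Mesh(creator);
}

From there, the HE_Mesh export helpers (HET_Export, per the docs) can write the mesh out as STL or OBJ.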