The screenX()/screenY()/screenZ() functions are tremendously useful for getting screen coordinates, but they don't solve the problem of getting JUST the world coordinates from local coordinates given the current transform. Nor is the modelview matrix particularly helpful, since it starts out as a camera matrix instead of identity.
What's missing is a set of routines that, if they existed, would perhaps be called something like worldX(), worldY() and worldZ(). For example: float worldx = worldX(localx,localy,localz); et cetera.
That is, after establishing some complex series of translations, rotations and scalings, how can you determine the resulting world coordinate that would be produced by a call such as point(x,y,z)? In other words, if that world coordinate were plotted in the *absence* of the transforms, you'd get the same screen coordinate as when the local coordinate is plotted *with* the transforms.
Alternatively, the modelview approach would be fine IF there were a way to completely remove any camera portions of the transform. Does such a matrix exist in there somewhere?
That's probably a poor description, so how about code instead:
Quote:
size(200,200,P3D);
// coord as array of floats makes using the matrix mult easier
float [] local = {0,0,0};
float [] world = {0,0,0};
// plot a white point, given some transform
background(0);
camera();
rotateZ(PI/4);
translate(100,0,0);
stroke(255);
// transform in effect, plot *LOCAL* coordinate
point(local[0],local[1],local[2]);
// doing the math by hand suggests we just plotted at 100/sqrt(2),100/sqrt(2),0 or approx 70.7,70.7,0
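// (to spell it out: translate() puts the point at (100,0,0), and rotating that by PI/4
//  about Z gives (100*cos(PI/4), 100*sin(PI/4), 0), i.e. approx (70.7, 70.7, 0))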
//=============================================================
// NOW, GIVEN THE EXISTING TRANSFORM, HOW DO WE ASK PROCESSING:
// "WHAT IS THE WORLD COORDINATE OF LOCAL COORDINATE 0,0,0?"
//=============================================================
// one thought is to use the modelview matrix:
g.modelview.mult3(local, world);
println("world coords via modelview = " + world[0] + ", " + world[1] + ", " + world[2]);
// gives: -29.289322, -29.289322, -173.20508
// err, NOPE, that doesn't work, and large -Z looks a lot like a camera value
// (and examining the source reveals why: modelview starts as camera instead of identity)
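// (another idea, untested: if this build of Processing exposes g.cameraInv, then
//  premultiplying the modelview by it ought to strip out the camera part, since
//  modelview = camera * model implies model = cameraInv * modelview)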
// so we need a "clean" matrix to accumulate JUST our local-to-world
// transform WITHOUT any of the camera stuff:
PMatrix mat = new PMatrix();
// mimic the above transform:
mat.reset();
mat.rotateZ(PI/4);
mat.translate(100,0,0);
// now let's see what that gives us:
mat.mult3(local, world);
println("world coords via our matrix = " + world[0] + ", " + world[1] + ", " + world[2]);
// gives: 70.71068, 70.71068, 0.0
// aha, YES! that's what we want
// so let's reset the camera and all transforms
// and replot over that white point with a transparent red,
// turning it pink as proof that we got the correct world coord
camera();
stroke(255,0,0,128);
// no transform in effect, plot *WORLD* coordinate
point(world[0], world[1], world[2]);
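For what it's worth, here's roughly the kind of thing I'm imagining for worldX()/worldY()/worldZ() -- purely a sketch using the same PMatrix trick as above, with the names and the shadow-matrix bookkeeping entirely my own invention, not anything that exists in the API:
// sketch: mirror every transform call into a separate "shadow" PMatrix,
// then worldX()/worldY()/worldZ() just push the local coord through it
PMatrix localToWorld = new PMatrix();
float[] toWorld(float x, float y, float z) {
  float[] in = {x, y, z};
  float[] out = {0, 0, 0};
  localToWorld.mult3(in, out);
  return out;
}
float worldX(float x, float y, float z) { return toWorld(x,y,z)[0]; }
float worldY(float x, float y, float z) { return toWorld(x,y,z)[1]; }
float worldZ(float x, float y, float z) { return toWorld(x,y,z)[2]; }
// usage: apply the same transforms to both Processing and the shadow matrix
// localToWorld.reset();
// rotateZ(PI/4);        localToWorld.rotateZ(PI/4);
// translate(100,0,0);   localToWorld.translate(100,0,0);
// println(worldX(0,0,0) + ", " + worldY(0,0,0));  // expect 70.71068, 70.71068
Obviously that means doing every transform twice, which is exactly the bookkeeping I was hoping Processing could do for me.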
Any thoughts?