Porting Memo Akten's quad warping vertex shader to P5 (GLSL experts, help!)
Hello everyone, I'm trying to implement a perspective correction (keystone correction) function in my sketch. The idea is that you select the region of the image or texture you want to "correct" by placing 4 vertices (these 4 vertices represent the corners of a quad), then you take those vertices as the UV coords of the texture (you are basically mapping a region of the texture onto a quad), and finally you draw the quad with the desired vertices and your selected texture region. The problem with this approach is that it causes some serious distortion: the texture ends up looking warped diagonally (see this thread).
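For context, here's a minimal version of that naive single-quad approach (the image file name and corner positions are just placeholders):

    PImage img;
    PVector tl, tr, br, bl; // the four movable corners of the quad

    void setup() {
      size(320, 240, P3D);         // P3D is needed for textured vertices
      img = loadImage("test.jpg"); // placeholder image name
      tl = new PVector(30, 30);
      tr = new PVector(290, 50);
      br = new PVector(280, 210);
      bl = new PVector(40, 200);
    }

    void draw() {
      background(255);
      beginShape(QUADS);
      texture(img);
      // map the whole texture onto the quad; the quad gets split into
      // two triangles internally, which is what produces the diagonal warp
      vertex(tl.x, tl.y, 0, 0);
      vertex(tr.x, tr.y, img.width, 0);
      vertex(br.x, br.y, img.width, img.height);
      vertex(bl.x, bl.y, 0, img.height);
      endShape();
    }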
Googling around for a solution to this problem, I quickly came across a Quartz Composer/VDMX patch written by memo (who was a user of this forum). His solution is simply to use a grid of quads (10x10 or finer) instead of a single quad; the catch is that you also have to accurately place the rest of the grid vertices (i.e. the non-corner vertices), which he solved with a vertex shader, as he explains next:
The theory on 4 point warping is explained quite well in Paul Heckbert's master's thesis "Fundamentals of Texture Mapping and Image Warping" (www.cs.cmu.edu/~ph/texfund/texfund.pdf). It's a bit dated (1989!) but still relevant.
To cut a long story short: for any point in a normalized quad (0,0 - 1,1), first interpolate along the bottom edge of the new corners to find the bottom position. Then interpolate along the top edge of the new corners to find the top position. Then interpolate between those two points to find the transformed position of the point. If the quad isn't initially a normalized quad (i.e. like the QC coordinate system), it needs to be normalized (a linear map takes care of that).
I implemented this method using a vertex shader by moving a grid's vertices depending on the 4 input coordinates for the new corner positions.
    uniform vec2 BL, BR, TL, TR;
    uniform vec2 renderSize;

    void main() {
        // transform from QC object coords to 0...1
        vec2 p = (vec2(gl_Vertex.x, gl_Vertex.y) + 1.) * 0.5;

        // interpolate bottom edge x coordinate
        vec2 x1 = mix(BL, BR, p.x);

        // interpolate top edge x coordinate
        vec2 x2 = mix(TL, TR, p.x);

        // interpolate y position
        p = mix(x1, x2, p.y);

        // transform from 0...1 to QC screen coords
        p = (p - 0.5) * renderSize;

        gl_Position = gl_ModelViewProjectionMatrix * vec4(p, 0, 1);
        gl_FrontColor = gl_Color;
        gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    }
Now, I don't know shader programming or GLSL yet, but from what I understand this could easily be done in a normal function, without a shader, which seems like overkill here (and yes, I know Processing 2.0 now has native support for shaders, but for now that is not what I'm looking for; if it doesn't work any other way, then I'll do it the shader way).
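(If I do end up going the shader route, I assume the Processing 2.0 hookup would look roughly like the sketch below. The shader file names are placeholders, and memo's shader itself would likely need adapting, since Processing supplies its own attribute/uniform names rather than the gl_Vertex style built-ins.)

    PShader warp;

    void setup() {
      size(320, 240, P3D);
      // file names are placeholders
      warp = loadShader("warpFrag.glsl", "warpVert.glsl");
    }

    void draw() {
      shader(warp);
      // feed the four corner positions in as vec2 uniforms (placeholder values)
      warp.set("BL", 0.1, 0.9);
      warp.set("BR", 0.9, 0.95);
      warp.set("TL", 0.05, 0.1);
      warp.set("TR", 0.95, 0.05);
      warp.set("renderSize", (float) width, (float) height);
      // ... draw the textured grid here ...
      resetShader();
    }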
At the moment I'm trying to understand the shader in order to translate it to plain P5 syntax. Please help me understand the following:
the BL, BR, TL, TR variables are passed by the main program to the shader, right? These are the 4 corner positions; in P5 they would be PVectors.
the following lines:
vec2 x1 = mix(BL, BR, p.x);
vec2 x2 = mix(TL, TR, p.x);
p = mix(x1, x2, p.y);
assuming p is a PVector holding the vertex position, they could be rewritten as:

    PVector x1 = PVector.lerp(TL, TR, p.x);
    PVector x2 = PVector.lerp(BL, BR, p.x);
    p = PVector.lerp(x1, x2, p.y);
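Putting those three lines together, the whole warp collapses into one little helper (just a sketch; u and v stand for the 0.0...1.0 values that p.x and p.y hold inside the shader):

    // bilinear interpolation between the four corners, CPU-side
    PVector warpPoint(PVector TL, PVector TR, PVector BL, PVector BR,
                      float u, float v) {
      PVector x1 = PVector.lerp(TL, TR, u); // along the top edge
      PVector x2 = PVector.lerp(BL, BR, u); // along the bottom edge
      return PVector.lerp(x1, x2, v);       // between the two edges
    }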
Now, he also mentions that the quad has to be normalized. I imagine this is because the third parameter of the GLSL "mix" function goes from 0.0 to 1.0, exactly like the third argument of P5's lerp function (for example, norm(80, 0, 320) gives 0.25, and lerping with an amount of 0.25 lands a quarter of the way along the edge). So in order for this to work you would first have to do something like

    p.set(norm(p.x, 0, width), norm(p.y, 0, height));

and do the same for the other PVectors.
As expected, my sketch isn't doing what it's supposed to do. Here's a simplified version of my sketch that deals only with adjusting the vertices and replicating the vertex shader's behavior:
    int grid_width = 3;
    int grid_height = 3;
    PVector[] grid = new PVector[grid_width * grid_height];

    void setup() {
      size(320, 240);
      for (int y = 0; y < grid_height; y++)
        for (int x = 0; x < grid_width; x++)
          grid[x + y * grid_width] = new PVector((x*30)+30, (y*30)+30); // let's shift them a little toward the center of the screen
    }

    void draw() {
      background(255);
      // draw the grid
      for (int i = 0; i < grid.length; i++) {
        noFill();
        if (i == 0 || i == 2 || i == 6 || i == 8) // in a 3x3 grid, these values of i are where the corners are located
          fill(0);
        ellipse(grid[i].x, grid[i].y, 20, 20);
      }
      // move the corners
      if (mousePressed) {
        for (int i = 0; i < grid.length; i++) {
          if (i == 0 || i == 2 || i == 6 || i == 8) {
            if (dist(mouseX, mouseY, grid[i].x, grid[i].y) < 20)
              grid[i].set(mouseX, mouseY);
          }
        }
      }
    }
    void keyPressed() { // rearrange the rest of the vertices. The following code tries to mimic what memo's vertex shader does
      // Try this code first, then comment it out, uncomment the following block and try that
      PVector TL = new PVector(grid[0].x, grid[0].y);
      PVector TR = new PVector(grid[2].x, grid[2].y);
      PVector BL = new PVector(grid[6].x, grid[6].y);
      PVector BR = new PVector(grid[8].x, grid[8].y);
      for (int i = 0; i < grid.length; i++) {
        if (i == 0 || i == 2 || i == 6 || i == 8)
          continue;
        PVector x1 = PVector.lerp(TL, TR, norm(grid[i].x, 0, width));
        PVector x2 = PVector.lerp(BL, BR, norm(grid[i].x, 0, width));
        grid[i] = PVector.lerp(x1, x2, norm(grid[i].y, 0, height));
      }

      /* // normalizing everything gives the same goddamn results as the previous block
      PVector normalized_TL = new PVector(norm(grid[0].x, 0, width), norm(grid[0].y, 0, height));
      PVector normalized_TR = new PVector(norm(grid[2].x, 0, width), norm(grid[2].y, 0, height));
      PVector normalized_BL = new PVector(norm(grid[6].x, 0, width), norm(grid[6].y, 0, height));
      PVector normalized_BR = new PVector(norm(grid[8].x, 0, width), norm(grid[8].y, 0, height));
      for (int i = 0; i < grid.length; i++) {
        if (i == 0 || i == 2 || i == 6 || i == 8)
          continue;
        PVector normalized_x1 = PVector.lerp(normalized_TL, normalized_TR, norm(grid[i].x, 0, width));
        PVector normalized_x2 = PVector.lerp(normalized_BL, normalized_BR, norm(grid[i].x, 0, width));
        PVector normalized_p = PVector.lerp(normalized_x1, normalized_x2, norm(grid[i].y, 0, height));
        grid[i].set(map(normalized_p.x, 0, 1, 0, width), map(normalized_p.y, 0, 1, 0, height));
      }*/
    }
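One thing I started to suspect while writing this up: in memo's shader, p comes from gl_Vertex, i.e. the vertex's position in the original, undeformed grid, not its current screen position. If that reading is right, the lerp amounts should come from the grid indices rather than from norm() of the screen coordinates, something like this variant of the loop above (untested):

    for (int y = 0; y < grid_height; y++) {
      for (int x = 0; x < grid_width; x++) {
        int i = x + y * grid_width;
        if (i == 0 || i == 2 || i == 6 || i == 8)
          continue; // the corners stay where they were dragged
        // u, v = this vertex's position in the undeformed grid, 0...1
        float u = x / (float) (grid_width - 1);
        float v = y / (float) (grid_height - 1);
        PVector x1 = PVector.lerp(TL, TR, u);
        PVector x2 = PVector.lerp(BL, BR, u);
        grid[i] = PVector.lerp(x1, x2, v);
      }
    }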
But I still don't know for sure what's wrong. Any pointers, anyone?
thanks.