I'd like to render basic 3D shapes without any aliasing/smoothing with a PGraphics instance using the P3D renderer, but noSmooth() doesn't seem to work.

In openFrameworks I remember calling `setTextureMinMagFilter(GL_NEAREST, GL_NEAREST);` on a texture.

What would be the equivalent in Processing?

Thank you, George
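A later post in this thread disables texture interpolation with Processing's `textureSampling()` call on a `PGraphicsOpenGL`, which is the closest equivalent I know of. A minimal sketch, with the caveat that `textureSampling()` is undocumented and the exact constant (2 for point/nearest sampling here) is an assumption; another post below uses 3 for the same purpose:

```
import processing.opengl.*;

PGraphics pg;

void setup() {
  size(400, 400, P3D);
  pg = createGraphics(100, 100, P3D);
  pg.noSmooth(); // disables multisampling on the offscreen buffer
  // Ask for nearest-neighbour sampling when textures are drawn on the
  // main canvas. The argument is one of Processing's internal Texture
  // sampling constants -- treat the value 2 (POINT) as an assumption.
  ((PGraphicsOpenGL) g).textureSampling(2);
}

void draw() {
  pg.beginDraw();
  pg.background(0);
  pg.translate(pg.width/2, pg.height/2);
  pg.rotateY(frameCount * 0.02);
  pg.fill(255);
  pg.box(40);
  pg.endDraw();
  // drawn upscaled: blocky pixels if sampling is nearest
  image(pg, 0, 0, width, height);
}
```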

---

Back in Processing 2 I was able to do this with a pair of fragment shaders without trouble:

```
PShader shader1;
PShader shader2;

void setup() {
  size(256, 256, P2D);
  fill(128);
  noStroke();
  shader1 = loadShader("shader1.glsl");
  shader2 = loadShader("shader2.glsl");
}

void draw() {
  shader(shader1);
  shader(shader2);
  rect(0, 0, width, height);
}
```

Where shader1 is as follows:

```
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform sampler2D textureSampler;

varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  vec3 pos = vertTexCoord.xyz;
  gl_FragColor = vec4(pos.x, pos.y, pos.z, 1.0);
}
```

And shader2 is as follows:

```
#ifdef GL_ES
precision mediump float;
#endif

uniform sampler2D texture;

varying vec4 vertColor;
varying vec4 vertTexCoord;

void main() {
  gl_FragColor = texture2D(texture, vertTexCoord.xy);
}
```

No matter what I do in Processing 3, this always results in a black image when shader2 is applied, but works fine with just shader1. I have tried all sorts of things, including different OpenGL version directives, passing the resolution to shader2 and dividing the frag coord by it before sampling, and re-arranging the shader calls. I have tried both P2D and P3D mode.

In fact, many of the examples I have tried from the documentation fail to work at all in Processing 3.3.

Any idea what's going on? If it matters, I'm on a GTX 980 Ti with the latest drivers.
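One thing worth checking: consecutive `shader()` calls don't chain; the second call simply replaces the first for any geometry drawn afterwards, so only `shader2` ever runs. A sketch of an explicit two-pass setup using an offscreen `PGraphics` (shader file names as in the post); note that `filter()` binds the buffer to the shader's `texture` uniform:

```
PShader shader1;
PShader shader2;
PGraphics pass1;

void setup() {
  size(256, 256, P2D);
  pass1 = createGraphics(width, height, P2D);
  shader1 = loadShader("shader1.glsl");
  shader2 = loadShader("shader2.glsl");
}

void draw() {
  // pass 1: draw into the offscreen buffer with shader1
  pass1.beginDraw();
  pass1.noStroke();
  pass1.shader(shader1);
  pass1.rect(0, 0, pass1.width, pass1.height);
  pass1.endDraw();

  // pass 2: run shader2 over the result of pass 1
  pass1.filter(shader2);
  image(pass1, 0, 0);
}
```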

---

I'm sure I'll run into more problems though, so if you're competent in GLSL and have time for short "freelance" kinds of projects, it would be nice to have someone to contact the next time (which might be soon) I have a problem or question.

I have not set an amount, but if you can offer help relevant to my problem I'm sure we can negotiate it.

I'm not good at explaining problems, and maybe what I'm saying doesn't make sense, but I'll try my best.

I have a problem trying to make a 2D fragment shader using GLSL.

Anyway, I want a template for doing one pass per frame, with each pass run on the previous frame's output, and so on.

It doesn't need to do anything graphically fancy, it's having this "persistent" canvas that's important.

An analogous thing is not using the background() function, and thus not having to redraw things every frame to keep them from disappearing.

It seems that in GLSL some things are read-only and some things reset after the shader has done its work, kind of like local variables in a function.

This could possibly be done with a framebuffer object, or a PGraphics, but I'm not familiar with these.

So you give the shader a "texture" input once, and it works on that input. And the next frame it works on the output it generated, rather than passing it to gl_FragColor and flushing it.

If your pass rotates by a small amount, you get a rotating image. You **could** get a similar result by rotating by an increasing angle, but that's not the point. If the pass is a post-processing anti-aliasing, you'd get an image that gets blurrier and blurrier.

I hope you understand that it's not rotation or anti-aliasing I'm asking for; they're just simple examples. What I want help with is this persistent/gradual thing: not multiple passes per frame, just one pass per frame.
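This kind of persistent, frame-to-frame feedback is exactly what a pair of offscreen buffers ("ping-pong" buffers) gives you: draw the shader's output into one buffer while sampling the other, then swap. A minimal sketch, assuming a fragment shader `feedback.glsl` (hypothetical name) that reads Processing's built-in `texture` uniform:

```
PShader fb;
PGraphics ping, pong;

void setup() {
  size(512, 512, P2D);
  ping = createGraphics(width, height, P2D);
  pong = createGraphics(width, height, P2D);
  fb = loadShader("feedback.glsl"); // hypothetical shader file

  // seed the canvas once, like drawing without background()
  ping.beginDraw();
  ping.background(0);
  ping.fill(255);
  ping.ellipse(width/2, height/2, 200, 200);
  ping.endDraw();
}

void draw() {
  // one shader pass per frame: read ping, write pong
  pong.beginDraw();
  pong.shader(fb);
  pong.image(ping, 0, 0); // ping is bound as the shader's "texture" uniform
  pong.endDraw();

  image(pong, 0, 0);

  // swap, so next frame processes this frame's output
  PGraphics tmp = ping;
  ping = pong;
  pong = tmp;
}
```

Each frame the shader sees its own previous output, so effects accumulate instead of starting over from the same unprocessed input.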

But is it possible?

Could you help me with this?

How much do you expect in return for your help?

I have tried combining them, but of course it's not as easy as it seems for a beginner like me: they either won't work at all or they don't work like they should.

This is not the only code that I am having trouble doing this with so I feel like I might be missing something. Any help with the code or just pointing me to the right direction would be greatly appreciated!

Code I want to stay in the back but still be interactive:

```
PShader sh;
PImage img;

void settings() {
  // in Processing 3, size() with variable dimensions has to go in settings()
  img = loadImage("test.jpg");
  size(img.width, img.height, P2D);
}

void setup() {
  background(0);
  // load and compile shader
  sh = loadShader("lens.glsl");
  // upload texture to graphics card
  sh.set("texture", img);
}

void draw() {
  // normalize mouse position
  float inpx = mouseX / (float) width;
  float inpy = mouseY / (float) height;
  // set shader variable
  sh.set("inp", inpx, inpy);
  // run shader
  shader(sh);
  // fill whole window
  rect(0, 0, width, height);
}
```

And the text code:

```
import geomerative.*;
import ddf.minim.*;

Minim mySound; // create a new sound object
AudioInput in;
RFont font;
String myText = "TEST";

void setup() {
  size(1500, 400);
  background(255);
  smooth();
  RG.init(this);
  font = new RFont("FreeSans.ttf", 100, CENTER);
  mySound = new Minim(this);
  in = mySound.getLineIn(Minim.STEREO, 512);
}

void draw() {
  background(255);
  strokeWeight(2);
  stroke(255, 0, 0);
  noFill();
  translate(width/2, height/1.5);
  float soundLevel = in.mix.level(); // get our audio input level
  RCommand.setSegmentLength(soundLevel * 9000);
  RCommand.setSegmentator(RCommand.UNIFORMLENGTH);
  RGroup myGroup = font.toGroup(myText);
  RPoint[] myPoints = myGroup.getPoints();
  beginShape(QUAD_STRIP);
  for (int i = 0; i < myPoints.length; i++) {
    vertex(myPoints[i].x, myPoints[i].y);
  }
  endShape();
}
```

---

The problem I'm having is already stated by someone else here: https://forum.processing.org/two/discussion/1842/opengl-mapping-texture-and-perspective

One of the suggested solutions is to subdivide the surface into triangle strips to reduce the effect. I tried that solution; it works, but it is not what I am interested in, which is why I'm writing this post. What I'm interested in is the other suggested solution: getting OpenGL to use quads. How do I do that exactly?

I'm using Processing 3 and I know I have to do this:

```
import com.jogamp.opengl.GL;
import com.jogamp.opengl.GL2ES2;
```

and something like this somewhere:

```
PJOGL pgl = (PJOGL) beginPGL();
GL2ES2 gl = pgl.gl.getGL2ES2();
```

but I'm not sure how to use that to achieve what I want.
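One caveat before any code: GL_QUADS only exists in the desktop compatibility profile; it was never part of GL2ES2 (or any ES/core profile), which is why the GL2ES2 route can't work. If the driver provides a compatibility context, JOGL can hand you a GL2 instead; a sketch of the idea, with the cast itself being the assumption (it throws on a core or ES context):

```
import com.jogamp.opengl.GL2;

void draw() {
  background(0);
  PJOGL pgl = (PJOGL) beginPGL();
  // only valid if the context is a desktop compatibility profile
  GL2 gl = pgl.gl.getGL2();
  gl.glBegin(GL2.GL_QUADS);
  gl.glVertex3f(-0.5f, -0.5f, 0);
  gl.glVertex3f( 0.5f, -0.5f, 0);
  gl.glVertex3f( 0.5f,  0.5f, 0);
  gl.glVertex3f(-0.5f,  0.5f, 0);
  gl.glEnd();
  endPGL();
}
```

Even then, many drivers split quads into two triangles internally, so the perspective artifact from the linked thread can persist; the robust fixes remain finer subdivision or doing projective texture mapping in a shader.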

Cheers,

Paolo

---

That is, just a flat picture of

Each fragment has to know its position on screen, and from that information be able to map it to a color in some "shared storage", and also be able to look up neighboring pixels' colors from it. And this is very important.

So the output on screen gets blurrier and blurrier as time goes by, rather than being a static single-pass processed picture, as if the shader were starting over from the same unprocessed input every frame.

And not because you're doing incrementally more work/passes every frame, but because what the shader works on is "persistent" even after it's shown on screen.

So why would I want to do this?

Well, it's not the most exciting thing, but it's something I could learn from and tweak to do exciting things if it works.

And if it's impossible to do this with GLSL, then I can set aside any thoughts about endeavours in it.

It only happens with P3D.

After a background() call, the shape I'm working with always draws in the back and not in front, as it was drawn at first. The weird thing is that it's only the body, NOT THE STROKE, of the shape.

This is my code.

```
boolean isbg = false;

void setup() {
  size(1200, 600, P3D);
  background(0);
}

void draw() {
  if (isbg) {
    background(0);
  }
  ellipse(mouseX, mouseY, 100, 100);
}

void keyPressed() {
  if (key == 'd') {
    isbg = !isbg;
  }
}
```

This happens EVERY TIME after a background() call runs.

This just broke an entire piece of software I was working on.

It happens whether it's an ellipse, a rect, or a custom shape.

I'VE TRIED EVERYTHING.

It is always AFTER the background() call runs.
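For what it's worth, this smells like the depth buffer: in P3D every shape writes depth, and all these ellipses sit at z = 0, so after the buffer state changes new fills can lose the depth test against previously drawn ones (while strokes are handled differently by the tessellator). A sketch of the usual workaround, telling P3D to draw strictly in call order like a 2D renderer:

```
boolean isbg = false;

void setup() {
  size(1200, 600, P3D);
  // draw in call order, ignoring the depth buffer, as in 2D
  hint(DISABLE_DEPTH_TEST);
  background(0);
}

void draw() {
  if (isbg) {
    background(0);
  }
  ellipse(mouseX, mouseY, 100, 100);
}

void keyPressed() {
  if (key == 'd') {
    isbg = !isbg;
  }
}
```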

---

```
tileShader.set("time", millis() / 1000.0);
```

If I try to modify this so that the tile will move at a speed according to my mouse,

```
float t = millis() / 1000.0;
float pctX = map(mouseX, 0, width, 0, 1);
tileShader.set("time", t * pctX);
```

the shader does a "scrubbing" effect where the entire image moves very quickly; only once you stop moving the mouse does it go at the speed you want. Is there any way to avoid this scrubbing effect? What I'm trying to accomplish is a smooth increase in the speed of the tile movement.
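The scrubbing happens because `t * pctX` rescales the *entire* elapsed time whenever the mouse moves, which jumps the phase. Keeping your own clock and advancing it each frame by the current speed leaves the phase continuous. A sketch of the changed draw() logic, assuming `tileShader` is loaded as in your sketch:

```
float shaderTime = 0;
int lastMillis = 0;

void draw() {
  int now = millis();
  float dt = (now - lastMillis) / 1000.0;
  lastMillis = now;

  float pctX = map(mouseX, 0, width, 0, 1);
  // advance the clock by the *current* speed instead of
  // rescaling all of the elapsed time
  shaderTime += dt * pctX;
  tileShader.set("time", shaderTime);
}
```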

---

This question is about running examples from the diewald_fluid library with the option to use a GPU (GLSL) context. I have downloaded the GLGraphics zip package and moved files around so ~/libraries/codeanticode is seen by Processing 3. I seem to crash on the execution of size() even after setting specific values for width and height. (The third argument remains GLConstants.GLGRAPHICS.)

Any ideas on proper environment/setup to make this work?

---

Was hoping anybody here could help me with this... For a project I want to pack data into a texture and use it in a fragment shader, because it can potentially be a lot of data. Thing is, I need the data as floats in my shader code, but I can't seem to get it right.

I wrote a little test case below which illustrates my problem. The goal of this sketch is to make the left half of the screen white and the right half of the screen black. That would be the case if the data passed via the texture would be interpreted correctly.

I've tried a lot of methods to pack/unpack data, but I'm thinking I got it wrong on the Processing side of things. I also noticed that when using println(pixels[0]), the value that gets printed is NOT the same as Float.floatToIntBits gives... so something gets changed.

Anyway, if anybody knows the correct way of passing float values to a shader via a texture and unpacking them from vec4 to float I would be really grateful. Thanks!
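One mismatch to be aware of: the shader's DecodeFloatRGBA expects a fixed-point fraction spread over the RGBA channels (the bitEnc/bitDec scheme), not raw IEEE-754 bits, and Processing's pixels[] is ARGB-packed, so Float.floatToIntBits puts bytes in the wrong channels. A sketch of a matching encoder for values in [0, 1), with the caveat that your 5 and 15 would first need normalizing into that range (e.g. divide by 16, and compare against 10/16.0 in the shader); the carry-correction mirrors the usual EncodeFloatRGBA companion of this decoder:

```
int encodeFloatARGB(float v) {
  // v must be in [0, 1); spread it over four fixed-point bytes,
  // matching bitEnc = vec4(1., 255., 65025., 16581375.)
  float r = v % 1.0f;
  float g = (v * 255.0f) % 1.0f;
  float b = (v * 65025.0f) % 1.0f;
  float a = (v * 16581375.0f) % 1.0f;
  // remove the part carried into the next channel
  r -= g / 255.0f;
  g -= b / 255.0f;
  b -= a / 255.0f;
  // pack as ARGB, which is Processing's pixels[] layout
  return ((int)(a * 255) << 24) | ((int)(r * 255) << 16)
       | ((int)(g * 255) << 8)  |  (int)(b * 255);
}
```

One more caveat: alpha may be touched by blending or premultiplication on upload, so if precision allows it, using only the RGB channels (24 bits) is safer than relying on the alpha byte.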

Processing sketch:

```
PShader shader;
PImage sampler;

void setup() {
  size(1280, 720, P2D);
  shader = loadShader("shader.glsl");
  shader.set("resolution", 1280, 720);
  // no texture interpolation
  ((PGraphicsOpenGL) g).textureSampling(3);
  sampler = createImage(2, 1, ARGB);
  sampler.loadPixels();
  sampler.pixels[0] = Float.floatToIntBits(5);  // left half of screen, < 1
  sampler.pixels[1] = Float.floatToIntBits(15); // right half of screen, > 10
  sampler.updatePixels();
  shader.set("sampler", sampler);
}

void draw() {
  filter(shader);
}
```

Shader code:

```
uniform sampler2D sampler;
uniform vec2 resolution;

const vec4 bitEnc = vec4(1., 255., 65025., 16581375.);
const vec4 bitDec = 1. / bitEnc;

float DecodeFloatRGBA(vec4 v) {
  return dot(v, bitDec);
}

void main() {
  vec4 test = texture2D(sampler, gl_FragCoord.xy / resolution);
  float testfloat = DecodeFloatRGBA(test);
  if (testfloat < 10.) {
    gl_FragColor = vec4(0, 0, 0, 1);
  } else {
    gl_FragColor = vec4(1, 1, 1, 1);
  }
}
```

---

Cannot link Shader program: Vertex info: 0(45) error C7623: implicit narrowing of type vec4 to float

I have no idea how to fix this, as I do not understand the error in this program. Thanks for the help!

```
#define PROCESSING_LIGHT_SHADER
#define NUM_LIGHTS 8

uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;

uniform int lightCount;
uniform vec4 lightPosition[8];
// focal factor for specular highlights (positive floats)
uniform vec4 AmbientContribution;

in vec4 vertex; // "attribute" is the same as "in"
in vec4 color;
in vec3 normal;

varying vec4 vertColor;

void main() {
  // vertex normal direction
  float light;
  for (int i = 0; i < lightCount; i++) {
    gl_Position = transform * vertex;
    vec3 vertexCamera = vec3(modelview * vertex);
    vec3 transformedNormal = normalize(normalMatrix * normal);
    light = 0.0f;
    // vertex-to-light direction
    vec3 dir = normalize(lightPosition[i].xyz - vertexCamera);
    float amountDiffuse = max(0.0, dot(dir, transformedNormal));
    // calculate the vertex position in eye coordinates
    vec3 vertexViewDir = normalize(-vertexCamera);
    // calculate the vector corresponding to the light source reflected in the vertex surface.
    // lightDir is negated as GLSL expects an incoming (rather than outgoing) vector
    vec3 lightReflection = reflect(-dir, transformedNormal);
    //color = clamp(color, 0.0f, 1.0f);
    // calculate actual light intensity
    light += AmbientContribution;
    light += 0.25 * amountDiffuse;
  }
  vertColor = vec4(light, light, light, 1) * color;
}
```

```
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;

void main() {
  gl_FragColor = vertColor;
}
```
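For what it's worth, the linker message points at the line `light += AmbientContribution;` in the vertex shader: `light` is a float, but `AmbientContribution` is declared `uniform vec4`, so the addition implicitly narrows a vec4 to a float. A sketch of the two obvious fixes (the `0.0f` suffixes may also upset stricter GLSL compilers, which expect plain `0.0` before version 1.30):

```
// light is a float but AmbientContribution is a vec4,
// so take a single component ...
light += AmbientContribution.r;

// ... or, alternatively, declare the uniform as a float:
// uniform float AmbientContribution;
```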

---

I have an issue with updating the fill color when using beginShape()/endShape().

I have a plane (an x/y grid) whose fill color changes over time (updates). It doesn't update, though. When I use random(255) it does change, but when using a variable it only takes the first (or last) value. The static color grid looks as follows:

Do I need to include a command that allows updating fill colors with changing values on the x/y grid? Code with an example of a variable that changes the grid (in reality it will be an array with data for each grid point (vertex) that refreshes each iteration):

---

```
void setup() {
  size(900, 700, P3D);
  //noStroke();
}

void draw() {
  background(0);
  translate(width/4, height/1.2);
  int fillvariable = 0;
  for (int w = 0; w < 400; w += 25) {
    for (int h = 0; h > -400; h -= 25) {
      beginShape(QUADS);
      normal(0, 0, 1);
      fill(w/2, h/2, 200);
      vertex(w, h);
      fill(fillvariable, h/2, 200);
      vertex(w+25, h);
      fill(w/2, h/2, 200);
      vertex(w+25, h-25);
      fill(w/2, h/2, 200);
      vertex(w, h-25);
      endShape();
      fillvariable += 20;
      if (fillvariable > 255) {
        fillvariable = 0;
      }
      println(fillvariable);
    }
  }
}
```

Is this way of filling a grid the smartest way (basically it's making a colored isosurface)? It might be that this approach takes a lot of memory or CPU because of how it loops. I am trying to read up on shaders and how and when to use them, but I'm only at the beginning of understanding the GPU/CPU difference and how Processing uses it.
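Rebuilding every quad inside draw() re-tessellates and re-uploads the whole grid each frame. A retained-mode alternative is to build the grid once as a PShape and then update only the vertex colors with setFill(); a sketch, with the per-vertex color formula being a stand-in for your data array:

```
PShape grid;
int cell = 25;

void setup() {
  size(900, 700, P3D);
  grid = createShape();
  grid.beginShape(QUADS);
  grid.noStroke();
  for (int w = 0; w < 400; w += cell) {
    for (int h = 0; h > -400; h -= cell) {
      grid.fill(127);
      grid.vertex(w, h);
      grid.vertex(w + cell, h);
      grid.vertex(w + cell, h - cell);
      grid.vertex(w, h - cell);
    }
  }
  grid.endShape();
}

void draw() {
  background(0);
  translate(width/4, height/1.2);
  // update vertex colors in place; no re-tessellation or geometry upload
  for (int i = 0; i < grid.getVertexCount(); i++) {
    grid.setFill(i, color((frameCount + i * 5) % 256, 100, 200));
  }
  shape(grid);
}
```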

Kf

---

Are shaders typically applied over the entire screen, or is it possible to apply a shader locally to, let's say, a sprite drawn using (a) an image or (b) some combination of vector drawings like ellipses, custom shapes, or an SVG file?

How might you prevent it from looking like just a rectangle over the image location? Is there a way to apply shaders only to that particular sprite? Would you just make sure that every non-"important" pixel in the image has an alpha value of 0? Just curious, as most of the shader examples I've seen always fill the whole sketch area.

Things I'd like to explore are glowing effects on every projectile, or trails. Thank you.
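Shaders apply to whatever geometry is drawn while they're active, so one way to scope an effect to a single sprite is to draw that sprite alone into its own PGraphics with a transparent background, filter that buffer, and composite the result; a sketch, assuming a post-effect `glow.frag` (hypothetical name) that preserves alpha:

```
PShader glow;
PGraphics sprite;

void setup() {
  size(640, 360, P2D);
  sprite = createGraphics(128, 128, P2D);
  glow = loadShader("glow.frag"); // hypothetical post-effect shader
}

void draw() {
  background(20);

  // draw the sprite alone on a transparent buffer
  sprite.beginDraw();
  sprite.clear(); // transparent background, not a filled rect
  sprite.noStroke();
  sprite.fill(255, 160, 0);
  sprite.ellipse(64, 64, 60, 60);
  sprite.endDraw();

  // the shader touches only this buffer's pixels
  sprite.filter(glow);

  image(sprite, mouseX - 64, mouseY - 64);
}
```

Because the buffer is cleared to transparent rather than filled, only the sprite (and whatever halo the shader writes with nonzero alpha) shows up when composited, avoiding the visible rectangle.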

---

- Load a video file.
- Apply a GLSL Shader to each frame.
- Save the processed video as a new file, complete with audio track?

Is Processing suitable for this? Can GLSL in Processing use GPU acceleration?

Can anybody point me to an example project where a video is loaded, GLSL-processed, and saved again?
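Processing can do the image side of this (the video library decodes frames, and PShader filters do run on the GPU), but it has no audio-preserving video encoder built in, so the usual pattern is to save processed frames and remux afterwards with an external tool such as ffmpeg. A sketch, assuming an `input.mov` and an `effect.glsl` filter shader (both hypothetical names):

```
import processing.video.*;

Movie mov;
PShader fx;

void setup() {
  size(1280, 720, P2D);
  mov = new Movie(this, "input.mov"); // hypothetical file
  fx = loadShader("effect.glsl");     // hypothetical filter shader
  mov.play();
}

void movieEvent(Movie m) {
  m.read();
}

void draw() {
  image(mov, 0, 0, width, height);
  filter(fx); // GPU shader pass over the displayed frame
  saveFrame("out/frame-####.png");
}
```

Note that draw() is not locked to the movie's frame rate, so for frame-accurate output you'd typically step through the movie manually (e.g. with jump()) instead of playing it in real time, then reattach the original audio track from the source file during the ffmpeg remux.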

Thanks!

---

Thank you.

---

I'm trying to figure out how to increase the velocity of these lines based on the mouse's y position. However, when I do this, it makes a strange "scrubbing" effect: as I move the mouse it speeds up (or slows down), and only then does it settle at the correct velocity. What I would prefer is for each of these lines to move at a speed proportional to the y position of the mouse. I've been trying to tweak things here and there but can't figure out how to make this happen, and I've been having similar problems with other sketches, so I'm wondering what it is I'm not understanding.

```
// Author @patriciogv - 2015
// Title: Ikeda Data Stream

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
uniform float u_spd;

float random(in float x) {
  return fract(sin(x)*1e4);
}

float random(in vec2 st) {
  return fract(sin(dot(st.xy, vec2(12.9898,78.233))) * 43758.5453123);
}

float pattern(vec2 st, vec2 v, float t) {
  vec2 p = floor(st+v);
  return step(t, random(100.+p*.000001)+random(p.x)*0.5);
}

void main() {
  vec2 st = gl_FragCoord.xy/u_resolution.xy;
  st.x *= u_resolution.x/u_resolution.y;

  vec2 grid = vec2(100.0,50.);
  st *= grid;

  vec2 ipos = floor(st); // integer part
  vec2 fpos = fract(st); // fractional part

  //float spdfactor = u_mouse.y/u_resolution.y;
  float spdfactor = u_spd; // does the same thing.

  //vec2 vel = vec2(u_time*2.*max(grid.x,grid.y)); // time
  vec2 vel = vec2(u_time*spdfactor*2.*max(grid.x,grid.y)); // time
  //vec2 vel = vec2(2.*max(grid.x,grid.y));
  //vel *= vec2(-1.,0.0) * random(1.0+ipos.y); // direction: assign a random value based on the integer coord
  // note the float literals; GLSL ES won't implicitly mix int and float here
  vel *= vec2(-1.,0.0) * (ipos.y+1.)/100.; // direction

  // offset shifts the rgb channels a bit
  vec2 offset = vec2(0.6,0.);
  //vec2 offset = vec2(ipos.x,0.); // interesting effect but not what is wanted

  vec3 color = vec3(0.);
  color.r = pattern(st+offset,vel,0.5+u_mouse.x/u_resolution.x);
  color.g = pattern(st,vel,0.5+u_mouse.x/u_resolution.x);
  color.b = pattern(st-offset,vel,0.5+u_mouse.x/u_resolution.x);
  //color.r = pattern(st+offset,vel,u_time+0.5+u_mouse.x/u_resolution.x);
  //color.g = pattern(st,vel,u_time+0.5+u_mouse.x/u_resolution.x);
  //color.b = pattern(st-offset,vel,u_time+0.5+u_mouse.x/u_resolution.x);

  // Margins
  color *= step(0.5,fpos.y);

  //gl_FragColor = vec4(1.0-color,1.0);
  gl_FragColor = vec4(color,1.0);
}
```

And here is the PDE code:

```
PShader shader;
float m;

void setup() {
  size(640, 360, P2D);
  noStroke();
  shader = loadShader("shader7.frag"); // one through 13
}

void draw() {
  shader.set("u_resolution", float(width), float(height));
  shader.set("u_mouse", float(mouseX), float(mouseY));
  shader.set("u_time", millis() / 1000.0);
  shader.set("u_spd", m);
  shader(shader);
  rect(0, 0, width, height);
  m = map(mouseY, 0, height, 0, 1);
}
```
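The same phase-jump seen earlier in the thread shows up here: `u_time * u_spd` in the shader rescales all elapsed time whenever `m` changes. Accumulating a phase on the CPU and sending that as `u_time` (with `spdfactor` effectively fixed at 1.0 in the shader) keeps the motion continuous; a sketch of the changed draw():

```
float phase = 0;
int prev = 0;
float m;

void draw() {
  int now = millis();
  // advance the phase by the current speed only, so changing m
  // changes the rate from here on without jumping the position
  phase += (now - prev) / 1000.0 * m;
  prev = now;

  shader.set("u_resolution", float(width), float(height));
  shader.set("u_mouse", float(mouseX), float(mouseY));
  shader.set("u_time", phase); // the shader keeps reading u_time as before
  shader.set("u_spd", 1.0);    // speed is now baked into the phase
  shader(shader);
  rect(0, 0, width, height);
  m = map(mouseY, 0, height, 0, 1);
}
```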

There does not seem to be any logical explanation for the draw order being used. Can someone please explain how to fix this?

---

I'd be happy if I could get this to work with a simple cube, or even a single planar face. Could expand from there...

Best, Drew Hamilton

---

The code is essentially made up of a nebula portion and a stars portion. I have questions on the nebula portion and so have commented out the stars portion.

I'm only looking at one of the nebulae that is created (there are actually two in the original example). I stopped the rotation and have been adjusting some of the parameters. I'm trying to find out what seems to be making the nebula zoom in and out; I don't want it to do that, but I'm having trouble pinpointing the cause of this zoom effect. Any help would be appreciated.

```
// uniforms supplied by the sketch below
uniform float time;
uniform vec2 resolution;

// Utility functions
vec3 fade(vec3 t) {
  return vec3(1.0, 1.0, 1.0); //t*t*t*(t*(t*6.0-15.0)+10.0);
}

vec2 rotate(vec2 point, float rads) {
  float cs = cos(rads);
  float sn = sin(rads);
  return point * mat2(cs, -sn, sn, cs);
}

vec4 randomizer4(const vec4 x) {
  vec4 z = mod(x, vec4(5612.0));
  z = mod(z, vec4(3.1415927 * 2.0));
  return fract(cos(z) * vec4(56812.5453));
}

// Fast computed noise
// http://www.gamedev.net/topic/502913-fast-computed-noise/
const float A = 1.0;
const float B = 57.0;
const float C = 113.0;
const vec3 ABC = vec3(A, B, C);
const vec4 A3 = vec4(0, B, C, C+B);
const vec4 A4 = vec4(A, A+B, C+A, C+A+B);

float cnoise4(const in vec3 xx) {
  vec3 x = mod(xx + 32768.0, 65536.0);
  vec3 ix = floor(x);
  vec3 fx = fract(x); // returns fractional part of x
  //vec3 wx = vec3(0,0,0);
  //vec3 wx = vec3(0.0,-1.,1.0);
  //vec3 wx = fx; wx.y=0;
  vec3 wx = fx*fx*(3.0-2.0*fx);
  //vec3 wx = fx*fx*fx;
  //wx.x=0.0;
  float nn = dot(ix, ABC);
  vec4 N1 = nn + A3;
  vec4 N2 = nn + A4;
  vec4 R1 = randomizer4(N1);
  vec4 R2 = randomizer4(N2);
  vec4 R = mix(R1, R2, wx.x);
  float re = mix(mix(R.x, R.y, wx.y), mix(R.z, R.w, wx.y), wx.z);
  return 1.0 - 2.0 * re;
}

float surface3(vec3 coord, float frequency) {
  float n = 0.0;
  n += 1.0    * abs(cnoise4(coord * frequency));
  n += 0.5    * abs(cnoise4(coord * frequency * 2.0));
  n += 0.25   * abs(cnoise4(coord * frequency * 4.0));
  n += 0.125  * abs(cnoise4(coord * frequency * 8.0));
  n += 0.0625 * abs(cnoise4(coord * frequency * 16.0));
  return n;
}

void main(void) {
  float rads = radians(time*3.15);
  vec2 position = gl_FragCoord.xy / resolution.xy;
  //position += rotate(position, rads); // rotates everything slowly.

  //float n = surface3(vec3(position*sin(time*0.1), time * 0.05)*mat3(1,0,0,0,.8,.6,0,-.6,.8), 2.0); //.9 // layer one
  //float n = surface3(vec3(position*sin(time*0.1), time * 0.05), 2.0); //.9 // layer one
  float n2 = surface3(vec3(position*cos(time*0.1), time * 0.04)*mat3(1,0,0,0,.8,.6,0,-.6,.8), 6.0); // .8 // layer two
  //vec2 test = position*cos(time*0.1);
  //float n2 = surface3(vec3(test, time * 0.04), 6.0); // .8 // layer two
  //float n2 = surface3(vec3(position*cos(time*0.1), 0), 6.); // .8 // layer two, z not affected
  //float n2 = test.x;

  //float lum = length(n);
  float lum2 = length(n2); // length of a scalar float is its absolute value
  //vec3 tc = pow(vec3(1.0-lum), vec3(sin(position.x)+cos(time)+4.0, 8.0+sin(time)+4.0, 8.0)); // layer one
  vec3 tc2 = pow(vec3(1.1-lum2), vec3(5.0, position.y+cos(time)+7.0, sin(position.x)+sin(time)+2.0)); // layer 2
  //vec3 tc2 = pow(vec3(1.1-lum2,1.1-lum2,1.1-lum2), vec3(5.0, 7., 2.0)); // layer 2
  vec3 curr_color = /*(tc*0.8) */ + (tc2*0.5);
  //curr_color.z=0.;
  curr_color = vec3(lum2, lum2, lum2); // makes it black and white
  /////////////////////////////// end nebula ///////////////////////////////

  // Let's draw some stars
  float scale = sin(0.3 * time) + 20.0;
  vec2 position2 = ((gl_FragCoord.xy / resolution) - 0.5) * scale;
  float gradient = 0.0;
  //vec3 color = vec3(100.0); // not used
  //float fade = 0.0; // this fade is not used
  float z = 0.0;
  vec2 centered_coord = position2; // - vec2(sin(time*0.1),sin(time*0.1));
  //vec2 centered_coord = position2 - vec2(sin(time*0.4),sin(time*0.4)); // moves back and forth along the x/y plane
  //centered_coord = rotate(centered_coord, rads); // rotates about z

  for (float i = 1.0; i <= 100.0; i++) {
    //vec2 star_pos = vec2(sin(i) * 250.0, sin(i*i*i) * 250.0);
    vec2 star_pos = vec2(sin(i) * 1000.0, sin(i*i*i) * 1000.0);
    //vec2 star_pos = vec2(sin(i)*250.0, i);
    //float z = mod(i*i - 50.0*time, 256.0); // the factor on time is the speed
    float z = mod(i*i - 10.0*time, 256.0); // lava lamp also happens here..
    float fade = (256.0 - z) / 256.0;
    //float fade = (1024.0 - z) / 256.0; // cool effect, almost lava lamp
    vec2 blob_coord = star_pos / z;
    gradient += ((fade / 384.0) / pow(length(centered_coord - blob_coord), 1.5)) * fade;
  }

  //curr_color = vec3(0.,0.,0.); // only does stars
  //curr_color += gradient;      // if commented out, only does the nebula
  gl_FragColor = vec4(curr_color, 1.0);
  //gl_FragColor = vec4(gradient, 1.0); // does not work: gradient is a float, not a vec3
}
```

Also providing the PDE code:

```
/**
 * Nebula.
 *
 * From CoffeeBreakStudios.com (CBS)
 * Ported from the webGL version in GLSL Sandbox:
 * http://glsl.heroku.com/e#3265.2
 */
PShader nebula;

void setup() {
  fullScreen(P2D);
  //size(500, 500, P2D);
  noStroke();
  nebula = loadShader("nebula.glsl");
  nebula.set("resolution", float(width), float(height));
}

void draw() {
  nebula.set("time", millis() / 500.0);
  shader(nebula);
  // This kind of raymarching effect is implemented entirely in the
  // fragment shader; it only needs a quad covering the entire view
  // area so every pixel is pushed through the shader.
  rect(0, 0, width, height);
  resetShader();
  text("fr: " + frameRate, width/2, 10);
}
```

---

This is on Android, by the way.

```
precision mediump float;

uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform float resX;
uniform float resY;

void main() {
  vec3 color = texture2D(inputImageTexture, textureCoordinate).rgb;
  float a = 0.0;
  if (textureCoordinate.x < 0.0) {
    a = (1.0 - (textureCoordinate.x * -1.0)) * (resX/2.0);
  } else {
    a = (textureCoordinate.x * (resX/2.0)) + (resX/2.0);
  }
  float yCoord = floor(a);
  float modX = mod(yCoord, 20.0);
  if (modX < 1.1) {
    color = vec3(0.4);
  }
  gl_FragColor = vec4(color, 1.0);
}
```

---

Best regards, Florian

---

```
PImage iceCream;
PImage waffleCone;
PShape cone;
PShape cream;
PShader texlightShader;
PShader shader2;
PShader toon;
float angle;
boolean button = false;
//boolean button2 = false;
//boolean button3 = false;

void setup() {
  size(800, 800, P3D);
  iceCream = loadImage("cream.jpg");
  waffleCone = loadImage("waffle.jpg");
  texlightShader = loadShader("texlightfrag.glsl", "texlightvert.glsl");
  shader2 = loadShader("lightfrag.glsl", "lightvert.glsl");
  toon = loadShader("frag.glsl", "vert.glsl");
  toon.set("fraction", 1.0);
  //noLoop();
  frameRate(30);
}

void draw() {
  background(0);
  lights();
  translate(width / 2, height / 1.5);

  // cameras
  rotateY(angle);
  //rotateX(map(mouseX, 0, width, 0, PI));
  //rotateY(map(mouseX, 0, width, 0, PI));
  rotateZ(map(height, 0, height, 0, -PI));

  // lights
  //pointLight(255, 255, 255, width/2, height, 600);
  directionalLight(255, 255, 255, -1, 0, 0);
  //float dirY = (mouseY / float(height) - 0.5) * 2;
  //float dirX = (mouseX / float(width) - 0.5) * 2;
  //directionalLight(204, 204, 204, -dirX, -dirY, -3);

  noStroke();
  fill(0, 0, 255);
  translate(0, -40, 0);

  // buttons to be implemented later
  //if (!button) { shader(toon); } else { resetShader(); }
  //if (!button2) { noStroke(); } else { stroke(0); }
  if (!button) {
    //shader(toon); // not working, must troubleshoot
    resetShader();
    drawCylinder_noTex(10, 75, 250, 16);
  } else {
    shader(texlightShader);
    cone = drawCylinder(10, 75, 250, 16, waffleCone);
  }
  //cone = drawCylinder(10, 75, 250, 16, waffleCone);
  //drawCylinder_noTex(10, 75, 250, 16);
  angle += 0.01;
}

PShape drawCylinder(float topRadius, float bottomRadius, float tall, int sides, PImage tex) {
  textureMode(NORMAL);
  PShape sh = createShape();
  sh.beginShape(QUAD_STRIP);
  //sh.noStroke();
  sh.texture(tex);
  // note: these two were declared inside the loop in the original,
  // which reset angle to 0 on every iteration and collapsed the cylinder
  float angle = 0;
  float angleIncrement = TWO_PI / sides;
  for (int i = 0; i < sides + 1; ++i) {
    sh.vertex(topRadius*cos(angle), 0, topRadius*sin(angle), 0);
    sh.vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle), 100);
    angle += angleIncrement;
  }
  sh.endShape();
  return sh;
  /*
  pushMatrix();
  // ice cream
  translate(0, height/3);
  sphere(75);
  popMatrix();
  */
}

void drawCylinder_noTex(float topRadius, float bottomRadius, float tall, int sides) {
  textureMode(NORMAL);
  float angle = 0;
  float angleIncrement = TWO_PI / sides;
  beginShape(QUAD_STRIP);
  for (int i = 0; i < sides + 1; ++i) {
    vertex(topRadius*cos(angle), 0, topRadius*sin(angle));
    vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle));
    angle += angleIncrement;
  }
  endShape();
}

void keyPressed() {
  if (key == 'b' || key == 'B') { button = !button; }
  //if (key == 'n' || key == 'N') { button2 = !button2; } // future implementation
  //if (key == 'm' || key == 'M') { button3 = !button3; } // future implementation
}
```

---

For an installation, I use a Kinect v2 to obtain a depth map of a space. I need to calculate some blob centroids from that depth map. I tried the BlobDetection, Blobscanner, and OpenCV libraries, and they were very slow, not usable. At the moment the best solution is to send two preprocessed depth images to Isadora via Syphon, use two Eyes++ actors in Isadora (blob detector actors), and send the result via OSC back to Processing, finally using a fragment shader to process the final image.

It's complicated, but it works at 30 fps in Processing and Isadora with very acceptable lag.

I'm searching for a way to do the blob detection in a GLSL shader via Processing.

Does anyone have an idea how to do that?

Thank you in advance. Jacques

---

I am trying to translate that to PGL, but looking at the Processing low-level GL samples, I really need some help / tips on where to start.

---

Processing code:

```
PShader shader;
float a = 0.0;

void setup() {
  size(600, 600, P3D);
  noStroke();
  shader = loadShader("OBfrag1.glsl", "OBver1.glsl");
  shader.set("BrickColor", 0.5, 0.1, 0.1);
  shader.set("MortarColor", 0.5, 0.5, 0.5);
  shader.set("BrickSize", 0.1, 0.1);
  shader.set("BrickPct", 0.9, 0.9);
}

void draw() {
  background(255);
  shader(shader);
  pointLight(255, 255, 255, width/2, height/2, 500);
  translate(width/2, height/2);
  rotateY(a);
  fill(255);
  sphere(200);
  a += 0.01;
}
```

Vertex Shader ( OBver1.glsl ):

```
#define PROCESSING_LIGHT_SHADER

uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;
uniform vec4 lightPosition;

const float SpecularContribution = 0.2;
const float DiffuseContribution = 1.0 - SpecularContribution;

attribute vec4 vertex;
attribute vec4 color;
attribute vec3 normal;

varying vec4 vertColor;
varying float LightIntensity;
varying vec2 MCposition;

void main() {
  vec3 ecPosition = vec3(modelview * vertex);
  vec3 tNorm = normalize(normalMatrix * normal);
  vec3 lightVec = normalize(lightPosition.xyz - ecPosition);
  vec3 reflectVec = reflect(-lightVec, tNorm);
  vec3 viewVec = normalize(-ecPosition);
  float diffuse = max(dot(lightVec, tNorm), 0.0);
  float spec = 0.0;
  if (diffuse > 0.0) {
    spec = max(dot(reflectVec, viewVec), 0.0);
    spec = pow(spec, 16.0);
  }
  LightIntensity = DiffuseContribution * diffuse +
                   SpecularContribution * spec;
  MCposition = tNorm.xy;
  vertColor = vec4(LightIntensity, LightIntensity, LightIntensity, 1) * color;
  gl_Position = transform * vertex;
}
```

Fragment Shader ( OBfrag1.glsl ):

```
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

uniform vec3 BrickColor, MortarColor;
uniform vec2 BrickSize;
uniform vec2 BrickPct;

varying vec4 vertColor;
varying float LightIntensity;
varying vec2 MCposition;

void main() {
  vec3 color;
  vec2 position, useBrick;
  position = MCposition / BrickSize;
  if (fract(position.y * 0.5) > 0.5) {
    position.x += 0.5;
  }
  position = fract(position);
  useBrick = step(position, BrickPct);
  color = mix(MortarColor, BrickColor, useBrick.x * useBrick.y);
  color *= LightIntensity;
  gl_FragColor = vec4(color, 1.0) * vertColor;
}
```

---

I'm trying to port these Shadertoy fragment shaders to Processing using PShader: https://www.shadertoy.com/view/XsG3z1#

I'm not sure I understood how to correctly do the multipass.

Here's my attempt so far:

BufA.frag:

```
// Reaction-diffusion pass.
//
// Here's a really short, non technical explanation:
//
// To begin, sprinkle the buffer with some initial noise on the first few frames (Sometimes, the
// first frame gets skipped, so you do a few more).
//
// During the buffer loop pass, determine the reaction diffusion value using a combination of the
// value stored in the buffer's "X" channel, and the blurred value - stored in the "Y" channel
// (You can see how that's done in the code below). Blur the value from the "X" channel (the old
// reaction diffusion value) and store it in "Y", then store the new (reaction diffusion) value
// in "X." Display either the "X" value or "Y" buffer value in the "Image" tab, add some window
// dressing, then repeat the process. Simple... Slightly confusing when I try to explain it, but
// trust me, it's simple. :)
//
// Anyway, for a more sophisticated explanation, here are a couple of references below:
//
// Reaction-Diffusion by the Gray-Scott Model - http://www.karlsims.com/rd.html
// Reaction-Diffusion Tutorial - http://www.karlsims.com/rd.html

uniform vec2 resolution;
uniform float time;
uniform int frame;
uniform sampler2D iChannel0;

// Cheap vec3 to vec3 hash. Works well enough, but there are other ways.
vec3 hash33(in vec2 p){
    float n = sin(dot(p, vec2(41, 289)));
    return fract(vec3(2097152, 262144, 32768)*n);
}

// Serves no other purpose than to save having to write this out all the time. I could write a
// "define," but I'm pretty sure this'll be inlined.
vec4 tx(in vec2 p){ return texture2D(iChannel0, p); }

// Weighted blur function. Pretty standard.
float blur(in vec2 p){
    // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
    vec3 e = vec3(1, 0, -1);
    vec2 px = 1./resolution.xy;

    // Weighted 3x3 blur, or a cheap and nasty Gaussian blur approximation.
    float res = 0.0;
    // Four corners. Those receive the least weight.
    res += tx(p + e.xx*px).x + tx(p + e.xz*px).x + tx(p + e.zx*px).x + tx(p + e.zz*px).x;
    // Four sides, which are given a little more weight.
    res += (tx(p + e.xy*px).x + tx(p + e.yx*px).x + tx(p + e.yz*px).x + tx(p + e.zy*px).x)*2.;
    // The center pixel, which we're giving the most weight to, as you'd expect.
    res += tx(p + e.yy*px).x*4.;

    // Normalizing.
    return res/16.;
}

// The reaction diffusion loop.
//
void main(){
    vec2 uv = gl_FragCoord.xy/resolution.xy; // Screen coordinates. Range: [0, 1]
    // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;
    vec2 pw = 1./resolution.xy; // Relative pixel width. Used for neighboring pixels, etc.

    // The blurred pixel. This is the result that's used in the "Image" tab. It's also reused
    // in the next frame in the reaction diffusion process (see below).
    float avgReactDiff = blur(uv);

    // The noise value. Because the result is blurred, we can get away with plain old static noise.
    // However, smooth noise, and various kinds of noise textures will work, too.
    vec3 noise = hash33(uv + vec2(53, 43)*time)*.6 + .2;

    // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
    vec3 e = vec3(1, 0, -1);

    // Gradient epsilon value. The "1.5" figure was trial and error, but was based on the 3x3 blur radius.
    vec2 pwr = pw*1.5;

    // Use the blurred pixels (stored in the Y-Channel) to obtain the gradient. I haven't put too much
    // thought into this, but the gradient of a pixel on a blurred pixel grid (average neighbors), would
    // be analogous to a Laplacian operator on a 2D discrete grid. Laplacians tend to be used to describe
    // chemical flow, so... Sounds good, anyway. :)
    //
    // Seriously, though, take a look at the formula for the reaction-diffusion process, and you'll see
    // that the following few lines are simply putting it into effect.

    // Gradient of the blurred pixels from the previous frame.
    vec2 lap = vec2(tx(uv + e.xy*pwr).y - tx(uv - e.xy*pwr).y,
                    tx(uv + e.yx*pwr).y - tx(uv - e.yx*pwr).y);

    // Add some diffusive expansion, scaled down to the order of a pixel width.
    uv = uv + lap*pw*3.0;

    // Stochastic decay. Ie: A differential equation, influenced by noise.
    // You need the decay, otherwise things would keep increasing, which in this case means a white screen.
    float newReactDiff = tx(uv).x + (noise.z - 0.5)*0.0025 - 0.002;

    // Reaction-diffusion.
    newReactDiff += dot(tx(uv + (noise.xy - 0.5)*pw).xy, vec2(1, -1))*0.145;

    // Storing the reaction diffusion value in the X channel, and avgReactDiff (the blurred pixel value)
    // in the Y channel. However, for the first few frames, we add some noise. Normally, one frame would
    // be enough, but for some weird reason, it doesn't always get stored on the very first frame.
    if(frame > 9) gl_FragColor.xy = clamp(vec2(newReactDiff, avgReactDiff/.98), 0., 1.);
    else gl_FragColor = vec4(noise, 1.);
}
```

shader.frag:

```
// Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1#

/*
    Reaction Diffusion - 2 Pass
    ---------------------------

    Simple 2 pass reaction-diffusion, based off of "Flexi's" reaction-diffusion examples.
    It takes about ten seconds to reach an equilibrium of sorts, and in the order of a
    minute longer for the colors to really settle in.

    I'm really thankful for the examples Flexi has been putting up lately. From what I
    understand, he's used to showing his work to a lot more people on much bigger screens,
    so his code's pretty reliable. Reaction-diffusion examples are temperamental. Change
    one figure by a minute fraction, and your image can disappear. That's why it was really
    nice to have a working example to refer to.

    Anyway, I've done things a little differently, but in essence, this is just a rehash
    of Flexi's "Expansive Reaction-Diffusion" example. I've stripped this one down to the
    basics, so hopefully, it'll be a little easier to take in than the multitab version.

    There are no outside textures, and everything is stored in the A-Buffer. I was
    originally going to simplify things even more and do a plain old, greyscale version,
    but figured I'd better at least try to pretty it up, so I added color and some very
    basic highlighting. I'll put up a more sophisticated version at a later date.

    By the way, for anyone who doesn't want to be weighed down with extras, I've provided
    a simpler "Image" tab version below.

    One more thing. Even though I consider it conceptually impossible, it wouldn't
    surprise me at all if someone, like Fabrice, produces a single pass, two tweet
    version. :)

    Based on:

    // Gorgeous, more sophisticated example:
    Expansive Reaction-Diffusion - Flexi
    https://www.shadertoy.com/view/4dcGW2

    // A different kind of diffusion example. Really cool.
    Gray-Scott diffusion - knighty
    https://www.shadertoy.com/view/MdVGRh
*/

uniform sampler2D iChannel0;
uniform vec2 resolution;
uniform float time;

/*
// Ultra simple version, minus the window dressing.
void main(){
    gl_FragColor = 1. - texture2D(iChannel0, gl_FragCoord.xy/resolution.xy).wyyw + (time * 0.);
}
//*/

//*
void main(){
    // The screen coordinates.
    vec2 uv = gl_FragCoord.xy/resolution.xy;
    // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;

    // Read in the blurred pixel value. There's no rule that says you can't read in the
    // value in the "X" channel, but blurred stuff is easier to bump, that's all.
    float c = 1. - texture2D(iChannel0, uv).y;

    // Reading in the same at a slightly offsetted position. The difference between
    // "c2" and "c" is used to provide the highlighting.
    float c2 = 1. - texture2D(iChannel0, uv + .5/resolution.xy).y;

    // Color the pixel by mixing two colors in a sinusoidal kind of pattern.
    //
    float pattern = -cos(uv.x*0.75*3.14159-0.9)*cos(uv.y*1.5*3.14159-0.75)*0.5 + 0.5;
    //
    // Blue and gold, for an abstract sky over a... wheat field look. Very artsy. :)
    vec3 col = vec3(c*1.5, pow(c, 2.25), pow(c, 6.));
    col = mix(col, col.zyx, clamp(pattern - .2, 0., 1.));

    // Extra color variations.
    //vec3 col = mix(vec3(c*1.2, pow(c, 8.), pow(c, 2.)), vec3(c*1.3, pow(c, 2.), pow(c, 10.)), pattern );
    //vec3 col = mix(vec3(c*1.3, c*c, pow(c, 10.)), vec3(c*c*c, c*sqrt(c), c), pattern );

    // Adding the highlighting. Not as nice as bump mapping, but still pretty effective.
    col += vec3(.6, .85, 1.)*max(c2*c2 - c*c, 0.)*12.;

    // Apply a vignette and increase the brightness for that fake spotlight effect.
    col *= pow(16.0*uv.x*uv.y*(1.0 - uv.x)*(1.0 - uv.y), .125)*1.15;

    // Fade in for the first few seconds.
    col *= smoothstep(0., 1., time/2.);

    // Done.
    gl_FragColor = vec4(min(col, 1.), 1.);
}
//*/
```

and the sketch:

```
// Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1
PShader bufA, shader;

void setup(){
  size(640, 480, P2D);
  noStroke();
  bufA = loadShader("BufA.frag");
  bufA.set("resolution", (float)width, (float)height);
  bufA.set("time", 0.0);
  shader = loadShader("shader.frag");
  shader.set("resolution", (float)width, (float)height);
}

void draw(){
  bufA.set("iChannel0", get());
  bufA.set("time", frameCount * .1);
  bufA.set("frame", frameCount);
  shader(bufA);
  background(0);
  rect(0, 0, width, height);
  // 2nd pass
  //resetShader();
  shader.set("iChannel0", get());
  shader.set("time", frameCount * .1);
  shader(shader);
  rect(0, 0, width, height);
}
```

The shaders compile and run, but the output differs from what I see on Shadertoy: the Processing version stabilizes quite quickly, and it doesn't look like the feedback is working.
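For reference, this is roughly the ping-pong structure I assume is needed: two offscreen PGraphics buffers swapped each frame, so the reaction-diffusion pass always samples the *previous* frame's result rather than whatever `get()` returns mid-frame. This is an untested sketch of my own, and the buffer handling may well be wrong — that's part of my question:

```
// Untested ping-pong sketch (my guess, not verified against these shaders).
// Pass 1 writes into "ping" while sampling last frame's "pong"; pass 2
// draws "ping" to the screen; then the two buffers swap roles.
PShader bufA, imageShader;
PGraphics ping, pong;

void setup(){
  size(640, 480, P2D);
  noStroke();
  ping = createGraphics(width, height, P2D);
  pong = createGraphics(width, height, P2D);
  bufA = loadShader("BufA.frag");
  bufA.set("resolution", (float)width, (float)height);
  imageShader = loadShader("shader.frag");
  imageShader.set("resolution", (float)width, (float)height);
}

void draw(){
  // Pass 1: reaction-diffusion into the offscreen buffer,
  // reading the previous frame's buffer as iChannel0.
  bufA.set("iChannel0", pong);
  bufA.set("time", frameCount * .1);
  bufA.set("frame", frameCount);
  ping.beginDraw();
  ping.noStroke();
  ping.shader(bufA);
  ping.rect(0, 0, width, height);
  ping.endDraw();

  // Pass 2: display the fresh buffer through the coloring shader.
  imageShader.set("iChannel0", ping);
  imageShader.set("time", frameCount * .1);
  shader(imageShader);
  rect(0, 0, width, height);

  // Swap buffers so next frame reads what we just wrote.
  PGraphics tmp = ping;
  ping = pong;
  pong = tmp;
}
```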
