I have an issue with updating the fill color when using a beginShape()/endShape() block.

I have a plane (an x/y grid) whose fill color is supposed to change over time (update each frame), but it doesn't update. When I use random(255) it does change, but when I use a variable it only ever takes the first (or last) value. The static color grid looks like this:

Do I need to include a command that allows updating fill colors with changing values on the x/y grid? Here is code with an example variable that changes the grid (in the real sketch it will be an array with data for each grid point (vertex) that refreshes each iteration):

```
void setup() {
  size(900, 700, P3D);
  //noStroke();
}

void draw() {
  background(0);
  translate(width/4, height/1.2);
  int fillvariable = 0;
  for (int w = 0; w < 400; w += 25) {
    for (int h = 0; h > -400; h -= 25) {
      beginShape(QUADS);
      normal(0, 0, 1);
      fill(w/2, h/2, 200);
      vertex(w, h);
      fill(fillvariable, h/2, 200);
      vertex(w+25, h);
      fill(w/2, h/2, 200);
      vertex(w+25, h-25);
      fill(w/2, h/2, 200);
      vertex(w, h-25);
      endShape();
      fillvariable += 20;
      if (fillvariable > 255) {
        fillvariable = 0;
      }
      println(fillvariable);
    }
  }
}
```

Is this way of filling a grid the smartest approach (basically it's building a colored isosurface)? It might be that this looping takes a lot of memory or CPU. I am trying to read up on shaders and how and when to use them, but I'm still at the beginning of understanding the GPU vs. CPU difference and how Processing uses them.
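A note on the behavior described above: `fillvariable` is declared inside `draw()`, so it restarts from 0 every frame and the grid repaints the same pattern each time. Keeping the per-vertex values in an array that persists between frames (the array mentioned in the question) and updating it each iteration gives colors that actually evolve. The wrap-around arithmetic can be isolated and checked in plain Java (the names here are illustrative, not from the original sketch):

```java
// Sketch of the intended data flow: values live in an array that is
// refreshed between frames, instead of a local variable reset to 0.
class GridFill {
    // One value per grid point; in the sketch this would be the data array.
    static int[] values = new int[16 * 16];

    // Advance every grid point by `step`, wrapping into the 0..255 range
    // (the sketch resets to 0 when the value exceeds 255; modulo is the
    // idiomatic equivalent).
    static void update(int step) {
        for (int i = 0; i < values.length; i++) {
            values[i] = (values[i] + step) % 256;
        }
    }

    public static void main(String[] args) {
        update(20);   // frame 1
        update(20);   // frame 2
        System.out.println(values[0]);   // 40: the color now evolves per frame
    }
}
```

Because the array outlives `draw()`, each frame's `fill()` calls see new values instead of the same deterministic sequence.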

Cannot link Shader program: Vertex info: 0(45) error C7623: implicit narrowing of type vec4 to float

I have no idea how I can fix this as I do not understand the error in this program. Thanks for help!

```
#define PROCESSING_LIGHT_SHADER
#define NUM_LIGHTS 8

uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;

uniform int lightCount;
uniform vec4 lightPosition[8];

// focal factor for specular highlights (positive floats)
uniform vec4 AmbientContribution;

in vec4 vertex; // attribute, same as "in"
in vec4 color;
in vec3 normal;

varying vec4 vertColor;

void main() {
  // Vertex normal direction
  float light;
  for (int i = 0; i < lightCount; i++) {
    gl_Position = transform * vertex;
    vec3 vertexCamera = vec3(modelview * vertex);
    vec3 transformedNormal = normalize(normalMatrix * normal);
    light = 0.0f;
    vec3 dir = normalize(lightPosition[i].xyz - vertexCamera); // Vertex to light direction
    float amountDiffuse = max(0.0, dot(dir, transformedNormal));
    // calculate the vertex position in eye coordinates
    vec3 vertexViewDir = normalize(-vertexCamera);
    // calculate the vector corresponding to the light source reflected in the vertex surface.
    // lightDir is negated as GLSL expects an incoming (rather than outgoing) vector
    vec3 lightReflection = reflect(-dir, transformedNormal);
    //color = clamp(color, 0.0f, 1.0f);
    // calculate actual light intensity
    light += AmbientContribution;
    light += 0.25 * amountDiffuse;
  }
  vertColor = vec4(light, light, light, 1) * color;
}
```

```
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif

varying vec4 vertColor;

void main() {
  gl_FragColor = vertColor;
}
```
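The compiler is pointing at the accumulation inside the loop: `light` is declared as a `float`, but `AmbientContribution` is a `vec4`, so `light += AmbientContribution;` would require implicitly narrowing a `vec4` to a `float`, which GLSL refuses. A possible fix (a sketch, assuming the ambient term is meant as a single scalar intensity) is either of:

```
// Option A (assumption: ambient is one intensity value):
// declare the uniform as a float instead of a vec4 ...
uniform float AmbientContribution;
// ... and `light += AmbientContribution;` then compiles unchanged.

// Option B: keep the vec4 uniform and reduce it to a scalar explicitly,
// e.g. by averaging its RGB components before accumulating:
light += dot(AmbientContribution.rgb, vec3(1.0 / 3.0));
```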

---

Kf

---

Are shaders typically applied over the entire screen, or is it possible to apply a shader locally to, let's say, a sprite which is drawn using (a) an image or (b) some combination of vector drawings like ellipses, custom shapes, or an SVG file?

How might you prevent it from looking like just a rectangle over the image location? Is there a way to apply shaders only to that particular sprite? Would you just make sure that every non-'important' pixel in the image has an alpha value of 0? Just curious, as most of the shader examples I've seen always fill up the sketch area.

Things that I'd like to explore would be making glowing effects and such on every projectile, or trails. Thank you.
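For what it's worth, Processing can restrict a shader to part of the frame: `shader()` only affects geometry drawn until `resetShader()`, and `filter(PShader)` can be applied to an offscreen `PGraphics` instead of the whole sketch. A minimal sketch of the offscreen approach (assumptions: a filter shader named `glow.glsl` and an image `sprite.png` in the data folder, both hypothetical):

```
PShader glow;
PGraphics layer;
PImage sprite;

void setup() {
  size(640, 360, P2D);
  glow = loadShader("glow.glsl");          // hypothetical filter shader
  sprite = loadImage("sprite.png");        // hypothetical sprite image
  layer = createGraphics(128, 128, P2D);   // offscreen buffer, sprite-sized
}

void draw() {
  background(0);
  // Draw the sprite alone into the offscreen layer, then filter only that
  // layer. Pixels outside the sprite keep alpha 0 after clear(), so
  // compositing with image() does not show a rectangle.
  layer.beginDraw();
  layer.clear();
  layer.image(sprite, 0, 0);
  layer.filter(glow);
  layer.endDraw();
  image(layer, mouseX, mouseY);
}
```

For glow that extends beyond the sprite's silhouette, the offscreen buffer just needs some transparent padding around the sprite.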

---

- Load a video file.
- Apply a GLSL shader to each frame.
- Save the processed video as a new file, complete with audio track?

Is Processing suitable for this? Can GLSL in Processing use GPU acceleration?

Can anybody point me to an example project where video is loaded, GLSL-processed, and saved again?

Thanks!

---

Thank you.

---

I'm trying to figure out how to increase the velocity of these lines based on the mouse Y position. However, when I do this, it seems to create a strange "scrubbing" type of effect: as I move the mouse it speeds up (or slows down), and only then does it settle at the correct velocity. What I would prefer is for each of these lines to move at a speed proportional to the Y position of the mouse. I've been trying to tweak things here and there, but can't seem to figure out how to make this happen. I've been having similar problems with other sketches, so I'm wondering what it is I'm not understanding.

```
// Author @patriciogv - 2015
// Title: Ikeda Data Stream

#ifdef GL_ES
precision mediump float;
#endif

uniform vec2 u_resolution;
uniform vec2 u_mouse;
uniform float u_time;
uniform float u_spd;

float random(in float x) {
  return fract(sin(x)*1e4);
}

float random(in vec2 st) {
  return fract(sin(dot(st.xy, vec2(12.9898,78.233))) * 43758.5453123);
}

float pattern(vec2 st, vec2 v, float t) {
  vec2 p = floor(st+v);
  return step(t, random(100.+p*.000001) + random(p.x)*0.5);
}

void main() {
  vec2 st = gl_FragCoord.xy/u_resolution.xy;
  st.x *= u_resolution.x/u_resolution.y;

  vec2 grid = vec2(100.0, 50.);
  st *= grid;

  vec2 ipos = floor(st);  // integer
  vec2 fpos = fract(st);  // fraction

  //float spdfactor = u_mouse.y/u_resolution.y;
  float spdfactor = u_spd; // does the same thing.

  //vec2 vel = vec2(u_time*2.*max(grid.x,grid.y)); // time
  vec2 vel = vec2(u_time*spdfactor*2.*max(grid.x,grid.y)); // time
  //vec2 vel = vec2(2.*max(grid.x,grid.y));
  //vel *= vec2(-1.,0.0) * random(1.0+ipos.y); // direction // Assign a random value based on the integer coord
  //vel *= vec2(-1.,0.0) * random(1.0+ipos.y); // direction
  vel *= vec2(-1.,0.0) * (ipos.y+1)/100; // direction // Assign a random value based on the integer coord

  // offset means shift the rgb channels a bit.
  vec2 offset = vec2(0.6,0.);
  //vec2 offset = vec2(ipos.x,0.); // interesting effect but not what is wanted.

  vec3 color = vec3(0.);
  color.r = pattern(st+offset, vel, 0.5+u_mouse.x/u_resolution.x);
  color.g = pattern(st,        vel, 0.5+u_mouse.x/u_resolution.x);
  color.b = pattern(st-offset, vel, 0.5+u_mouse.x/u_resolution.x);
  //color.r = pattern(st+offset, vel, u_time+0.5+u_mouse.x/u_resolution.x);
  //color.g = pattern(st,        vel, u_time+0.5+u_mouse.x/u_resolution.x);
  //color.b = pattern(st-offset, vel, u_time+0.5+u_mouse.x/u_resolution.x);

  // Margins
  color *= step(0.5, fpos.y);

  //gl_FragColor = vec4(1.0-color, 1.0);
  gl_FragColor = vec4(color, 1.0);
}
```

and here is the .pde code:

```
PShader shader;
float m;

void setup() {
  size(640, 360, P2D);
  noStroke();
  shader = loadShader("shader7.frag"); // one thru 13
}

void draw() {
  shader.set("u_resolution", float(width), float(height));
  shader.set("u_mouse", float(mouseX), float(mouseY));
  shader.set("u_time", millis() / 1000.0);
  shader.set("u_spd", m);
  shader(shader);
  rect(0, 0, width, height);
  m = map(mouseY, 0, height, 0, 1);
}
```
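A likely cause of the "scrubbing" in the sketch above: the shader computes position from `u_time * spdfactor`. `u_time` is absolute elapsed time, so whenever `spdfactor` changes, the whole product jumps to a different value, which reads as the pattern scrubbing forward or backward before settling. A common fix (a sketch; `phase` is a name introduced here, not from the original) is to integrate the speed on the CPU side, accumulating `phase += speed * dt` each frame and passing that phase to the shader instead of raw time. The difference is easy to verify in plain Java:

```java
// Demonstrates why scaling absolute time "scrubs" while integrating does not.
class PhaseDemo {
    static double dt = 1.0 / 60.0;

    // What the shader currently sees: u_time * spdfactor.
    static double scaled(double time, double speed) { return time * speed; }

    // Integrated phase over a list of per-frame speeds: phase += speed * dt.
    static double integrated(double[] speeds) {
        double phase = 0.0;
        for (double s : speeds) phase += s * dt;
        return phase;
    }

    public static void main(String[] args) {
        // 60 frames at speed 1, then the speed doubles on frame 61.
        // Scaled time jumps by a whole second's worth of motion in one frame:
        double jump = scaled(61 * dt, 2.0) - scaled(60 * dt, 1.0);
        System.out.println(jump); // about 1.033: a large one-frame jump

        // Integrated phase only advances by one doubled step:
        double[] speeds = new double[61];
        java.util.Arrays.fill(speeds, 1.0);
        speeds[60] = 2.0;
        double step = integrated(speeds) - 1.0;
        System.out.println(step); // about 0.033: smooth
    }
}
```

In the sketch this would mean keeping `m` as a per-frame speed, maintaining a running `phase` variable in `draw()`, and setting `u_time` (or a renamed uniform) to that phase.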

---

There does not seem to be any logical explanation for the draw order being used. Can someone please explain how to fix this?

---

I'd be happy if I could get this to work with a simple cube, or even a single planar face. I could expand from there...

Best, Drew Hamilton

---

The code is essentially made up of a nebula portion and a stars portion. I have questions on the nebula portion, so I have commented out the stars portion.

I'm only looking at one of the nebulae that are created (there are actually two in the original example). I stopped the rotation and have been adjusting some of the parameters. I'm trying to find out what seems to be making the nebula zoom in and out; I don't want it to do that. However, I am having trouble pinpointing what is causing this zoom effect. Any help would be appreciated.

```
//Utility functions
vec3 fade(vec3 t) {
return vec3(1.0,1.0,1.0);//t*t*t*(t*(t*6.0-15.0)+10.0);
}
vec2 rotate(vec2 point, float rads) {
float cs = cos(rads);
float sn = sin(rads);
return point * mat2(cs, -sn, sn, cs);
}
vec4 randomizer4(const vec4 x)
{
vec4 z = mod(x, vec4(5612.0));
z = mod(z, vec4(3.1415927 * 2.0));
return(fract(cos(z) * vec4(56812.5453)));
}
// Fast computed noise
// http://www.gamedev.net/topic/502913-fast-computed-noise/
const float A = 1.0;
const float B = 57.0;
const float C = 113.0;
const vec3 ABC = vec3(A, B, C);
const vec4 A3 = vec4(0, B, C, C+B);
const vec4 A4 = vec4(A, A+B, C+A, C+A+B);
float cnoise4(const in vec3 xx)
{
vec3 x = mod(xx + 32768.0, 65536.0);
vec3 ix = floor(x);
vec3 fx = fract(x); // returns fractional part of x.
//vec3 wx = vec3(0,0,0);
//vec3 wx = vec3(0.0,-1.,1.0);
//vec3 wx = fx; wx.y=0;
vec3 wx = fx*fx*(3.0-2.0*fx);
//vec3 wx = fx*fx*fx;
//wx.x=0.0;
float nn = dot(ix, ABC);
vec4 N1 = nn + A3;
vec4 N2 = nn + A4;
vec4 R1 = randomizer4(N1);
vec4 R2 = randomizer4(N2);
vec4 R = mix(R1, R2, wx.x);
float re = mix(mix(R.x, R.y, wx.y), mix(R.z, R.w, wx.y), wx.z);
return 1.0 - 2.0 * re;
}
float surface3 ( vec3 coord, float frequency ) {
float n = 0.0;
n += 1.0 * abs( cnoise4( coord * frequency ) );
n += 0.5 * abs( cnoise4( coord * frequency * 2.0 ) );
n += 0.25 * abs( cnoise4( coord * frequency * 4.0 ) );
n += 0.125 * abs( cnoise4( coord * frequency * 8.0 ) );
n += 0.0625 * abs( cnoise4( coord * frequency * 16.0 ) );
return n;
}
void main( void ) {
float rads = radians(time*3.15);
vec2 position = gl_FragCoord.xy / resolution.xy;
//position += rotate(position, rads); // rotates everything slowly.
//float n = surface3(vec3(position*sin(time*0.1), time * 0.05)*mat3(1,0,0,0,.8,.6,0,-.6,.8),2.0); //.9 //layer one
//float n = surface3(vec3(position*sin(time*0.1), time * 0.05) ,2.0); //.9 //layer one
float n2 = surface3(vec3(position*cos(time*0.1), time * 0.04)*mat3(1,0,0,0,.8,.6,0,-.6,.8),6.0); // .8 // layer two
//vec2 test = position*cos(time*0.1);
//float n2 = surface3(vec3(test, time * 0.04) ,6.0); // .8 // layer two
//float n2 = surface3(vec3(position*cos(time*0.1), 0) ,6); // .8 // layer two . z not affected
//float n2 = test.x;
//float lum = length(n);
float lum2 = length(n2); //length of a scalar float is abs value.
//vec3 tc = pow(vec3(1.0-lum),vec3(sin(position.x)+cos(time)+4.0,8.0+sin(time)+4.0,8.0)); // layer one
vec3 tc2 = pow(vec3(1.1-lum2),vec3(5.0,position.y+cos(time)+7.0,sin(position.x)+sin(time)+2.0)); // layer 2
//vec3 tc2 = pow(vec3(1.1-lum2,1.1-lum2,1.1-lum2),vec3(5.0,position.y+cos(time)+7.0,sin(position.x)+sin(time)+2.0)); // layer 2
//vec3 tc2 = pow(vec3(1.1-lum2,1.1-lum2,1.1-lum2),vec3(5.0,7, 2.0)); // layer 2
vec3 curr_color = /*(tc*0.8) */+ (tc2*0.5);
//curr_color.z=0;
curr_color =vec3(lum2, lum2, lum2); //makes black n white
///////////////////////////////////////////////end nebula...//////////////////////////////
//Let's draw some stars
float scale = sin(0.3 * time) + 20.0;
vec2 position2 = (((gl_FragCoord.xy / resolution) - 0.5) * scale);
float gradient = 0.0;
//vec3 color = vec3(100.0); // not used
//float fade = 0.0; // this fade is not used.
float z = 0.0;
vec2 centered_coord = position2;// - vec2(sin(time*0.1),sin(time*0.1)); //what does this do?
//vec2 centered_coord = position2 - vec2(sin(time*0.4),sin(time*0.4)); // it makes you move along x y plane back n forth.
//centered_coord = rotate(centered_coord, rads); // rotates about z.
for (float i=1.0; i<=100.0; i++)
{
//vec2 star_pos = vec2(sin(i) * 250.0, sin(i*i*i) * 250.0);
vec2 star_pos = vec2(sin(i) * 1000.0, sin(i*i*i) * 1000.0);
//vec2 star_pos = vec2(sin(i) * 250.0, sin(i*i*i) * 1000.0);
//vec2 star_pos = vec2( sin(i)*250.0, i );
//float z = mod(i*i - 50.0*time, 256.0); // x *time is speed.
//float z = mod(i - 50.0*time, 256.0); // i is ?? but if not there..it will .
//float z = mod(i*i*i*i*i - 10.0*time, 256.0); // i is ?? but if not there..it will .
//float z = mod(i- 10.0*time, 1024.0); // i is ?? but if not there..it will .
float z = mod(i*i- 10.0*time, 256.0); // lava lamp also happens here..
float fade = (256.0 - z) /256.0;
//float fade = (1024.0 - z) /256.0; // cool effect almost lava lamp
vec2 blob_coord = star_pos / z;
gradient += ((fade / 384.0) / pow(length(centered_coord - blob_coord), 1.5)) * ( fade);
}
//curr_color = vec3(0,0,0);// only does stars
//curr_color += gradient; // if comment out, only does nebula
gl_FragColor = vec4(curr_color, 1.0); // might be alpha
//gl_FragColor = vec4(gradient, 1.0); // does not work
}
```

Here is the accompanying .pde code:

```
/**
 * Nebula.
 *
 * From CoffeeBreakStudios.com (CBS)
 * Ported from the webGL version in GLSL Sandbox:
 * http://glsl.heroku.com/e#3265.2
 */
PShader nebula;

void setup() {
  fullScreen(P2D);
  //size(500, 500, P2D);
  noStroke();
  nebula = loadShader("nebula.glsl");
  nebula.set("resolution", float(width), float(height));
}

void draw() {
  nebula.set("time", millis() / 500.0);
  shader(nebula);
  // This kind of raymarching effect is entirely implemented in the
  // fragment shader; it only needs a quad covering the entire view
  // area so every pixel is pushed through the shader.
  rect(0, 0, width, height);
  resetShader();
  text("fr: " + frameRate, width/2, 10);
}
```

---

This is on Android, by the way.

```
precision mediump float;

uniform sampler2D inputImageTexture;
varying vec2 textureCoordinate;
uniform float resX;
uniform float resY;

void main() {
  vec3 color = texture2D(inputImageTexture, textureCoordinate).rgb;

  float a = 0.0;
  if (textureCoordinate.x < 0.0) {
    a = (1.0 - (textureCoordinate.x * -1.0)) * (resX/2.0);
  } else {
    a = (textureCoordinate.x * (resX/2.0)) + (resX/2.0);
  }

  float yCoord = floor(a);
  float modX = mod(yCoord, 20.0);
  if (modX < 1.1) {
    color = vec3(0.4);
  }

  gl_FragColor = vec4(color, 1.0);
}
```

---

Best regards, Florian

---

It only happens with P3D.

After background() is called, the shape I am working with is always drawn in the back, not in the front as it was drawn at first. The weird thing is that it is only the body of the shape, NOT THE STROKE.

This is my code.

```
boolean isbg = false;

void setup() {
  size(1200, 600, P3D);
  background(0);
}

void draw() {
  if (isbg) {
    background(0);
  }
  ellipse(mouseX, mouseY, 100, 100);
}

void keyPressed() {
  if (key == 'd') {
    isbg = !isbg;
  }
}
```

This happens EVERY TIME after the background() function runs.

This just destroyed an entire piece of software I was working on.

It happens whether it is an ellipse, a rect, or a custom shape.

I'VE TRIED EVERYTHING.

It is always AFTER the background() function runs.

---

```
PImage iceCream;
PImage waffleCone;
PShape cone;
PShape cream;
PShader texlightShader;
PShader shader2;
PShader toon;
float angle;
boolean button = false;
//boolean button2 = false;
//boolean button3 = false;
void setup() {
size(800, 800, P3D);
iceCream = loadImage("cream.jpg");
waffleCone = loadImage("waffle.jpg");
texlightShader = loadShader("texlightfrag.glsl", "texlightvert.glsl");
shader2 = loadShader("lightfrag.glsl", "lightvert.glsl");
toon = loadShader("frag.glsl", "vert.glsl");
toon.set("fraction", 1.0);
//noLoop();
frameRate(30);
}
void draw() {
background(0);
lights();
translate(width / 2, height / 1.5);
////cameras
rotateY(angle);
//rotateX(map(mouseX, 0, width, 0, PI));
//rotateY(map(mouseX, 0, width, 0, PI));
rotateZ(map(height, 0, height, 0, -PI));
////lights
//pointLight(255, 255, 255, width/2, height, 600);
directionalLight(255,255,255, -1, 0, 0);
//float dirY = (mouseY / float(height) - 0.5) * 2;
//float dirX = (mouseX / float(width) - 0.5) * 2;
//directionalLight(204, 204, 204, -dirX, -dirY, -3);
noStroke();
fill(0, 0, 255);
translate(0, -40, 0);
///buttons to be implemented later
//if(!button) {shader(toon);} else {resetShader();}
//if (!button2){noStroke();}else { stroke(0);
if(!button) {
//shader(toon); not working must troubleshoot
resetShader();
drawCylinder_noTex(10, 75, 250, 16);}
else {
shader(texlightShader);
cone = drawCylinder(10, 75, 250, 16, waffleCone); }
//cone = drawCylinder(10, 75, 250, 16, waffleCone);
//drawCylinder_noTex(10, 75, 250, 16);
angle += 0.01;
}
PShape drawCylinder(float topRadius, float bottomRadius, float tall, int sides, PImage tex) {
textureMode(NORMAL);
PShape sh = createShape();
sh.beginShape(QUAD_STRIP);
//sh.noStroke();
sh.texture(tex);
for (int i = 0; i < sides + 1; ++i) {
float angle = 0;
float angleIncrement = TWO_PI / sides;
sh.vertex(topRadius*cos(angle), 0, topRadius*sin(angle), 0);
sh.vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle), 100);
angle += angleIncrement;
}
sh.endShape();
return sh;
/*
pushMatrix();
//ice cream
translate(0,height/3);
sphere(75);
popMatrix();
*/
}
void drawCylinder_noTex(float topRadius, float bottomRadius, float tall, int sides) {
textureMode(NORMAL);
createShape();
float angle = 0;
float angleIncrement = TWO_PI / sides;
beginShape(QUAD_STRIP);
//sh.texture(tex);
for (int i = 0; i < sides + 1; ++i) {
vertex(topRadius*cos(angle), 0, topRadius*sin(angle));
vertex(bottomRadius*cos(angle), tall, bottomRadius*sin(angle));
angle += angleIncrement;
}
endShape();
/*
pushMatrix();
//ice cream
translate(0,height/3);
sphere(75);
popMatrix();
*/
}
void keyPressed()
{
if (key == 'b' || key == 'B') {button = !button;}
//if (key == 'n' || key == 'N') {button2 = !button2;} //future implementation
//if (key == 'm' || key == 'M') {button3 = !button3;} //future implementation
}
```

---

For an installation, I use a Kinect v2 to obtain a depth map of a space. I need to calculate some blob centroids from that depth map. I tried the BlobDetection, Blobscanner, and OpenCV libraries, and they were very slow, not usable. At the moment the best solution is to send two preprocessed depth images to Isadora via Syphon, use two Eyes++ actors in Isadora (blob detector actors), send the result via OSC back to Processing, and finally use a fragment shader to process the final image.

It's complicated, but it works at 30 fps in both Processing and Isadora with very acceptable lag.

I am searching for a way to do the blob detection in a GLSL shader via Processing.

Does anyone have an idea how to do that?

Thank you in advance. Jacques

---

I am trying to translate that to PGL, but looking at the Processing low-level GL samples, I really need some help and tips on where to start.

---

Processing code:

```
PShader shader;
float a = 0.0;
void setup() {
size(600, 600, P3D);
noStroke();
shader = loadShader("OBfrag1.glsl","OBver1.glsl");
shader.set("BrickColor", 0.5, 0.1, 0.1);
shader.set("MortarColor", 0.5, 0.5, 0.5);
shader.set("BrickSize", 0.1, 0.1);
shader.set("BrickPct", 0.9, 0.9);
}
void draw() {
background(255);
shader(shader);
pointLight(255, 255, 255, width/2, height/2, 500);
translate(width/2, height/2);
rotateY(a);
fill(255);
sphere(200);
a += 0.01;
}
```

Vertex Shader ( OBver1.glsl ):

```
#define PROCESSING_LIGHT_SHADER
uniform mat4 modelview;
uniform mat4 transform;
uniform mat3 normalMatrix;
uniform vec4 lightPosition;
const float SpecularContribution = 0.2;
const float DiffuseContribution = 1.0 - SpecularContribution;
attribute vec4 vertex;
attribute vec4 color;
attribute vec3 normal;
varying vec4 vertColor;
varying float LightIntensity;
varying vec2 MCposition;
void main() {
vec3 ecPosition = vec3(modelview * vertex);
vec3 tNorm = normalize(normalMatrix * normal);
vec3 lightVec = normalize(lightPosition.xyz - ecPosition);
vec3 reflectVec = reflect(-lightVec, tNorm);
vec3 viewVec = normalize(-ecPosition);
float diffuse = max(dot(lightVec, tNorm), 0.0);
float spec = 0.0;
if (diffuse > 0.0) {
spec = max(dot(reflectVec, viewVec), 0.0);
spec = pow(spec, 16.0);
}
LightIntensity = DiffuseContribution * diffuse +
SpecularContribution * spec;
MCposition = tNorm.xy;
vertColor = vec4(LightIntensity, LightIntensity, LightIntensity, 1) * color;
gl_Position = transform * vertex;
}
```

Fragment Shader ( OBfrag1.glsl ):

```
#ifdef GL_ES
precision mediump float;
precision mediump int;
#endif
uniform vec3 BrickColor, MortarColor;
uniform vec2 BrickSize;
uniform vec2 BrickPct;
varying vec4 vertColor;
varying float LightIntensity;
varying vec2 MCposition;
void main() {
vec3 color;
vec2 position, useBrick;
position = MCposition / BrickSize;
if (fract(position.y * 0.5) > 0.5) {
position.x += 0.5;
}
position = fract(position);
useBrick = step(position, BrickPct);
color = mix(MortarColor, BrickColor, useBrick.x * useBrick.y);
color *= LightIntensity;
gl_FragColor = vec4(color, 1.0) * vertColor;
}
```

---

I'm trying to port these Shadertoy fragment shaders to Processing using PShader: https://www.shadertoy.com/view/XsG3z1#

I'm not sure I understood how to correctly do the multipass.

Here's my attempt so far:

BufA.frag:

```
// Reaction-diffusion pass.
//
// Here's a really short, non technical explanation:
//
// To begin, sprinkle the buffer with some initial noise on the first few frames (Sometimes, the
// first frame gets skipped, so you do a few more).
//
// During the buffer loop pass, determine the reaction diffusion value using a combination of the
// value stored in the buffer's "X" channel, and the blurred value - stored in the "Y" channel
// (You can see how that's done in the code below). Blur the value from the "X" channel (the old
// reaction diffusion value) and store it in "Y", then store the new (reaction diffusion) value
// in "X." Display either the "X" value or "Y" buffer value in the "Image" tab, add some window
// dressing, then repeat the process. Simple... Slightly confusing when I try to explain it, but
// trust me, it's simple. :)
//
// Anyway, for a more sophisticated explanation, here are a couple of references below:
//
// Reaction-Diffusion by the Gray-Scott Model - http://www.karlsims.com/rd.html
// Reaction-Diffusion Tutorial - http://www.karlsims.com/rd.html

uniform vec2 resolution;
uniform float time;
uniform int frame;
uniform sampler2D iChannel0;

// Cheap vec3 to vec3 hash. Works well enough, but there are other ways.
vec3 hash33(in vec2 p){
  float n = sin(dot(p, vec2(41, 289)));
  return fract(vec3(2097152, 262144, 32768)*n);
}

// Serves no other purpose than to save having to write this out all the time. I could write a
// "define," but I'm pretty sure this'll be inlined.
vec4 tx(in vec2 p){ return texture2D(iChannel0, p); }

// Weighted blur function. Pretty standard.
float blur(in vec2 p){
  // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
  vec3 e = vec3(1, 0, -1);
  vec2 px = 1./resolution.xy;

  // Weighted 3x3 blur, or a cheap and nasty Gaussian blur approximation.
  float res = 0.0;
  // Four corners. Those receive the least weight.
  res += tx(p + e.xx*px).x + tx(p + e.xz*px).x + tx(p + e.zx*px).x + tx(p + e.zz*px).x;
  // Four sides, which are given a little more weight.
  res += (tx(p + e.xy*px).x + tx(p + e.yx*px).x + tx(p + e.yz*px).x + tx(p + e.zy*px).x)*2.;
  // The center pixel, which we're giving the most weight to, as you'd expect.
  res += tx(p + e.yy*px).x*4.;
  // Normalizing.
  return res/16.;
}

// The reaction diffusion loop.
void main(){
  vec2 uv = gl_FragCoord.xy/resolution.xy; // Screen coordinates. Range: [0, 1]
  // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;
  vec2 pw = 1./resolution.xy; // Relative pixel width. Used for neighboring pixels, etc.

  // The blurred pixel. This is the result that's used in the "Image" tab. It's also reused
  // in the next frame in the reaction diffusion process (see below).
  float avgReactDiff = blur(uv);

  // The noise value. Because the result is blurred, we can get away with plain old static noise.
  // However, smooth noise, and various kinds of noise textures will work, too.
  vec3 noise = hash33(uv + vec2(53, 43)*time)*.6 + .2;

  // Used to move to adjoining pixels. - uv + vec2(-1, 1)*px, uv + vec2(1, 0)*px, etc.
  vec3 e = vec3(1, 0, -1);

  // Gradient epsilon value. The "1.5" figure was trial and error, but was based on the 3x3 blur radius.
  vec2 pwr = pw*1.5;

  // Use the blurred pixels (stored in the Y-Channel) to obtain the gradient. I haven't put too much
  // thought into this, but the gradient of a pixel on a blurred pixel grid (average neighbors), would
  // be analogous to a Laplacian operator on a 2D discrete grid. Laplacians tend to be used to describe
  // chemical flow, so... Sounds good, anyway. :)
  //
  // Seriously, though, take a look at the formula for the reaction-diffusion process, and you'll see
  // that the following few lines are simply putting it into effect.

  // Gradient of the blurred pixels from the previous frame.
  vec2 lap = vec2(tx(uv + e.xy*pwr).y - tx(uv - e.xy*pwr).y, tx(uv + e.yx*pwr).y - tx(uv - e.yx*pwr).y);

  // Add some diffusive expansion, scaled down to the order of a pixel width.
  uv = uv + lap*pw*3.0;

  // Stochastic decay. Ie: A differential equation, influenced by noise.
  // You need the decay, otherwise things would keep increasing, which in this case means a white screen.
  float newReactDiff = tx(uv).x + (noise.z - 0.5)*0.0025 - 0.002;

  // Reaction-diffusion.
  newReactDiff += dot(tx(uv + (noise.xy-0.5)*pw).xy, vec2(1, -1))*0.145;

  // Storing the reaction diffusion value in the X channel, and avgReactDiff (the blurred pixel value)
  // in the Y channel. However, for the first few frames, we add some noise. Normally, one frame would
  // be enough, but for some weird reason, it doesn't always get stored on the very first frame.
  if (frame > 9) gl_FragColor.xy = clamp(vec2(newReactDiff, avgReactDiff/.98), 0., 1.);
  else gl_FragColor = vec4(noise, 1.);
}
```

shader.frag:

```
// Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1#

/*
  Reaction Diffusion - 2 Pass
  ---------------------------

  Simple 2 pass reaction-diffusion, based off of "Flexi's" reaction-diffusion examples.
  It takes about ten seconds to reach an equilibrium of sorts, and in the order of a
  minute longer for the colors to really settle in.

  I'm really thankful for the examples Flexi has been putting up lately. From what I
  understand, he's used to showing his work to a lot more people on much bigger screens,
  so his code's pretty reliable. Reaction-diffusion examples are temperamental. Change
  one figure by a minute fraction, and your image can disappear. That's why it was
  really nice to have a working example to refer to.

  Anyway, I've done things a little differently, but in essence, this is just a rehash
  of Flexi's "Expansive Reaction-Diffusion" example. I've stripped this one down to the
  basics, so hopefully, it'll be a little easier to take in than the multitab version.

  There are no outside textures, and everything is stored in the A-Buffer. I was
  originally going to simplify things even more and do a plain old, greyscale version,
  but figured I'd better at least try to pretty it up, so I added color and some very
  basic highlighting. I'll put up a more sophisticated version at a later date.

  By the way, for anyone who doesn't want to be weighed down with extras, I've provided
  a simpler "Image" tab version below.

  One more thing. Even though I consider it conceptually impossible, it wouldn't
  surprise me at all if someone, like Fabrice, produces a single pass, two tweet
  version. :)

  Based on:

  // Gorgeous, more sophisticated example:
  Expansive Reaction-Diffusion - Flexi
  https://www.shadertoy.com/view/4dcGW2

  // A different kind of diffusion example. Really cool.
  Gray-Scott diffusion - knighty
  https://www.shadertoy.com/view/MdVGRh
*/

uniform sampler2D iChannel0;
uniform vec2 resolution;
uniform float time;

/*
// Ultra simple version, minus the window dressing.
void main(){
  gl_FragColor = 1. - texture2D(iChannel0, gl_FragCoord.xy/resolution.xy).wyyw + (time * 0.);
}
//*/

//*
void main(){
  // The screen coordinates.
  vec2 uv = gl_FragCoord.xy/resolution.xy;
  // vec2 uv = (gl_FragCoord.xy * 2.0 - resolution.xy) / resolution.y;

  // Read in the blurred pixel value. There's no rule that says you can't read in the
  // value in the "X" channel, but blurred stuff is easier to bump, that's all.
  float c = 1. - texture2D(iChannel0, uv).y;

  // Reading in the same at a slightly offsetted position. The difference between
  // "c2" and "c" is used to provide the highlighting.
  float c2 = 1. - texture2D(iChannel0, uv + .5/resolution.xy).y;

  // Color the pixel by mixing two colors in a sinusoidal kind of pattern.
  float pattern = -cos(uv.x*0.75*3.14159-0.9)*cos(uv.y*1.5*3.14159-0.75)*0.5 + 0.5;

  // Blue and gold, for an abstract sky over a... wheat field look. Very artsy. :)
  vec3 col = vec3(c*1.5, pow(c, 2.25), pow(c, 6.));
  col = mix(col, col.zyx, clamp(pattern-.2, 0., 1.));

  // Extra color variations.
  //vec3 col = mix(vec3(c*1.2, pow(c, 8.), pow(c, 2.)), vec3(c*1.3, pow(c, 2.), pow(c, 10.)), pattern);
  //vec3 col = mix(vec3(c*1.3, c*c, pow(c, 10.)), vec3(c*c*c, c*sqrt(c), c), pattern);

  // Adding the highlighting. Not as nice as bump mapping, but still pretty effective.
  col += vec3(.6, .85, 1.)*max(c2*c2 - c*c, 0.)*12.;

  // Apply a vignette and increase the brightness for that fake spotlight effect.
  col *= pow(16.0*uv.x*uv.y*(1.0-uv.x)*(1.0-uv.y), .125)*1.15;

  // Fade in for the first few seconds.
  col *= smoothstep(0., 1., time/2.);

  // Done.
  gl_FragColor = vec4(min(col, 1.), 1.);
}
//*/
```

and the sketch:

```
// Reaction Diffusion - 2 Pass
// https://www.shadertoy.com/view/XsG3z1

PShader bufA, shader;

void setup() {
  size(640, 480, P2D);
  noStroke();
  bufA = loadShader("BufA.frag");
  bufA.set("resolution", (float)width, (float)height);
  bufA.set("time", 0.0);
  shader = loadShader("shader.frag");
  shader.set("resolution", (float)width, (float)height);
}

void draw() {
  bufA.set("iChannel0", get());
  bufA.set("time", frameCount * .1);
  bufA.set("frame", frameCount);
  shader(bufA);
  background(0);
  rect(0, 0, width, height);

  // 2nd pass
  //resetShader();
  shader.set("iChannel0", get());
  shader.set("time", frameCount * .1);
  shader(shader);
  rect(0, 0, width, height);
}
```

The shaders compile and run, but the output is different from what I see on Shadertoy: the Processing version becomes stable quite fast, and it doesn't look like the feedback works.
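One thing worth checking in the sketch above (an assumption, not a verified diagnosis): both passes read `get()` from the same canvas, so the reaction-diffusion state that `BufA` writes is overwritten by the display pass before the next frame can read it back, which would kill the feedback. On Shadertoy, Buffer A persists between frames; in Processing that is typically emulated with two offscreen `PGraphics` objects whose roles swap each frame (ping-pong). The bookkeeping is simple enough to check in plain Java:

```java
// Minimal ping-pong bookkeeping: each frame, read from one buffer and
// write to the other, then swap. In the sketch the two slots would be
// PGraphics objects; here they are labeled strings so the pattern is testable.
class PingPong {
    static String[] buf = { "A", "B" };
    static int read = 0, write = 1;

    static void swap() { int t = read; read = write; write = t; }

    public static void main(String[] args) {
        // Frame 1: the shader reads buf[read] ("A") and renders into buf[write] ("B").
        swap();
        // Frame 2: it now reads "B" (last frame's output) and writes into "A".
        System.out.println(buf[read]); // B
    }
}
```

In the sketch, the reaction-diffusion pass would render into `buf[write]` with `bufA.set("iChannel0", buf[read])`, the display pass would draw `buf[write]` to the screen, and `swap()` would run once per frame, so the state survives untouched between frames.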

---

In particular, I cannot get a point shader to take custom attributes. Is that possible?

1) I tried the attrib() function, which works fine with triangles and QUADS, but when I set the shape to the POINTS type it stops working.

2) I tried to pass some data using the default vertex attributes such as 'uv' or 'normal', but that does not work either.

3) So I tried a solution based on QUADS instead of a point shader, but when I follow the Static Particles Retained example and use createShape(PShape.GROUP), custom attributes again stop working and I get a NullPointerException:


```
particles = createShape(PShape.GROUP);
sprite = loadImage("sprite.png");

for (int n = 0; n < npartTotal; n++) {
  float cx = random(-500, +500);
  float cy = random(-500, +500);
  float cz = random(-500, +500);

  PShape part = createShape();
  part.beginShape(QUAD);
  part.noStroke();
  part.tint(255);
  part.texture(sprite);
  part.normal(0, 0, 1);
  part.attrib("custom", 1.0); // this does not work
  part.vertex(cx - partSize/2, cy - partSize/2, cz, 0, 0);
  part.vertex(cx + partSize/2, cy - partSize/2, cz, sprite.width, 0);
  part.vertex(cx + partSize/2, cy + partSize/2, cz, sprite.width, sprite.height);
  part.vertex(cx - partSize/2, cy + partSize/2, cz, 0, sprite.height);
  part.endShape();

  particles.addChild(part);
}
```

4) So I decided to use plain QUADS, without groups or anything, and that seems to work, but now I'm stuck because I can't get each quad to act like a billboard and always face the screen. Is there a way to have quads always face the camera?

It would be very practical if point shaders support custom attributes.

5) I can't get the point shader to use a strokeWeight attribute per vertex, like the line shader does. So what is the correct way to render each point with a different radius?

Last question! Taking inspiration from the advanced OpenGL example, I was able to render a buffer of vertices with the point sprite feature of OpenGL, where each vertex corresponds to a point. I was able to pass textures and custom attributes such as uv, size, etc., but at that point I was just using low-level calls and Processing was not very useful.

So, is there a way to render each vertex as a point sprite using just the Processing API?

---

First, this is in a file called pTest.glsl:

```
#version 410

out vec4 fragColor;

void main() {
  fragColor = vec4(1.0, 0.0, 0.0, 1.0);
}
```

And then this in the sketch:

```
PShader pTest;

void setup() {
  size(512, 512, P3D);
  pTest = loadShader("pTest.glsl");
}

void draw() {
  background(0);
  shader(pTest, POINTS);
  stroke(255);
  strokeWeight(10);
  point(256, 256, 0);
  //filter(pTest);
}
```

What I am expecting it to do is render a red point in the middle of the screen. If you comment out the shader line, it renders the expected point. Also, if you comment out the shader line and uncomment the filter line, it renders the entire screen in red, so I know that the shader itself is valid. But if I try to run the code as written above, it gives me the error "OpenGL error 1282 at bot endDraw(): invalid operation". Googling this shows that a lot of people used to get this error if they needed to update their graphics drivers, but that isn't the case for me. Anyone have any ideas?

]]>I have found other posts about this issue on the Processing forum: https://forum.processing.org/one/topic/transparent-png-issue.html And this and similar issues on OpenGL forums: https://opengl.org/discussion_boards/showthread.php/167808-2D-texture-problem-lines-between-textures https://opengl.org/discussion_boards/showthread.php/178038-Black-lines-at-edges-of-textures

The post on the Processing forum does not have a solution (and isn't really the same problem). It seems there might be a way to fix this by adjusting OpenGL settings, but I don't know how to do that with Processing. I am using OpenGL because I have to render a very large number of pixels in 3D space every frame, and I doubt other renderers will work as well. Here are some images (I scaled some of these up to show the problem; the lines are all exactly 1 pixel in the original, and they seem to actually be one pixel on my retina display, not one Processing pixel):

The lines in the middle of these images are added by the renderer, they are at the edge of a partially transparent image.

This is the border between two images (each drawn on a textured polygon, with uv mapping). The black line in the middle is not part of either image. That line is not on the edge of where the image is being drawn, but on the edge of the opaque part of an image. This image also has a line on the very edge.

The borders between some textures without any transparency still get weird lines.
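One common mitigation for seams like this (an assumption on my part, not something confirmed in the thread) is to inset the UV coordinates by half a texel, so bilinear filtering samples texel centers and never bleeds into the neighboring image or the texture border. The math is trivial; here it is in plain Java with a hypothetical `insetUV` helper:

```java
public class UvInset {
    // Map a pixel coordinate in a texture of size texSize to a UV that sits
    // at the texel center, half a texel inside the edge, instead of exactly
    // on the edge where the filter averages in the neighboring texel.
    static float insetUV(int pixel, int texSize) {
        return (pixel + 0.5f) / texSize;
    }

    public static void main(String[] args) {
        // For a 64px texture, the left edge maps to 0.5/64, not 0.0,
        // and the last pixel (63) maps just inside 1.0.
        System.out.println(insetUV(0, 64) + " " + insetUV(63, 64));
    }
}
```

In a Processing sketch the same idea would be applied when passing the u/v arguments to `vertex()`, together with `textureWrap(CLAMP)` to stop sampling from wrapping around the opposite edge.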

]]>```
PixelFlow v0.18 - http://www.thomasdiewald.com
-------------------------------------------------------------------
OPENGL_VENDOR: Intel
OPENGL_RENDERER: Intel(R) HD Graphics 530
OPENGL_VERSION: 4.4.0 - Build 20.19.15.4454
GLSLVersionString: #version 440 core
GLSLVersionNumber: 4.40.0
GLVersion: 4.4 (Core profile, arb, compat[ES2, ES3], FBO, hardware) - 4.4.0 - Build 20.19.15.4454
GLVendorVersionNumber: 20.19.15 (- Build 20.19.15.4454)
-------------------------------------------------------------------
```

Is it possible to use my GTX 960M?

]]>`Your shader needs to be of TEXTURE type to render this geometry properly, using default shader instead.`

Also I'd like to put it behind another image or text maybe, but it doesn't seem to work so far. Thanks :)

]]>I was trying to use "applyMatrix()" directly. BUT this flattens everything: object back faces and occluded faces become visible, lights have no effect, and stroke and fill get mixed.

```
PMatrix3D p;

void setup() {
  size(600, 400, P3D);
}

void draw() {
  float x = map(mouseX, 0, width, -10, 10);
  float z = map(mouseY, 0, height, -10, 10);
  p = new PMatrix3D(
    5.400566, 0.519709, -4.3888016, 193.58757,
    5.284709, -9.016302, 3.312224, 266.927,
    0.012042404, 7.253584E-5, 0.0084899925, 1.0,
    0, 0, 0, 1);
  applyMatrix(p);
  background(20);
  translate(x, 0, z);
  fill(100);
  box(10);
}
```

My second attempt was to apply the transformation on PGraphicsOpenGL, but I still couldn't figure out how to do it properly.

```
PMatrix3D p;

void setup() {
  size(600, 400, P3D);
  p = new PMatrix3D(
    5.400566, 0.519709, -4.3888016, 193.58757,
    5.284709, -9.016302, 3.312224, 266.927,
    0.012042404, 7.253584E-5, 0.0084899925, 1.0,
    0, 0, 0, 1);
  p.invert();
}

void draw() {
  float x = map(mouseX, 0, width, -200, 200);
  float z = map(mouseY, 0, height, -150, 150);
  //((PGraphicsOpenGL) g).modelview.set(p);
  ((PGraphicsOpenGL) g).camera.set(p);
  //((PGraphicsOpenGL) g).projection.set(p);
  //((PGraphicsOpenGL) g).projmodelview.set(p);
  background(20);
  lights();
  translate(width/2, height/2);
  translate(x, 0, z);
  box(100);
}
```

From here: https://github.com/processing/processing/issues/2904 I got the idea of making a shader, but I have never done this before, so I am clueless so far...

]]>It seems I need to be using 16-bit render buffers, the way Shadertoy apparently does. I use 8-bit PGraphics for the buffers.

So my question is: could someone tell me the code to pack/unpack between 16-bit and 8-bit values, and where to place it inside the GLSL frag shader? Or maybe something could be done on the Processing side to convert?
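The standard trick is to spread a value in [0, 1] across two 8-bit channels (for example R holds the high byte and G the low byte), written in one shader pass and reassembled in the next. Here is the round-trip arithmetic as a plain-Java sketch (names are mine; in GLSL the same arithmetic is done on 0..1 floats with `floor` and `fract`):

```java
public class Pack16 {
    // Pack v in [0, 1] into a high byte and a low byte (two 8-bit channels).
    static int[] pack(float v) {
        int x = Math.round(v * 65535.0f);
        return new int[] { x >> 8, x & 0xFF }; // { hi, lo }
    }

    // Rebuild the value from the two bytes; error is at most 1/65535.
    static float unpack(int hi, int lo) {
        return (hi * 256 + lo) / 65535.0f;
    }

    public static void main(String[] args) {
        for (float v : new float[] { 0f, 0.1f, 0.5f, 0.9999f, 1f }) {
            int[] b = pack(v);
            System.out.println(v + " -> " + b[0] + "," + b[1]
                               + " -> " + unpack(b[0], b[1]));
        }
    }
}
```

On the shader side this would go where the buffer value is written (pack before assigning to the output color) and where it is read (unpack right after the texture fetch), so the intermediate 8-bit PGraphics only ever stores the two bytes.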

Thanks, Glenn.

]]>I've tried hacking the DomeProjection example, but no joy.

Any help welcome!

Glenn.

]]>