I have to say that I haven't understood the code yet. But in principle I expect there are two things required which I don't know how to do with PShader in Processing:
Usage of float textures (because an RGBA texture with 8 bits per channel is just not precise enough for a smooth fluid simulation)
Using the same texture as both input and output of the fragment shader
I need these two things for some other projects as well; a rough sketch of the ping-pong part is below.
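A minimal sketch of the ping-pong idea, using nothing beyond the documented PShader/PGraphics API (the shader file "diffuse.glsl" and the uniform name "prevState" are made up, and this does nothing about the 8-bit precision problem):

PGraphics src, dst;
PShader diffuse;

void setup(){
  size(512, 512, P2D);
  src = createGraphics(width, height, P2D);
  dst = createGraphics(width, height, P2D);
  diffuse = loadShader("diffuse.glsl");      // hypothetical fragment shader reading the previous state
}

void draw(){
  dst.beginDraw();
  dst.shader(diffuse);
  diffuse.set("prevState", src);             // previous frame bound as an ordinary texture
  dst.noStroke();
  dst.rect(0, 0, width, height);             // full-screen quad runs the shader once per pixel
  dst.endDraw();
  image(dst, 0, 0);
  PGraphics tmp = src; src = dst; dst = tmp; // swap buffers so we never read and write the same texture
}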
Currently I'm playing with OpenCL from Java (JOCL / LWJGL) and from Python (pyOpenCL), but Processing is still my favourite tool for sketching numerical simulations, and I would be very happy to have OpenCL in Processing.
Recently I found that OpenProcessing changed its policy and now wants sketches uploaded in Processing.js format (which I don't like at all, and don't see much reason for).
I would like to know: what are the limitations of Processing.js in comparison to standard Java Processing?
I thought that it is just much slower (because it doesn't use a JIT? does it?) ... but actually it doesn't seem as slow as I expected (maybe just because I haven't yet seen a really computationally intensive sketch written in Processing.js).
I also found that GLSL shaders probably don't work in Processing.js (I wasn't able to run even the shader examples that ship with Processing. Or am I doing it wrong?)
I would also like to know how to convert a sketch written in Processing to Processing.js in order to upload it to OpenProcessing. I thought that when I don't use any Processing library or GLSL shaders it is converted automatically, but it is not.
For example, this is my applet demonstrating the solution of a planetary orbit by a Runge-Kutta-Fehlberg integrator with an adaptive step. I had to go back to Processing 1.5 to export it as an applet, because it does not work in Processing.js and I don't know why.
This code works OK with the default renderer (it fills up the screen from top to bottom), however with P2D it does strange things (it leaves stripes like a comb). I think it is because of some incompatibility of the set() function with the P2D renderer (I guess something like bad synchronization of framebuffer operations...).
I use Processing 2.0b6 (maybe it is already fixed in 2.0b7, I don't know).
final int sz  = 512;
final int sz2 = sz*sz;
final int n   = 1000;   // pixels painted per frame
int npoints;
color c;

void setup(){
  //size(sz, sz );       // this works OK
  size(sz, sz, P2D);     // this behaves strangely
  background(255);
  c = color(random(255), random(255), random(255));   // assign the global c (no local declaration here)
}

void draw(){
  for (int i=0; i<n; i++){                 // loop over some chunk of workitems
    int id = npoints % sz2;                // workitem id
    if (id==0){ c = color(random(255), random(255), random(255)); }  // a new screen starts here
    int ix = id % sz;  int iy = id / sz;   // pixel indexes
    set(ix, iy, c);
    npoints++;
  }
}
NOTE - if you ask why the hell I tried such a silly way of rastering over the whole screen: it is because I use Processing to debug kernels for OpenCL, and there the variable "id" is the index of the workitem.
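A workaround I could try instead of calling set() per pixel: write into pixels[] directly and push the whole frame once per draw(). This is just a sketch reusing the globals above; I have not tested whether it behaves better with P2D:

void draw(){
  loadPixels();
  for (int i=0; i<n; i++){
    int id = npoints % sz2;
    if (id==0){ c = color(random(255), random(255), random(255)); }
    pixels[id] = c;        // pixels[] is row-major, so id = iy*sz + ix maps directly
    npoints++;
  }
  updatePixels();
}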
I was looking at the shader examples in Processing (especially "landscape"), and it's pretty cool. I already made shaders which raycast (or raymarch) a big smooth 3D landscape generated on the GPU from a 2D texture using bicubic interpolation.
I would like to use the power of GPU processing more. I have in mind two things which I want to program in the future:
Rendering of volumetric data using a raymarching algorithm (some scientific visualization of electron density)
Using GLSL for solving partial differential equations (like the Navier-Stokes fluid example in Processing)
For that I would probably need:
3D and 1D textures. I know that GLSL also has sampler1D and sampler3D, which I could in principle use to pass int[] and int[][][] data arrays into GLSL and then read numbers from them using texelFetch(); however I don't know how to pass a 1D or 3D texture into GLSL from Processing. Currently I pass all textures to the shader as PImage, like myPShader.set("texData", myPImage). Is there any general way to do that? Does the Processing OpenGL interface implement all OpenGL features (and up to which version, 2.0 or 3.0)?
For more precise computations I would prefer to pass textures as float arrays (32 or 64 bit) rather than images (where each pixel stores four 8-bit numbers R,G,B,A). Also I would like to simply pass a float array float[], float[][], float[][][] rather than converting it to a PImage (using constructions like loadPixels(); updatePixels();) and then converting it back inside the shader; a sketch of the packing workaround I mean is below.
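The only route I currently know is the one I would like to avoid, sketched here: squeeze a float[] with values in 0..1 into an N x 1 PImage (losing everything below 8-bit precision) and bind it like any other texture. The function name packFloats1D is mine, and "texData" is just whatever the sampler2D uniform is called in the fragment shader:

PImage packFloats1D(float[] data){
  PImage img = createImage(data.length, 1, ARGB);
  img.loadPixels();
  for (int i=0; i<data.length; i++){
    int v = constrain((int)(data[i]*255.0), 0, 255);
    img.pixels[i] = color(v, v, v, 255);   // one 8-bit value per pixel
  }
  img.updatePixels();
  return img;
}

// usage:  myPShader.set("texData", packFloats1D(myArray));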
Before I spend too much time searching or figuring out how the Processing interface to OpenGL works, I would like to know the limitations of the current implementation. What is possible and what is not? And is there any workaround in cases where a feature is missing from the interface but is in principle possible with the OpenGL version supported by the libraries?
Hi, I'm trying to build a modular physical simulation environment, and one thing I would like is to pass some user-defined function to other functions as a parameter. As far as I know, Java cannot pass a function, but it can pass objects which have overridden functions inside. I tried this on a simple example:
static version:
void arrayAdd( float[] a, float[] b, float[] out ){
  for (int i=0; i<a.length; i++) out[i] = a[i] + b[i]; }
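For contrast, the dynamic variant I benchmarked looked roughly like this (a reconstruction; the interface name FloatOp is just a placeholder):

// the operation is passed in as an object with an overridden method,
// so every element goes through a virtual call
interface FloatOp { float apply(float x, float y); }

void arrayApply( float[] a, float[] b, float[] out, FloatOp op ){
  for (int i=0; i<a.length; i++) out[i] = op.apply(a[i], b[i]);
}

// usage:
// arrayApply(a, b, out, new FloatOp(){
//   public float apply(float x, float y){ return x + y; }
// });

Compared to the static version this has two drawbacks: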
It is slower than it would be if the function were inlined at compile time (which is in principle possible in this case, and I would like to force the Java JIT compiler to do that).
It is not very elegant to instantiate objects which contain no data (only a function). I would prefer to pass it as something static, but I was unable to do that (it's not possible to pass a Class or a static object to a function and use its methods inside).
I measured the speed. For both operations (float.mult and float.add) the static version is more than 2x faster:
static  100000000 [ops]  406.0 [ms]  4.06 [ns/op]
dynamic 100000000 [ops] 1188.0 [ms] 11.88 [ns/op]
For float.sqrt the difference is already very small.
They recommend some things (pass the Class instead of its instance, and use "enums"); however, it seems that neither of these works in Processing 2.0b6.
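The enum pattern I think they meant is the one below, where each constant carries its own implementation and the caller never instantiates anything. Whether the .pde preprocessor accepts it, or whether it has to live in a separate .java tab, is exactly what I'm not sure about:

enum Ops {
  ADD { public float apply(float x, float y){ return x + y; } },
  MUL { public float apply(float x, float y){ return x * y; } };
  public abstract float apply(float x, float y);
}

// usage:  float r = Ops.MUL.apply(2.0f, 3.0f);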
APPENDIX - this is the code I used for measuring the speed
Hi, I'm plotting a map defined by values on a 2D rectangular grid (at the vertices of the grid); each rectangular tile (quad) should be linearly interpolated from the neighbouring vertex values. If I do it with the direct (immediate) method it works fine. However, if I try to use PShape (retained rendering) to improve performance*, it does not work. It seems that the fill() command changes the colour of the whole shape instead of just one vertex.
Questions:
Is there any way to do vertex shading** in PShape?
Is it still possible in Processing 2.0 to use an OpenGL render list? How? Would it provide linearly interpolated vertex shading?
Or a vertex buffer?
Side question - even with vertex shading of quads, each quad is split into 2 triangles, so it is not interpolated correctly. I mean each triangle is interpolated independently, so there is a sharp diagonal seam between them. Is there any fast (GPU-based) way to render a really bilinear interpolation between the 4 vertices of the quad? I was thinking maybe I can write some GLSL shader to do that? (A cheap partial trick is sketched after the example below.)
* - I guess PShape works something like an OpenGL render list or vertex buffer. I was inspired by the examples in Demos > Performance > CubicGridRetained vs. CubicGridImmediate.
** - by vertex shading I mean something like in example Topics>Geometry>RGBcube
Simple example for just one quad:
void setup(){
size(800, 800, P3D);
noStroke();
colorMode(RGB, 1);
translate(400,400);
scale(50);
// using direct( Immediate rendering) - Correct linear interpolation :))
beginShape();
fill( 0x80000000 ); vertex( -1 , -1 );
fill( 0x8000FF00 ); vertex( -1 , +1 );
fill( 0x80FFFF00 ); vertex( +1 , +1 );
fill( 0x80FF0000 ); vertex( +1 , -1 );
endShape(CLOSE);
// using PShape (Retained rendering) - just fill it :((
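Regarding the side question about the diagonal seam: the only cheap trick I can think of so far (not true bilinear interpolation, just a sketch) is to add a fifth vertex in the centre of the quad with the averaged colour and draw a TRIANGLE_FAN, which replaces one hard diagonal seam with four much softer ones:

// c00..c11 are the four corner colours of the quad (placeholders matching the example above)
color c00 = #000000, c01 = #00FF00, c11 = #FFFF00, c10 = #FF0000;

beginShape(TRIANGLE_FAN);
fill(lerpColor(lerpColor(c00, c10, 0.5), lerpColor(c01, c11, 0.5), 0.5));
vertex(0, 0);                      // centre vertex with averaged colour
fill(c00); vertex(-1, -1);
fill(c01); vertex(-1, +1);
fill(c11); vertex(+1, +1);
fill(c10); vertex(+1, -1);
fill(c00); vertex(-1, -1);         // repeat the first corner to close the fan
endShape();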
This is weird, because casting from "Object", for example when I use an ArrayList, is possible:
ArrayList L = new ArrayList();
L.add( new TIsoSurf(0,0,0,99.9) );
Object O = L.get(0);
TIsoSurf S1 = (TIsoSurf)O;
TSphere S2 = (TSphere)O;
Here TIsoSurf S1 = (TIsoSurf)O; is OK, but TSphere S2 = (TSphere)O; throws an error. How the hell does it know that the "Object" is a TIsoSurf and not a TSphere?
Motivation:
I just want to reuse the inherited constructor from TIsoSurf for all its children; I don't want to write it again and again for every child if the data structure is the same.
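Since constructors are not inherited in Java, the closest thing I can see is a one-line forwarding constructor in each child that calls super(...), so the shared initialisation still lives only in TIsoSurf. The field names below are just my guess from the ArrayList example above:

class TIsoSurf {
  float x, y, z, value;
  TIsoSurf(float x, float y, float z, float value){
    this.x = x;  this.y = y;  this.z = z;  this.value = value;
  }
}

class TSphere extends TIsoSurf {
  TSphere(float x, float y, float z, float value){
    super(x, y, z, value);   // reuse the parent's constructor
  }
}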
I just managed to compile the Processing library template in Eclipse, and I'm trying to move some N-dimensional vector operations into this library. I'm not very familiar with pure Java or Eclipse (just with C and Fortran), so I'm a bit confused about things like static, final, abstract, interface... and that everything in Java must be inside a class.
My problem:
I would like to call a function from my library without creating an instance of the class.
for example I have this simple library in Eclipse:
package template.library;

import processing.core.*;
import java.lang.Math;

public class HelloLibrary {

  // sum of three uniform random numbers, to get a roughly bell-shaped distribution
  public final double rndBell(){
    double a = Math.random();
    double b = Math.random();
    double c = Math.random();
    double r = (a+b+c)*0.33333333 - 1.0;
    return r;
  }
}
I would like to call rndBell() in Processing like this:
import template.library.HelloLibrary.*;
println( rndBell() );
But to do this it would probably be necessary to make HelloLibrary static (?), which Eclipse says is not possible (why?). So before calling rndBell() I have to create an instance of the class first, which I don't want to do.
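What I imagine should be possible is a static method on the (ordinary, non-static) top-level class, called without any instance. This is just a sketch; I haven't checked how the Processing preprocessor handles the import:

package template.library;

public class HelloLibrary {
  public static double rndBell(){
    double a = Math.random();
    double b = Math.random();
    double c = Math.random();
    return (a+b+c)*0.33333333 - 1.0;
  }
}

// and in the sketch:
// import template.library.HelloLibrary;
// println( HelloLibrary.rndBell() );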
Hi,
I'm trying to do opacity like in Photoshop, where the stroke looks like one homogeneous object (individual circles should not be visible). To do this I first render the whole stroke into a PGraphics temp, and at the end of the stroke I print it into a layer of the image.
However, it seems to work only if I allocate a new temp for each stroke. If I use temp.background(0,0) to clear and recycle temp, then it draws just the first stroke, again and again.
It looks like Layer1.image(temp,0,0) does not notice that temp has changed. Which is strange, because image(temp,0,0) directly on the applet paints the correct (updated) temp.
Sure, I can call createGraphics() for every stroke and let the garbage collector do all the work, but that is not a very nice solution.
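For reference, this is roughly the pattern I mean (a minimal sketch; the brush is just a circle here, and Layer1 / temp are as described above):

PGraphics Layer1, temp;
int strokeOpacity = 128;           // Photoshop-like opacity applied to the whole stroke

void setup(){
  size(400, 400);
  Layer1 = createGraphics(width, height);
  temp   = createGraphics(width, height);
}

void mouseDragged(){
  // build the stroke fully opaque in temp, so overlapping dabs do not add up
  temp.beginDraw();
  temp.noStroke();
  temp.fill(255, 0, 0);
  temp.ellipse(mouseX, mouseY, 30, 30);
  temp.endDraw();
}

void mouseReleased(){
  // stamp the finished stroke into the layer with the stroke opacity ...
  Layer1.beginDraw();
  Layer1.tint(255, strokeOpacity);
  Layer1.image(temp, 0, 0);
  Layer1.noTint();
  Layer1.endDraw();
  // ... and clear temp for the next stroke (the step that stops working for me)
  temp.beginDraw();
  temp.background(0, 0);
  temp.endDraw();
}

void draw(){
  background(255);
  image(Layer1, 0, 0);
  tint(255, strokeOpacity);
  image(temp, 0, 0);               // preview of the stroke in progress
  noTint();
}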
Hi, I'm writing a simple painting tool where I want to alpha-blend several layers. It works quite well using GLGraphics; however, for compatibility with older computers (or missing drivers, e.g. on Linux) I also want to do it with the standard Processing PGraphics.
However, the result is not what I want. Instead of blending the new layer with the background pixel colour, image() blends the layer with black (completely opaque).
Hi, I'm doing a simple visualization where I want to plot one molecular geometry over the other. I want to first render both molecules into some image buffer independently, and then do some image composition to overlay them.
I expected that this code should do the same, but it does not. It looks like some features do not work in P3D offscreen rendering (like background(), material definitions, lighting definitions......).
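To make the question concrete, the pattern I'm attempting is roughly this (a stripped-down sketch; the spheres stand in for the real molecular geometry):

PGraphics buf1, buf2;

void setup(){
  size(600, 600, P3D);
  buf1 = createGraphics(width, height, P3D);
  buf2 = createGraphics(width, height, P3D);
}

void drawMolecule(PGraphics pg, float dx, color col){
  pg.beginDraw();
  pg.background(0, 0);            // transparent background - one of the calls I suspect misbehaves offscreen
  pg.lights();                    // lighting set up per buffer
  pg.noStroke();
  pg.fill(col);
  pg.translate(pg.width/2 + dx, pg.height/2, 0);
  pg.sphere(80);                  // placeholder for one molecule
  pg.endDraw();
}

void draw(){
  drawMolecule(buf1, -60, color(255, 100, 100));
  drawMolecule(buf2, +60, color(100, 100, 255));
  background(0);
  image(buf1, 0, 0);              // composite the two offscreen renders
  image(buf2, 0, 0);
}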