Hi, I'm rendering a 3D scene from two different points of view into two GLGraphicsOffScreen objects. One is shown in the main window, but now I need to show the other offscreen buffer in a separate window so that, using frame.setLocation(), I can place one window on the primary display and the other on the secondary display of an extended desktop (which is a projector).
My first attempt was to create another sketch from the main one using PApplet.runSketch(); it receives the resulting image through a setImage() method and displays it. I'm passing the buffer by rendering it into a PImage like this:
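(Simplified; AuxSketch and setImage() are just my own names, and I'm writing the readback from memory, so getTexture().get() may not be the right way to copy a GLGraphicsOffScreen into a PImage.)

import processing.opengl.*;
import codeanticode.glgraphics.*;

GLGraphicsOffScreen buffer;   // the scene from the second point of view
AuxSketch aux;                // the sketch shown in the second window

void setup() {
  size(1024, 768, GLConstants.GLGRAPHICS);
  buffer = new GLGraphicsOffScreen(this, 1024, 768);
  aux = new AuxSketch();
  PApplet.runSketch(new String[] { "AuxSketch" }, aux);   // second window, moved later with frame.setLocation()
}

void draw() {
  buffer.beginDraw();
  // ... draw the scene from the second point of view ...
  buffer.endDraw();

  // read the offscreen buffer back into a PImage and hand it to the other sketch
  aux.setImage(buffer.getTexture().get());
}

// the auxiliary sketch that ends up on the projector
public class AuxSketch extends PApplet {
  PImage img;

  public void setup() {
    size(1024, 768);
  }

  public synchronized void setImage(PImage newImg) {
    img = newImg.get();   // another full copy of the 1024x768 image
  }

  public synchronized void draw() {
    if (img != null) image(img, 0, 0);
  }
}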
The auxiliary sketch just copies the PImage with get() and displays it, but the result runs at a frame rate of around 5 fps.
I guess this is due to all the memory copying involved, as both the PImage and the buffer are 1024x768.
Other methods I have found on the forum rely on creating a new PApplet as well, which I think would lead to the same problem. Is there any way to draw to a new window by reading directly from the buffer object? Or any other idea on how to solve this?
BTW, I'm working on Windows 8 and Processing 1.5.1. Thanks!
Using GLModelEffect this is rather easy, as the uniform name and the texture unit are explicitly specified in the xml file, but how can it be done with a GLSLShader object?
For some time I have been thinking about making a spatial augmented reality game: a mixture of strategy table games, with their boards and figures, but adding computer graphics to it and a first-person view from a figure's perspective. I finally found some spare time, and this is the very first prototype:
I'm trying to mimic proscene's camera movements without using the mouse, but it seems I'm not understanding it quite well. I want to rotate the camera to look up, down, left and right.
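This is roughly what I'm trying (written against proscene 1.x from memory, so the Quaternion/PVector calls and the rotation axes may well be where I'm going wrong):

import remixlab.proscene.*;

Scene scene;

void setup() {
  size(640, 360, P3D);
  scene = new Scene(this);
}

void draw() {
  background(0);
  fill(120, 180, 255);
  box(40);
}

void keyPressed() {
  float step = 0.05;                    // radians per key press
  if (key == 'w') pitchYaw(-step, 0);   // look up
  if (key == 's') pitchYaw( step, 0);   // look down
  if (key == 'a') pitchYaw(0, -step);   // look left
  if (key == 'd') pitchYaw(0,  step);   // look right
}

void pitchYaw(float pitch, float yaw) {
  // rotate the camera frame around its local X (pitch) and Y (yaw) axes
  scene.camera().frame().rotate(new Quaternion(new PVector(1, 0, 0), pitch));
  scene.camera().frame().rotate(new Quaternion(new PVector(0, 1, 0), yaw));
}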
I have a GLModel of QUADS with different colours and normals, and I would like the border of each quad to be drawn in a different colour, like having different stroke() and fill() colours. Something like this:
I have tried playing with stroke(), fill(), noFill() and model.setLineWidth() with no success. Is this possible at all with GLModel?
Another solution would be to create a GLModel of LINES with just the borders and render it on top of the other GLModel, as sketched below. Would the LINE_STRIP or LINE_LOOP modes reduce the number of vertices? With LINES I would need to duplicate lots of vertices.
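This is roughly what I have in mind for that second option (a rough sketch from memory, so the exact GLModel calls may be slightly off; with LINES each quad contributes 4 edges, i.e. 8 vertices):

import processing.opengl.*;
import codeanticode.glgraphics.*;

GLModel quads;    // the filled QUADS model I already have
GLModel borders;  // a second model with just the edges, drawn on top

void setup() {
  size(640, 480, GLConstants.GLGRAPHICS);

  // ... create and fill the quads model as before ...

  int numQuads = 100;
  borders = new GLModel(this, numQuads * 8, LINES, GLModel.STATIC);
  borders.beginUpdateVertices();
  // for each quad with corners c0..c3, add the edges c0-c1, c1-c2, c2-c3, c3-c0
  // with borders.updateVertex(i, x, y, z);
  borders.endUpdateVertices();

  borders.initColors();
  borders.beginUpdateColors();
  for (int i = 0; i < numQuads * 8; i++) {
    borders.updateColor(i, 255, 255, 0);   // border colour, independent of the quad fill
  }
  borders.endUpdateColors();
  borders.setLineWidth(2);
}

void draw() {
  GLGraphics renderer = (GLGraphics) g;
  renderer.beginGL();
  renderer.model(quads);     // filled quads first
  renderer.model(borders);   // then the edges on top
  renderer.endGL();
}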
I'm trying to compute a weighted mean on the GPU. I want to pass several images to the shader via sampler2D uniforms, in a FIFO-queue style, grabbing a new one from a camera every frame. This is a simplified version using a PImage to illustrate what's happening:
PGraphics pg;
PShader kSmooth;
int texIndex = 0;
boolean ready = true;
PImage img;

public void setup() {
  size(640, 480, P2D);
  img = createImage(640, 480, ARGB);
  pg = createGraphics(640, 480, P2D);
  kSmooth = loadShader("weightedmean.glsl");
  kSmooth.set("width", 640);
  kSmooth.set("height", 480);
}

public void draw() {
  background(0);
  // a fresh copy of the image is handed to a sampler2D uniform every frame
  kSmooth.set("tex" + texIndex, img.get());
  kSmooth.set("index", texIndex);
  pg.beginDraw();
  pg.shader(kSmooth);
  pg.rect(0, 0, pg.width, pg.height);
  pg.endDraw();
  texIndex = (texIndex + 1) % 5;
  image(pg, 0, 0, pg.width, pg.height);
  frame.setTitle("Frame " + frameCount);
}
And here is a simplified version of the fragment shader, where the weights are fixed instead of depending on the index uniform:
float result = (val1 + val2 + val3 + val4 + val5) / 15.0; // 15 is the sum of all weights
gl_FragColor = vec4(result, 0.0, 0.0, 1.0);
}
On my old Windows XP machine, the memory used by the process keeps increasing until it reaches the maximum available memory set in Processing's preferences; then the program freezes for a few seconds, the memory is released, and the cycle starts again, with memory usage growing once more. On my new Windows 8 machine I get an OutOfMemoryError after 200 or so frames.
Is there a method to free the memory of previous textures no longer used?
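One workaround I've been considering (no idea whether it actually releases the old textures on the GPU side) is to stop creating a copy with get() every frame and reuse a fixed pool of PImages instead, something like:

PImage[] pool = new PImage[5];   // one PImage per sampler2D slot, reused every frame

void setupPool() {
  for (int i = 0; i < pool.length; i++) {
    pool[i] = createImage(640, 480, ARGB);
  }
}

// in draw(), instead of kSmooth.set("tex" + texIndex, img.get()):
void updateSlot(PImage newFrame) {
  pool[texIndex].copy(newFrame, 0, 0, newFrame.width, newFrame.height,
                      0, 0, pool[texIndex].width, pool[texIndex].height);
  kSmooth.set("tex" + texIndex, pool[texIndex]);
}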
Thanks.
Note: I first asked how I should do this in this post, which was moved to the general discussion forum; I thought I should post simplified code here to illustrate the problem I found while trying a solution. Hope that's not rude.
I would like to speed up an algorithm by using the GPU. I want to find the mean value per pixel of a series of images coming from a FIFO queue, so every frame there is one new image and the rest stay the same.
Passing every image as a sampler2D doesn't seem very efficient. Any suggestions to point me in the right direction?
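For reference, this is the CPU version of the idea (a rough sketch, red channel only; the mean is kept as a running sum that adds the new frame and subtracts the one leaving the window):

int N = 5;                       // size of the FIFO window
PImage[] queue = new PImage[N];  // the last N frames
float[] sum;                     // running per-pixel sum
int head = 0;

void initQueue(int w, int h) {
  sum = new float[w * h];
  for (int i = 0; i < N; i++) queue[i] = createImage(w, h, RGB);
}

void addFrame(PImage newFrame) {
  PImage oldest = queue[head];
  newFrame.loadPixels();
  oldest.loadPixels();
  for (int i = 0; i < sum.length; i++) {
    // incremental update: only the newest and the oldest frame change the sum
    sum[i] += red(newFrame.pixels[i]) - red(oldest.pixels[i]);
  }
  queue[head] = newFrame;
  head = (head + 1) % N;
}

float meanAt(int i) {
  return sum[i] / N;   // per-pixel mean over the last N frames
}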
I'm trying to add a basic fog effect through the new PShader object. I found this fairly simple GLSL shader and adapted it to my purposes. I would like to replace only the fragment shader and keep using the "default" vertex shader, but I couldn't find how to do it, so I wrote a basic vertex shader using the minimum attributes, which falls into the FLAT shader type.
Here is my test code:
PShader fog;

void setup() {
  size(640, 360, P3D);
  fog = loadShader("fogF.glsl", "fogV.glsl");
}
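The rest of the test sketch is just a box pushed back into the scene with the shader applied (trimmed down):

void draw() {
  background(0);
  shader(fog);                 // use the custom fog shader for filled geometry
  translate(width/2, height/2, -200);
  rotateY(frameCount * 0.01);
  fill(180);
  noStroke();
  box(120);
  resetShader();               // back to Processing's default shaders
}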
The FLAT shader type only works for filled, untextured geometry, so do I need to write another shader using the LINES-type attributes to apply the fog effect to lines, and yet another one for text? It feels very convoluted for such a simple effect; is there a way to simplify it?
I'm trying the p5_sc / erase library to communicate with SuperCollider. In the library's source the loopback address is hardcoded; has anyone tried changing it to communicate with another machine running SuperCollider on the same local network?
I have tried processing.js for the first time (nice!!) and I'm running into a small performance problem.
I used the metaball demo effect to illuminate a background image; you can see it here: www.fernandaramos.com. It was really nice and easy, apart from the odd detail that I had to change the values of the constants that determine the movement of the metaballs, because their behaviour in a browser was very different from that in a Processing sketch (maybe the noise() function behaves differently?).
The problem is that when I tried it on different smartphones (iPhone 4 and HTC Desire S) I got frame rates lower than 1 fps. I tried lowering the resolution and getting rid of the metaball effect, using pre-rendered images of the metaballs instead. After several tests I came to the conclusion that image.blend() is too heavy for a mobile to handle (I'm trying it with a 300x300 image and a 50x50 PGraphics).
This is the original code; mobile browsers would be redirected to the lighter version:
/**
 * Metaflies by Nacho Cossio and Fernanda Ramos
 * based on Metaball Demo Effect by luis2048, which uses Jim Blinn's technique
 * for rendering
 */

/* @pjs preload="tatto600.jpg"; */

PImage back;
int numBlobs = 10;
int[] blogPx = new int[numBlobs];
int[] blogPy = new int[numBlobs];
float scaleFactorX, scaleFactorY;
PGraphics pg;
PImage aux;
int[][] vy, vx;
float increment = 0.04;
float[] seed = new float[numBlobs];

void setup() {
  size(600, 600);
  //noCursor();
  noiseDetail(8, 0.5);
  back = loadImage("tatto600.jpg");
  pg = createGraphics(100, 100, JAVA2D);
  aux = createImage(back.width, back.height, RGB);
  vy = new int[numBlobs][pg.height];
  vx = new int[numBlobs][pg.width];
  for (int i = 0; i < numBlobs; i++) {
    seed[i] = random(0, 1000);
  }
  scaleFactorX = width/pg.width;
  scaleFactorY = height/pg.height;
}

void draw() {
  frameRate(20);
  //background(0);
  for (int i = 0; i < numBlobs; ++i) {
    float incX = (4*(mouseX/scaleFactorX - blogPx[i])/pg.width) + (noise(seed[i], 0)-0.5)*44;
    float incY = (4*(mouseY/scaleFactorY - blogPy[i])/pg.height) + (noise(0, seed[i])-0.5)*44;
    blogPx[i] += incX;
    blogPy[i] += incY;
    seed[i] += increment;
    for (int x = 0; x < pg.width; x++) {
      vx[i][x] = int(sq(blogPx[i]-x));
    }
    for (int y = 0; y < pg.height; y++) {
      vy[i][y] = int(sq(blogPy[i]-y));
    }
  }
  pg.beginDraw();
  pg.background(0, 0, 0);
  pg.loadPixels();
  for (int y = 0; y < pg.height; y++) {
    for (int x = 0; x < pg.width; x++) {
      int m = 1;
      for (int i = 0; i < numBlobs; i++) {
        // Increase this number to make your blobs bigger
        m += 1200/(vy[i][y] + vx[i][x]+1);
      }
      pg.pixels[x+y*pg.width] = color(m, m, 0.68*(m));
    }
  }
  pg.updatePixels();
  pg.endDraw();
  aux = back.get();
  aux.blend(pg, 0, 0, pg.width, pg.height, 0, 0, aux.width, aux.height, HARD_LIGHT);
  image(aux, 0, 0, width, height);
}
Is this correct? Is there any workaround to get something similar to what blend() does in this particular case?
I'm playing around with moving particles, which are drawn as lines from the previous position to the new one. The problem comes when I zoom in and get too close to these lines: they turn from small lines moving inside a 3D space into lines crossing the display from one side to the other, so they end up looking like lines in a 2D plane.
After searching the forum, I thought this was related to the near clipping plane distance, so I tried setting a larger distance with perspective(). But no matter how large I set it, the result always looks the same; even with a near plane distance very close to the far plane distance, all the particles are still drawn.
I was using scale() for zooming, so I decided to try camera() instead (just experimenting here), but I'm getting the same problem.
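This is a stripped-down version of what I'm testing (the particle motion is just a noise() placeholder; the zoom is done by moving the eye with camera() instead of scale(), and the near/far planes are set explicitly with perspective()):

float zoom = 300;   // distance from the eye to the scene centre

void setup() {
  size(640, 360, P3D);
}

void draw() {
  background(0);
  // explicit near/far planes; near is the value I have been trying to raise
  perspective(PI/3.0, float(width)/float(height), 50, 5000);
  // zoom by moving the eye closer instead of calling scale()
  camera(0, 0, zoom, 0, 0, 0, 0, 1, 0);
  stroke(255);
  for (int i = 0; i < 100; i++) {
    float x = noise(i, frameCount * 0.01) * 400 - 200;
    float y = noise(i + 100, frameCount * 0.01) * 400 - 200;
    float z = noise(i + 200, frameCount * 0.01) * 400 - 200;
    line(x, y, z, x + 5, y + 5, z + 5);
  }
}

void keyPressed() {
  if (key == '+') zoom -= 20;   // zoom in
  if (key == '-') zoom += 20;   // zoom out
}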