I whipped up a little proof-of-concept sketch in Processing. It's a drawing program that uses Processing's mask() function.
It works fine in Java mode, but not in JavaScript mode: I can see one of the images, but not the one affected by the mask functionality. Is it possible that I can't use that filter, or any filter for that matter? I've been having trouble running many different examples in Processing.js, so I'm not really sure what the limitations are.
Tested on the newest version of Processing, 2.0.3, on Windows 8 in Chrome and IE.
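For reference, mask() essentially copies the mask image's blue channel into the target image's alpha channel. If the call turns out to be unsupported in Processing.js, a per-pixel fallback along these lines might work. This is just a sketch of the idea in plain Java, operating on raw ARGB ints the way Processing stores them; the class and method names are placeholders:

```java
public class MaskFallback {
    // Copy the mask's blue channel into the image's alpha channel,
    // which is roughly what PImage.mask() does under the hood.
    static int[] applyMask(int[] imgPixels, int[] maskPixels) {
        int[] out = new int[imgPixels.length];
        for (int i = 0; i < imgPixels.length; i++) {
            int alpha = maskPixels[i] & 0xFF; // mask's blue channel
            out[i] = (alpha << 24) | (imgPixels[i] & 0x00FFFFFF);
        }
        return out;
    }

    public static void main(String[] args) {
        int[] img  = { 0xFF112233 };  // one opaque pixel
        int[] mask = { 0xFF000080 };  // blue channel = 0x80
        int[] result = applyMask(img, mask);
        System.out.println(Integer.toHexString(result[0])); // prints "80112233"
    }
}
```

In a sketch you would run this over imgPixels from loadPixels(), write the result back, and call updatePixels().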
I have been working on a simple drawing program that lets you dynamically draw image masks. However, I want to scale the images and center them in the screen, no matter what the screen size is. This throws off my mouseX and mouseY drawing functionality.
// Use float math here: with ints, height/originalHeight truncates to 0
// whenever the image is larger than the stage.
int scaledWidth = round(originalWidth * ((float) height / originalHeight));
int scaledHeight = round(originalHeight * ((float) width / originalWidth));
if (imageToScale.width > imageToScale.height) {
  // If the image is wider than it is tall: make it as wide as the screen,
  // and scale the height by the same factor.
  imageToScale.width = width;   // the width of the stage
  imageToScale.height = scaledHeight;
}
else {
  // Same approach for tall images.
  imageToScale.height = height; // the height of the stage
  imageToScale.width = scaledWidth;
}
}
I was thinking of different ways to get around this. I tried implementing a PGraphics that would do all the drawing, then moving and positioning the PGraphics itself. However, it seems like I can't do that with the masking shader: beginDraw() and endDraw() have to go one after another and can't be "nested" within each other (if I'm interpreting the errors I received correctly).
I'm just not really sure of what the best strategy is for accomplishing this. Automatically scale to screen, be able to position the "drawing stage" wherever I want, and have the X and Y coordinates translate from mouse position to position on the image. Not sure how to properly "encapsulate" everything in a way that I can understand.
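One common way to encapsulate this: draw into an offscreen buffer at the image's native size, and convert mouse coordinates into image coordinates with the inverse of the scale-and-offset used to display it. If the image is drawn at (offsetX, offsetY) and scaled by scaleFactor, the screen position of an image point is offset + imagePoint * scaleFactor, so the inverse is the sketch below. Variable names here are assumptions, not anything from your code:

```java
public class StageCoords {
    // Invert screenX = offsetX + imageX * scaleFactor to recover
    // the image-space coordinate under the mouse.
    static float toImageX(float mouseX, float offsetX, float scaleFactor) {
        return (mouseX - offsetX) / scaleFactor;
    }

    static float toImageY(float mouseY, float offsetY, float scaleFactor) {
        return (mouseY - offsetY) / scaleFactor;
    }

    public static void main(String[] args) {
        // A 1000px-wide image scaled to half size and centered on an 800px stage:
        float scaleFactor = 0.5f;
        float offsetX = (800 - 1000 * scaleFactor) / 2; // 150
        System.out.println(toImageX(150, offsetX, scaleFactor)); // 0.0   (left edge)
        System.out.println(toImageX(650, offsetX, scaleFactor)); // 1000.0 (right edge)
    }
}
```

With that in place, all drawing code works in image coordinates and never needs to know how the stage is scaled or positioned.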
I would like to be able to save these images at higher resolution than the stage dimensions allow. saveFrame "flattens" the first image, the mask, and the second image to make an image. This is the functionality I would like, but it is at the resolution of the stage. It would be nice to be able to bring in photos with higher resolution than the stage (like a cellphone res), draw your mask and save everything at a higher resolution jpg.
Is there a simple "combining" or "flattening" functionality? Or will I need to go through the pixels on a new PImage and assign img or img2 to a particular pixel depending on the mask's pixel information (0 vs. 255)? Or maybe this is a job for shaders (which I have no experience with)?
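As far as I know there is no single built-in "flatten" call beyond drawing everything into a PGraphics and saving that, but the per-pixel approach you describe is straightforward, and running it on full-resolution copies of the photos and the (upscaled) mask is exactly what lets you save above stage resolution. A minimal sketch of the idea, assuming a binary mask of 0 or 255:

```java
public class Flatten {
    // Pick a pixel from img2 where the mask is white (255), else from img.
    // Run this on full-resolution pixel arrays to composite above stage size.
    static int[] composite(int[] img, int[] img2, int[] mask) {
        int[] out = new int[img.length];
        for (int i = 0; i < img.length; i++) {
            int m = mask[i] & 0xFF; // mask brightness, assumed 0 or 255
            out[i] = (m == 255) ? img2[i] : img[i];
        }
        return out;
    }

    public static void main(String[] args) {
        int[] a = {1, 1}, b = {2, 2}, m = {0, 255};
        int[] out = composite(a, b, m);
        System.out.println(out[0] + "," + out[1]); // prints "1,2"
    }
}
```

If your mask has soft edges (intermediate gray values), you would blend the two source pixels by the mask value instead of picking one, which is also doable in the same loop.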
I'm working on an image viewer that will be similar to a screen saver. What I'm trying to do is move the x/y coordinates of a PImage, and maybe resize the image, very gradually. My problem is that even when I use fractional x/y values, play with the frame rate, and try motion-tweening libraries, the animation is jittery because it is so slow.
How can I achieve slow, gradual motion that is also smooth?
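A frequent cause of this kind of jitter is that the position gets quantized each frame: if it's stored as an int, or rounded before being accumulated, sub-pixel movement is lost and the image jumps a whole pixel at a time. Keeping the position in a float that is never rounded, and only rounding at draw time, usually helps (smooth() or a renderer with sub-pixel drawing helps further). A minimal sketch of the accumulator idea:

```java
public class SlowPan {
    // Accumulate movement in a float; never round the accumulator itself.
    static float advance(float x, float speed, int frames) {
        for (int i = 0; i < frames; i++) {
            x += speed;
        }
        return x;
    }

    public static void main(String[] args) {
        float x = advance(0f, 0.1f, 100); // one pixel every ten frames
        int drawX = Math.round(x);        // quantize only when drawing
        System.out.println(drawX);        // prints "10"
    }
}
```

The same principle applies to tweening libraries: feed them float positions and only convert to pixels at the image() call.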
I know there have been a few threads about this, but I'm having trouble wrapping my head around it.
I've made a simple image viewer that can pan up, down, left and right. It can zoom in and out. I'm using scale and translate to accomplish this.
Positioning isn't managed via mouse, but keyboard buttons in a "latch" type mode. Up, down, left and right are their respective keys. E is zoom in, W is zoom out. Space "stops" the current action.
When I zoom, the scaling happens with respect to the image's center, not the center of the screen. I know I need to adjust the position when I zoom, but I haven't found the right formula to do so.
I have no idea what I am doing anymore and my brain hurts. What's the best way to translate the coordinates in relation to the scaling ratio? Any help is greatly appreciated.
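The usual trick is to adjust the translation so the point currently under the screen center stays put when the scale changes. If the image's origin lands on screen at offset, so that screenX = offset + imageX * scale, then the image coordinate under the center is (center - offset) / oldScale, and the new offset must place that same coordinate back at the center. A sketch of that formula (one axis shown; y works the same way, and the variable names are mine):

```java
public class ZoomAboutCenter {
    // Keep the image point under the screen center fixed while the
    // scale changes from oldScale to newScale.
    // Display model: screenX = offset + imageX * scale.
    static float adjustOffset(float offset, float center, float oldScale, float newScale) {
        float imagePointAtCenter = (center - offset) / oldScale;
        return center - imagePointAtCenter * newScale;
    }

    public static void main(String[] args) {
        // Image origin at 100, scale 1.0, screen center at 400:
        // image coordinate 300 sits under the center. Zooming to 2.0
        // must keep coordinate 300 there, so the origin moves to -200.
        float newOffset = adjustOffset(100, 400, 1.0f, 2.0f);
        System.out.println(newOffset); // prints "-200.0"
    }
}
```

In a sketch using translate()/scale(), you would call adjustOffset for x and y in the zoom key handler before applying the new scale.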
Sorry, this is a total beginner question. Please excuse my ignorance.
I'm trying to process the pixels in an image. I want to be able to make my own filters that I can apply live. But the pushMatrix and popMatrix functions are giving me some guff.
So I have a series of images inside an array. They are displayed in a slideshow and then affected live by pressing keys. I'm trying to make an image mirror itself right down the middle while I hold a key, and revert when I release it. The program is pretty big (and ugly), so I'm only going to show the relevant code.
pushMatrix();
images[imageIndex].loadPixels();
int imageWidth = images[imageIndex].width;
int imageHeight = images[imageIndex].height;
// Begin loop for width
for (int x = 0; x < imageWidth; x++) {
  // Begin loop for height
  for (int y = 0; y < imageHeight; y++) {
    // Reversing x to mirror the image
    images[imageIndex].pixels[y * imageWidth + x] =
      images[imageIndex].pixels[(imageWidth - x - 1) + y * imageWidth];
  }
}
images[imageIndex].updatePixels();
popMatrix();
The problem is that the effect is applied the way I want it to be, but it doesn't "undo" itself. I press the key, it performs the mirroring, and when I let go the mirroring stays. Then when the slideshow loops around, the next time that image comes up it is still mirrored.
I thought that putting the matrix manipulation inside push and pop would make it "undo-able", but I clearly don't have a good grasp of how push and pop work.
What am I doing wrong? Any help is greatly appreciated.
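The short answer is that pushMatrix()/popMatrix() only save and restore the coordinate transform (translate/rotate/scale); they never touch pixel data, so writes into pixels[] permanently change the PImage. One way out is to leave the original untouched and mirror into a copy, drawing the copy while the key is down and the original otherwise (another is to skip pixels entirely and draw with scale(-1, 1) at display time). A sketch of the copy approach in plain Java terms, operating on a bare pixel array:

```java
public class MirrorCopy {
    // Build a mirrored copy, leaving the original pixels untouched.
    // Mirrors the right half onto the left, as in the original loop.
    static int[] mirrorLeftHalf(int[] pixels, int w, int h) {
        int[] out = pixels.clone();
        for (int y = 0; y < h; y++) {
            for (int x = 0; x < w / 2; x++) {
                out[y * w + x] = pixels[y * w + (w - x - 1)];
            }
        }
        return out;
    }

    public static void main(String[] args) {
        int[] original = {1, 2, 3, 4}; // one row, w = 4
        int[] mirrored = mirrorLeftHalf(original, 4, 1);
        System.out.println(java.util.Arrays.toString(mirrored)); // prints "[4, 3, 3, 4]"
        System.out.println(java.util.Arrays.toString(original)); // prints "[1, 2, 3, 4]"
    }
}
```

In the sketch, that would mean something like building the mirrored PImage with PImage.copy() or get() on key press, and simply drawing images[imageIndex] again on key release.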
I have some questions about using smooth() and noSmooth(), if anyone can help.
I am loading images from flickr. Using smooth(); makes them look good.
When you type and press enter, it displays text and searches for a new batch of flickr photos. After a few moments, I initiate an animation that will clear the search field. The text goes down and fades out (color and alpha) simultaneously.
When smooth is on, the animation is slow and jittery. When smooth is off, the animation is great but the pictures look bad.
I've tried using smooth, and then noSmooth when the animation gets executed. But it doesn't just apply to the image, the entire screen is noSmooth, making the image blocky again.
My code is kind of a mess right now, so if you think seeing it will help you understand the problem, I will post it. For now, I just wanted to know how people handle animating text, work with multiple assets, use (or don't use) smooth()/noSmooth(), and whether those functions can be applied to individual elements.
I did some work today on a simple mp3 player that will play back samples when you press keyboard keys. I got it to a basic, usable state.
The idea is that I want to use my old G1 Android phone to run the app. It has physical keys, and it would be cool to be able to use it as a music making accessory device, or just a cool canvas for Processing ideas.
Here are the issues:
Looking at how to start exporting sketches to Android...
"
Note that due to major changes by Google to the Android dev tools, Android mode in Processing 1.5.x no longer works."
Ok, I will just download Processing 2.
Looking at the Processing 2 changes file...
"We'll only be supporting Android 2.3 (Gingerbread) and later, starting with Processing 2.0a5."
Hmmm, I rooted my G1 and installed the last stable Cyanogen build for the G1. Which one was that? 6. CM6=Froyo. CM7=Gingerbread.
Input from anyone with Processing/Android experience would be greatly appreciated. I would rather hear from someone with knowledge in the area than tear my hair out re-inventing the wheel.
Should I try to find a way to get CM7 on my G1? It looks like it might be possible, but could push the G1 hardware to the limit.
It says specifically "starting with Processing 2.0a5". But I only see the a4 and a3 versions on the download page. Would these versions play fine with Froyo (CM6)?
In general, I'm interested in hearing about people's adventures with Processing on now-antiquated or low-power Android devices. Specifically, is trying to get this G1 Processing-friendly just an exercise in futility?
Hi, I'm new to Processing. I'm going through the tutorials, and then trying to do different things with the base principles in order to test my knowledge. I've run into a problem I was hoping someone would help me with.
I wanted to make my own image filter. I understood the principle of doing that with brightness values, but when I tried to do the same with alpha values I hit a brick wall.
I loaded an image from a file, took the pixel information and created a new image object. Displaying the new image works, and the brightness filter from the tutorial works. But trying to change the alpha values on a per-pixel basis did not work. Just to make sure, I used tint() to adjust the alpha on the whole image object instead of the individual pixels, and that did work.
So it looks to me that:
pixels[i] = color(255, 102, 204);
Would seem correct, but then adding a fourth value:
pixels[i] = color(255, 102, 204, 127);
Did nothing at all. I did some reading, and I'm getting the impression that pixels[] simply won't take alpha values. Is this a correct assumption? Or am I screwing up royally?
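It's not quite that pixels[] won't take alpha: each entry is a 32-bit ARGB int, so the alpha is stored. What usually eats it is the image's format. Images loaded from JPEGs are RGB-format, so the alpha you set per pixel is ignored when drawing; creating the destination with createImage(w, h, ARGB) is the usual workaround, and the main display's pixels[] doesn't alpha-blend either. The packing itself, shown in plain Java:

```java
public class ArgbPacking {
    // Pack a, r, g, b (0-255 each) into one 0xAARRGGBB int,
    // the same layout Processing's color() produces.
    static int pack(int a, int r, int g, int b) {
        return (a << 24) | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        int c = pack(127, 255, 102, 204); // like color(255, 102, 204, 127)
        System.out.println(Integer.toHexString(c)); // prints "7fff66cc"
    }
}
```

So the four-argument color() call itself is fine; the trick is making sure the PImage you write it into is ARGB.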