I'm working on creating algorithmic game levels like the "island" below: everything is generated computationally using random walks and other parameters for different "growing" terrains (blue is water, tan is beach, etc) including the shape of the island.
However, I'd like to be able to remove the isolated pieces that can't be reached by the player (who can't enter the water). Here's a detail of the image; the bits I'd like to remove are in the upper-right corner:
I can remove single pixels of beach that are surrounded by blue on all sides very easily, but I can't figure out an efficient (and not super confusing) way to get the connected pieces of beach that never touch another color.
My gut says the solution is probably something recursive? Any suggestions? Thanks!
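A recursive flood fill is the classic answer, but deep recursion can overflow the stack on large regions, so an explicit queue (breadth-first) version is safer. Here's a minimal sketch in plain Java, assuming a boolean grid where `true` is land and `false` is water (the names and the grid representation are illustrative, not from the original code): everything connected to a known-reachable seed cell gets marked, and any land left unmarked is an isolated piece you can turn to water.

```java
import java.util.ArrayDeque;

public class IslandFill {
    // Marks every land cell connected (4-way) to the seed; returns the visited mask.
    public static boolean[][] reachable(boolean[][] land, int seedX, int seedY) {
        int w = land.length, h = land[0].length;
        boolean[][] seen = new boolean[w][h];
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        queue.add(new int[]{seedX, seedY});
        seen[seedX][seedY] = true;
        int[][] dirs = {{1,0},{-1,0},{0,1},{0,-1}};
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            for (int[] d : dirs) {
                int nx = p[0] + d[0], ny = p[1] + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h && land[nx][ny] && !seen[nx][ny]) {
                    seen[nx][ny] = true;
                    queue.add(new int[]{nx, ny});
                }
            }
        }
        return seen;
    }

    // Land cells not reachable from the seed get turned to water; returns how many.
    public static int removeUnreachable(boolean[][] land, int seedX, int seedY) {
        boolean[][] seen = reachable(land, seedX, seedY);
        int removed = 0;
        for (int x = 0; x < land.length; x++)
            for (int y = 0; y < land[0].length; y++)
                if (land[x][y] && !seen[x][y]) { land[x][y] = false; removed++; }
        return removed;
    }
}
```

Any beach pixel you already know the player can stand on works as the seed; one pass removes every disconnected fragment at once.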
I'm trying to load the samples from an audio recording into a float array, using the method suggested from the Minim library. However, I'm getting the error "Cannot find anything named 'BufferedAudio'".
Hi all, I'm trying to get a (probably dumb) thing to work in PBox2d: getting an arm to point towards the mouse. I have a basic box with a revolute joint and a motor. The rotation works fine on its own.
According to a tutorial I found, I then need to add a mouse joint to direct the motor... but it's not working. My code is pretty lengthy, so perhaps there is someone out there with an example sketch?
Maybe I'm going crazy: I'm trying to find the most common colors in an image, and using filter(POSTERIZE, n) seems the easiest solution. However, whatever value I put in for n, the result always contains more than the specified number of colors.
Many of these are grayscale values, which leads me to suspect that POSTERIZE limits to colors + grayscale? Any ideas on getting rid of those "extra" grayscale values but leave them if they are really the most dominant?
If it makes any difference, I'm posterizing the image, then loading the pixels into a HashSet to remove duplicates quickly, then spitting the results back out as a color array. Maybe something weird is happening there?
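This matches how posterize filters typically work: the level count applies to each RGB channel independently, so POSTERIZE with n levels can yield up to n*n*n distinct colors, and any pixel where the quantized r, g, and b land on the same value comes out gray. A sketch of that per-channel quantization (illustrative, not necessarily Processing's exact formula):

```java
public class Posterize {
    // Snaps one 0-255 channel value to `levels` evenly spaced values.
    // Applied to r, g, and b separately, this yields up to levels^3 colors.
    public static int quantize(int v, int levels) {
        int bucket = Math.min(levels - 1, v * levels / 256);
        return bucket * 255 / (levels - 1);
    }
}
```

So the grays aren't "extra"; they're legitimate outputs of the quantization. If you only want the n most dominant colors, counting occurrences of the posterized values (e.g. a HashMap of color to count) and keeping the top n would get there.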
This works, but (as far as I can tell) measures the difference between the darkest and lightest values - not a very good overall idea of contrast. An all-gray, all-white, or all-black image turns out 0 contrast, but a super-contrasty image and a not-so-contrasty one but with a few very dark and light areas both return high contrast.
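One common alternative that fixes exactly this is RMS contrast: the standard deviation of the brightness values rather than the max-minus-min range. An all-gray, all-white, or all-black image still scores 0, but a few isolated dark and light pixels barely move it, while a genuinely contrasty image scores high. A minimal sketch, assuming you've already extracted per-pixel brightness on a 0-255 scale:

```java
public class Contrast {
    // RMS contrast: standard deviation of brightness values.
    public static double rms(double[] brightness) {
        double mean = 0;
        for (double b : brightness) mean += b;
        mean /= brightness.length;
        double var = 0;
        for (double b : brightness) var += (b - mean) * (b - mean);
        return Math.sqrt(var / brightness.length);
    }
}
```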
Hi all, after looking through the examples for Threads (and some really confusing Java examples), I can't seem to figure out how to load external data in a thread when a key is pressed. There must be a way to download data only when a key is pressed (as opposed to automatically at set intervals with sleep). Second, I'd like to be updated on that progress in the sketch window.
Here's what I have so far - it does load data but I don't see the "loading" text at all.
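The usual reason the "loading" text never appears is that the load runs on the animation thread, so draw() is blocked until it finishes. The pattern that fixes it: start the worker from keyPressed() (in Processing, `thread("loadData")` does this), have the worker flip a `volatile` flag while it runs, and have draw() read the flag to decide whether to show the text. A minimal plain-Java sketch of that pattern (names and the stand-in "download" are illustrative):

```java
public class Loader {
    public static volatile boolean loading = false;
    public static volatile String result = null;

    // Call this from the key handler; returns the worker so callers can wait on it.
    public static Thread startLoad() {
        loading = true;
        Thread t = new Thread(() -> {
            result = "data";   // stand-in for the real download
            loading = false;   // draw() sees this and stops showing "loading"
        });
        t.start();
        return t;
    }
}
```

In the sketch itself, draw() would just do `if (loading) text("loading...", 20, 20);` each frame.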
I'm looking for suggestions on improving some code that currently runs really slow (10 minutes or longer for a low-res image) - hopefully the description below makes sense! The full code is a bit long for the forum, but posted here.
I'd like to start with "seed pixels" (hundreds to tens of thousands, depending on the image) and:
Store all neighboring pixels in a list (up, right, down, and left)
BUT, make sure that there are no duplicates in the resulting list of locations
To further complicate, I'd like to do this iteratively, expanding out from a previously gathered set over and over. This also means not gathering pixels that were gathered in previous states of the system.
Currently I iterate all seed pixels, finding their neighbors with +/- from their position in the pixel array. The duplication-check is being done with:
This seems to work fine and, when commented out, doesn't seem to be what's bogging down my code, though each iteration makes the system go slower than the last.
I've looked into Set and HashMap as alternatives to standard arrays, but I need to preserve each pixel's position and original color, making things a little trickier.
Ideas at this point:
Store a boolean array of previously traversed pixels - check the new positions against that (if the check is what's slowing things down)?
Some kind of image-mask applied to source - perhaps the underlying image-processing algorithms are optimized for this kind of 2d matrix?
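The first idea is likely the win: with a boolean "visited" array indexed by pixel position, the duplicate check is O(1), and if each iteration only walks the current frontier (the pixels added last round) instead of everything gathered so far, the cost per step stays proportional to the frontier size - which is why the current version slows down each iteration. A sketch, assuming pixels are stored as indices into a w*h array (names are illustrative):

```java
import java.util.ArrayList;
import java.util.List;

public class Frontier {
    // Expands one ring out from `frontier` (indices into a w*h pixel grid),
    // skipping anything already visited; returns the newly gathered pixels.
    public static List<Integer> expand(List<Integer> frontier, boolean[] visited, int w, int h) {
        List<Integer> next = new ArrayList<>();
        int[][] dirs = {{1,0},{-1,0},{0,1},{0,-1}};
        for (int idx : frontier) {
            int x = idx % w, y = idx / w;
            for (int[] d : dirs) {
                int nx = x + d[0], ny = y + d[1];
                if (nx >= 0 && ny >= 0 && nx < w && ny < h) {
                    int n = nx + ny * w;
                    if (!visited[n]) {
                        visited[n] = true;   // O(1) duplicate check
                        next.add(n);
                    }
                }
            }
        }
        return next;
    }
}
```

Original colors don't need to move anywhere: the index into the visited array is the same index into pixels[], so position and color are preserved for free.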
Hi all, I've been thinking about 3d rotation using the mouse - normally I've mapped the mouse x/y coordinates to rotation values. However, this means that when you click on the screen the model jumps to that rotation.
Instead, I wanted to click-and-drag to incrementally rotate, more like in a 3d modeling program. Here's my working version:
int boxSize = 600; // size for 3d box
float inc = 0.05; // increment to rotate during drag
color orange = color(212, 124, 23); // fill color
float rx, ry; // keep track of rotation x/y
void setup() {
size(500,500, OPENGL);
strokeWeight(20);
// initial rotation
rx = radians(-25);
ry = radians(45);
}
void draw() {
background(0);
// rotate from center
translate(width/2, height/2, -boxSize*5);
rotateX(rx);
rotateY(ry);
// box!
stroke(0);
fill(orange);
box(boxSize);
// directional lines
pushMatrix();
translate(-boxSize*2, boxSize/2, boxSize/2);
stroke(255,0,0);
line(0,0,0, boxSize, 0, 0);
stroke(0,255,0);
line(0,0,0, 0, -boxSize,0);
stroke(0,0,255);
line(0,0,0, 0,0,-boxSize);
popMatrix();
}
// the magic happens here!
void mouseDragged() {
// normally we'd do something like this
// ry = map(mouseX, 0,width, 0,TWO_PI);
// rx = map(mouseY, 0,height, 0,TWO_PI);
// drag in x direction, rotate y
if (mouseX > pmouseX) {
ry += inc;
}
else if (mouseX < pmouseX) {
ry -= inc;
}
// ditto y
if (mouseY > pmouseY) {
rx -= inc;
}
else if (mouseY < pmouseY) {
rx += inc;
}
}
This seems to work ok, but I'm looking for suggestions to make this more natural. Ideas?
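One change that makes it feel much more like a modeling program: scale the rotation by how far the mouse actually moved rather than adding a fixed increment per event, so slow drags give fine control and fast flicks spin further. A sketch of the replacement mouseDragged() logic, factored into a testable function (same rx/ry variables as above; the sensitivity constant is something to tune):

```java
public class DragRotate {
    // sensitivity: radians of rotation per pixel of drag.
    // In mouseDragged() this would just be:
    //   ry += (mouseX - pmouseX) * sensitivity;
    //   rx -= (mouseY - pmouseY) * sensitivity;
    public static float[] updateRotation(float rx, float ry,
                                         int mouseX, int mouseY,
                                         int pmouseX, int pmouseY,
                                         float sensitivity) {
        ry += (mouseX - pmouseX) * sensitivity;
        rx -= (mouseY - pmouseY) * sensitivity;
        return new float[]{rx, ry};
    }
}
```

This also removes the four-way if/else entirely: the sign of the delta handles direction for free.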
I've been working on training an object-detector with OpenCV 2 (using opencv_traincascade) and have had good success implementing the resulting XML file in Python. I was sad to find that installing OpenCV 2 broke the Processing library, so I tried installing the JavaCVPro library. While it works using an image of a face and one of the pre-built cascade files, it throws a weird error and quits when I try to use my XML file:
Rectangle[] found = cv.detect(1.2, 2, OpenCV.HAAR_DO_CANNY_PRUNING, 60,60);
println("Found: " + found.length);
stroke(255);
noFill();
for (Rectangle f : found) {
rect(f.x, f.y, f.width, f.height);
}
}
public void stop() {
cv.stop();
super.stop();
}
While my cascade works great in Python, it throws this weird error:
OpenCV Error: Unspecified error (The node does not represent a user object (unknown type?)) in cvRead, file /Users/palimpseste/Development/OpenCV/opencv-2.x/svn/OpenCV/modules/core/src/persistence.cpp, line 4811 terminate called throwing an exception
I did have some trouble getting the Python code to work until I changed cv.cascade to cv2.CascadeClassifier (apparently this has to do with the method for training in cv1 vs cv2). I wonder if all of the cv2 code has been updated in the library?
Any suggestions would be great - I like using and teaching Processing, so getting this working again would be fantastic.
I've noticed on my Mac (running Lion) that if I use selectInput() or selectOutput() and hit the OK button, the file path is loaded properly, but if I hit enter/return (which I usually do), nothing is returned. It doesn't return a null string (as when the process is canceled); it just goes on like nothing happened.
Could this have to do with my having a keyPressed() function in my code? Is this a bug? Just the way it is? Workaround?
Hi all, I'm building a drawing sketch that uses motion tracking to generate drawings. Because I want the video to appear onscreen behind the drawing and because I don't want the video image saved with the drawing, I'm building a separate PGraphics context with the lines and a transparent background. Here's a screenshot:
The problem I'm running into is with erasing: without the video image, I would just draw shapes with the background color... easy. But since I want to clear areas of the PGraphics array, I'm using the following code from another forum post:
This is great, and works very fast. However, if the cursor moves quickly the erased marks appear as isolated dots, not smooth washes.
I've been trying various methods to clear larger areas, but simple approaches like drawing a rectangle with a fill of (0,0) don't work, since the fill is just transparent and the lines below show through. Another version, which works ok but is VERY laggy, is this:
drawing.loadPixels();
// number of steps based on the distance between start/end
// (fewer steps for shorter distances)
int steps = int(map(dist(prevX, prevY, tipX, tipY), 0, width, 1, 10));
// iterate through the steps, erasing a circle at each one
for (int p = 0; p <= steps; p++) {
  float tempX = lerp(prevX, tipX, p / float(steps));
  float tempY = lerp(prevY, tipY, p / float(steps));
  // only test pixels inside the circle's bounding box
  int minX = max(0, int(tempX - eraserSize/2));
  int maxX = min(drawing.width - 1, int(tempX + eraserSize/2));
  int minY = max(0, int(tempY - eraserSize/2));
  int maxY = min(drawing.height - 1, int(tempY + eraserSize/2));
  for (int x = minX; x <= maxX; x++) {
    for (int y = minY; y <= maxY; y++) {
      if (dist(x, y, tempX, tempY) <= eraserSize/2) {
        drawing.pixels[x + y*drawing.width] = color(0, 0);
      }
    }
  }
}
drawing.updatePixels();
This interpolates a bunch of clear circles, but is a total processor hog (and it erases many pixels more than once), especially when also doing blob tracking, etc.
So, my question is two-part:
Any thoughts on an algorithm to write a rectangle of pixels at an angle? Doing it horizontally/vertically is rather easy, but figuring out how to do it at an angle has me stumped (push/popMatrix don't seem to work properly, at least in my code).
Any other ideas of a method for erasing that would be faster?
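For the angled-rectangle question, one approach that avoids push/popMatrix entirely: instead of rotating the rectangle, rotate each candidate pixel back into the rectangle's own coordinate frame and do a plain axis-aligned containment test. Only pixels inside the rectangle's bounding box need checking. A sketch (a hand-rolled point test, not anything built into Processing):

```java
public class RotatedRect {
    // Is pixel (px,py) inside a w-by-h rectangle centered at (cx,cy),
    // rotated by `angle` radians?
    public static boolean contains(float px, float py, float cx, float cy,
                                   float w, float h, float angle) {
        float dx = px - cx, dy = py - cy;
        // inverse rotation: rotate the point by -angle into the rect's frame
        float localX = (float)(dx * Math.cos(-angle) - dy * Math.sin(-angle));
        float localY = (float)(dx * Math.sin(-angle) + dy * Math.cos(-angle));
        return Math.abs(localX) <= w / 2 && Math.abs(localY) <= h / 2;
    }
}
```

In the eraser loop, a rectangle spanning from the previous cursor position to the current one (width = eraser size, angle = atan2 of the movement) would erase a smooth swath in one pass, with each pixel touched at most once.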
While there are a few posts on the forum about character encoding, none seem to have answers other than setting the encoding for input/output.
I'm parsing the Wikipedia XML dump, reading it with BufferedReader and outputting results with PrintWriter, all of which are encoded with or use methods that assume UTF-8. I've verified the XML file's encoding with a bit of detective work in the Mac Terminal (as found on
Stack Overflow):
file -I (unknown)
However, some non-English characters show up as weird symbols in my output file, not as they are supposed to. Characters from Western languages appear to mostly be ok (umlauts, etc), but especially it seems that Arabic or Russian characters show up as odd punctuation marks, etc.
Perhaps worth mentioning: I'm parsing the data at some points using Jsoup, but am specifying UTF-8 in the 'parse' command.
Not a dire problem, but any suggestions? Seems to be encoding-related, but shouldn't UTF-8 handle all that ok, especially if they were encoded as UTF-8 to begin with?
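UTF-8 does cover Arabic and Cyrillic fine, so the usual culprit is one link in the chain silently falling back to the platform's default charset - for example, `new PrintWriter(file)` in plain Java uses the default encoding, not UTF-8. Stating the charset explicitly at every read and write rules this out. A small round-trip sketch of the explicit-charset pattern:

```java
import java.io.*;
import java.nio.charset.StandardCharsets;

public class Utf8Check {
    // Encode to UTF-8 bytes and decode back with the charset stated explicitly.
    // If any reader/writer in a pipeline omits the charset, non-Latin
    // characters are the first thing to break.
    public static String roundTrip(String s) throws IOException {
        byte[] bytes = s.getBytes(StandardCharsets.UTF_8);
        BufferedReader r = new BufferedReader(new InputStreamReader(
                new ByteArrayInputStream(bytes), StandardCharsets.UTF_8));
        return r.readLine();
    }
}
```

Worth auditing each stage the same way: the BufferedReader's InputStreamReader, the PrintWriter's underlying writer, and the Jsoup parse call - any one of them defaulting is enough to mangle the output.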
Hi all, I'm trying to figure out the direction of the normal of a face from an STL file I've loaded using Toxiclibs. I did something very similar with the OBJLoader library without issue (but I've abandoned that library because it has a lot of problems loading my files, whereas my STLs from Rhino load fine with Toxiclibs).
Ultimately, the project is sort of a hack of ray-casting, so I need to know if I'm hitting the front or back of a face.
Looking at the normals, they appear to be normalized, but I can't figure out how to go from that to a direction. It appears that STL files don't reliably store the normal's direction: "In most software this may be set to (0,0,0) and the software will automatically calculate a normal based on the order of the triangle vertices using the 'right-hand rule'."
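That quote is also the way out: the right-hand-rule normal can be computed directly from the vertex order, as the cross product (B - A) x (C - A). For the front/back test in a ray-cast, the sign of the dot product between that normal and the ray direction tells you which side was hit. A sketch with plain float arrays standing in for whatever vector class the mesh library uses:

```java
public class FaceNormal {
    // Right-hand-rule normal from vertex order: (b - a) x (c - a).
    public static float[] normal(float[] a, float[] b, float[] c) {
        float[] u = {b[0]-a[0], b[1]-a[1], b[2]-a[2]};
        float[] v = {c[0]-a[0], c[1]-a[1], c[2]-a[2]};
        return new float[]{
            u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]
        };
    }

    // Dot product with the ray direction: < 0 means the ray hits the
    // front of the face, > 0 the back.
    public static float facing(float[] n, float[] rayDir) {
        return n[0]*rayDir[0] + n[1]*rayDir[1] + n[2]*rayDir[2];
    }
}
```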
Anyone have an idea about how to change the "About" screen for an exported Mac application? Not super important, but it would be nice to be able to include contact info, etc. Not sure if this could be done through Processing or changing the package contents once the app is exported.
...but they seem more complicated than necessary. Since the window is already being created by Processing during export to show the icon and version information, it seems it should be possible to just insert text somewhere.
Hi all, I have what is perhaps a strange question: I'm enumerating combinations using a recursive function. I'd like to get one combination, do something with it, then the next, do something, etc. The problem is that draw() updates the window only when done. Is there a way to draw and update the screen outside the draw() function? I'm thinking there might be a Java method that skips the draw()...?
Here is a shorter version of my code to demonstrate:
String input = "0123";
int len = 3;

void setup() {
  size(300, 300);
  noLoop();
}

void draw() {
  background(255);
  line(0, 0, width, height);
  recursiveCombinations(input, len, new String());
}

// the recursive function, which generates all combinations in one go
void recursiveCombinations(String input, int depth, String output) {
  if (depth == 0) {
    // not sure why -'0' works to cast to an int... but it does!
    int r = (output.charAt(0) - '0') * (255 / input.length());  // scale to 0-255 based on input size
    int g = (output.charAt(1) - '0') * (255 / input.length());
    int b = (output.charAt(2) - '0') * (255 / input.length());
    println(r + "\t" + g + "\t" + b);
    // should write the text to the screen and wait 2 seconds before continuing...
    fill(0);
    text(r + ", " + g + ", " + b, 20, height/2);
    delay(2000);
  }
  else {
    for (int i = 0; i < input.length(); i++) {
      output += input.charAt(i);
      recursiveCombinations(input, depth-1, output);
      output = output.substring(0, output.length() - 1);
    }
  }
}
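One way around the all-in-one-frame problem is to not generate inside draw() at all: each length-len combination over n symbols can be decoded directly from a counter by treating it as a base-n number, so draw() shows exactly combination number frameCount each frame - no recursion, no delay(), and no noLoop() needed. A sketch of the decoder (the names are illustrative):

```java
public class Combos {
    // Returns the k-th length-`len` combination over `input`, counting from 0.
    // k is read as a base-n number, one digit per output position.
    public static String kth(String input, int len, int k) {
        StringBuilder out = new StringBuilder();
        int n = input.length();
        for (int i = len - 1; i >= 0; i--) {
            int digit = (k / (int) Math.pow(n, i)) % n;
            out.append(input.charAt(digit));
        }
        return out.toString();
    }
}
```

In the sketch, draw() would call something like `kth(input, len, frameCount % total)` with `total = pow(input.length(), len)`, and frameRate() controls how long each combination stays on screen.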
Hi all, I'm working through Daniel Shiffman's fantastic GA examples (www.shiffman.net/teaching/nature/ga) and am trying to make some modifications to his 'GAShakespeare' example. I understand the basics of how the GA model works, but am confused about some specific details of the code when it comes to measuring the 'fitness' of the population.
In the Population class, a 'perfectscore' (the stopping point) is defined as:
perfectScore = pow(2, target.length());
And the 'fitness' in the DNA class at a particular moment is similarly defined as:
fitness = pow(2, score);
Does anyone know why it would be 2 to the nth? With a longer string (>20 characters), however, this produces numbers too large for an int, and I had to switch to a long. I tried changing the code to just:
perfectScore = target.length();
fitness = score;
This works, but is much slower. I have no idea why, since it doesn't change the underlying crossover procedures. Thoughts appreciated!
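A plausible explanation (hedged, since it depends on the selection scheme in the example): exponential fitness dramatically increases selection pressure. With fitness-proportional selection, a candidate's share of the mating pool is its fitness divided by the total; under linear scoring, a string that matches 18 characters is only slightly more likely to reproduce than one matching 10, while under pow(2, score) it is overwhelmingly more likely - hence much faster convergence. The comparison in numbers:

```java
public class Fitness {
    // Fraction of a two-candidate mating pool this fitness value would claim
    // under fitness-proportional (roulette-wheel) selection.
    public static double share(double myFitness, double otherFitness) {
        return myFitness / (myFitness + otherFitness);
    }
}
```

Using pow() with doubles here also sidesteps the overflow that forced the switch to long.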
Hi all, I'm trying to look through a series of still images for face detection using the OpenCV library. It's working great and (I believe) I'm correctly passing the pixel array of my PImage, as suggested by the documentation to avoid memory issues on the Mac (http://ubaa.net/shared/processing/opencv/opencv_loadimage.html). After a number of images, though, I get:
*** set a breakpoint in malloc_error_break to debug
OpenCV ERROR: Insufficient memory (Out of memory)
Is there a call to clear the allocated memory before putting in the next image? I would have thought loading a new pixel array would overwrite the existing data, but perhaps not. Am I missing something obvious? Other suggestions?
I can't seem to get my head around this: I'm trying to read a binary file (in this case a .wav file) 4 bytes at a time. Ultimately I'd like to stream through the file sample by sample; I've done this with the Ess library, but I'd like something a bit faster and more robust.
My research is leading towards using a FileInputStream, but I can't get the syntax right. Anyone have a bit of code to point me in the right direction?
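Here's a sketch of the byte-level part, assuming 16-bit stereo PCM (the common .wav layout after the 44-byte header): each 4-byte frame holds one left and one right sample, little-endian. In practice the stream would be a FileInputStream wrapped in a BufferedInputStream for speed; a ByteArrayInputStream works identically for illustration.

```java
import java.io.*;

public class WavFrames {
    // Converts two little-endian bytes to a signed 16-bit sample.
    public static int sample(byte lo, byte hi) {
        return (hi << 8) | (lo & 0xFF);
    }

    // Reads one 4-byte frame; returns {left, right}, or null at end of stream.
    // (A robust version would loop until 4 bytes arrive, since read() may
    // return fewer mid-stream.)
    public static int[] readFrame(InputStream in) throws IOException {
        byte[] buf = new byte[4];
        if (in.read(buf) != 4) return null;
        return new int[]{ sample(buf[0], buf[1]), sample(buf[2], buf[3]) };
    }
}
```

Skipping the header first (`in.skip(44)` for a canonical WAV) puts the stream at the first sample frame.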
Hi all, it appears possible, but I cannot figure it out for the life of me: how to write sample values from an array into a sound file using Ess or Minim. The Ess documentation (http://www.tree-axis.com/Ess/AudioFile/AudioFile_write.html) says expressly that this is possible, but the example generates samples with the Wave object and I can't figure out how to modify it to work with an array.
Has anyone done this before who would be willing to share some code?
Hi all, I'm trying to split a stream of numbers into different text files using PrintWriter. A simplified pseudo-code example:
// create a random number
int randomNumber = int(random(0, 10));
PrintWriter output;

// depending on what that number is, write to a different file
if (randomNumber == 0) {
  output = createWriter("0.txt");
  output.println(randomNumber);
}
if (randomNumber == 1) {
  output = createWriter("1.txt");
  output.println(randomNumber);
}

output.flush();
output.close();
It will write, but seems to overwrite the existing line... or something else wonky. The Java docs show an "append" feature but it doesn't seem to work (and should println work anyway?)...
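The overwriting is expected: createWriter() opens a fresh file each call, truncating whatever was there. The append behavior lives one layer down, in FileWriter's two-argument constructor - passing `true` opens the file in append mode, and a PrintWriter wrapped around it (so println works as usual) adds to the end instead. A minimal sketch:

```java
import java.io.*;

public class AppendDemo {
    // Opens the file in append mode (the `true` flag), writes one line, closes.
    // For many writes, keeping the writers open across calls is much faster.
    public static void appendLine(File f, String line) throws IOException {
        PrintWriter out = new PrintWriter(new FileWriter(f, true));
        out.println(line);
        out.close();
    }
}
```

In a sketch, the cleaner pattern is usually to open one writer per target file in setup() and close them all at the end, rather than reopening per number.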
Hi all, I'm looking to run a very simple bash script from Processing - in this case to combine mp3s, fix their header data, and save. I have it working with no issue from Terminal but can't get it to work in Processing.
Hi all, I would like to pull specific lines from a very large text file - several gigs, so way too large to use loadStrings().
I have a version working that uses a BufferedReader and readLine() to pull every line from the file sequentially. From the documentation, it appears that readLine() starts at line 0 and each call returns the next line. Is this correct?
If I want to pull a specific line, say #100, is this possible?
Here's a piece of my code, where I'm reading from a text file of hex color values and filling every pixel with those values. I need specific lines so when I reach a certain number, the file is saved and the process is started again but at the next point in the text file.
// read hex values from the text file and fill pixels[] with those values
loadPixels();
numPx = pixels.length;
for (int i = 0; i < numPx; i++) {
  // read a line from the file
  try {
    hexLine = reader.readLine();
    println(count + ": " + hexLine);
    count++;
  }
  // if there's an issue, print the error and quit
  catch (IOException e) {
    e.printStackTrace();
    hexLine = null;
    println("Error reading file!");
    exit();
  }
}
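Yes, that reading of the docs is correct: BufferedReader has no random access, so readLine() simply returns the next line each call. To get line 100, either skip the first 100 lines with a fresh reader, or - as the posted code effectively does - keep one reader open and let consecutive reads advance it, which avoids re-skipping on each restart. A sketch of the skip-to-line helper:

```java
import java.io.*;

public class LineGrab {
    // Returns line `n` (0-indexed) from the reader, or null if the
    // stream has fewer than n+1 lines. Advances the reader past line n.
    public static String line(BufferedReader r, int n) throws IOException {
        String s = null;
        for (int i = 0; i <= n; i++) {
            s = r.readLine();
            if (s == null) return null;
        }
        return s;
    }
}
```

For the save-and-resume flow described above, the single persistent reader is the cheaper option: after saving an image, just keep calling readLine() from where the reader left off.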
I've spent quite a bit of time digging around, trying to find a way to export very large images (in this case, about 1200 x 500,000,000 px). Ultimately, this will be a printed image.
PGraphics seems the best option for offscreen drawing, and I've gotten my sketch to save when running smaller images. However, it runs out of memory at about 1200 x 20,000 px - far too small for this particular project.
I looked at the TileSaver but it seems to only be for high-res images of small OpenGL images.
Any suggestions for offscreen rendering, or for a way to spit out lots of smaller .tif or .png files and stitch them together? (The stitching need not happen in Processing, but it would need to be automated rather than done by hand in Photoshop.)
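One way around the memory ceiling, sketched here with plain Java imaging rather than PGraphics: render the picture in horizontal strips, writing each strip to disk before allocating the next, so only one strip is ever in memory. The strips can then be stitched by a command-line tool (ImageMagick's vertical append, for instance) rather than by hand. The drawing step is left as a placeholder since it depends on the sketch:

```java
import java.awt.image.BufferedImage;
import java.io.File;
import javax.imageio.ImageIO;

public class StripSaver {
    // Renders and saves `totalHeight` rows of a `width`-wide image in strips
    // of at most `stripHeight` rows; returns the number of strips written.
    public static int saveStrips(int width, int totalHeight, int stripHeight, File dir)
            throws java.io.IOException {
        int count = 0;
        for (int y0 = 0; y0 < totalHeight; y0 += stripHeight, count++) {
            int h = Math.min(stripHeight, totalHeight - y0);
            BufferedImage strip = new BufferedImage(width, h, BufferedImage.TYPE_INT_RGB);
            // ... draw rows y0 .. y0+h-1 of the full image into `strip` here ...
            ImageIO.write(strip, "png", new File(dir, String.format("strip_%05d.png", count)));
        }
        return count;
    }
}
```

The zero-padded filenames keep the strips in order for whatever tool does the stitching. At 1200 px wide, even generous strips of 20,000 rows stay well under the limit the PGraphics version hit.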
PS: As a side note for those wondering why anyone would want to create a 500-million pixel long image, the code visualizes every unique combination of the 12 notes from the Nokia Tune ringtone. This is the most ubiquitous piece of music in existence, heard 20k times per second around the world.