Hey guys, I'm completely new to Processing. I came across this piece of Processing code that I'm interested in modifying, and was wondering if someone could help me out with what I'd like to do.
This is the code:
http://www.generative-gestaltung.de/P_4_3_2_01
So, instead of text, I would like to have my own shapes (a character set) applied to the input image.
For example, this sketch appears to use the letters A-Z; I would like to use things like triangles, squares, or parts of other images... anything.
The only example I know right now is this:
http://vectorpoem.com/playscii/howto_art.html#imageconversion
Which takes a BMP, slices it up, and applies the shapes to the image.
The great thing about this Processing code is that even though it's grid-based, it looks like I can overlap things, so it's not so restricted to a grid.
I see the String inputText line, but I'm not sure how to alter it.
Appreciate the help.
Answers
Well, there is a lot to do in terms of changing the sizing, etc.
But the simplest thing to start with is changing the line that draws each letter:
...to something else:
You could use triangle(), ellipse(), image(), etc.
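For example, assuming the original sketch draws each character with something like text(letter, 0, 0) after translating to the grid cell (the exact line may differ), the swap could be:

// original: each character is drawn at the current grid position,
// e.g. with something like
//   text(letter, 0, 0);

// replacement ideas: draw a primitive or an image scaled to the cell instead
// (fontSize stands in for whatever size variable the sketch uses)
ellipse(0, 0, fontSize, fontSize);
// triangle(0, -fontSize/2, fontSize/2, fontSize/2, -fontSize/2, fontSize/2);
// image(myShape, 0, 0, fontSize, fontSize);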
Hey, thanks for the response. I'm thinking of applying actual fragments of an image to the input image, though... so a small image of a triangle would be a character I could assign. For simplicity's sake: instead of A it would be an image of a triangle, B would be a square, C a circle... etc. Would I still start with this code?
Here is a first attempt to get you started. Notice I draw the image on the canvas but then I always extract info directly from the image, never from the canvas. Why? Because the canvas changes while you are looping through all the pixels. Notice that I am drawing rectangles filled with a range of colors from black to white; the number of divisions in this color scale is 10. Instead of using a color scale, you can use a set of objects (triangles, ellipses, hexagons, etc.).
In my case, I used rectangles. However, the final product is not that good. The reason is that I traverse every pixel and draw a 50x50 rectangle there. Because I draw a rectangle at every single pixel, rectangles get overwritten by neighboring rectangles and the end result looks more like horizontal lines (yes, I am traversing the image vertically).
A better approach is to do this substitution:
Then this becomes a typical image pixelation algorithm. Notice that in your case, if you want larger shapes on top of smaller shapes, you have to draw the smaller shapes first and the larger shapes on a later pass. If I apply that concept to my sample below, it means I have to traverse my pic object 10 times (since n=10). If the brightness of your image is inversely proportional to size, then you draw the lightest colors first and the darkest colors last.
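In outline, the pixelation version looks something like this (a rough sketch, not the exact code from this thread; lena.png and the cell size are assumptions):

PImage pic;
int n = 10;      // number of brightness levels
int cell = 10;   // block size in pixels, instead of visiting every single pixel

void setup() {
  size(512, 512);
  pic = loadImage("lena.png");   // assumed file name in the data folder
  pic.resize(width, height);
  noStroke();
  for (int y = 0; y < pic.height; y += cell) {
    for (int x = 0; x < pic.width; x += cell) {
      float b = brightness(pic.get(x, y));     // 0..255
      int level = int(b / 255.0 * (n - 1));    // quantize into n gray levels
      fill(level * 255.0 / (n - 1));           // black .. white
      rect(x, y, cell, cell);
    }
  }
}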
Kf
Thanks for the response, kfrajer. Not sure how to reply properly on here... Bear with me here, I'm not sure I understand the rectangle part. Is the rectangle being drawn from within the program? I was hoping to extract part of an image externally and then assign it to a pixel. Another hard part I am imagining is that I don't want the piece of the image I'm extracting to be square or rectangular; I was hoping to cut out random shapes from the image, sort of like a cookie cutter. The best way I can describe it is the inverse of a random shape generator, where you'd be cutting out whatever shapes are generated and then applying those random shapes to each pixel, in this case (of Lena). So you'd have your Lena picture, and then another picture, which you'd be doing the cutting from. I hope that isn't too nuts.
Yes, in this case the rectangle is drawn in line 55. The algorithm is simple: take each pixel of the image and, based on its brightness, draw a 50x50 rectangle with a color derived from that pixel's intensity. However, notice the code above projects one rectangle onto every single pixel, so every pixel gets its own rectangle and you end up with many overlapping rectangles. If you go through this exercise (or run the code) you will see just lines...
I think I have an idea of what you want to do... I am basing my output on the picture you linked in the first post, where you have letters. I notice the letters are different: they have different colors (based on the letter) and different sizes as well.
However, a cookie-cutter approach, using one image as the source and a second image as the place where you put the pieces? Maybe you could write out the algorithm (no need to code it), or show a basic demo of your approach. That would give us a better idea of how to go forward.
Check this post:
https://forum.processing.org/two/discussion/comment/94992#Comment_94992
https://processing.org/reference/PImage_mask_.html
https://forum.processing.org/two/discussion/comment/100774#Comment_100774
Kf
Ok, I think I can explain what I want to do. Say I have a bunch of textile images from books like this: http://imgur.com/a/DHz77
Somehow I need these shapes to be my characters, instead of the text.
So maybe it falls more into edge detection? It would pick out each individual shape from the textile and save them all together as the 'alphabet' to be applied to the input image, like http://imgur.com/a/4iBaQ
Then these shapes would be scaled accordingly to make up the input image, just like the text version. I'm mostly going to be working with textiles, so the patterns are usually geometric in some way.
I think it is better if you define the images first (create separate files using your favorite image editor's cropping tool) and then load them into a container. Then you draw on your master image using the images in this container, based on your rules. This was the initial goal of your post. However, if we go to edge detection, that will require some AI to pick out the right figures, determine that they are unique (to have unique definitions), and determine their original scaling factors. In short, defining the images yourself is the easier way to go. If you want to automate the process, that is a challenge in itself and worth creating its own post here in the forum.
Kf
You're probably right. OK, so if I were to cut out a slew of shapes, say 10 shapes to begin with, how would I go about feeding them into that code instead of text? I mean, I'd love to automate the process, but I don't think I have the brains for it. I think the outcome would be really neat (this is all for the sake of generating abstract art).
There is also: http://www.generative-gestaltung.de/P_4_3_1_02 But again, I'm not sure how to use my own images as the characters.
This is the modified code. I have a container of PImages and I modify the target pic based on brightness. In your case, you have to load the picContainer with your alphabet of geometries, and then you need to define the rules (around lines 51-53) to apply to your target image.
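In outline, the structure is something like this (only a sketch, not the exact modified code; the file names are placeholders):

PImage pic;
ArrayList<PImage> picContainer = new ArrayList<PImage>();
int len = 50;   // cell size for each shape

void setup() {
  size(512, 512);
  pic = loadImage("lena.png");                     // target image, assumed file name
  pic.resize(width, height);
  // load the "alphabet" of shapes, ordered e.g. from darkest to brightest
  String[] names = { "shape0.png", "shape1.png", "shape2.png" };   // placeholders
  for (int i = 0; i < names.length; i++) {
    PImage p = loadImage(names[i]);
    p.resize(len, len);
    picContainer.add(p);
  }
  // rule: the brightness of each cell picks which shape gets drawn there
  for (int y = 0; y < pic.height; y += len) {
    for (int x = 0; x < pic.width; x += len) {
      float b = brightness(pic.get(x, y));   // 0..255
      int idx = constrain(int(b / 255.0 * picContainer.size()), 0, picContainer.size() - 1);
      image(picContainer.get(idx), x, y);
    }
  }
}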
Kf
Hey, thanks so much for all the help thus far! Where am I loading in the images, under which line? Sorry, I'm slow at this.
Line 17 loads the main image. Line 26 is where you would load your mini images. In this case, I am not loading images but creating images. You should use:
https://processing.org/reference/loadImage_.html
https://processing.org/reference/PImage_resize_.html
The second function can be used to resize your images to the same size, or to whatever size you want to use in the final result.
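For example, inside setup() (triangle.png is a placeholder file name, and len is whatever cell size you settle on):

int len = 50;                              // the cell size you want to end up with
PImage pimg1 = loadImage("triangle.png");  // placeholder name; put the file in the data folder
pimg1.resize(len, len);                    // bring every shape to the same size
imgContainer.add(pimg1);                   // imgContainer is the ArrayList<PImage> in the sketch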
Kf
Sorry, am I substituting loadImage() and PImage.resize() into line 27 and removing the pimg createImage() call?
Substitute this:
for something like this:
Make sure you have a folder called data inside the folder where your sketch file is, and store all those images inside that folder. This is described in the reference: https://processing.org/reference/loadImage_.html
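Roughly, the idea of the substitution (the original lines are not reproduced exactly here):

// before: placeholder images generated in code
// PImage pimg = createImage(len, len, RGB);
// imgContainer.add(pimg);

// after: your own images loaded from the data folder
PImage pimg1 = loadImage("pic1.png");
pimg1.resize(len, len);
imgContainer.add(pimg1);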
Kf
I am currently not getting anything at all. I have just the one PImage pimg1=loadImage("pic1.png"); imgContainer.add(pimg1); to begin with, to test, and a data folder.
If it means anything, I'm also consistently getting an "OpenGL error 1280" too.
Please send me your code in a PM and I can check it out for you. Not sure about the "OpenGL" message.
Kf
@daddydean
I checked your code. Two things. Remove any references to
int n=10
in the code to avoid any confusion. Also, you cannot have loadImage("");
as you will end up with null pointers in your array. Either load more images or just run the code with only one image; in your case, I recommend having 2 or 3 just to start. Also, you need to resize them to fit your purpose, maybe resize them to length 50 (aka the len value)?
I also notice you have to make this change, which is important:
float imgPicked=constrain(pixelBrightness/thresh, 0, n-1);
to
float imgPicked=constrain(pixelBrightness/thresh, 0, imgContainer.size()-1);
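That matters because imgPicked ends up being used as an index into imgContainer, roughly like this (my guess at the surrounding lines, not the exact code):

float pixelBrightness = brightness(pic.get(x, y));                 // 0..255
float thresh = 255.0 / imgContainer.size();                        // brightness range per image
float imgPicked = constrain(pixelBrightness / thresh, 0, imgContainer.size() - 1);
image(imgContainer.get(int(imgPicked)), x, y);                     // index must stay below size()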
You should check this other post. It is only loosely related to your task, but it is a good exercise for what you want to do: https://forum.processing.org/two/discussion/23206/using-multiple-masks#latest
The OP of that post provided the images. You can download them by right-clicking on them and then clicking
Save image as...
Kf
@kfrajer
I was still getting the "OpenGL error 1280 at bot beginDraw(): invalid enumerant" error, so I switched to Processing 1.5.1 and now it seems to be gone.
I've made all the changes to the code, but I'm just getting the Lena picture and nothing else...
I'm thinking this might be a Java or graphics problem on my end?
Does the code work on your end?
What OS are you using? Using Processing 1.5... that is old. Drop the P2D designation in size(); that should solve that error. My initial code that creates the images works fine. That code depends on only one image.
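That is, in the size() call (the dimensions here are only an example):

// size(600, 600, P2D);   // P2D/P3D use the OpenGL renderer
size(600, 600);           // default renderer, sidesteps the OpenGL error path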
Kf
I dropped the P2D designation and I'm still getting the error (in 3.3.5). I'm using Windows 7, 64-bit. I also tried the code on a MacBook and it's still not working. Must be something I'm doing wrong. :(
Ok, let's go back one step. Can you confirm the following code works? You need to define the file name of your image and this image must be in a folder called data inside your sketch folder. Please test the code in the latest Processing version.
The following code works on Win10 64-bit and Linux FC 17. Related to your error: you don't have a reference to beginDraw() in your code. Do you get this error with Processing 3.3.5?
Kf
Appreciate your patience.
The code you have provided uses createImage(), not loadImage(), though, so there wouldn't be a data folder?
Where am I putting the image with this code?
I'm still getting the OpenGL error in 3.3.5 with the code you have provided.
Yeah, that's weird, there is no reference to beginDraw() in the code.
For the code above, there is no need to provide any images; I am fetching an image via a URL from an external site. However, the OpenGL error is odd. We have to take it one step further back. Can you run the examples provided by Processing? You can find them under File >> Examples. In the example selection dialog, go to Basics >> Image and run the examples in that folder. Do they run OK?
There is a reference to beginDraw() in the internals of the Processing engine, since the sketch where all the action happens is a PGraphics object. However, I don't recall seeing this problem before. The first step is to make sure the example code works. If it doesn't, then it is important to reproduce the error on another machine or to ask deeper questions about your setup. I suggest opening a new post if the examples don't work, as this is off topic for your initial thread. We can always come back to this one once that is sorted out.
Kf
@kfrajer
OK, I got it working! I also loaded a transparent PNG with the background cut out, like: https://creativenerds.co.uk/wp-content/uploads/2012/01/bigstock_Football_402653.jpg
so that shapes could overlap. But I'm still not quite getting the same effect as the original text method, where the characters can be scaled in and out. It seems like it would take me a long time to build up a large character set of random shapes, since they all need a transparent background in order to overlap. So maybe there is a way of automating this... I'm not sure though.
For instance with my image: http://imgur.com/a/DHz77
It's made up of lots of shapes; it would be neat for them to be cut out automatically.
I found https://stackoverflow.com/questions/5827809/canny-edge-detection-using-processing
But I'm not sure how to proceed.
Did you achieve pattern recognition, like cutting the source image into base shapes?
@Chrisir
Hey, didn't know anyone else was following all my errors, haha!
I'm not sure if I follow. The source image doesn't matter too much to me; I'm interested in how I can apply random shapes to the source image, to make it up. So if I had a source image of a face, and I took my image of shapes (http://imgur.com/a/DHz77), I would get the same face, but made of a random selection of these shapes.
Pretty much the same as the original text version, but with shapes.
I can go and cut each shape out individually, apply a transparent background, and load them in; it just takes a while. I'm hoping to automate that part, but I'm unsure how to do it.
I was indeed referring to the shape image: http://imgur.com/a/DHz77 and how to cut this into shapes.
I tried, but it's more complicated than I expected.
@chrisir
What exactly did you try?
Basically a silly sketch that tried to distinguish between black and white and walk around a black figure, starting from the border of the image and following the outline of the shape from the outside.....
@chrisir
And it doesn't really work?
What about edge detection? https://stackoverflow.com/questions/5827809/canny-edge-detection-using-processing
It doesn't work
I don't know about edge detection
I can take a look tonight and post it
@Chrisir Sure! I'm a bit stuck at the moment. But basically, yeah: somehow extract all the shapes within an image and apply them to the input image.
@kfrajer
OK, I'm getting somewhere now: imgur.com/a/lMEhK
How would I remove the input image from the background?
Also, I'm not sure how to zoom in and out like in the original text version, where you could press the arrow keys. Hope you can help me out here, maybe check my code?
As I said, it doesn't work
In the function analyze():
It looks along the first line of the image from left to right,
finding white spots on the first row and selecting only one white spot per white section. This works.
For each white spot (counted by countElements), the function goAroundAShape() is called, which tries to go around the shape, but it doesn't work. It should walk around the shape and end at line 0 of the image again, so the path goes from line 0 around the shape and back to line 0. Then this enclosed area should be copied out of the image. That was the plan.
See the colored lines.
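Conceptually, the part that works boils down to something like this (a simplified reconstruction, not the actual code; img is the black-and-white source):

// walk along row 0 and record one x position per contiguous white section
ArrayList<Integer> whiteSpots = new ArrayList<Integer>();
boolean inWhite = false;
for (int x = 0; x < img.width; x++) {
  boolean isWhite = brightness(img.get(x, 0)) > 128;   // crude black/white threshold
  if (isWhite && !inWhite) {
    whiteSpots.add(x);          // first pixel of a new white section
  }
  inWhite = isWhite;
}
// each entry in whiteSpots is then handed to goAroundAShape()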
Best, Chrisir ;-)
@chrisir Wow, this is next-level stuff. I'm new to Processing, so I understand what you're saying, but the code is a lot for me to take in. I'm not sure how I'd even begin to implement this in the code I already have.
Here's what I have now:
http://imgur.com/a/KH8m9
http://imgur.com/a/lMEhK
I don't know how to remove the background input image, and I had to cut all of these out manually and add transparent backgrounds for the shapes.
@Chrisir Great attempt at edge detection. Could you comment on why it didn't work in your case?
@daddydean
Scaling can be done using resize() on a secondary image. What is important at the moment is to accomplish one step before moving on to the next.
Your challenge needs to be divided into specific steps:
* Pattern recognition
* Shape classification (based on some rules)
* Build shape data set
* Feed shape from data set to pixel (or image region) based on some rules
The first two tasks are a challenge on their own. Before trying to choose an edge detection algorithm, it is very important to define the input images that the edge detection will be acting on. Based on the input images, one can come up with a set of rules to detect the edges and implement the algorithm accordingly. This task is a challenge by itself. Shape classification is another challenge. Remember, after the rules are set, any other input image might or might not work, depending on whether the rules can be applied to that image strictly. If the rules are not strict, the edge detection might not do a good job and one will need to go back to the drawing board to redesign the algorithm.
What I am suggesting right now is to do the third and fourth points above. You can build a data set by loading your own shapes manually and then place them in your sketch based on your placement rules, i.e. the relationships between pixels and the shapes in your shape set. For example, if a pixel is bright on some arbitrary scale, you assign it one of the shapes in your shape collection. Of course, the shape collection must be ordered in some way, for example by size or by color tone, and it is based on this order that you establish a connection to the pixel characteristic.
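For the ordering part, one option is to sort the collection by each shape's average brightness (a sketch; sortShapes() and avgBrightness() are names I made up):

// average brightness of a PImage, used as the ordering key
float avgBrightness(PImage p) {
  p.loadPixels();
  float sum = 0;
  for (int i = 0; i < p.pixels.length; i++) {
    sum += brightness(p.pixels[i]);
  }
  return sum / p.pixels.length;
}

// order the shape collection from darkest to brightest
void sortShapes(ArrayList<PImage> shapes) {
  java.util.Collections.sort(shapes, new java.util.Comparator<PImage>() {
    public int compare(PImage a, PImage b) {
      return Float.compare(avgBrightness(a), avgBrightness(b));
    }
  });
}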
You have shown your resulting image. However, to help you with this question, one needs to see your code.
Some images already have a transparent background. If the background is not transparent, then one can use masking to remove it. This is only reasonably easy if the background is very well defined. You can check previous posts working with masks:
https://forum.processing.org/two/search?Search=mask
For instance:
https://forum.processing.org/two/discussion/comment/101112/#Comment_101112 (Notice you need to click on the image to toggle the effect)
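Per the mask() reference linked above, a grayscale mask image of the same size decides what stays visible (white) and what becomes transparent (black). The file names below are placeholders:

PImage shape   = loadImage("shape.jpg");       // shape with an unwanted background
PImage maskImg = loadImage("shape_mask.png");  // same size: white = keep, black = hide
shape.mask(maskImg);                           // background becomes transparent
image(shape, 0, 0);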
Kf
;-)
Thanks!
Basically because I haven't spent enough time on it.... ;-)
There are probably too many different situations, like shapes overlapping the image border, concave versus convex shapes, etc.
I was concentrating on the 2nd shape from the left in the first row.
Here was the issue: my detector went south and then to the right, but couldn't get around the lowest point/peak of the shape and back north to the image edge.
Turtle
A new approach would be to use a turtle, looking for black/white pixel pairs and following them. That's why I put two turtle functions at the bottom of the code.
@kfrajer
Thanks, I've figured out how to remove the input image from the background.
Sorry, I meant to say scaling with a key function... like in the text version, where you used the up/down arrow keys to scale in and out and left/right to space them apart.
How would I do that? I have added the resize functions.
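One way to wire that up (shapeSize and spacing are variable names I made up; your drawing loop would have to use them):

float shapeSize = 20;   // how large each shape is drawn
float spacing   = 20;   // distance between grid cells

void keyPressed() {
  if (key == CODED) {
    if (keyCode == UP)    shapeSize += 2;                      // scale shapes up
    if (keyCode == DOWN)  shapeSize = max(2, shapeSize - 2);   // scale shapes down
    if (keyCode == RIGHT) spacing   += 2;                      // spread them apart
    if (keyCode == LEFT)  spacing   = max(2, spacing - 2);     // pack them closer
  }
  // then use shapeSize and spacing wherever the shapes are drawn, e.g.
  // image(imgContainer.get(idx), x * spacing, y * spacing, shapeSize, shapeSize);
}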
I found: http://www.v3ga.net/processing/BlobDetection/
It might come in handy, not sure. Maybe some kind of image segmentation would be better than blob or edge detection, since I want to separate the shapes in the end?
As for masking, I found:
void setWhiteTransparent(PImage img)
{
  // build an alpha mask: pure white pixels become transparent, everything else keeps its alpha
  int[] maskArray = new int[img.width * img.height];
  img.loadPixels();
  for (int i = 0; i < img.width * img.height; i++)
  {
    if ((img.pixels[i] & 0x00FFFFFF) == 0x00FFFFFF)
    {
      maskArray[i] = 0;                      // fully white -> fully transparent
    }
    else
    {
      maskArray[i] = img.pixels[i] >>> 24;   // keep the pixel's existing alpha
    }
  }
  img.mask(maskArray);                       // apply the alpha mask to the image
}
But it doesn't seem to be working at the moment.
How do I paste code properly here in the comments so you can check it out?
@kfrajer
@Chrisir
I also found https://github.com/alex-parker/CenturyOfTheSun/pull/1/commits/ed50abc1dce0f88e4e517109910bc4d52dbd6079?diff=unified
But this is in Python, so I'm not sure it could help.
@kfrajer
Just an update: I got an image segmentation process to work. I was going to combine it with what you helped me with here:
https://forum.processing.org/two/discussion/23471/combining-two-pieces-of-code#latest