So I've read about how to generate flattened PDF graphics from a 3D rendered scene, but it seems the beginRaw() method just flattens everything and does not remove hidden lines.
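For reference, this is roughly the kind of sketch I'm testing with (the filename and the geometry are just placeholders):

import processing.pdf.*;

void setup() {
  size(600, 400, P3D);
  noLoop();
}

void draw() {
  background(255);
  beginRaw(PDF, "flattened.pdf");  // placeholder filename
  lights();
  translate(width / 2, height / 2, 0);
  rotateY(PI / 6);
  box(150);
  endRaw();
  // The geometry lands in the PDF flattened to 2D, but edges behind the
  // front faces of the box are still written out; no hidden-line removal.
}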
So, can anyone suggest a workflow, perhaps using third-party software like Blender, for creating 2D vector graphics from P3D graphics in Processing? I know the various approaches have trade-offs and I can't expect perfect results, but I'd be interested in any information you can offer!
I've looked through the forum and documentation, but haven't found the answer to this.
What kind of inputs does randomSeed() take, and in what range? Are they integers? What are the min and max values?
I want to randomly select a seed while drawing a randomized image, and record that seed so I can regenerate the same picture if I like the result. I just need to know a suitable way to pick a random seed.
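To make the question concrete, this is the workflow I have in mind (the 100000 cap below is an arbitrary value I picked; whether that's a sensible range is exactly what I'm asking):

int seed;

void setup() {
  size(400, 400);
  // pick a seed at random, record it, then lock the generator to it;
  // feeding the same value back in later should reproduce the image
  seed = (int) random(100000);   // the 100000 cap is an arbitrary choice
  println("seed: " + seed);
  randomSeed(seed);
  noLoop();
}

void draw() {
  background(255);
  for (int i = 0; i < 50; i++) {
    ellipse(random(width), random(height), 20, 20);
  }
}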
I'm working on some custom surfaces using PShape objects, and I'm noticing some unpleasant edge-coloring artifacts. To test this, I also rendered a standard sphere with the standard lights() call, and this is the result:
Note the white edges along the left and lower-left edge of the surface. Has anyone seen this before? Are there known issues related to this? Is this a hardware issue? I'm getting this on two different Windows machines. Are there any magic ways to avoid this?
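For reference, the standard-sphere test mentioned above is essentially this minimal sketch (the window size and sphere radius are just whatever I happened to use):

void setup() {
  size(600, 600, P3D);
  noStroke();
}

void draw() {
  background(0);
  lights();
  translate(width / 2, height / 2, 0);
  sphere(200);
}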
If you need some gradients for your Processing applications, I've put together some code that you can easily reuse. You can dither the gradients so they will look extra smooth.
When a gradient is very subtle and dark, you may notice "banding" in the colors, caused by each color channel being rounded to an integer. Dithering is a way to spread out this quantization error so it becomes unnoticeable.
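Here's a stripped-down illustration of the idea, not the full gradient code: the left half of the window shows the raw quantized gradient, the right half shows the same gradient with a little noise added before rounding (the half-step noise amplitude is just one reasonable choice):

void setup() {
  size(400, 300);
  noLoop();
}

void draw() {
  loadPixels();
  for (int y = 0; y < height; y++) {
    // a very subtle, dark gradient: brightness runs from 10 to 30
    float b = map(y, 0, height, 10, 30);
    for (int x = 0; x < width; x++) {
      float v = b;
      if (x >= width / 2) {
        // right half: add a little noise before rounding, so the
        // quantization error is scattered instead of forming bands
        v += random(-0.5, 0.5);
      }
      pixels[y * width + x] = color(round(v));
    }
  }
  updatePixels();
}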
The gradients are pretty CPU-intensive, so they should only be used for applications where you don't require speed and where you really want high-quality output. You could also draw the gradients once into a PGraphics object and then reuse or animate that buffer as needed.
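For example, a minimal version of that caching approach might look like this (the gradient here is a plain per-row fade rather than the dithered version):

PGraphics gradientCache;

void setup() {
  size(400, 300);
  // render the (expensive) gradient once into an off-screen buffer
  gradientCache = createGraphics(width, height);
  gradientCache.beginDraw();
  for (int y = 0; y < height; y++) {
    float shade = map(y, 0, height, 10, 30);
    gradientCache.stroke(shade);
    gradientCache.line(0, y, width, y);
  }
  gradientCache.endDraw();
}

void draw() {
  // per-frame cost is now just a single image() call
  image(gradientCache, 0, 0);
  // ...animate whatever else you like on top...
}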
I've been messing around with generative images for a while now - first with ActionScript, then with JavaScript and HTML5/canvas. Now I'm moving into Processing. I've successfully ported some of my examples over to Processing, so I'm figuring things out.
Let me apologize in advance if my questions are in the wrong place...or if they've been answered already. I've tried to help myself but I'm getting a little lost.
My two main questions concern (1) using blend modes, and (2) creating large images for high resolution prints.
(1) Blend modes: Flash/AS3 has blend modes and Canvas/HTML5 has globalCompositeOperation; I'm wondering how best to achieve the same thing in Processing. In particular, I want to draw strokes (or other shapes) on top of the previously drawn image using additive ("lighter") blending. You can see that used in some of my examples at my flickr site here, generated in JavaScript/Canvas (some use additive blending, some don't). (If you are interested in the code for these images, check out my blog here and here.)
Should I be using OpenGL for these blend modes? Can you steer me in the right direction?
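From skimming the reference it looks like blendMode(ADD) might be what I'm after; is something along these lines the right direction? (The P2D renderer choice and the color values here are just my guesses.)

void setup() {
  size(600, 400, P2D);
  background(0);
  blendMode(ADD);   // overlapping strokes should sum toward white
  strokeWeight(2);
}

void draw() {
  // keep drawing on top of the previous frame; never clear the canvas
  stroke(random(40), random(40), random(60));
  line(random(width), random(height), random(width), random(height));
}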
(2) What's the general wisdom on creating large images for print applications? Exporting a screen-sized image isn't sufficient for prints. PDFs are one option, and I've figured out how to do that, but I'm not sure it works for every case, for example when the image is built with pixel manipulation. What's the usual workflow for rendering large bitmaps for export? Should I be using the PGraphics class? (By the way, I have successfully created large images in the browser with JavaScript and canvas, but size limitations forced me to export the image in slices that then had to be stitched together in Photoshop. I'm looking for a better way; I'm pushing JavaScript past what it's intended for!)
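Here's roughly what I imagine the PGraphics route looking like; is this the sensible approach, or is there a better-established workflow? (The dimensions and filename below are placeholders I made up.)

PGraphics big;

void setup() {
  size(600, 600);
  // very large buffers may need more memory via Preferences
  big = createGraphics(6000, 6000);   // 10x the on-screen size
  big.beginDraw();
  big.background(255);
  big.noFill();
  for (int i = 0; i < 2000; i++) {
    big.stroke(random(255), 60);
    big.ellipse(random(big.width), random(big.height), random(400), random(400));
  }
  big.endDraw();
  big.save("big_render.png");          // full-resolution bitmap on disk
  image(big, 0, 0, width, height);     // scaled-down preview in the window
  noLoop();
}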
I know these questions are rather newbie...but I'm eager to dive into Processing and I could use a little help!