How would I go about...
I have a 3D printer that uses a DLP projector. The projector's illumination falls off around the edges of the build area. I am creating an image that maps the amount of falloff across the build area, and I want to apply this image as a correction filter to a series of images (the layer slices). That way the resin 'sees' even illumination across the entire slice, and my accuracy and curing are even.
One thing I will need to account for is that any black pixels in the target image need to stay black. I think creating a mask is the way to do this.
As for the parts of the image that do get affected, it would seem that I would need to somehow compare each pixel in the two images and then apply an inverse adjustment from the filter image to the target image.
Is there a function, like a shader or PImage.mask(), that will do this?
Answers
Yes, PImage.mask() will mask an image.
If I am understanding right, you want to use your falloff image to correct the pixel values (e.g. the brightness) in your input image, boosting pixels towards the edges of the area by defined amounts. Depending on how you construct your falloff map, you could do that with e.g. blendMode(ADD), then draw the falloff map on top of the input image.

Thanks for the prompt feedback.
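To make the blendMode(ADD) suggestion concrete, here is a plain-Java sketch of what an additive blend does to a single packed ARGB pixel (the same 0xAARRGGBB int format Processing uses in pixels[]). The class and method names are illustrative, not from the thread; in an actual sketch you would just set blendMode(ADD) and draw the falloff map with image().

```java
// Sketch of per-pixel additive blending, as blendMode(ADD) performs it:
// each RGB channel of the falloff-map pixel is added to the slice pixel,
// clamped at 255. Pixels are packed 0xAARRGGBB ints.
public class AddBlendDemo {
    static int addBlend(int base, int overlay) {
        int r = Math.min(((base >> 16) & 0xFF) + ((overlay >> 16) & 0xFF), 255);
        int g = Math.min(((base >> 8) & 0xFF) + ((overlay >> 8) & 0xFF), 255);
        int b = Math.min((base & 0xFF) + (overlay & 0xFF), 255);
        return 0xFF000000 | (r << 16) | (g << 8) | b;
    }

    public static void main(String[] args) {
        // A mid-grey slice pixel boosted by a dim falloff-map pixel:
        int corrected = addBlend(0xFF808080, 0xFF202020);
        System.out.println(Integer.toHexString(corrected)); // ffa0a0a0
    }
}
```

Note the clamp: near the edges, where the falloff map is brightest, the sum can hit 255, so the map has to be scaled so corrected pixels stay in range.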
re: Masking - PImage.mask() uses an image to generate the mask. Is there a better way to create a mask using only a color (black) value, avoiding the creation of a mask image?
re: blendMode() - I'll play around with this, thanks!
@ScottChabineau -- well, in addition to an image, PImage.mask() accepts an int array -- as the reference mentions:
So if you wanted to write a function:
Then you could dynamically generate a mask and call it like this:
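The original code from this post was not preserved, but the idea can be sketched in plain Java. PImage.mask(int[]) treats each array entry as that pixel's alpha value, so a function can walk the slice's pixel array and emit 0 (fully masked) for black pixels and 255 for everything else. The class and method names below are illustrative:

```java
// Sketch of dynamically generating a mask array for PImage.mask(int[]):
// each entry is used as that pixel's alpha, so black slice pixels get 0
// (masked out) and all other pixels get 255 (fully visible).
public class MaskFromBlack {
    static int[] maskFromBlack(int[] pixels) {
        int[] mask = new int[pixels.length];
        for (int i = 0; i < pixels.length; i++) {
            boolean isBlack = (pixels[i] & 0x00FFFFFF) == 0; // ignore alpha
            mask[i] = isBlack ? 0 : 255;
        }
        return mask;
    }
}
```

In a Processing sketch you would call img.loadPixels() first, then something like img.mask(maskFromBlack(img.pixels)) -- no separate mask image needed.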
Thanks again. I will read through and try to comprehend the references. Being a sporadic copy-paste programmer out of necessity doesn't work in my favor. But, I can get there... eventually.
Instead of using a mask, does this logic make more sense?
I look at the pixel value in my target image: if it's anything other than black, I look at the corresponding location in my 'falloff' image and use blendMode(ADD) to correct that pixel value. If it's black, I skip it.
Is there a pro/con to doing it this way - i.e. a trade-off in processing time?
A straight loop through pixels[] is pretty efficient -- and it gives you a lot of flexibility to tweak the process.

Depending on your needs, you might also want to try a straight blend mode and nothing else, e.g. use blendMode(ADD) and then apply your falloff correction image directly to the canvas with image().

Keep in mind that if you need a simple outer mask in addition to a boost, you can also draw a mask on a transparent PGraphics and then simply use image() to apply it on top of previous drawing on your canvas -- that is a form of masking without requiring the mask operation.