16 bit grayscale... Is this possible?
Dec 16th, 2008, 12:00am
 
Hello all,

I want to create grayscale images with a full 16 bits of depth dedicated to the single channel. I think PNG and TIFF support this, but does Processing support it?

I want the full 0 to 65535 range assigned to this grayscale channel. I assume color() will not support it; I think I could use hex values, but I don't know how to achieve this in Processing, or whether it's possible at all.

Any suggestions?

Thanks...
Re: 16 bit grayscale... Is this possible?
Reply #1 - Dec 16th, 2008, 12:38pm
 
Might I ask why? I believe the eye has a hard time distinguishing more than 256 gray levels, and I am not even sure video cards support more than that; see http://lists.trolltech.com/qt-interest/2005-05/thread00130-0.html

As pointed out in that thread, 16-bit grayscale is mostly used in image processing, to extract details in dark or light areas of images captured by high-end cameras or scanners.
Re: 16 bit grayscale... Is this possible?
Reply #2 - Dec 16th, 2008, 12:58pm
 
Processing only supports 8 bits per channel. You might be able to create a 16-bit-per-channel image by handling the data yourself, though.

I believe many photo programs can import "RAW" files, so you just need to write the raw 16-bit values out to a file and then, in the program you open it with, specify the dimensions and that it's a 16-bit greyscale image.
Re: 16 bit grayscale... Is this possible?
Reply #3 - Dec 16th, 2008, 1:16pm
 
Thanks PhiLho.

It's not about the eye, actually, but I may need an alternative solution to my problem.

I'm trying to encode 16-bit audio samples as pixels in images, process the image in image editing software, and decode it back to audio later on. So being able to distinguish differences by eye is not a concern for me.

I want the encoding to be meaningful with respect to the input audio. The audio samples would go from 0 to 65535, with 32768 being the zero offset: pure white (65535) would be the highest peak in the positive range and black (0) would be the lowest peak in the negative range. That way it would be meaningful to process the image; for example, if I were to increase the brightness of the image, I would in a sense be applying asymmetrical distortion to the audio. But that is just an example.

It seems that I'll need to find a more meaningful way to pack the 16 bits of an audio sample into the RGB channels (without alpha). I've actually done it before; it worked well, but the encoding was not intuitively meaningful. I packed the samples as RRRRR GGGGGG BBBBB, green getting 6 bits and red and blue 5 bits each, but the results of image processing with this kind of packing tend to be unpredictable. So I wanted a more linear encoding, like dark to bright...

It's important for me not to use alpha but to pack everything into RGB in a linear fashion. 16 is not divisible by 3 (RGB), so one of the channels gets more bits if I work this way, and as I said, the result of processing the image is not immediately obvious to the eye in terms of how it will change the sound.
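
To make it concrete, the packing I tried was roughly like the sketch below (pack565/unpack565 are just names I'm using here for illustration, nothing built into Processing):

Quote:
// map a signed 16-bit sample (-32768..32767) to 0..65535, with 32768 as the zero offset
int toUnsigned(int sample) {
  return sample + 32768;
}

// pack a 16-bit value as RRRRR GGGGGG BBBBB (5-6-5) into an RGB color
color pack565(int v) {
  int r = (v >> 11) & 0x1F; // top 5 bits
  int g = (v >> 5)  & 0x3F; // middle 6 bits
  int b =  v        & 0x1F; // bottom 5 bits
  // expand back to 8 bits per channel so the image displays normally
  return color(r << 3, g << 2, b << 3);
}

// recover the 16-bit value from a packed color
int unpack565(color c) {
  int r = int(red(c))   >> 3;
  int g = int(green(c)) >> 2;
  int b = int(blue(c))  >> 3;
  return (r << 11) | (g << 5) | b;
}

void setup() {
  int u = toUnsigned(-12345);                            // example sample
  color c = pack565(u);
  println(u + " -> " + hex(c) + " -> " + unpack565(c));  // round-trip check
}

Since red ends up in the top 5 bits of the sample and blue in the bottom 5, the same visual change to the image moves the sample value by wildly different amounts, which is part of why the results felt unpredictable.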

I can also convert the audio to 24 bits, process it, decode it, and dither back down to 16 bits later for listening. But if 16-bit grayscale is somehow possible for storing and processing, I'd much prefer it. Can anyone think of a meaningful and linear encoding scheme?

The reason for using "images" for this is that I want to use readily available image filters, designed for filtering images, to alter audio. The glitchy approach. Even the crude RGB implementation I tried earlier gave me some nice results, so I want to have more control over it this time.

Any suggestions are welcome...

Thanks.
Re: 16 bit grayscale... Is this possible?
Reply #4 - Dec 16th, 2008, 1:20pm
 
JohnG, thanks, that seems like an option! I always assumed even raw files would have a standardized header etc. and didn't want to get into that. I will look into it and report back.
Re: 16 bit grayscale... Is this possible?
Reply #5 - Dec 16th, 2008, 1:27pm
 
I've no idea if there is a standard header, but from experience Photoshop at least allows you to manually set the size and pixel type when loading.

The following will create a simple gradient and write it out.

Quote:
size(256, 256);
int[] pix = new int[width*height];
for (int i = 0; i < pix.length; i++)
{
  pix[i] = i; // values 0..65535, a 16-bit gradient
}

for (int i = 0; i < pix.length; i++)
{
  set(i%width, i/width, color(pix[i]/256)); // show on screen, reduced to 8 bits
}

byte[] out = new byte[pix.length*2]; // each 16-bit value needs 2 bytes
for (int i = 0; i < pix.length; i++)
{
  out[i*2]   = byte(pix[i] & 255); // low byte first (little-endian)
  out[i*2+1] = byte(pix[i] >> 8);  // then the high byte
}
saveBytes("img1.raw", out);
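
If you want to read the edited file back into Processing later, something along these lines should do it (untested sketch, assuming the file keeps the same size and the same low-byte-first order):

Quote:
byte[] in = loadBytes("img1.raw");
int[] vals = new int[in.length/2];
for (int i = 0; i < vals.length; i++)
{
  // bytes are signed in Java, so mask with 0xff before recombining
  int lo = in[i*2]   & 0xff;
  int hi = in[i*2+1] & 0xff;
  vals[i] = (hi << 8) | lo; // back to the 0..65535 range
}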

Re: 16 bit grayscale... Is this possible?
Reply #6 - Dec 16th, 2008, 2:39pm
 
JohnG, unfortunately the GIMP marks the generated file as unsupported, so I can't go any further with it. Before I try to find a computer with Photoshop installed, could you please tell me whether you can open the generated raw file in Photoshop by giving it the correct dimensions?

Thanks.
Re: 16 bit grayscale... Is this possible?
Reply #7 - Dec 16th, 2008, 2:56pm
 
I just downloaded the trial version of Photoshop Elements 7, and it does let me specify that the file saved by the code I gave is a 16-bit greyscale image, 256x256 pixels in size, and it opens correctly.

As far as I can tell, the GIMP's raw import doesn't let you specify 16-bit greyscale images, and you can't even create them inside it.
Re: 16 bit grayscale... Is this possible?
Reply #8 - Dec 16th, 2008, 3:24pm
 
Yes, I thought so. It looks like the GIMP is limited to 8 bits per channel, too bad... Well, thanks, that helped a lot! At least I can have some fun with Photoshop.
Re: 16 bit grayscale... Is this possible?
Reply #9 - Dec 16th, 2008, 3:56pm
 
There is a version of the GIMP designed to work with deeper images (for the movie industry). It used to be called Film Gimp but has since become CinePaint.

http://www.cinepaint.org/

No Windows version, though.

> I'm trying to encode 16 bit audio samples to pixels in images

reinventing Oramics? 8)
Re: 16 bit grayscale... Is this possible?
Reply #10 - Dec 16th, 2008, 4:22pm
 
Thanks koogs. I actually found that most filters in Photoshop do not work with 16-bit grayscale images. I'll try CinePaint now.

>reinventing Oramics?

Well, not really. I'm sure I'm reinventing something here, but I just want a visual representation of sound, analogous to a vinyl record, that I can scratch and burn (visually) and then play back, without leaving my beloved chair or wasting physical resources. :)
Re: 16 bit grayscale... Is this possible?
Reply #11 - Dec 21st, 2008, 1:08pm
 

In grayscale mode, the GIMP is limited to 8 bits.

1. You can use IDL. It is very simple to import images or files of any kind into IDL. On the IDL command line, type iimage. This tool has a lot of filters for all kinds of images.

2. You can use free astronomical programs, e.g. "Starlink".


Re: 16 bit grayscale... Is this possible?
Reply #12 - Dec 21st, 2008, 4:57pm
 
kobo, I don't know the IDL you mention, but it reminds me that ImageMagick might be able to manipulate these images. Non-interactively, but it can be interesting for applying some effects.