WolfgangWilkinson

12-bit depth to 16-bit depth greyscale


Ok, so I am writing some software for a camera. The camera takes pictures with a 12-bit depth (RRRR GGGG BBBB, I guess?). I am using DirectX to draw the image in a 16-bit display mode (RRRR RGGG GGGB BBBB; using at least 16 bits so everything else looks OK). Directly displaying the image, I get a lot of blue. So I need to convert the image from 12-bit up to 16-bit, as well as apply a greyscale for coloring.

The old application that I am updating took the 12-bit picture down to an 8-bit depth and used a 256-color palette to handle the greyscale. This is how the old application worked: shift each pixel's binary value (12 bits) to the right by 4 (thus removing the 4 LSBs), then use this 8-bit value to look up an RGB value from a chart designed like so:

index:  0  1  2  ...  255
R:      0  1  2  ...  255
G:      0  1  2  ...  255
B:      0  1  2  ...  255

This produced the greyscale.

I was thinking of taking the 12-bit value, extracting the R, G, and B values by masking, and then shifting the values into the correct locations in the 16-bit placeholder. However, that does not produce the correct values, because the intensity of a color differs between the two scales (a red value of 15 in the 12-bit format is brighter than a red value of 15 in the 16-bit format).

EDIT: if it helps, the camera is a DVC 4000C and its specs are at http://www.dvcco.com/PDFs/datasheets/cameras/DVC-4000C.pdf

[Edited by - WolfgangWilkinson on June 17, 2008 4:32:17 PM]
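For reference, here is a minimal sketch of the old application's approach as described above, assuming the camera delivers one 12-bit intensity sample per pixel in the low bits of a uint16_t. All names here (build_grey_palette, sample_to_grey) are hypothetical, not from the original code:

// Old approach: truncate 12-bit sample to 8 bits, then palette lookup.
#include <cstdint>
#include <vector>

struct Rgb { uint8_t r, g, b; };

// Build the 256-entry greyscale palette: index i maps to (i, i, i).
std::vector<Rgb> build_grey_palette() {
    std::vector<Rgb> palette(256);
    for (int i = 0; i < 256; ++i)
        palette[i] = Rgb{uint8_t(i), uint8_t(i), uint8_t(i)};
    return palette;
}

// Drop the 4 LSBs of the 12-bit sample, then use the remaining
// 8 bits as a palette index.
Rgb sample_to_grey(uint16_t sample12, const std::vector<Rgb>& palette) {
    uint8_t index = uint8_t((sample12 & 0x0FFF) >> 4);
    return palette[index];
}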

If you want to convert, say, a 4-bit value to a 6-bit value, you can just shift the 4-bit value left by 2. Or alternatively you could divide the 4-bit number by 15.0f and then multiply the result by 63.0f. That will keep the value normalized for you.
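For concreteness, a sketch of both conversions described above (the +0.5f rounding term is my addition, not part of the post):

#include <cstdint>

// Method 1: shift left by the difference in bit widths.
uint8_t widen_shift(uint8_t v4) {
    return uint8_t(v4 << 2);          // 0..15 -> 0..60 (never reaches 63)
}

// Method 2: normalize to [0, 1] and rescale to the wider range.
uint8_t widen_normalize(uint8_t v4) {
    float normalized = v4 / 15.0f;    // 15 is the 4-bit maximum
    return uint8_t(normalized * 63.0f + 0.5f);  // 63 is the 6-bit maximum
}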

First of all, don't even try to do bit manipulation directly between the two formats. You'll tie your brain in a knot. Use bitfields or otherwise extract the channels of the original value, mess with them, then use bitfields or otherwise assemble them into a 16-bit color value.

A simple and accurate way to increase bit depth is to loop (replicate) the value's own bits. Suppose that you had a 4-bit number that you wanted to convert to the equivalent (saturated) 10-bit number. If that 4-bit number's digits were abcd, then the resultant number would be abcdabcdab. To convert the 4-bit number to a 5-bit number, the result would be abcda. In code terms, if you had a 4-bit number N and you wanted to convert it to 5 bits, you'd do (N << 1) | (N >> 3). BTW, I forbid you to use that code unless and until you understand exactly what it does and why.
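To tie this back to the original question, here is a sketch applying the replication trick to the poster's formats: unpack a 12-bit RGB444 pixel, widen each channel by repeating its own bits, and pack into 16-bit RGB565. This is my own assembly of the steps described above, and it assumes the RGB444 layout is RRRR GGGG BBBB in the low 12 bits, so check it against your actual pixel layout:

#include <cstdint>

uint16_t rgb444_to_rgb565(uint16_t pixel) {
    uint16_t r4 = (pixel >> 8) & 0xF;  // 4-bit red
    uint16_t g4 = (pixel >> 4) & 0xF;  // 4-bit green
    uint16_t b4 =  pixel       & 0xF;  // 4-bit blue

    // 4 -> 5 bits: abcd -> abcda; 4 -> 6 bits: abcd -> abcdab.
    // A channel value of 15 maps to 31 (5-bit max) or 63 (6-bit max),
    // so full intensity stays full intensity.
    uint16_t r5 = (r4 << 1) | (r4 >> 3);
    uint16_t g6 = (g4 << 2) | (g4 >> 2);
    uint16_t b5 = (b4 << 1) | (b4 >> 3);

    return uint16_t((r5 << 11) | (g6 << 5) | b5);
}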
