#1 vanattab

Posted 16 October 2012 - 07:33 AM

Then what is not behaving as expected?
You have values that, when multiplied by 255.0f, result in whole numbers, 1.0f, 2.0f, 3.0f, etc.

There is no fraction, so you are obviously going to get 0.0f every time.
No mystery here. This is exactly what you should expect.
If it is not what you desire, change it to whatever you do desire.


L. Spiro



I believe you misunderstand what I am trying to do. I am working on a color-vision test for color blindness, and I would like to display more colors than is possible on an 8-bit-per-channel display (i.e. 24/32-bit depth). Say, for example, I want to display 10 bits of color per channel (0-1023); that is equivalent to displaying roughly 0.25 increments on the 8-bit (0-255) scale. For example, say I want to render a large letter "A" with a floating-point value of 0.503921..., which is 128.5 on a scale of 0-255. The way I want to mimic this on an 8-bit display is to render half the pixels as 128 and half the pixels as 129. When viewed at a reasonable distance, the eye averages over the pixels and interprets the color of the letter as 128.5.

Please see the picture below as a reference to what I am talking about. When looking at the image, imagine that one of the shades of green is 128 and the other is 129, and that each box is only 1 pixel, not 16. Your eye will interpret this as 128.5. Note that the pattern in the example below is perfectly uniform; in my basic random-chance model this is not necessarily true.

www.users.muohio.edu/vanattab/BasicDithering.png
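
To spell out the averaging: if a fraction f of the pixels are drawn at 129 and the remaining (1 - f) at 128, the level the eye perceives is

(1 - f) * 128 + f * 129 = 128 + f

so f = 0.5 gives exactly 128.5. In general, the fractional part of the target value is simply the fraction of pixels that must be rounded up.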

In order to implement this with SlimDX and DirectX 9, I tried creating a 128-bit surface (I also tried 64-bit, in both floating-point and integer formats) so that I can store color at a higher depth. A 64-bit surface should be able to store 2^16 = 65536 possible values per channel (24/32-bit is 2^8 = 256). So when I render the "A" to a 64-bit surface, the color data should be stored in increments of 1/65535, not 1/255. In my pixel shader I then want to multiply that floating-point value by 255, which should give a number like 128.5 in the above example. When I then take frac() of that number I should get 0.5, which corresponds to the probability with which I want to round the color value up to 129. But that is not happening: even though I am drawing the text to a 64-bit format, the color seems to be quantized to the 24/32-bit format. Does that make sense now?
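
For reference, here is a minimal sketch of the kind of dithering pass I have in mind (not my actual shader; the sampler name and the Rand() hash are placeholders, since DX9 HLSL has no built-in random source). It assumes the high-precision image has already been rendered to a 64-bit (A16B16G16R16F) texture bound to sampler s0:

sampler2D HighPrecisionTex : register(s0); // 64-bit source render target

// Cheap hash of the texture coordinate into [0, 1). Any uniform noise
// source (e.g. a pre-generated noise texture) would work just as well.
float Rand(float2 uv)
{
    return frac(sin(dot(uv, float2(12.9898, 78.233))) * 43758.5453);
}

float4 DitherPS(float2 uv : TEXCOORD0) : COLOR0
{
    float4 c  = tex2D(HighPrecisionTex, uv) * 255.0f; // e.g. 128.5
    float4 lo = floor(c);                             // 128
    float4 f  = c - lo;                               // 0.5
    // Round up when the random draw falls below the fraction, so each
    // pixel independently has chance f of becoming 129 instead of 128.
    float4 rounded = lo + step(Rand(uv), f);
    return rounded / 255.0f; // back to [0, 1] for the 8-bit back buffer
}

The step() against the random draw is what implements the random-chance model above: on average, frac(c) of the pixels land on the higher 8-bit level.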
