dario_ramos

Signed image on a texture


Hi, I'm trying to load a signed 16-bit image into a texture. The requirements forbid me from converting the image pixels to an unsigned format, and using a locally transformed copy is an option only if the approaches below turn out not to be feasible. So first I checked whether there was a signed pixel format I could pass to CreateTexture. Maybe I didn't look hard enough, but the only signed formats said something about "bump maps", and it didn't seem like I could use them... Correct me if I'm wrong. My next idea is to use a pixel shader to transform the pixels right before rendering them, but I would be doing that on every render cycle, which doesn't sound good... What do you think?

The scaling from [0, 1] to [-1, 1] takes only one multiply-add instruction, so it is not very heavy.

The floating-point (and half) texture formats can take in signed data.
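A CPU-side sketch in C of the single multiply-add being described (the function name is mine, not from the thread): a sampled value in [0, 1] is rescaled to [-1, 1] in one fused operation, which compiles to a single mad instruction in shader assembly.

```c
/* One multiply-add takes a sampled value s in [0, 1] to [-1, 1];
   in a pixel shader this is a single mad instruction. */
float unsigned_to_signed(float s)
{
    return s * 2.0f - 1.0f;
}
```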

(I'm assuming you're using D3D9. Things are simpler in D3D10.)

There's nothing particularly bad about D3DFMT_Q16W16V16U16 except that it's not supported on most cards. If you're writing for a specific card and it supports the format, go ahead and use it.

It's also not a big deal to transform from unsigned to signed in the pixel shader. Modern 3D cards have lots of processing power and you might as well use it. Just remember that 0 to 0.5 is the positive range and 0.5 to 1 is the negative range.
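To see where that half-range split comes from, here is a C sketch (function name mine) of what effectively happens when signed 16-bit pixel data is uploaded into an unsigned format: the sampler reinterprets the two's-complement bit pattern as unsigned and normalizes it to [0, 1], so positive values land below 0.5 and negative values above it.

```c
#include <stdint.h>

/* Signed 16-bit data uploaded to an unsigned 16-bit texture: the raw
   bit pattern is reinterpreted as unsigned and divided by 65535.
   Positive values 0..32767 land in [0, ~0.5); negative values
   (two's complement 0x8000..0xFFFF) land in (~0.5, 1]. */
float sample_as_unsigned16(int16_t pixel)
{
    uint16_t bits = (uint16_t)pixel;   /* reinterpret, not convert */
    return bits / 65535.0f;            /* sampler normalization    */
}
```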

Quote:
Original post by ET3D
Just remember that 0 to 0.5 is the positive range and 0.5 to 1 is the negative range.


Doesn't that depend on how you're remapping? For the common (1 - 2 * rgb) it certainly holds true, but (2 * rgb - 1) would work too and remap [0;1] directly to [-1;1], with [0;0.5] being negative and [0.5;1] being positive.

Just curious really and atm quite worried this was a dumb question [smile]

The highest bit of a signed integer is its sign :)

I was thinking about multiply-add (2 * rgb - 1) but it requires that the data be massaged on the CPU before sending it to the card.

Eyal's method works without processing on CPU side. I think that it does require a conditional branch in addition to a multiply-add in the pixel shader. However, this is still easy for the GPU.
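A sketch in C of the CPU-side massaging mentioned above (function name mine): biasing each signed pixel by 32768 before upload makes the unsigned bit pattern increase monotonically with the signed value, so a plain 2*s - 1 in the shader then recovers a continuous [-1, 1] range with no branch.

```c
#include <stdint.h>

/* Bias each signed 16-bit pixel before uploading to an unsigned
   texture: -32768..32767 maps monotonically to 0..65535, so the
   shader-side 2*s - 1 yields a continuous signed range. */
uint16_t bias_pixel(int16_t p)
{
    return (uint16_t)(p + 32768);
}
```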

Thanks guys, I used a pixel shader since we code for many video cards, and if there was a drop in performance, I sure didn't notice it :)

Edit: I thought I made it, but it still doesn't show the image correctly...

Quote:
Original post by ET3D
(I'm assuming you're using D3D9. Things are simpler in D3D10.)


You're right, I'm using D3D9.

Quote:
Original post by ET3D
It's also not a big deal to transform from unsigned to signed in the pixel shader. Modern 3D cards have lots of processing power and you might as well use it. Just remember that 0 to 0.5 is the positive range and 0.5 to 1 is the negative range.


What I want to do is the opposite; I want to map from signed to unsigned. I tried adding 0.5 and subtracting 1 if the result went over 1, but it didn't work. Someone told me that when adding, if you go over 1, it saturates (i.e. it stays at 1). I don't think that's exactly what's going on, since I'm seeing the exact same image...


This D3DFMT_Q16W16V16U16 format got me curious, so for the sake of messing with exotics I gave this a go by loading up such a texture with a gradient from -32768 to 32767. It turns out to be unsupported on my GF8600M but works on my trusty old X1900. For future reference, this gradient seems to come out of a sampler in the range of [-1;1]. So it indeed still needs a multiply-add in the pixel shader as Nik said to take it to [0;1].

I'm still not clear about the positive/negative ranges though. In my test app it seems the gradient is mapped continuously from [-32768;32767] to [-1;1]. Is the 'highest bit' remark referring to the typical external 16 bit image format? I'm feeling quite dense, but it doesn't seem to apply to the rendering internals?
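The continuous mapping observed here matches the usual signed-normalized convention, sketched below in C (function name mine; exact rounding at the negative end is driver-dependent): the signed integer is divided by 32767 and clamped, so -32768..32767 covers [-1, 1] with no wraparound.

```c
#include <stdint.h>

/* Usual snorm convention for a signed 16-bit channel such as one
   component of D3DFMT_Q16W16V16U16: divide by 32767 and clamp, so
   the mapping to [-1, 1] is continuous and monotonic. */
float sample_snorm16(int16_t v)
{
    float f = v / 32767.0f;
    return f < -1.0f ? -1.0f : f;
}
```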

Quote:
Edit: I thought I made it, but it still doesn't show the image correctly...


Sorry about the thread hijack above. Could you elaborate on how it's not looking correct, or perhaps post an image showing the problem?

Quote:
Original post by remigius

I'm still not clear about the positive/negative ranges though. In my test app it seems the gradient is mapped continuously from [-32768;32767] to [-1;1]. Is the 'highest bit' remark referring to the typical external 16 bit image format? I'm feeling quite dense, but it doesn't seem to apply to the rendering internals?

Quote:
Edit: I thought I made it, but it still doesn't show the image correctly...


Sorry about the thread hijack above. Could you elaborate on how it's not looking correct, or perhaps post an image showing the problem?


It's alright, I'm also trying to display an image with pixel values in that range. I'm using DICOM images, so I don't know if you'll be able to see those (I use DCM Editor to extract a .pix file and view it with Quantmaster ImageJ)... So I'll post screenshots:

Here's how it's supposed to look (loaded in ImageJ as a 16-bit signed image):

The right way

Check ImageJ's Histogram below: it shows that there are negative values.

And here's what my software shows: the problem is that it reads pixel values as unsigned. Therefore, I'm trying to convert those using a pixel shader, but I'm not getting it right.

My way

By the way, the image is drawn on a texture with the D3DFMT_L16 format (it's actually composed of many textured quads, but they all use that format).


Using the D3DFMT_L16 format, the comments by Eyal and Nik became clear to me. It seems this 'bit worry' does apply to this unsigned format, whereas the signed D3DFMT_Q16W16V16U16 seems to automatically map continuously to [-1;1]. Anyway, when I stick my gradient in this format I get the following result:



This matches Eyal's comments about the positive/negative ranges. Nik's comment about the conditionals also makes sense to me now, since you could use q = q > 0.5 ? q - 0.5 : q + 0.5; (q is just a local shader variable holding texture.r) to remap this to a continuous unsigned interval [0;1]:



Alternatively you could use q = (q + 0.5) % 1; which forgoes the conditional and produces the same result. Both methods seem to have some continuity artifacts at q = 0 and q = 0.5 when using linear filtering, but switching to point filtering works out well.

I hope this is somehow remotely relevant to your problem [smile]
