HLSL: is it possible to output integers from a pixel shader?

4 comments, last by Nik02 5 years, 4 months ago

I'm experiencing large floating point precision errors during the conversion between 0-1 floats and 0-65535 integers.

I'm using an RGBA64 format for the texture. In Shader Model 3.0, is it possible to output int4 or something similar from the pixel shader instead of float4?

This question is a bit similar to this: https://gamedev.stackexchange.com/questions/132123/is-it-possible-with-directx11-to-have-pixel-shader-output-an-integer-rather-than
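Roughly what I'm doing now, simplified (the names here are made up):

// ps_3_0 sketch: the shader outputs 0-1 floats and the hardware converts them
// to 16-bit integers when writing to the D3DFMT_A16B16G16R16 render target.
float4 MainPS(float2 uv : TEXCOORD0) : COLOR0
{
    float flags = 3.0;                        // the integer value I actually want stored
    return float4(flags / 65535.0, 0, 0, 1);  // encoded as a 0-1 float
}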


The docs say "[a] pixel shader can output up to 8, 32-bit, 4-component colors, or no color if the pixel is discarded" which seems to suggest not.

RGBA64 is not a DirectX format; do you mean R16G16B16A16_UNORM or R64G64B64A64_UNORM? In the former case, while you will experience some floating-point error, it shouldn't be "large"; how are you coding your floats? In the latter case, there simply aren't enough bits.
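For a 16-bit UNORM target the round trip usually looks something like this (just a sketch; the exact error depends on the conversion your hardware performs):

// Encode an integer step (0-65535) as a 0-1 float for a 16-bit UNORM target.
float Encode16(float step)
{
    return step / 65535.0;
}

// Decode after sampling; adding 0.5 before flooring rounds to the nearest step,
// so a small representation error doesn't knock the value down by a whole unit.
float Decode16(float sampled)
{
    return floor(sampled * 65535.0 + 0.5);
}

If you decode by truncating instead of rounding, a value that comes back as 0.98 of a step ends up as 0 rather than 1, which sounds like what you are seeing.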

The first one. For example, a value of 1 (i.e. 1/65535) comes back as about 0.98 instead of 1. I'm using the texture as a bitfield with additive blending, so the error really adds up.

I'm not 100% sure, since I haven't worked with DX9 in many years, but I don't think this is possible with that API. DX10 added support for INT/UINT formats, and under DX10/11/12 you can absolutely output an integer from your pixel shader if your render target uses one of those formats.
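Under D3D10/11 it looks something like this (a sketch; the render target would have to be created with a UINT format such as DXGI_FORMAT_R16G16B16A16_UINT):

// SM4+ pixel shader writing raw integers; no 0-1 conversion takes place
// because the bound render target uses an integer format.
uint4 MainPS(float4 pos : SV_Position) : SV_Target
{
    uint flags = 3;               // hypothetical per-pixel bitfield
    return uint4(flags, 0, 0, 0);
}

One thing to check: as far as I know, blending isn't supported on integer render target formats, so the additive-blending part of your approach would need a different solution.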

Older GPUs don't have very good integer support in their shaders.

SM4 has been supported for a long time, so it could be worthwhile to update your baseline. D3D11 isn't that hard either, if you are comfortable with 9.

Niko Suni

