DX12 Using a vertex buffer with the format R16G16B16A16_SINT



In DirectX 9 I would use a SHORT4 input layout with this vertex shader slot:

float4 Position : POSITION0

That is, I would use the vertex buffer format SHORT4 for the corresponding float4 in the shader, and everything worked great.
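A sketch of that kind of D3D9 declaration (the stream, offset, and usage index here are illustrative, not the exact values from my project):

```cpp
#include <d3d9types.h>

// Sketch: a D3D9 vertex declaration that feeds SHORT4 data into a
// float4 POSITION0 shader input. Stream 0 and offset 0 are assumptions.
D3DVERTEXELEMENT9 decl[] = {
    { 0, 0, D3DDECLTYPE_SHORT4, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
    D3DDECL_END()
};
```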

In DirectX 12 this does not work. When I use the format DXGI_FORMAT_R16G16B16A16_SINT with a float4 in the shader, I get all zeros in the shader.

If I use int4 in the shader instead of float4, I do get numbers in the shader, but they are messed up. I can't tell exactly what is wrong with them because I can't inspect them: the Visual Studio shader debugger keeps crashing.

The debug layer does not complain when I use int4, but it gives a warning when I use float4.

How can I use the R16G16B16A16_SINT input layout?
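For reference, the input element I'm describing looks roughly like this (the semantic name, slot, and offset are illustrative):

```cpp
#include <d3d12.h>

// Sketch: the D3D12 input element in question. Semantic name, slot, and
// byte offset are placeholders, not the exact values from my project.
D3D12_INPUT_ELEMENT_DESC element = {
    "POSITION",                                 // SemanticName
    0,                                          // SemanticIndex
    DXGI_FORMAT_R16G16B16A16_SINT,              // Format
    0,                                          // InputSlot
    0,                                          // AlignedByteOffset
    D3D12_INPUT_CLASSIFICATION_PER_VERTEX_DATA, // InputSlotClass
    0                                           // InstanceDataStepRate
};
```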


You need to use R16G16B16A16_SNORM.

SINT passes the raw signed integer values through unchanged, so you must declare your variable as int4. The values will be in the range [-32768, 32767], since they are 16-bit signed integers.

SNORM maps the integers from [-32768, 32767] to the range [-1.0, 1.0] (each component is divided by 32767 and clamped), and your variable must be declared as float4.

