16bit_port

d3d10 input layout


I have a color variable (D3DXCOLOR) in my vertex, and its components are in the range [0,255].

Is it possible to pass it into my shader and have it automatically be interpreted as values in [0,1]?

I could just normalize it to [0,1] at initialization, but I'd like to know whether I can avoid that.

I've tried setting up its input element description with each of these formats:
DXGI_FORMAT_R8G8B8A8_TYPELESS
DXGI_FORMAT_R8G8B8A8_UNORM
DXGI_FORMAT_R8G8B8A8_UNORM_SRGB
DXGI_FORMAT_R8G8B8A8_UINT
DXGI_FORMAT_R8G8B8A8_SNORM
DXGI_FORMAT_R8G8B8A8_SINT

with the byte offset advanced as

PrevElementOffset += 4

and the color components left in [0,255], but my mesh doesn't render.

(It does render if I normalize the components to [0,1] and use DXGI_FORMAT_R32G32B32A32_FLOAT with PrevElementOffset += 16.)

Is it possible to do it that way, or am I stuck normalizing at initialization?
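
In case it helps, here's a simplified sketch of what I mean (struct and variable names are placeholders, and I'm assuming the color sits right after a float3 position):

// Simplified sketch of my setup (names and the offset of 12 are placeholders).
struct MyVertex
{
    D3DXVECTOR3 Pos;     // 12 bytes
    D3DXCOLOR   Color;   // 16 bytes (four floats), components currently in [0,255]
};

// This element renders fine once I normalize the components to [0,1]:
D3D10_INPUT_ELEMENT_DESC colorAsFloats =
    { "COLOR", 0, DXGI_FORMAT_R32G32B32A32_FLOAT, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 };
// PrevElementOffset += 16;

// The 8-bit variants (with PrevElementOffset += 4) are the ones that don't render for me:
D3D10_INPUT_ELEMENT_DESC colorAsBytes =
    { "COLOR", 0, DXGI_FORMAT_R8G8B8A8_UNORM, 0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 };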

The D3DXCOLOR struct is made up of four floats, so it's not going to work if you use a format that specifies four 8-bit integer components; the only way to use it directly is with a format that has four 32-bit float components. However, D3DXCOLOR has a casting operator to DWORD, which converts it to a 32-bit integer holding four 8-bit unsigned components. That lets you use the DXGI_FORMAT_R8G8B8A8_UNORM format, and the shader will interpret the components as [0,1] values. The catch is that the values in the D3DXCOLOR struct must already be in the range [0,1] before casting.
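
For example, here's a minimal sketch of what that could look like (the vertex struct and layout below are my assumptions, not your existing code):

// Assumed vertex: a packed 32-bit color right after a float3 position.
struct Vertex
{
    D3DXVECTOR3 Pos;    // offset 0, 12 bytes
    DWORD       Color;  // offset 12, 4 bytes, one byte per channel
};

D3DXCOLOR c(1.0f, 0.5f, 0.25f, 1.0f);    // components already in [0,1]
Vertex v;
v.Pos   = D3DXVECTOR3(0.0f, 0.0f, 0.0f);
v.Color = (DWORD)c;                       // D3DXCOLOR's cast packs it as D3DCOLOR (A8R8G8B8)

// The color element is now 4 bytes wide, so the next element's offset grows by 4, not 16.
D3D10_INPUT_ELEMENT_DESC layout[] =
{
    { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, D3D10_INPUT_PER_VERTEX_DATA, 0 },
    { "COLOR",    0, DXGI_FORMAT_R8G8B8A8_UNORM,  0, 12, D3D10_INPUT_PER_VERTEX_DATA, 0 },
};

One caveat: the D3DCOLOR packing stores its bytes as B, G, R, A in memory, so with DXGI_FORMAT_R8G8B8A8_UNORM the red and blue channels arrive swapped in the shader; either swizzle them or pack the four bytes yourself in R, G, B, A order. Either way, the shader declares the COLOR input as a float4 and receives values already expanded to [0,1].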
