I want to use a 32-bit R8G8B8A8 texture format. I want to write data to it in a compute shader and then read that data in a pixel shader, but I can't get it working at all. First of all, I can't use a format like DXGI_FORMAT_R8G8B8A8_UNORM, because I couldn't find a way to declare an appropriate RWTexture2D for it in the compute shader. So I decided to use the DXGI_FORMAT_R8G8B8A8_UINT format, pack the four 8-bit components into it in the compute shader, and read them back in the pixel shader. No success.
Try changing the format from DXGI_FORMAT_R8G8B8A8_UINT to DXGI_FORMAT_R32_UINT, since you declare your texture as a uint in the shader.
If you just want to write to an R8G8B8A8_UNORM through a UAV, you can do that and it works fine. Just declare a RWTexture2D&lt;float4&gt; and write float4s to the texture. However, you can't read the values back through a UAV, only through an SRV. If you need to read and write in the same shader, the workaround is to use R32_UINT, and then use the functions in D3DX_DXGIFormatConvert.inl to manually pack and unpack the values.
Guys... THANK YOU! It was driving me crazy and all your ideas worked!
Obviously, for packing I needed R32_UINT rather than R8G8B8A8_UINT - thanks n3Xus! And to answer Bearish Sun's question: I got red, which makes sense since I was writing to that one channel only.
And finally, thanks to MJP for the float4 tip. Actually, I could have guessed that, since UNORM is... well... a normalized format. I'm aware of that compute shader write/read restriction; I just couldn't get writing in the CS and reading in the PS right ;).
Again, thank you guys!