I'm trying to figure out why this intrinsic always returns zero. Here's my code:
float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
{
    float o = 1.5f;
    uint res = f32tof16(o);    // pack o as half precision into the low 16 bits of a uint
    float resf = asfloat(res); // reinterpret the uint bit pattern as a 32-bit float
    return float4(resf, 1, 1, 1);
}
I tried different values. The half-precision bit pattern should end up in the lower 16 bits of the returned value.
The function is compiled with the ps_5_0 profile. The render target's format is R32G32B32_FLOAT. Device: NVIDIA Quadro 1000M, feature level 11.0.
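For context, here's a minimal round-trip sketch of what I'd expect (assuming the documented behavior: f32tof16 packs the half-precision bits into the low 16 bits of a uint, and f16tof32 unpacks them; PS_RoundTrip is just a name for this sketch):

float4 PS_RoundTrip(in float4 vPosition : SV_POSITION) : SV_TARGET
{
    float o = 1.5f;
    uint packed = f32tof16(o);      // expected: 0x3E00, the half-precision bit pattern of 1.5
    float back = f16tof32(packed);  // expected: 1.5 again
    return float4(back, 1, 1, 1);
}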
EDIT:
I checked with the command-line compiler fxc. The result is:
//
// Generated by Microsoft ® HLSL Shader Compiler 9.29.952.3111
//
//
// fxc /T ps_5_0 /E ps test.hlsl /Od
//
//
//
// Input signature:
//
// Name                 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_POSITION              0   xyzw        0      POS  float
//
//
// Output signature:
//
// Name                 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_TARGET                0   xyzw        0   TARGET  float   xyzw
//
ps_5_0
dcl_globalFlags refactoringAllowed
dcl_output o0.xyzw
mov o0.xyzw, l(0,1.000000,1.000000,1.000000)
ret
// Approximately 2 instruction slots used
I'm wondering why the compiler folds the whole computation down to a constant zero without any warning...
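In case it helps to reproduce: one way I can think of to inspect the packed bits without going through asfloat is to scale the uint into a visible range (a debugging sketch; the 65535.0 divisor is an arbitrary choice, and 0x3E00 is the half-precision encoding of 1.5, so roughly 0.242 would be the expected output):

float4 PS_Debug(in float4 vPosition : SV_POSITION) : SV_TARGET
{
    float o = 1.5f;
    uint res = f32tof16(o);
    // Avoid asfloat here: reinterpreting 0x3E00 as a 32-bit float yields a
    // denormal (about 2.2e-41), which may be flushed to zero, and the
    // disassembly above suggests the constant folder does exactly that.
    return float4(res / 65535.0, 1, 1, 1);
}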