f32tof16 confusion


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

8 replies to this topic

#1 Alaryn   Members   -  Reputation: 120


Posted 17 August 2012 - 02:36 AM

Hi,
I'm trying to find out why this intrinsic always returns zero. Here's my code:

float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
{
	float o = 1.5f;
	uint res = f32tof16(o);
	float resf = asfloat(res);
	return float4(resf, 1, 1, 1);
}

I tried with different values.
The result should be in the lower 16 bits of the returned value.

Function is compiled with ps_5_0 profile. Render target's format is R32G32B32_FLOAT. Device: nVidia Quadro 1000M, feature level 11.0.

EDIT:
I checked with the command-line compiler fxc. The result is:
//
// Generated by Microsoft ® HLSL Shader Compiler 9.29.952.3111
//
//
//   fxc /T ps_5_0 /E ps test.hlsl /Od
//
//
//
// Input signature:
//
// Name				 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_POSITION			  0   xyzw		0	  POS  float
//
//
// Output signature:
//
// Name				 Index   Mask Register SysValue Format   Used
// -------------------- ----- ------ -------- -------- ------ ------
// SV_TARGET				0   xyzw		0   TARGET  float   xyzw
//
ps_5_0
dcl_globalFlags refactoringAllowed
dcl_output o0.xyzw
mov o0.xyzw, l(0,1.000000,1.000000,1.000000)
ret
// Approximately 2 instruction slots used
I'm wondering why the compiler skips the code without any warning...

Edited by Alaryn, 17 August 2012 - 04:37 AM.



#2 Tordin   Members   -  Reputation: 604


Posted 17 August 2012 - 03:33 AM

Are you trying to convert a float to a uint? Or does the uint stand for a half float in HLSL?
I think f32tof16 is meant to be used when you are writing the final colors to SV_TARGET.
Even if you use the asfloat instruction, the number has been converted to a uint and may therefore lose data.

so something like this :

float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
{
    float o = 1.5f;
    return float4(f32tof16(o), 1, 1, 1);
}

"There will be major features. none to be thought of yet"

#3 Alaryn   Members   -  Reputation: 120


Posted 17 August 2012 - 03:45 AM

That works, thank you. But the documentation says that f32tof16 returns a uint, and I thought the float16 bits would be stored in the lower part of the uint, similarly to f16tof32, which reads from those bits.

#4 Tordin   Members   -  Reputation: 604


Posted 17 August 2012 - 04:13 AM

Yeah, I saw that in the documentation... but since a uint is 32 bits as well, it's odd that it casts to a uint.
It could be that on the GPU a uint is 16 bits and only floats are 32 bits, for performance.

I'll see if I can find out why it's doing this (skipping your instruction in the original code).
"There will be major features. none to be thought of yet"

#5 Alaryn   Members   -  Reputation: 120


Posted 17 August 2012 - 04:32 AM

Hmm, I switched the render target format to R16G16B16_FLOAT and used the following shader:

float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
{
	return float4(f32tof16(1.5), 1.5, 1, 1);
}

Then I finally inspected the output: the red channel was 0x73c0 (incorrect) and the blue channel 0x3e00 (the correct representation of 1.5).
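For what it's worth, both numbers can be reproduced on the CPU. Here is a Python sketch (my own illustration, using struct's half-precision format 'e' as a stand-in for the GPU's float-to-half conversion): f32tof16(1.5) yields the bit pattern 0x3e00, which as an integer is 15872, and writing that integer to a FLOAT16 channel stores half(15872.0), whose bit pattern is 0x73c0.

```python
import struct

def f32tof16(x):
    # Half-float bit pattern of x, emulating the HLSL intrinsic
    # (assumes IEEE 754 binary16 conversion).
    return struct.unpack("<H", struct.pack("<e", x))[0]

bits = f32tof16(1.5)            # 0x3E00, i.e. the integer 15872
# Writing the uint 15872 to a FLOAT16 render target channel converts
# the numeric *value* 15872.0 to half, not the raw bit pattern:
stored = f32tof16(float(bits))
print(hex(bits), hex(stored))   # 0x3e00 0x73c0
```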

Edited by Alaryn, 17 August 2012 - 04:36 AM.


#6 CryZe   Members   -  Reputation: 768


Posted 17 August 2012 - 05:14 AM

Why are you manually converting the results anyway? If you're rendering to a R16G16B16_FLOAT resource, the Output Merger converts the values for you.

Also, your original code converts the single-precision float to a half-precision float and then reinterprets the bits as a single-precision float. Since the most significant 16 bits are always 0, the exponent field is 0, so the resulting single-precision value is a tiny denormal that is effectively always 0.
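This effect is easy to check on the CPU. In the following Python sketch (an illustration of mine, with struct's 'e' format standing in for f32tof16 and an I/f reinterpret standing in for asfloat), the half bits of 1.5 land in the low 16 bits of a 32-bit word whose exponent field is all zeros, so reinterpreting them yields a denormal around 2e-41, which renders as 0.

```python
import struct

def f32tof16(x):
    # half-float bit pattern, emulating the HLSL intrinsic
    return struct.unpack("<H", struct.pack("<e", x))[0]

def asfloat(u):
    # reinterpret a 32-bit pattern as float32, emulating HLSL asfloat
    return struct.unpack("<f", struct.pack("<I", u))[0]

res = f32tof16(1.5)     # 0x3E00: the upper 16 bits of the uint are zero
resf = asfloat(res)     # float32 exponent field is 0 -> denormal
print(resf)             # ~2.22e-41, effectively zero
```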

Edited by CryZe, 17 August 2012 - 05:20 AM.


#7 kauna   Crossbones+   -  Reputation: 2557


Posted 17 August 2012 - 02:47 PM

Hi,
I'm trying to find out why this intrinsic always returns zero. Here's my code:

float4 PS(in float4 vPosition : SV_POSITION) : SV_TARGET
{
	float o = 1.5f;
	uint res = f32tof16(o);
	float resf = asfloat(res);
	return float4(resf, 1, 1, 1);
}

I'm wondering why the compiler skips the code without any warning...


Your code is being optimized: since the result is always the same constant, the compiler folds the expression and removes all of your instructions. The result of the operations is likely 0, or a denormal/NaN that ends up as 0.

Keep in mind that asfloat isn't the intrinsic that reverses f32tof16; the correct one is f16tof32. asfloat reinterprets the bit pattern as a floating-point number; it doesn't treat the value as a half-precision float.

Also, it doesn't make sense to use these intrinsics with floating-point render targets. They are meant to compress 32-bit floating-point values into half-precision values held in 16-bit integers, which can then be written to a 16-bit integer render target.
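A sketch of the correct round trip, again emulated in Python for illustration (struct's 'e' format stands in for the half conversion): f16tof32 recovers the original value, while asfloat does not.

```python
import struct

def f32tof16(x):
    # float32 -> half-float bit pattern (emulates the HLSL intrinsic)
    return struct.unpack("<H", struct.pack("<e", x))[0]

def f16tof32(h):
    # half-float bit pattern -> float32 (the proper inverse)
    return struct.unpack("<e", struct.pack("<H", h))[0]

def asfloat(u):
    # raw bit reinterpret, like HLSL asfloat -- NOT an inverse
    return struct.unpack("<f", struct.pack("<I", u))[0]

h = f32tof16(1.5)
print(f16tof32(h))   # 1.5 -- the value comes back
print(asfloat(h))    # a denormal near zero, not 1.5
```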

Cheers!

#8 pcmaster   Members   -  Reputation: 673


Posted 20 August 2012 - 02:21 AM

Or you can use f32tof16 to pack two halves into a uint. Like this:

float2 toBeQuantised = float2(333.333, 666.666);
uint half1 = f32tof16(toBeQuantised.x);
uint half2 = f32tof16(toBeQuantised.y);
uint twoHalves = half1 | (half2 << 16);

But in addition to what kauna said, this isn't of much use here :-)
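The pack/unpack round trip can be sketched the same way in Python (again with struct's half format standing in for the intrinsics); note the values come back with only half precision.

```python
import struct

def f32tof16(x):
    # float32 -> half-float bit pattern (emulates the HLSL intrinsic)
    return struct.unpack("<H", struct.pack("<e", x))[0]

def f16tof32(h):
    # half-float bit pattern -> float32
    return struct.unpack("<e", struct.pack("<H", h))[0]

x, y = 333.333, 666.666
packed = f32tof16(x) | (f32tof16(y) << 16)   # two halves in one uint

x2 = f16tof32(packed & 0xFFFF)               # low 16 bits
y2 = f16tof32(packed >> 16)                  # high 16 bits
print(x2, y2)   # roughly 333.25 and 666.5 -- half precision drops bits
```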

Edited by pcmaster, 20 August 2012 - 02:22 AM.


#9 Alaryn   Members   -  Reputation: 120


Posted 20 August 2012 - 02:43 AM

Thank you all. After some time I realized that I just didn't understand HLSL's implicit conversions, and that was my real problem. ;)

For example, if I want to get the "raw" float16 bits, I must set the RT format to R16G16B16_FLOAT (the simplest way) or R32G32B32_UINT (in which case the value is stored in the low 16 bits).
And if I want to send the value via a semantic (to another shader), I just have to declare it as uint. Then the f32tof16 intrinsic works the way I expect.

