
D3DFMT_R16F format and D3DXFLOAT16


I'm ashamed of myself for this post, but I'd like to understand what's going on.


Here is the problem:

I'm creating a D3DFMT_R16F texture to store -1 to +1 values.

This works fine, and displaying the texture with the DirectX Texture Tool shows what I'm expecting. However, when locking the texture I get a pitch of 128 bytes when the tightly packed row size is actually 64. That's the first thing I can't explain...
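For anyone hitting the same pitch surprise: the `Pitch` that `LockRect` reports is allowed to be larger than `width * bytesPerPixel` (the driver may pad rows for alignment), so you have to step row by row using the reported pitch rather than the tight row size. A minimal, API-agnostic sketch of that copy (function name is mine, not a D3DX call):

```cpp
#include <cstdint>
#include <cstring>
#include <vector>

// Copy a locked surface's pixel rows into a tightly packed buffer.
// rowBytes = width * bytesPerPixel (e.g. 64 for a 32-pixel R16F row);
// pitch is whatever LockRect reported (may be larger, e.g. 128).
std::vector<uint8_t> CopyTight(const uint8_t* lockedBits,
                               size_t pitch, size_t rowBytes, size_t height)
{
    std::vector<uint8_t> packed(rowBytes * height);
    for (size_t y = 0; y < height; ++y)
        std::memcpy(packed.data() + y * rowBytes,
                    lockedBits + y * pitch,   // step by pitch, not rowBytes
                    rowBytes);
    return packed;
}
```

Reading with `y * rowBytes` instead of `y * pitch` is what usually produces garbage past the first row.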


Then I'm trying to load that texture back from the file using the DirectX 9 D3DX function.

That's where I'm having troubles, I tried many solutions but none worked.

I tried reinterpret_cast<D3DXFLOAT16*>, I tried D3DXFloat16To32Array(), and all sorts of things.

I then wrote a piece of code to load the file myself, with a simple stream read instead of D3DXCreateTextureFromFile, and guess what: the reinterpret_cast worked fine...
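For reference, D3DXFLOAT16 stores an IEEE 754 half-precision value, which is why a raw stream read plus reinterpret_cast gives sane bits. Here's a self-contained sketch of the conversion that D3DXFloat16To32Array performs for each element (my own portable reimplementation, not the D3DX source):

```cpp
#include <cstdint>
#include <cstring>

// Convert one IEEE 754 half-precision value (as stored in D3DFMT_R16F)
// to a 32-bit float. Handles normals, subnormals, zeros, inf, and NaN.
float HalfToFloat(uint16_t h)
{
    uint32_t sign = (h >> 15) & 1u;
    uint32_t exp  = (h >> 10) & 0x1Fu;
    uint32_t mant =  h        & 0x3FFu;

    uint32_t bits;
    if (exp == 0) {
        if (mant == 0) {
            bits = sign << 31;                         // signed zero
        } else {
            // subnormal half: renormalize into a float32 normal
            exp = 127 - 15 + 1;
            while ((mant & 0x400u) == 0) { mant <<= 1; --exp; }
            mant &= 0x3FFu;
            bits = (sign << 31) | (exp << 23) | (mant << 13);
        }
    } else if (exp == 0x1F) {
        bits = (sign << 31) | 0x7F800000u | (mant << 13); // inf / NaN
    } else {
        bits = (sign << 31) | ((exp - 15 + 127) << 23) | (mant << 13);
    }

    float f;
    std::memcpy(&f, &bits, sizeof f);
    return f;
}
```

So the -1 to +1 range fits comfortably: 0x3C00 is +1.0 and 0xBC00 is -1.0.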


So I'd like to understand what's going on with this file format. If anyone has any clues, they're welcome.

Just to let you know, I have no problem with any other format like L8, R32F, or whatever.

This should be the same as writing to the file anyway, so I'm just missing something here.

Thanks for helping.


Found my stupid mistake finally...

Just had to specify D3DX_DEFAULT_NONPOW2 when loading the texture! Damn default parameters...
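For anyone landing here later: with the default width/height arguments, D3DX may resize a non-power-of-two texture (and filter it in the process), which scrambles raw half-float data. Passing D3DX_DEFAULT_NONPOW2 keeps the file's original dimensions. A sketch using D3DXCreateTextureFromFileEx (the filename is made up, and the filter/pool choices beyond NONPOW2 are my assumptions for loading data losslessly):

    LPDIRECT3DTEXTURE9 tex = nullptr;
    HRESULT hr = D3DXCreateTextureFromFileEx(
        device, "heights_r16f.dds",
        D3DX_DEFAULT_NONPOW2,   // width:  keep the file's width
        D3DX_DEFAULT_NONPOW2,   // height: keep the file's height
        1,                      // single mip level
        0,                      // usage
        D3DFMT_R16F,            // don't let D3DX pick another format
        D3DPOOL_MANAGED,
        D3DX_FILTER_NONE,       // no resampling of the pixel data
        D3DX_FILTER_NONE,       // no mip filtering
        0, nullptr, nullptr, &tex);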

Post could be deleted I guess.

