I'm a bit ashamed of asking this, but I'd like to understand what's going on.
Here is the problem:
I'm creating a D3DFMT_R16F texture to store -1 to +1 values.
This works fine, and viewing the texture in the DirectX Texture Tool shows what I expect. However, when I lock the texture, the reported pitch is 128, when by my calculation it should be 64. That's the first thing I can't explain...
Then I try to load that texture back from the file using the DirectX 9 D3DX functions.
That's where I run into trouble; I've tried many approaches but none of them worked. I tried reinterpret_cast<D3DXFLOAT16*>, I tried D3DXFloat16To32Array(), and all sorts of other things.
I then wrote a piece of code to load the file myself (a plain stream read instead of D3DXCreateTextureFromFile), and guess what: the reinterpret_cast worked fine...
So I'd like to understand what's going on with this file format. If anyone has any clues, they're welcome.
Just to let you know, I have no problem with any other format, such as L8, R32F, or anything else.
Reading the file should mirror writing it anyway, so I'm just missing something here.
Thanks for helping.