render to D3DFMT_A16B16G16R16 texture

Recommended Posts

I've spent the last few days writing a texture pre-processor that adds (static) lighting to meshes. It looks quite good, but there are a few issues at the moment. Part of the process is to produce a 'position map' of the mesh, where the 3D location that each point on the texture corresponds to is encoded in the red, green and blue colour components. For example, here is a position map of a cube mesh I made:

The problem I was having is that 8 bits per direction isn't accurate enough for my purposes, so I've created textures with 16 bits per colour channel instead (D3DFMT_A16B16G16R16). I then set this as the render target, render coloured triangles as before, copy it to a system memory surface of the same format, lock it, read the bytes and ... the data is in A8R8G8B8 format. Argh!

Why is this happening? How do I render to 16-bits-per-channel texture formats correctly? There's quite a lot of code, but if you want to see any particular bits then please ask.
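To give a rough idea of what the relevant code looks like, here is a simplified sketch (not a paste of the real thing; g_pDevice, width and height are placeholders and error checking is stripped):

    // Simplified sketch of the render-to-64-bit-texture setup (placeholder
    // names, no error checking). Plain Direct3D 9 API.
    IDirect3DTexture9* pPositionTex = NULL;
    g_pDevice->CreateTexture(width, height, 1, D3DUSAGE_RENDERTARGET,
                             D3DFMT_A16B16G16R16, D3DPOOL_DEFAULT,
                             &pPositionTex, NULL);

    IDirect3DSurface9* pPositionSurf = NULL;
    pPositionTex->GetSurfaceLevel(0, &pPositionSurf);
    g_pDevice->SetRenderTarget(0, pPositionSurf);

    // ... render the coloured triangles into the position map here ...

    // System-memory surface of the same format for readback.
    IDirect3DSurface9* pSysMemSurf = NULL;
    g_pDevice->CreateOffscreenPlainSurface(width, height, D3DFMT_A16B16G16R16,
                                           D3DPOOL_SYSTEMMEM, &pSysMemSurf, NULL);
    g_pDevice->GetRenderTargetData(pPositionSurf, pSysMemSurf);

    // Lock and read back - this is where the data comes out looking like A8R8G8B8.
    D3DLOCKED_RECT lr;
    pSysMemSurf->LockRect(&lr, NULL, D3DLOCK_READONLY);
    // ... read lr.pBits here ...
    pSysMemSurf->UnlockRect();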

And you've made sure to set this format in DISPMODE when you create the device? I had a similar problem where it was throwing out all my alpha... no idea if that's actually your problem, though. [headshake]

OK, I tried setting the backbuffer pixel format to match, but I got an 'invalid call' error. It's not a valid display mode.

My card doesn't actually support D3DFMT_A16B16G16R16 textures either, but the reference device does. That's good enough for me, since this is for pre-processing, not real-time stuff. However, it's only available as a texture format, not as a display mode.
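For what it's worth, this is roughly how I'm checking for support (a sketch; pD3D is assumed to be the IDirect3D9 interface and the adapter/display format is assumed to be X8R8G8B8):

    // Sketch: ask whether A16B16G16R16 can be used as a render-target texture
    // on the reference rasterizer. pD3D is assumed to be a valid IDirect3D9*.
    HRESULT hr = pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT,
                                         D3DDEVTYPE_REF,
                                         D3DFMT_X8R8G8B8,
                                         D3DUSAGE_RENDERTARGET,
                                         D3DRTYPE_TEXTURE,
                                         D3DFMT_A16B16G16R16);
    bool supportedAsRenderTarget = SUCCEEDED(hr);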

I just checked the SDK - DX8.1 (at least) appears not to support 64-bit textures... which doesn't bode well for you.

I guess one way you could do it would be to use two separate textures, then merge the values somehow... but that would kill performance. I was going to suggest using SOFTWARE_VERTEX_PROCESSING to get around card limitations, but honestly, I don't know if that's going to help!

Well, I'm stumped. Anyone else want to give it a shot?

I've never done this before, so I'm not familiar with the details of your algorithm, but I have an idea which might work...

Could you create a surface twice the size of your destination surface in D3DFMT_A8R8G8B8, then, to emulate 16-bit depth, store each 16-bit pixel's value as two 8-bit pixels? If it's in system memory you should totally be able to pull this off... oops, unless you actually need to use it for rendering again later on, or as a render surface.
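Something like this, maybe (just an untested sketch of the packing idea):

    // Sketch of the idea: split each 16-bit value into a high and a low byte,
    // store them in two 8-bit texels, and recombine them after reading back.
    void Pack16(unsigned short value, unsigned char& hi, unsigned char& lo)
    {
        hi = (unsigned char)(value >> 8);
        lo = (unsigned char)(value & 0xFF);
    }

    unsigned short Unpack16(unsigned char hi, unsigned char lo)
    {
        return (unsigned short)((hi << 8) | lo);
    }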

This was just a shot in the dark...

(changed username, I think it's about time I chose something more descriptive!)

Thanks for the suggestions. I've been thinking about using two texture surfaces, or even three (one for red, one for green and one for blue). However, colours would then be interpolated incorrectly. For example, half of

0x00010000 (65536)

wouldn't produce the expected value

0x00008000 (32768)

because DirectX sees the value as four 8-bit components:

0x00 0x01 0x00 0x00

and halving each component gives:

0x00 0x00 (rounded down) 0x00 0x00

This is a big problem, and I can't find a reasonable way to divide up and recombine colour values so that the arithmetic is performed correctly. I even scratched my head over the Chinese Remainder Theorem (aka: evilness) for a couple of minutes. :)

Unless anyone has any more ideas I think I'll give up and write a simple software triangle rasterizer. Since this is for a pre-processing step it doesn't matter if it's slow. I can then use the high-precision location map to produce ordinary A8R8G8B8 processed textures which can be used in the renderer (the colour accuracy of the final result is much less critical).

Thanks anyway.

Quote:
Original post by g
The problem I was having is that 8 bits per direction isn't accurate enough for my purposes. So I've created textures with 16 bits per colour channel instead (D3DFMT_A16B16G16R16). I then set this as the render target, render coloured triangles as before, copy it to a system memory surface of the same format, lock it, read the bytes and ... the data is in A8R8G8B8 format. Argh!

You cannot directly display surfaces that contain more than 32 bits. But you can render to them and then use them in a subsequent pass (which is what you're trying to do). To do a simple visual check of your 64 bit surface you can try to save it out as a BMP file using D3DXSaveTextureToFile or D3DXSaveSurfaceToFile. Your image will be saved as a BMP file (which only has 24 bits) but at least it will give you a way to check to see if your results look reasonable.
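Something along these lines should do it (a quick sketch; the filename and surface pointer are just placeholders):

    // Quick sketch: dump the 64-bit render target to a BMP for a visual sanity check.
    // pPositionSurf is assumed to be the IDirect3DSurface9* you rendered into.
    D3DXSaveSurfaceToFile("position_map.bmp", D3DXIFF_BMP,
                          pPositionSurf, NULL, NULL);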

I think the problem might be in how you're reading the data. I've rendered to D3DFMT_A32B32G32R32F textures and read the data back with no problems. It's not the fastest thing in the world, but since you mentioned that this is a pre-processing step, there shouldn't be any problems. By the way, since this is a pre-process anyway, why not use the full 128-bit textures? Unless texture size is a concern for what you're doing, 128-bit floating-point textures will give you the full precision of 32-bit float values for each component of the positions you're rendering.

How are you copying the surface to system memory? How are you reading that system memory copy? Perhaps the problem is in how you're reading that data back in.

neneboricua

Quote:
Original post by neneboricua19
How are you copying the surface to system memory? How are you reading that system memory copy? Perhaps the problem is in how you're reading that data back in.


Good suggestion. I was using GetRenderTargetData to copy the data back into system memory. Having changed strategy (now locking the surface and copying the data myself), everything seems to have started working.
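In case it helps anyone else, this is roughly what the manual readback looks like now (a sketch; pSurf is the A16B16G16R16 surface being read and dest points to a width*height*4 array of unsigned shorts):

    // Sketch of the manual readback: lock the 64-bit surface and copy the
    // 16-bit channel values out row by row, respecting the surface pitch.
    D3DLOCKED_RECT lr;
    if (SUCCEEDED(pSurf->LockRect(&lr, NULL, D3DLOCK_READONLY)))
    {
        for (UINT y = 0; y < height; ++y)
        {
            const unsigned short* row =
                (const unsigned short*)((const BYTE*)lr.pBits + y * lr.Pitch);
            memcpy(&dest[y * width * 4], row, width * 4 * sizeof(unsigned short));
        }
        pSurf->UnlockRect();
    }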

I'm not using D3DFMT_A32B32G32R32F textures because I don't want to muck around with floating point values. If I needed even more accuracy then I would, but 16 bits turns out to be more than enough.

Here's an example of what I've been working on - it's basically a partially pre-lit texture map:

