mixing texture bit depths?

Started by
6 comments, last by Bill Adams 23 years, 11 months ago
Can DX7 textures with different bit depths (8, 15, 16, 24, 32) be used interchangeably? Specifically, can I use an 8-bit texture and a 24-bit texture in the same application for different sets of vertices? Also, is the screen bit depth independent of the textures? (E.g., is some magical translation taking place in DX7?) I can't seem to find a definitive answer in the documentation. I'm just adding textures to my "engine" (based on the SDK texture tutorial) and would like the flexibility of mixing hi- and lo-res textures, and switching window sizes and bit depths. I realize an attached palette is needed for 8-bit and below. Thanks in advance.
The screen bit-depth IS independent of texture bit-depth.
You may use textures of differing bit depths in the same application; however, they MUST be stored in video memory at the same bit depth. This means you'd have to convert the textures to one common bit depth.
There are many message threads here at GameDev about converting between bit depths. Just do a search on "16 to 24" or similar phrasing and you should get many good results.
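For a sense of what such a conversion involves, here is a minimal sketch (the helpers Pack565 and Convert24To16 are made-up names for illustration) that packs 24-bit RGB pixels into 16-bit 5:6:5, assuming the common BMP-style B,G,R byte order in the source:

#include <cstddef>
#include <cstdint>

// Keep the top 5/6/5 bits of each 8-bit channel and pack them into one WORD-sized value.
inline uint16_t Pack565(uint8_t r, uint8_t g, uint8_t b)
{
    return static_cast<uint16_t>(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
}

// Convert a run of 24-bit pixels (B, G, R in memory) to 16-bit 5:6:5.
void Convert24To16(const uint8_t* src24, uint16_t* dst16, size_t pixelCount)
{
    for (size_t i = 0; i < pixelCount; ++i)
    {
        const uint8_t b = src24[i * 3 + 0];
        const uint8_t g = src24[i * 3 + 1];
        const uint8_t r = src24[i * 3 + 2];
        dst16[i] = Pack565(r, g, b);
    }
}

A real loader would read the destination surface's channel bit masks rather than assume 5:6:5.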
SonicSilicon, wherever did you get the idea that textures must be in the same bit depth as the screen?

---

You can use all the bit depths supported by the card interchangeably in the same application without any trouble. And yes, the screen bit depth is independent of the textures.

There is no translation taking place in DX7, except at load time, where the stored texture is translated into a format supported by the card. This is, however, only true for texture creation with D3DX. If you don't use D3DX you have to do the translation yourself; you can see how this is done in d3dtextr.cpp, which comes with the D3DFrame.
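As a rough illustration of "doing the translation yourself" (in the spirit of d3dtextr.cpp, though this sketch is not taken from it), the hypothetical FillTexture below locks a DX7 texture surface and writes 24-bit source data in whatever RGB bit count the surface reports. The 16-bit branch assumes 5:6:5, and error handling is trimmed:

#include <windows.h>
#include <ddraw.h>

HRESULT FillTexture(LPDIRECTDRAWSURFACE7 pTexture,
                    const BYTE* pSrc24, DWORD width, DWORD height)
{
    DDSURFACEDESC2 ddsd;
    ZeroMemory(&ddsd, sizeof(ddsd));
    ddsd.dwSize = sizeof(ddsd);

    // Lock the texture surface to get a pointer to its pixels and its pixel format.
    HRESULT hr = pTexture->Lock(NULL, &ddsd, DDLOCK_WAIT, NULL);
    if (FAILED(hr))
        return hr;

    for (DWORD y = 0; y < height; ++y)
    {
        BYTE*       pDstRow = (BYTE*)ddsd.lpSurface + y * ddsd.lPitch;
        const BYTE* pSrcRow = pSrc24 + y * width * 3;

        for (DWORD x = 0; x < width; ++x)
        {
            BYTE b = pSrcRow[x * 3 + 0];
            BYTE g = pSrcRow[x * 3 + 1];
            BYTE r = pSrcRow[x * 3 + 2];

            if (ddsd.ddpfPixelFormat.dwRGBBitCount == 16)
            {
                // Assumes a 5:6:5 surface; real code should walk the bit masks.
                ((WORD*)pDstRow)[x] = (WORD)(((r >> 3) << 11) | ((g >> 2) << 5) | (b >> 3));
            }
            else if (ddsd.ddpfPixelFormat.dwRGBBitCount == 32)
            {
                ((DWORD*)pDstRow)[x] = (0xFFul << 24) | (r << 16) | (g << 8) | b;
            }
        }
    }

    pTexture->Unlock(NULL);
    return DD_OK;
}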


- WitchLord

AngelCode.com - game development and more - Reference DB - game developer references
AngelScript - free scripting library - BMFont - free bitmap font generator - Tower - free puzzle game

Hmmm... interesting. I guess what I'm thinking is that textures end up as projected triangles on the back buffer and primary surface, and therefore must render at the same bit depth as those surfaces. Assuming I select 16-bit for the display (to support the majority of cards and conserve video memory) and use a 24-bit texture (acknowledging the loss of resolution in this case), logic dictates a translation has to take place somewhere. I can see several possibilities:

(1) during load of the bitmap into memory
(2) during the copy/blit of the memory image to a texture surface
(3) during the SetTexture()...DrawPrimitive() calls

My REAL question is: does it happen in (3), so that I can avoid writing a translation routine in (1) or (2), or invoking a utility function? (This is at least for the short term; I suspect that matching the primary surface would be more efficient!)
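For reference, checking whether a texture already matches the back buffer's format can be done at runtime with GetPixelFormat; a minimal sketch (FormatsMatch is a hypothetical helper, not from any SDK sample) might look like this:

#include <windows.h>
#include <ddraw.h>

bool FormatsMatch(LPDIRECTDRAWSURFACE7 pTexture, LPDIRECTDRAWSURFACE7 pBackBuffer)
{
    DDPIXELFORMAT texFmt, rtFmt;
    ZeroMemory(&texFmt, sizeof(texFmt));  texFmt.dwSize = sizeof(texFmt);
    ZeroMemory(&rtFmt,  sizeof(rtFmt));   rtFmt.dwSize  = sizeof(rtFmt);

    pTexture->GetPixelFormat(&texFmt);
    pBackBuffer->GetPixelFormat(&rtFmt);

    // Comparing the bit count and channel masks is enough for plain RGB formats.
    return texFmt.dwRGBBitCount == rtFmt.dwRGBBitCount &&
           texFmt.dwRBitMask    == rtFmt.dwRBitMask &&
           texFmt.dwGBitMask    == rtFmt.dwGBitMask &&
           texFmt.dwBBitMask    == rtFmt.dwBBitMask;
}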

Thanks SonicSilicon and WitchLord for your replies. It'll take some time to make sense of the d3dtextr.cpp code, but for the short term I'm just trying to get a general direction...
Ahh, that explains the confusion.

The textures are stored in a bit depth independent of the screen's bit depth.

I was under the impression that the textures had to be stored {on the video card} in the same bit depth as each other {not the screen}. I may be wrong, as Sp... er, WitchLord noted. Maybe that was true of an older version of DirectX, specifically 3, since I haven't looked at the documentation in a long time {about 5 years}. Or maybe I completely messed up.

As for your example of using textures of a higher bit depth than the screen:
The textures are stored as they are given to the card. All calculations {i.e. texture mapping} make use of all 24 bits of information even though the calculations themselves only handle 16 bits. The screen is NOT NECESSARILY calculated at 24-bit and then downsampled. Man, this is a confusing topic.
Anyone care to help?
Most cards use 8 bits per channel (ARGB) in their internal computations. This means that when the card accesses a texel in a texture, its color is first translated to 32-bit ARGB; then, after blending etc., it is translated to the pixel format of the render target.

I'm not completely sure about this, but I think the speed of this translation does not depend on the pixel format of either the render target or the texture. However, you will get an increase in framerate if you use 16-bit textures instead of, for example, 32-bit, because there is less data that needs to be transported from system memory to video memory.

In short: the pixel formats of textures and the render target are automatically translated by the hardware (not DirectX). You probably won't see any increase in speed just from matching the pixel format of the textures to that of the render target. (You may get increases for other reasons, though.)
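To make the "translated to 32-bit ARGB" step concrete, here is a purely illustrative sketch (Expand565ToARGB is a made-up name) of expanding a 16-bit 5:6:5 texel to 32-bit ARGB by replicating the high bits of each channel, which is roughly what the hardware does when it reads a texel:

#include <cstdint>

inline uint32_t Expand565ToARGB(uint16_t texel)
{
    // Pull out the 5-, 6-, and 5-bit channels.
    uint32_t r5 = (texel >> 11) & 0x1F;
    uint32_t g6 = (texel >> 5)  & 0x3F;
    uint32_t b5 =  texel        & 0x1F;

    // Scale to 8 bits by replicating the top bits into the low bits.
    uint32_t r = (r5 << 3) | (r5 >> 2);
    uint32_t g = (g6 << 2) | (g6 >> 4);
    uint32_t b = (b5 << 3) | (b5 >> 2);

    // Fully opaque alpha, then R, G, B.
    return 0xFF000000u | (r << 16) | (g << 8) | b;
}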



- WitchLord

AngelCode.com - game development and more - Reference DB - game developer references
AngelScript - free scripting library - BMFont - free bitmap font generator - Tower - free puzzle game

Thanks again for the replies... This is scary; I'm actually starting to understand this stuff, even the hardware implications! It does make sense, for example, for the hardware to operate internally on individual color bytes (for lighting etc.) rather than a packed 16-bit or paletted 8-bit format.

The hardware must get the texture and render-target pixel format info via DX7 from the related surface descriptions. I can picture the n-bit-to-32-bit translation being done by logic gates almost instantly as the pixel is read from memory!

It's all clear now; my question is answered!
This has been a good day...
I'm always happy to assist.

- WitchLord

AngelCode.com - game development and more - Reference DB - game developer references
AngelScript - free scripting library - BMFont - free bitmap font generator - Tower - free puzzle game

