jujumbura

D3D initialization, how do I choose?


Good morning community,

So I've been trying to build up some basic initialization code for Direct3D, based on the Visual Studio tutorials and other examples I've found on the net. What I'm currently struggling with is choosing an appropriate DisplayMode. I have seen a number of examples talk about searching through all of the available display modes on a device. This seems like a pretty good idea to me: either I can specify what mode I'd like to use and take that or the next best fit, or I can let the user choose explicitly from a list built up from the search.

Here's what I've run into, though: both GetAdapterModeCount() and EnumAdapterModes() require a D3DFORMAT in their parameter list. So for example, GetAdapterModeCount( D3DADAPTER_DEFAULT, D3DFMT_X8R8G8B8 ) would return the number of display modes that the primary display device supports in 32-bit color, right? Then I can loop through the modes and check each one with EnumAdapterModes() to see if it is the width/height I want. But how am I supposed to know which format I want in the first place? There are three 32-bit color formats: D3DFMT_X8R8G8B8, D3DFMT_A2R10G10B10, and D3DFMT_A8R8G8B8. I believe that the "A" part is how many bits are used for alpha, and the rest correspond to RGB bits. So what the heck are the "X" bits for? And what's the difference between having 2 bits for alpha vs 8 bits? Is it a tradeoff of alpha range vs color range?

Lastly, MSDN says that the alpha value is ignored. To my understanding, this is because the display adapter doesn't give a hoot about alpha, only your backbuffer does. So you only need an "OK" from the adapter for your color depth, and the alpha is just something between you and DirectX, but you pass in what you intend to use for your backbuffer format for simplicity. Am I right about all of this? Or any of it?

Every freakin' tutorial sets this stuff up without saying how they came about choosing their formats in the first place. I'd love to hear any sage advice about D3D initialization. Thanks much as always.

Mike
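For reference, here is a minimal sketch of the enumeration loop being described, assuming a D3D9-style IDirect3D9 interface obtained from Direct3DCreate9(); the desiredWidth/desiredHeight parameters are placeholders, not part of any D3D API:

```cpp
#include <d3d9.h>
#include <vector>

// Collect the display modes on the primary adapter that match a requested
// resolution for one format. Several modes may match, differing only in
// refresh rate.
std::vector<D3DDISPLAYMODE> FindMatchingModes(IDirect3D9* d3d,
                                              UINT desiredWidth,
                                              UINT desiredHeight)
{
    std::vector<D3DDISPLAYMODE> matches;
    const D3DFORMAT format = D3DFMT_X8R8G8B8;   // 32-bit, the X bits are unused

    // How many modes does the primary adapter expose in this format?
    const UINT count = d3d->GetAdapterModeCount(D3DADAPTER_DEFAULT, format);

    for (UINT i = 0; i < count; ++i)
    {
        D3DDISPLAYMODE mode;
        if (SUCCEEDED(d3d->EnumAdapterModes(D3DADAPTER_DEFAULT, format, i, &mode)))
        {
            if (mode.Width == desiredWidth && mode.Height == desiredHeight)
                matches.push_back(mode);
        }
    }
    return matches;
}
```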

I've never personally seen anything used other than D3DFMT_X8R8G8B8. This is indeed 32-bit color, where 8 bits are completely ignored. D3DFMT_A8R8G8B8 is the same, but the 8 bits are alpha. I don't know if this would work as a display format or not, but if it does, it will essentially act like D3DFMT_X8R8G8B8 anyway, since, as you said, displays don't care about alpha. I also don't believe backbuffers care about alpha, unless you've manually created your own render target texture and such. I could be wrong, though.

D3DFMT_A2R10G10B10's purpose is indeed to get higher accuracy with colors, at the cost of significantly lower accuracy for alpha. However, I don't expect many cards to be able to work with it. D3DFMT_X8R8G8B8 is by far the most used backbuffer/display format. The only other one I'd ever expect to use is D3DFMT_R5G6B5 for 16-bit color.
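Rather than guessing whether a particular display/backbuffer pairing works, one option is to ask the D3D9 runtime directly. A hedged sketch using IDirect3D9::CheckDeviceType; the windowed flag and the specific format pair here are just illustrative:

```cpp
#include <d3d9.h>

// Returns true if the HAL device on the primary adapter accepts an
// X8R8G8B8 display mode paired with an A8R8G8B8 back buffer.
bool IsFormatPairSupported(IDirect3D9* d3d, bool windowed)
{
    const HRESULT hr = d3d->CheckDeviceType(D3DADAPTER_DEFAULT,
                                            D3DDEVTYPE_HAL,
                                            D3DFMT_X8R8G8B8,   // adapter (display) format
                                            D3DFMT_A8R8G8B8,   // back buffer format
                                            windowed ? TRUE : FALSE);
    return SUCCEEDED(hr);
}
```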

Oh, word? Well, that's good to know. What about alpha blending and stuff? Can you still do that in D3DFMT_X8R8G8B8? Or textures with alpha values (holes you can see through)? Would I need D3DFMT_A8R8G8B8 for any of those things?

First, look at the help on D3DFORMAT. Near the end is a list of valid backbuffer formats; there are only about five of them. Also, while the docs say enumerating with either of the 16-bit modes X1R5G5B5 and R5G6B5 gives identical results, when I tried it they produced different results.

X bits are unused bits. For a while there were cards with 24-bit backbuffers (R8G8B8). This is bad for memory, which likes to fetch 32 bits (or a multiple) at 32-bit (or multiple) offsets; fetching the second pixel in a 24-bit mode would take two memory fetches. The solution was to just go ahead and waste memory to gain speed. Most cards these days don't offer 24-bit modes.

For alpha, yes, your monitor can't be transparent, so alpha is ignored in terms of what display modes can be used. Asking for an A8R8G8B8 backbuffer allows you to store alpha values in your backbuffer, and later use blending with DESTALPHA. For regular alpha blending based on the alpha in a texture, you don't need alpha in the back buffer.
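To make that distinction concrete, here is a minimal sketch of the two blending setups using D3D9 render states; the device pointer and the useDestAlpha flag are assumptions for illustration:

```cpp
#include <d3d9.h>

// 'device' is assumed to be an already-created IDirect3DDevice9*.
void SetupBlending(IDirect3DDevice9* device, bool useDestAlpha)
{
    device->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);

    if (!useDestAlpha)
    {
        // Ordinary alpha blending driven by the texture/vertex alpha.
        // Works fine with an X8R8G8B8 back buffer.
        device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_SRCALPHA);
        device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
    }
    else
    {
        // Blending against alpha previously stored in the back buffer.
        // This is the case that needs an A8R8G8B8 back buffer.
        device->SetRenderState(D3DRS_SRCBLEND,  D3DBLEND_DESTALPHA);
        device->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVDESTALPHA);
    }
}
```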

Typically a game will decide its format something like this:

We have 16-bit art. We don't have lighting to introduce new bits of data. We might as well try for 16-bit and go to 32 if needed.

We want nice pictures, and don't care enough about the end user. We just require 32-bit mode.

We'd prefer nice pictures, but some people's cards might not be powerful enough to handle 32-bit everywhere. We'll allow a checkbox to scale back the backbuffer to 16-bit, the textures to 16-bit, etc.

Most cards these days do 32-bit just fine, and when prototyping it's much simpler to just stick with A8R8G8B8 in the backbuffer and worry about possibly using a lower bit depth later.
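A minimal sketch of that "prefer 32-bit, optionally fall back to 16-bit" policy; the allow16Bit flag stands in for a hypothetical user-facing option and is not part of any D3D API:

```cpp
#include <d3d9.h>

// Walk a preference-ordered list of back buffer formats and return the
// first one the HAL device accepts, or D3DFMT_UNKNOWN if none work.
D3DFORMAT ChooseBackBufferFormat(IDirect3D9* d3d, bool allow16Bit, bool windowed)
{
    const D3DFORMAT candidates[] = { D3DFMT_A8R8G8B8, D3DFMT_X8R8G8B8, D3DFMT_R5G6B5 };

    for (D3DFORMAT fmt : candidates)
    {
        if (fmt == D3DFMT_R5G6B5 && !allow16Bit)
            continue;

        // Display format: X8R8G8B8 for the 32-bit candidates, R5G6B5 for 16-bit.
        const D3DFORMAT display = (fmt == D3DFMT_R5G6B5) ? D3DFMT_R5G6B5
                                                         : D3DFMT_X8R8G8B8;

        if (SUCCEEDED(d3d->CheckDeviceType(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL,
                                           display, fmt, windowed ? TRUE : FALSE)))
            return fmt;
    }
    return D3DFMT_UNKNOWN;   // nothing usable found
}
```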

Hmm a'ight.

What all is going on when I select different color depths? I mean, colors are represented as 3 floats in DirectX in general. If I am in D3DFMT_X8R8G8B8 mode, does that mean that my "red" float gets scaled and truncated to fit into 8 bits? So choosing a format like D3DFMT_A2R10G10B10 would give me more values between colors, because there are more bits to represent the color range?

Textures are often 8 bits per element (i.e. X8R8G8B8). Lighting calculations are often stored in a D3DCOLOR (A8R8G8B8), or computed in a shader as "floats". That's in quotes because between the vertex shader and the pixel shader the interpolators aren't float based. DX8-era cards tend to use 8 or 9 bits. More bits are available for texture coordinates, but using a ps.1.1 shader to read the coords as colors may truncate the values to 8 or 9 bits again. DX9 cards may support FP16 or FP32 interpolation. I don't know if there's any guaranteed extra precision for alpha blending, but it's likely done at a similar 8, 9, 16, or 32 bits of precision, depending on the card.

So, for a single pass the results may come from mixing 8-bit colors with 8-bit lights, then alpha blending the 8-bit result with an 8-bit frame buffer, or maybe with a 5-bit frame buffer (R5G5B5). The result is lots of rounding error at each step, with the possibility of lots more rounding error when writing to the frame buffer.

Now, if you're doing multi-pass, you're going to take your rounded off values, alpha-blend them with possibly truncated frame-buffer values, then round off and/or truncate the result again, and store it in the frame buffer. Doing several passes really brings out the lack of precision in a 16 bit frame buffer.

While the alpha is ignored, the display depth matches the backbuffer depth. You ask for a 16-bit display by asking for a 16-bit backbuffer. So, asking for different color depths is choosing how much rounding off you can live with. Lower bit-depths may also run faster, as there is less memory bandwidth being used. For something like Doom3 doing a pass per light, a 16 bit buffer really isn't going to look nice as the rounding errors keep creeping in, over and over.
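As a toy illustration (not D3D code) of why low bit depths suffer under repeated passes, here is a small sketch that quantizes one color channel to 5 or 8 bits after each simulated 50% blend and compares it to a full-precision reference; the starting value and blend color are arbitrary:

```cpp
#include <cmath>
#include <cstdio>

// Round a [0,1] value to the nearest step representable with 'bits' bits.
float Quantize(float value, int bits)
{
    const float levels = float((1 << bits) - 1);        // 31 for 5 bits, 255 for 8 bits
    return std::floor(value * levels + 0.5f) / levels;
}

int main()
{
    float exact = 0.3f;                    // full-precision reference
    float fb5   = Quantize(0.3f, 5);       // "16-bit" style frame buffer channel
    float fb8   = Quantize(0.3f, 8);       // "32-bit" style frame buffer channel

    for (int pass = 0; pass < 8; ++pass)
    {
        const float src = 0.6f;                          // blended-in color for this pass
        exact = 0.5f * exact + 0.5f * src;               // no rounding
        fb5   = Quantize(0.5f * fb5 + 0.5f * src, 5);    // rounded to 5 bits each pass
        fb8   = Quantize(0.5f * fb8 + 0.5f * src, 8);    // rounded to 8 bits each pass
    }
    std::printf("exact %.5f   5-bit %.5f   8-bit %.5f\n", exact, fb5, fb8);
    return 0;
}
```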

With floats, 1.0 is full intensity and 0.0 is black. In X8R8G8B8 each component gets 8 bits, or a value from 0 to 255. So if your "red" float is 0.5, it becomes 127 (or 128?) on the screen. In A2R10G10B10 each component gets a 10-bit value, that is, from 0 to 1023. So your "red" float of 0.5 becomes 512 on screen. Oh, and most video cards can't display R10G10B10 as far as I know.
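A tiny sketch of that mapping; note that whether 0.5 lands on 127 or 128 depends on how the hardware rounds, so the round-to-nearest used here is just one convention:

```cpp
#include <cstdio>

int main()
{
    const float red = 0.5f;
    const int as8bit  = int(red * 255.0f + 0.5f);    // 0..255  (X8R8G8B8)    -> 128
    const int as10bit = int(red * 1023.0f + 0.5f);   // 0..1023 (A2R10G10B10) -> 512
    std::printf("8-bit: %d   10-bit: %d\n", as8bit, as10bit);
    return 0;
}
```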

So just for the record, is D3DFMT_X8R8G8B8 pretty much supported on all new video cards?

Quote:
Original post by Namethatnobodyelsetook
X8R8G8B8 has been supported since, say, the mid 90s. It will be available on any card you'd still consider using today.

Just to make sure, can someone else confirm this? I am reusing my D3D Initialization code that checks for both X8R8G8B8 and A8R8G8B8, but I might as well dump the A8R8G8B8 code if I don't need it...
