

voodoo

OpenGL Question: PixelFormat in OpenGL



OK, what values should I set for the color bits and the z-buffer bits in the pixel format structure? Currently I set both to the current bit depth of my app. My app is set up to handle multiple resolutions, i.e., if I choose to start my app at 16 bits I set both to 16 bit; then, while running, I might decide to switch to full screen or another resolution, say 32 bit. I destroy the window and reset the pixel format structure to the new bit depth. Is this good, or should I set them both to a constant and never have them change?

The pixel format stuff should always be the same for any mode your app might be running under (unless you'll allow switches from 8 bpp [indexed] to 16, 24, or 32 bpp [non-indexed]). This is what I have in my project right now (slightly modified):

PIXELFORMATDESCRIPTOR pfd;

// set the pixel format for the game window
memset(&pfd, 0, sizeof(PIXELFORMATDESCRIPTOR));
pfd.nSize      = sizeof(PIXELFORMATDESCRIPTOR);
pfd.nVersion   = 1;
pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL | PFD_DOUBLEBUFFER | PFD_SWAP_EXCHANGE;
pfd.iPixelType = PFD_TYPE_RGBA;
pfd.cColorBits = 24;        // color bits in the frame buffer
pfd.cDepthBits = z_depth;   // z-buffer precision
pfd.iLayerType = PFD_MAIN_PLANE;
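
For completeness, the descriptor above is normally handed to ChoosePixelFormat/SetPixelFormat next; a rough sketch of those steps (hdc is assumed to be the window's device context, and error checking is omitted):

int format = ChoosePixelFormat(hdc, &pfd);  // ask Windows for the closest matching format
SetPixelFormat(hdc, format, &pfd);          // bind it to the DC -- only allowed once per window
HGLRC hrc = wglCreateContext(hdc);          // create the GL context on that format
wglMakeCurrent(hdc, hrc);

SetPixelFormat can only be called once for a given window, which is part of why the window gets destroyed and recreated when the mode changes.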

Now that I'm thinking about it, I'm not entirely sure that cColorBits should always be 24 anymore. Does anyone know? I'm under the assumption that cColorBits only refers to the number of bits used to hold color information in the frame buffer -- 24- or 32-bpp modes will always use 8 bits per channel, and 16-bpp modes are just weird...so just use 8 bits per channel, too. =)
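
One way to check that empirically: after choosing a format, ask the driver what it actually gave you. A small sketch, reusing hdc and format from the snippet above (needs <stdio.h>):

// The driver may silently substitute a different format, so read back the real values:
PIXELFORMATDESCRIPTOR actual;
DescribePixelFormat(hdc, format, sizeof(actual), &actual);
printf("color bits: %d, depth bits: %d\n", actual.cColorBits, actual.cDepthBits);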

cDepthBits, I think, can be just about anything. I'd assign it either a value held in some user-modifiable variable or some constant. MSDN's example uses 32, and that may be just fine. Test your app to see what works -- I need to do the same. =)
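
Along those lines, one simple way to "test what works" is to request a generous depth and fall back -- a sketch, not from the original post:

// Try progressively smaller depth buffers; ChoosePixelFormat already picks a
// closest match, so checking the result with DescribePixelFormat (as above)
// tells you what you really ended up with.
static const BYTE depths[] = { 32, 24, 16 };
int format = 0;
for (int i = 0; i < 3 && format == 0; ++i)
{
    pfd.cDepthBits = depths[i];
    format = ChoosePixelFormat(hdc, &pfd);
}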

I'd appreciate some other comments/agreements/corrections from you gurus out there. =)

Yeah, I do allow switching resolutions. Basically the window is destroyed and then recreated, so I set the bits of the descriptor to the current bit depth.
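
If it's useful, the current desktop depth can also be read straight from a device context, so the descriptor can just mirror whatever mode is active at the moment -- a sketch:

// Query the bit depth of the current display mode and copy it into the descriptor:
HDC screen = GetDC(NULL);                    // DC for the whole screen
int bpp = GetDeviceCaps(screen, BITSPIXEL);  // bits per pixel of the current mode
ReleaseDC(NULL, screen);
pfd.cColorBits = (BYTE)bpp;                  // e.g. 16 or 32, matching the mode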

I use 16 bpp or 32 bpp for the color bits. 24 isn't supported on some modern cards (i.e., the one I have) and, when set that way, sometimes behaves strangely.


*oof*

Now that I think about it, it was pretty silly of me to have set it to 24 all the time, when cColorBits should contain the number of bits used for the frame buffer, which would be different for different color depths. =) Silly me. The thing I'm still left wondering is exactly what it means to have an x-bit Z-buffer. That is to say, let's assume that I'm using an 8-bit Z-buffer and I use glVertex3f...how does a 32-bit, floating-point value get shrunk down to 8 bits? Is it just that? That, of course, being a/the reason one wants a larger Z-buffer? =)
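
Roughly what happens, as far as I understand it: after projection and the perspective divide the depth value is mapped into [0,1], and an N-bit Z-buffer just stores that value quantized to N bits, so fewer bits means coarser depth steps (and more Z-fighting). A small stand-alone illustration of the quantization, not from the post:

#include <stdio.h>

/* Quantize a normalized depth value (0..1) to an N-bit integer,
   the way a fixed-point Z-buffer effectively stores it. */
unsigned long quantize_depth(double z01, int bits)
{
    unsigned long max_value = (1UL << bits) - 1;
    return (unsigned long)(z01 * max_value + 0.5);
}

int main(void)
{
    /* With 8 bits there are only 256 distinct depths, so nearby values collapse
       into the same bucket; with 24 bits they stay distinct. */
    printf("z=0.5000 -> 8-bit: %lu, 24-bit: %lu\n",
           quantize_depth(0.5000, 8), quantize_depth(0.5000, 24));
    printf("z=0.5001 -> 8-bit: %lu, 24-bit: %lu\n",
           quantize_depth(0.5001, 8), quantize_depth(0.5001, 24));
    return 0;
}

On top of that, the perspective projection makes the stored depth non-linear in eye-space Z, so most of the precision sits near the near plane -- which is the usual reason people want a deeper Z-buffer.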
