Archived

This topic is now archived and is closed to further replies.

LilBudyWizer

Descriptors

Recommended Posts

Within a number of APIs, particularly Windows and DirectX, you have descriptors that control a great many things in a single function call. Either you pass a data structure or you pass a bunch of parameters. Often there are rules that if you set one field to a certain value, another field is ignored or restricted in value. Most often you default a large part of it. So it seems logical to encapsulate it in a class: the constructor sets the defaults, and accessor functions override the defaults and allow you to perform edit checks.

Viewed on its own, the descriptor seems clear-cut and straightforward. The problem is that it isn't standalone. As a specific example, with OpenGL under Windows you have to set a pixel format for a device context before creating a rendering context on it. You have to set the pixel format for certain options to be valid in OpenGL, and then you have to tell OpenGL to actually use those options. An example would be double buffering: the device context has to support it, and then you have to tell OpenGL to draw to the back buffer. So ultimately what you actually want to encapsulate is OpenGL, and the pixel format descriptor is a side issue, though a crucial one.

The user has to control the pixel format. You could just give them a method that takes your encapsulation of the PIXELFORMATDESCRIPTOR, but that seems crude. That is not hiding the details from the user. The idea is to have the user focused on what they want to do, not on the technical details of how to do it. The technical details should only be apparent when they attempt the impossible, or where they need to deliver resources to you, such as a window handle. So an alternative is for them to modify the pixel format indirectly. One way would be to default the pixel format and then provide functions that let them do something like enable double buffering.
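As a minimal sketch of the wrapper idea: the struct and class names below (`RawPixelFormat`, `PixelFormat`) are hypothetical stand-ins, not the real Windows `PIXELFORMATDESCRIPTOR`, but they show the constructor-sets-defaults, setters-do-edit-checks shape.

```cpp
#include <stdexcept>

// Hypothetical stand-in for PIXELFORMATDESCRIPTOR: just enough fields
// to illustrate the pattern, not the real Windows struct.
struct RawPixelFormat {
    int  colorBits;
    int  depthBits;
    bool doubleBuffer;
};

// Wrapper class: the constructor supplies sensible defaults and the
// setters perform the edit checks the raw struct cannot.
class PixelFormat {
public:
    PixelFormat() : raw_{32, 24, false} {}   // defaults chosen up front

    void setColorBits(int bits) {
        if (bits != 16 && bits != 24 && bits != 32)
            throw std::invalid_argument("unsupported color depth");
        raw_.colorBits = bits;
    }

    void enableDoubleBuffer(bool on) { raw_.doubleBuffer = on; }

    // Hand the validated struct to the code that talks to the API.
    const RawPixelFormat& raw() const { return raw_; }

private:
    RawPixelFormat raw_;
};
```

The user never fills the raw struct by hand; they only state intent through the setters, and invalid combinations are caught at the point of the mistake.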
If the pixel format in use doesn't support it, then you destroy the rendering context, change the pixel format, recreate the rendering context, and tell OpenGL to draw to the back buffer. That begins to behave more like you would want: the user just says what they want done and you handle the details of doing it. One drawback is that you could end up creating and destroying a context over and over during initialization. When it comes to performance, initialization/termination is generally a pretty petty issue. Since you don't change screen format often, it would be here, but in other situations it may not be.

So an alternative is to instantiate the class but not actually attach until you are told explicitly to do so. The problem with the create/attach/detach/destroy sequence is that it introduces situations where methods may or may not be valid depending upon what you did or did not do before. So another alternative is to do everything needed to make a call valid at the moment the call is made, assuming you have enough information to actually do it. If the user attempts to draw a line, you attach to the device context using the default pixel format. Now you are getting into situations where you can actually start to impact performance: all these extra checks every time you make an OpenGL call start to add up, when most of the time they make no difference. It is basically error checking with error-recovery logic. On the one hand you are impacting performance in what may be a performance-critical routine, and on the other you have a fragile class that is easily broken.

Another thought is that a descriptor is basically a call list. What the single call does is what could have been done with many separate calls. So you have this huge descriptor just to save you a bunch of calls, even though the only real savings is in error checking, i.e. an assignment and a function call become one statement.
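The attach-on-demand and recreate-on-change ideas can be sketched together. Everything here is hypothetical (`Renderer`, the generation counter standing in for a real context); in real code `attach()` and `recreateContext()` would call the actual WGL functions.

```cpp
// Hypothetical renderer that hides context management. The context is
// modeled as a generation counter so the sketch needs no real window
// system; each increment stands for a destroy-and-recreate.
class Renderer {
public:
    void enableDoubleBuffer() {
        if (!doubleBuffered_) {
            doubleBuffered_ = true;
            if (attached_) recreateContext();  // format changed: rebuild
        }
    }

    // Any drawing call attaches on demand with whatever format is set.
    // This is the per-call check the post worries about: it runs on
    // every draw, even though it almost never does anything.
    void drawLine() {
        if (!attached_) attach();
        ++linesDrawn_;
    }

    bool attached() const { return attached_; }
    int  contextGeneration() const { return generation_; }
    int  linesDrawn() const { return linesDrawn_; }

private:
    void attach()          { attached_ = true; ++generation_; }
    void recreateContext() { ++generation_; }  // destroy + create again

    bool attached_ = false;
    bool doubleBuffered_ = false;
    int  generation_ = 0;   // how many contexts have been created so far
    int  linesDrawn_ = 0;
};
```

Note how changing an option after the first draw costs a whole context rebuild, which is exactly the create/destroy churn described above.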
So you could allow the user to literally build a call list. It would then be like a display list in OpenGL or a vertex buffer in DirectX. You then wouldn't have to repeatedly release/acquire resources or repeatedly check whether the context is valid for the call. Management of the list takes work, though.

So what I'm wondering is how others have approached this type of problem. It isn't specifically how you handle pixel formats for OpenGL, but the general problem of descriptors. Although I can come up with many alternatives, none of them seem to flow nicely. That makes me think either I'm missing an alternative or an alternative I do have is incomplete. The other possibility is that it simply takes more work than I think I should have to put into it. I'm a programmer because I'm lazy, so that is a definite possibility.
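The call-list idea can be sketched with a vector of deferred calls; the `CallList` name is hypothetical, and a real implementation would validate the context once in `replay()` instead of once per recorded call.

```cpp
#include <functional>
#include <vector>

// Hypothetical call list: record the individual calls once, then
// replay them as a batch. The point is that validity checking can
// happen once per replay rather than once per call.
class CallList {
public:
    void record(std::function<void()> call) {
        calls_.push_back(std::move(call));
    }

    // In real code this is where you would check the context is
    // attached and valid, once for the whole batch.
    void replay() {
        for (auto& call : calls_) call();
    }

    std::size_t size() const { return calls_.size(); }

private:
    std::vector<std::function<void()>> calls_;
};
```

Like a display list, the recorded batch can be replayed repeatedly without rebuilding it, which is where the management cost of the list pays for itself.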

Well, to handle the case where certain things need to happen before others, use assertions and status flags. That way the user is automatically made aware of the bug and can eliminate it in a debug build, yet incurs no release-build penalty. I would use the create/attach/detach/destroy sequence for managing classes; it's almost always more flexible, and sometimes more efficient.
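A minimal sketch of that assert-plus-status-flag pattern, assuming a hypothetical `Context` class with the create/attach/detach/destroy sequence: out-of-order calls trip the assert in a debug build, while an `NDEBUG` build compiles the checks away entirely.

```cpp
#include <cassert>

// Hypothetical context wrapper. Status flags track the lifecycle and
// asserts document the valid ordering; misuse fails loudly in debug
// builds and costs nothing in release builds.
class Context {
public:
    void create()  { created_ = true; }
    void attach()  { assert(created_  && "attach() before create()");  attached_ = true; }
    void detach()  { assert(attached_ && "detach() before attach()");  attached_ = false; }
    void destroy() { assert(!attached_ && "destroy() while attached"); created_ = false; }

    void draw() {
        assert(attached_ && "draw() requires an attached context");
        ++draws_;
    }

    int draws() const { return draws_; }

private:
    bool created_  = false;
    bool attached_ = false;
    int  draws_    = 0;
};
```

The sequencing rules live in one place and never run in the shipped build, which is the trade-off being argued for here.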

Call lists can be sub-optimal because they do not minimize state transitions in the renderer. So whatever design you decide on to describe drawing should be geared towards minimizing those changes, e.g. in a given scene you want to draw polygons that use the same texture one after the other.
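The texture-batching point can be sketched as a sort pass over a queued set of draws; `DrawCall` and `sortAndCountBinds` are hypothetical names, and the bind count stands in for the real cost of a state transition.

```cpp
#include <algorithm>
#include <vector>

// Hypothetical queued draw: which texture a polygon needs.
struct DrawCall {
    int textureId;
    int polygonId;
};

// Sort the queue so draws sharing a texture are adjacent, then count
// how many texture binds (state transitions) the sorted order costs.
int sortAndCountBinds(std::vector<DrawCall>& queue) {
    std::stable_sort(queue.begin(), queue.end(),
                     [](const DrawCall& a, const DrawCall& b) {
                         return a.textureId < b.textureId;
                     });
    int binds = 0;
    int lastTexture = -1;
    for (const auto& call : queue) {
        if (call.textureId != lastTexture) {  // a state transition
            ++binds;
            lastTexture = call.textureId;
        }
    }
    return binds;
}
```

With the sort, the number of binds equals the number of distinct textures rather than the number of texture switches in submission order.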

The best way to ensure that everything is drawn in such a sorted order, I don't know... someone with more high-performance rendering experience is going to need to answer that.

Magmai Kai Holmlor

"Oh, like you''ve never written buggy code" - Lee

"What I see is a system that _could do anything - but currently does nothing !" - Anonymous CEO
