[Design] Problems with Graphics Abstraction

Started by GenuineXP; last post by GenuineXP 17 years, 2 months ago
I made a similar post to this one a few days ago, but got no replies, so I thought I'd try to reorganize my question and leave out the extra baggage. :-)

I'm having more difficulty abstracting the graphics system of my engine. Specifically, my Window and Renderer classes are clashing because I'm not sure which should do and have what.

I originally didn't provide support for opening multiple windows, mostly because the API I'm currently using to implement the system (SDL) doesn't support it. So Window became a meager little class that Renderer::setVideoMode() would return after setting up video. Window provides some basic functionality that I thought was more related to the display window (i.e., Window) than to the graphics device (i.e., Renderer). For instance, Window can be used to minimize/restore the display window, set the caption text, show/hide the mouse cursor, and set the associated icon image. That's about it. Window has no data.

With multiple windows in mind, I've changed things a bit. Renderer::setVideoMode() now returns void. Instead of creating one (and only one) window, a Renderer binds a Window to work with. For convenience, setVideoMode() has an overload that takes a Window parameter and binds it. In this way, a Renderer can set up video for any number of windows and perform other operations on them, namely rendering. I can't provide a working implementation of this right now, but I want a design that supports it.

The problem I have now is that Window still has no data in it, and that seems incorrect. Renderer holds data about the current video mode, such as screen dimensions, color depth, etc. This is all bundled up in a VideoMetrics struct. I'm thinking that VideoMetrics should really be part of Window now, or perhaps even part of a new class representing render targets that Renderers can bind and work with, with Window deriving from it. It seems that VideoMetrics is something all windows (or render targets) need. Derived implementations of Window would store further information, such as handles or contexts.

With all of these changes, I've been wondering whether setVideoMode() even belongs in Renderer. It makes sense conceptually, but I'm not absolutely certain.

So, does this new approach make sense? Where should I put these different pieces? I know my questions are a bit nebulous (and I've added a lot of extra baggage again), but any input is appreciated. Thanks, and sorry for the long post.

EDIT: I've also been debating whether or not I should even provide support for multiple display windows. It's a very uncommon feature in games, after all. If I don't support it, should I even include a Window class?

[Edited by - GenuineXP on January 30, 2007 10:40:17 AM]
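To make the idea concrete, here's a rough C++ sketch of what I have in mind. The RenderTarget name and the exact fields and method signatures are just placeholders I made up for this post, not anything final; the point is that VideoMetrics would live in the target rather than in the Renderer, and the Renderer would bind targets instead of creating its one window.

    #include <string>

    // Bundles the current video settings (currently these live in Renderer).
    struct VideoMetrics
    {
        int  width;
        int  height;
        int  colorDepth;   // bits per pixel
        bool fullscreen;
    };

    // Placeholder base for anything a Renderer can draw to.
    // Window derives from it; off-screen targets could too.
    class RenderTarget
    {
    public:
        virtual ~RenderTarget() {}
        const VideoMetrics& metrics() const { return m_metrics; }
    protected:
        VideoMetrics m_metrics;
    };

    // The display window: caption, cursor, minimize/restore, icon.
    // Derived implementations (e.g. SDLWindow) add handles/contexts.
    class Window : public RenderTarget
    {
    public:
        virtual void setCaption(const std::string& text) = 0;
        virtual void showCursor(bool visible) = 0;
        virtual void minimize() = 0;
        virtual void restore() = 0;
    };

    // The graphics device. With the multi-window design, setVideoMode()
    // binds an existing Window rather than creating one.
    class Renderer
    {
    public:
        virtual ~Renderer() {}
        virtual void bind(Window& target) = 0;
        virtual void setVideoMode(Window& target, const VideoMetrics& mode) = 0;
        virtual void render() = 0;
    };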
After some thinking and looking at a few examples, I've opted NOT to add support for multiple windows. It's pointless and complicates the design for little benefit. However, I'm still wondering whether my little Window class has a place in all of this.

I'm back to setVideoMode() returning an instance of a Window object that clients can use to control the display window. Should that functionality stay in the Window class? It seems to make sense in the code because there's a clear separation of API calls: SDLRenderer (the SDL/OpenGL implementation of Renderer) makes only a handful of SDL calls (to set the video mode, set up OpenGL attributes, etc.) and many OpenGL calls, while SDLWindow calls only the SDL_WM_*() family of functions (none of which appear in SDLRenderer).
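Roughly, the split looks like this. This is just a sketch to show which API each class touches (I'm on SDL 1.2); in the engine these derive from Window and Renderer and do more, and the cursor toggle is technically SDL_ShowCursor() rather than an SDL_WM_*() call.

    #include <SDL.h>
    #include <SDL_opengl.h>

    // SDLWindow: only window-manager calls (SDL_WM_*, plus the cursor toggle).
    class SDLWindow
    {
    public:
        void setCaption(const char* text) { SDL_WM_SetCaption(text, 0); }
        void minimize()                   { SDL_WM_IconifyWindow(); }
        void showCursor(bool visible)     { SDL_ShowCursor(visible ? SDL_ENABLE : SDL_DISABLE); }
    };

    // SDLRenderer: video-mode setup plus all the OpenGL calls.
    class SDLRenderer
    {
    public:
        SDLRenderer() : m_window(0) {}
        ~SDLRenderer() { delete m_window; }

        // Sets up the (single) display window and hands a Window back to the client.
        SDLWindow* setVideoMode(int width, int height, int bpp, bool fullscreen)
        {
            if (SDL_Init(SDL_INIT_VIDEO) < 0)
                return 0;

            SDL_GL_SetAttribute(SDL_GL_DOUBLEBUFFER, 1);

            Uint32 flags = SDL_OPENGL;
            if (fullscreen)
                flags |= SDL_FULLSCREEN;

            if (!SDL_SetVideoMode(width, height, bpp, flags))
                return 0;

            delete m_window;
            m_window = new SDLWindow;
            return m_window;
        }

        void render()
        {
            glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);
            // ... draw the scene with OpenGL calls ...
            SDL_GL_SwapBuffers();
        }

    private:
        SDLWindow* m_window;
    };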

Seems to make sense, no?

