quote:Original post by dmikesell
quote:
If you just want to make sure that your D3D renderer doesn''t try to use objects from an OpenGL one, then the obvious solution would be to add a pure virtual function to the interface classes that just returns a value indicating whether this object uses GL or D3D. Then, just check the return value of this function in each call to the renderer functions.
Every call, every frame? If each renderer accepts different kinds of objects, why put them in the same hierarchy? Just have two renderers and use whichever one the user picks at runtime.
"Just use whichever one the user picks at runtime?" That involves duplicating every bit of graphics related code in my entire program.
If I derive them both from the same interface, I save a ton of coding time because I can use the same functions for MOST operations. Some operations, however (for example, those that take D3D- or OGL-specific parameters), must be duplicated because of the nature of the APIs being wrapped.
It's probably possible to abstract every single thing in both APIs so that they look exactly the same and can be used interchangeably; however, I doubt the effort that would take (not to mention the performance loss from being unable to apply API-specific optimizations) is worth the academic cleanliness of the resulting hypothetical code. Instead, I make a couple of dynamic_cast calls and get the benefits of direct access for all of the DX implementations while keeping the benefits of the abstraction for the rest of the engine.