Switch between D3D and OGL

Started by
24 comments, last by Dae 18 years, 4 months ago
I guess I didn't choose my words well when I said "the video card wants..."

What I mean is what Yann L said. The video card will accept either way. But generally you get better performance if you interleave.

The GameCube is similar. It will take either. But you get better performance by not interleaving.

This all goes back to how I was saying "I don't want my engine to *work* on these platforms, but only work *well* on one or two." So I didn't want to force interleaved and then make my GameCube port useless.
Thanks for the input, but there's one post I don't understand.

Quote:Original post by Yann L
Quote:Original post by Grain
How is it not useful? His problem is exactly what the bridge pattern is for.

Because the additional dereference through the Abstraction object is completely useless in this case. The bridge pattern is useful if you want to switch Implementation on the fly. For example, if you had both an OpenGL_Implementation and a D3D_Implementation object instantiated, and switched between them at runtime. But you can't do that. You either use one or the other. You select one at startup, create an object of that type, and access it over the Implementation base class. If you want to change renderers, you destroy the current rendering object (don't forget the virtual dtor!), and create a new one. Two instantiated OpenGL and D3D objects can and should never coexist at the same time.

Going over Abstraction is just useless overhead.

Quote:
Can anyone shed light on why I was recommended to add a mutex for that new statement? It was said that multiple threads might try to access it, and that I should add one even though I have no plans to use multiple threads. Bezben, meanwhile, says he doesn't think I need the mutex.

Unless you plan to multithread your rendering code (which you should probably not do), mutexes are not needed. Some people like to put them everywhere "just in case", but this shows a lack of understanding of what a mutex actually is. Adding them in your context (i.e. single-threaded rendering) is not only useless from a functional point of view, but can also become a performance problem. Mutexes add overhead, so use them only where actually needed.


Grain's bridge pattern is exactly what I'm using, if you take into account what else he said: "And when you want to switch between OpenGL and Direct3D you change the mImplementation pointer to either a OpenGL_Implementation instance or a D3D_Implementation instance". That also fits with you saying not to have objects of both types created at the same time. But you say not to have the extra abstraction? Isn't that sort of necessary, though? I want to call System->Render(), for example, and have it dispatch to whichever renderer my options selected (D3D or OGL), and the only way I know how to do that is through a pointer to the interface. I can't return a pointer to a D3D- or OGL-specific object, or else I'd be calling System->D3DRender() or something, wouldn't I?

Quote:Feel free to ignore me, but really keep asking yourself what you want out of a project of this size. Is it just ego, or do you have legitimate reasons for doing so?

I am actually only going with OpenGL. I just wanted to add a D3D option in there should I ever want to convert it to D3D, or practice what I learn when I finally learn D3D, or in case someone who knows D3D wants to help but doesn't use OGL, etc. Other than that, I guess I just added it for fun.
010001000110000101100101
Quote:Original post by ProgramMax
Again, agreed. The Deus Ex example I gave is due to the Unreal Engine not being originally designed with consoles in mind. The console port is in turn not as good.

The OpenGL port was much worse ;) But you're right of course, porting a 3D engine to a platform it was not originally designed for is very difficult, and will almost always result in sub-par quality and performance. But this also applies to other parts of software development - just try to port any software making extensive use of hard casts and assuming Intel byte order to OSX... Welcome to a world of pain. Good upfront planning is extremely important for such projects.

Quote:
Grain's bridge pattern is exactly what I'm using, if you take into account what else he said: "And when you want to switch between OpenGL and Direct3D you change the mImplementation pointer to either a OpenGL_Implementation instance or a D3D_Implementation instance". That also fits with you saying not to have objects of both types created at the same time. But you say not to have the extra abstraction?

Well, assume the following class structure:
class CRenderer {
   public:
      CRenderer() { }
      virtual ~CRenderer() { }
      virtual void Init() = 0;
      virtual void Render() = 0;
      ... etc ...
};

class CRenderer_OpenGL : public CRenderer {
   public:
      virtual void Init() { /* OpenGL init */ }
      virtual void Render() { /* OpenGL rendering */ }
      ... etc ...
};

class CRenderer_D3D : public CRenderer {
   public:
      virtual void Init() { /* Direct3D init */ }
      virtual void Render() { /* Direct3D rendering */ }
      ... etc ...
};


Now in your startup code, you select the renderer based on user input, config files, or whatever, and create a new object of the appropriate type:
Init()
{
   CRenderer *Renderer;
   switch( whichOneToUse ) {
      case OGL: Renderer = new CRenderer_OpenGL; break;
      case D3D: Renderer = new CRenderer_D3D; break;
      // and more if you like
   }
   Renderer->Init();
   // ...
}


The access to this renderer is then abstracted through the polymorphic base type:
RenderFrame()
{
   // pre-frame stuff...
   Renderer->Render();
   // post-frame stuff...
}
What you are doing is fine. The difference is your pointer to an interface isn't quite a bridge.

Think of it this way. What if you had:

MyEngine->Render( );
MyEngine->Render( );

From our point of view it would make sense that these would both call to D3D or both call to OGL. If we switched which it was, then we would expect it to call the other.

But that's from our point of view. That's the key to the bridge.

Suppose we weren't aware that after the first call, the renderer changed. And so the second call went to the other renderer. That's when you would use a bridge.

When you don't know (or don't want to know) which implementation the call is being directed to. But that isn't your case. For your engine, you know which renderer the call gets directed to... it's whichever one you assigned.

So though the difference in code is subtle, the difference in concept is drastic. That's why you don't really gain anything from using a bridge.
Oops, Yann L got the response in before I did. `,:o) Oh well.

Hey Yann, just out of curiosity...do you have perhaps a post or a page detailing you? How old are you? How long have you been working with 3D programming? I'm interested.
Thanks for clearing that up :) I know what to do now.

