Switch between D3D and OGL

Started by
24 comments, last by Dae 18 years, 4 months ago
Hello again :)

Quote:Original post by Grain
Quote:Original post by Dae
Haha, alright, thanks man. It's too bad there's no generic pointer that could point to either D3D or OGL even though they are different classes. Deriving them from the same class works, but wouldn't that be a little high maintenance?
If anyone knows of another way, I'd really appreciate it! :)
Look up the Bridge design pattern.


The bridge pattern is implemented by defining something like this:
class Implementation // abstract class -- declared first so Abstraction can refer to it
{
public:
    virtual void doSomething() = 0;
};

class Abstraction
{
private:
    Implementation *mImplementation;
public:
    void doSomething() { mImplementation->doSomething(); }
};


And then you inherit from Implementation to provide yours. Of course, you can also inherit from Abstraction in order to create some other abstraction, but that's not the point here. In fact, the bridge pattern is only interesting when you inherit Abstraction and then modify the behavior of the doSomething() method (have a look at the GoF book for a more concrete example of this pattern).

The fact is that the OP (Dae) needs to create the Implementation part and its derived classes. Thus, the bridge pattern will not really be useful in this case.

Of course, the advice is still good: the more pattern descriptions you read, the faster you'll progress in OO design ;)

Regards,
Oh, yeah, that makes sense, Emmanuel Deloget. You can do that in Flash AS too, and that's because everything is derived from Object. It seems to be working here, and it's definitely not as hard as I thought it would be. Good to know I shouldn't try this with pointers. Thanks!

I'll definitely be looking into more design patterns. I know there's a book on them, but I won't be able to pick it up for a while. For now I'm going to have to hope to see their names (Singleton, Bridge, etc.) used somewhere so I can look them up on Google.

Ah, I see about the purpose of the Release function Bezben. I see how it would be useful in the original code. Now that I've got the object within a system class, I can simply delete the object from within the system class.

I also think it's a bad idea to have rendering code in multiple threads. See, I don't get it. I got some help at EFNet, #C++, and it was said that two calls to Instance() could occur at the same time, which could lead to multiple instances when it's a Singleton. I haven't read up on multithreading or synchronization, because I thought I had to take some indirect method to multithread. But on IRC I was told that calling Instance() from two threads at once could lead to multiple instances, and so I was recommended to put a Mutex around the new in Instance(). Many other good programmers saw me using it and all seemed to think it was necessary.

So I'm definitely going to have to read up on synchronization and design patterns. I just noticed the link; that book has the exact topics I need. It's not even out yet, haha. Thanks for the link, TheOther.

Thanks for the help so far, guys; this is fun. Again, this is much better than misc assignments that only use a class to encapsulate for the hell of it.
010001000110000101100101
Quote:Original post by Dae
So I'm definitely going to have to read up on synchronization and design patterns. I just noticed the link; that book has the exact topics I need. It's not even out yet, haha. Thanks for the link, TheOther.

Thanks for the help so far, guys; this is fun. Again, this is much better than misc assignments that only use a class to encapsulate for the hell of it.


T-minus 15 days and it should be on the shelves (or at least available via Amazon).

Quote:Original post by Emmanuel Deloget

The fact is that the OP (Dae) needs to create the Implementation part and its derived classes. Thus, the bridge pattern will not really be useful in this case.

Regards,

How is it not useful? His problem is exactly what the bridge pattern is for.


class Implementation // abstract class -- declared first so Abstraction can refer to it
{
public:
    virtual void doSomething() = 0;
};

class Abstraction
{
private:
    Implementation *mImplementation;
public:
    void doSomething() { mImplementation->doSomething(); }
};

To make this useful for him, he would need to do:

class OpenGL_Implementation : public Implementation { ... };
class D3D_Implementation    : public Implementation { ... };

Then you can render through the Abstraction interface. And when you want to switch between OpenGL and Direct3D, you change the mImplementation pointer to either an OpenGL_Implementation instance or a D3D_Implementation instance.



Quote:Original post by Dae
I'll definitely be looking into more design patterns. I know there's a book on them, but I won't be able to pick it up for a while. For now I'm going to have to hope to see their names (Singleton, Bridge, etc.) used somewhere so I can look them up on Google.


Wikipedia has some interesting articles on design patterns

http://en.wikipedia.org/wiki/Design_pattern_%28computer_science%29
Sweet, thank you Grain!

I'm actually using that exact pattern now, btw. I guess I am using the Bridge pattern; it's posted up there.

Can anyone shed light on why I was recommended to add a Mutex around that new statement? It was said that multiple threads might try to access it, even though I don't have plans to make multiple threads, and that I should add it anyway. Meanwhile Bezben says he doesn't think I need the Mutex.
010001000110000101100101
Quote:Original post by Grain
How is it not useful? His problem is exactly what the bridge pattern is for.

Because the additional dereference through the Abstraction object is completely useless in this case. The bridge pattern is useful if you want to switch Implementations on the fly. For example, if you had both an OpenGL_Implementation and a D3D_Implementation object instantiated, and switched between them at runtime. But you can't do that. You either use one or the other. You select one at startup, create an object of that type, and access it through the Implementation base class. If you want to change renderers, you destroy the current rendering object (don't forget the virtual dtor!) and create a new one. Instantiated OpenGL and D3D objects can and should never coexist at the same time.

Going through Abstraction is just useless overhead.

Quote:
Can anyone shed light on why I was recommended to add a Mutex around that new statement? It was said that multiple threads might try to access it, even though I don't have plans to make multiple threads, and that I should add it anyway. Meanwhile Bezben says he doesn't think I need the Mutex.

Unless you plan to multithread your rendering code (which you should probably not do), Mutexes are not needed. Some people like to put them everywhere "just in case", but this shows a lack of understanding of what a mutex actually is. Adding them in your context (i.e. single-threaded rendering) is not only useless from a functional point of view, but can also become a performance problem. Mutexes add overhead, so use them only where actually needed.
I agree about the fixed function vs shader deal.

I have been working on a platform- and API-independent engine for a while now. There were a few problems at first, like... I forget the specifics... but I think for vertex colors D3D wanted RGBA and OGL wanted ARGB.

That was no big deal. As I loaded models I could just swizzle the data around.

But then it got a little more tricky. A computer's video card wants vertex data to be interleaved: XYZ UV XYZ UV. But the GameCube wants these in separate streams: XYZ XYZ UV UV. Okay... so more swizzling.

But all this swizzling effectively means I'm loading the model into some buffer, then converting it and saving it in a separate buffer. This two-stage loading clearly shows preference to one platform or API. My goal is to make an engine that works *just as well* on any given system. I don't want anyone to say "Yeah it claims it is cross-platform, but it sucks on GameCube so no one would use it."

So I implemented something that managed how the model was loaded. It would read the file and fill the buffers according to what that specific platform/api wanted. This shows no preference and solves my problem.

Significantly more complicated, but it is still doable. Same concept for shaders.

Anyway, I think you will find that a D3DRenderer and OGLRenderer that derive from RendererInterface is only the beginning. Loading models or textures into either is vastly different.

And I highly suggest AGAINST "obviously I will have to put a few if/thens in the code based on the renderer." That means your renderer isn't written correctly. Your interface takes care of the abstraction between D3D/OGL. You shouldn't need if/thens for abstraction.

The reason I say this is something I hinted at before. For example, a friend of mine worked with the Unreal Engine (not sure which version). Because of things like that, the code wasn't portable at all, despite the fact that the engine claims it is cross-platform. A truly solid example of this is Deus Ex being ported to the PS2. The PS2 has, what, 4 MB of video memory? After your front and back buffers you have maybe 2? Fitting textures into 2 MB of memory is not easy. Sony knew this, so they made that memory crazy fast. The idea being you can load maybe a handful of textures, render those objects, then unload the textures. Continue until the entire scene is drawn.

From what I understand the Unreal Engine didn't take care of that. So your code would have to do the texture loading and unloading. That means tons of if/thens in game code based on the platform/renderer. That means the game is effectively not portable.
Quote:Original post by ProgramMax
I have been working on a platform & api independent engine for a while now. There were a few problems at first like ...I forget the specifics...but I think for vertex colors D3D wanted RGBA and OGL wanted ARGB.

By default, OpenGL expects RGBA. Using vertex shaders and/or extensions, you can make it accept pretty much any format you want.

Quote:Original post by ProgramMax
But then it got a little more tricky. A computer's video card wants vertex data to be interleaved: XYZ UV XYZ UV. But the GameCube wants these in separate streams: XYZ XYZ UV UV. Okay... so more swizzling.

The video card doesn't care about the vertex layout. Whether it is interleaved or in separate streams, it will render it. Performance is usually better with interleaved data though, so most people use that. But you can very well interleave part of your vertex data (for example XYZU0V0) and use a separate stream for other parts (e.g. colours).

Note that this functionality is only supported on OpenGL, not on D3D. So if you want vertex layout compatibility, stay with an interleaved format.

Quote:Original post by ProgramMax
But all this swizzling effectively means I'm loading the model into some buffer, then converting it and saving it in a separate buffer. This two-stage loading clearly shows preference to one platform or API. My goal is to make an engine that works *just as well* on any given system. I don't want anyone to say "Yeah it claims it is cross-platform, but it sucks on GameCube so no one would use it."

Making an engine run on PC and consoles is an entirely different world than a simple D3D/OGL renderer, due to the fact that current console hardware is very different to PC 3D hardware. But I don't think this matters to the OP. Both D3D and OGL access the same hardware, making this task much easier.

Quote:Original post by ProgramMax
Anyway, I think you will find that a D3DRenderer and OGLRenderer that derive from RendererInterface is only the beginning. Loading models or textures into either is vastly different.

Not really, both are in fact very similar.

Quote:Original post by ProgramMax
And I highly suggest AGAINST "obviously I will have to put a few if/thens in the code based on the renderer." That means your renderer isn't written correctly. Your interface takes care of the abstraction between D3D/OGL. You shouldn't need if/thens for abstraction.

Very true. But don't abstract too much. As I said, D3D and OGL are very, very similar - or can be made similar to each other using the appropriate extensions in OpenGL (matrix transpose, BGRA formats, etc). Try to keep as much code API independent, and then route the rest through abstracted adapter objects.
Quote:Original post by Yann L
Very true. But don't abstract too much. As I said, D3D and OGL are very, very similar - or can be made similar to each other using the appropriate extensions in OpenGL (matrix transpose, BGRA formats, etc). Try to keep as much code API independent, and then route the rest through abstracted adapter objects.


True true Yann L.

Mind you, the very first question you should be asking yourself is:

"Do you really need to support multiple renderers?"

What exactly is your reason for going down this path?

There are loads of existing 3D renderer projects around, such as Irrlicht, Ogre, Torque, etc., that get you started making game content out of the box, rather than fiddling for 6-12 months putting your own framework together.

Ages ago, I tried to be "cool" by supporting both renderers in an old engine I made. The original intent was to use it for my shareware games. I quickly sank way too much time into it, and really stalled the whole process of working on the GAMES I wanted to make (as opposed to an ENGINE).

Personally in my case, I would've been much much more productive (and happier) just sticking with one API and going from there.

Feel free to ignore me, but really keep asking yourself what you want out of a project of this size. Is it just ego, or do you have legitimate reasons for doing so?

(There's nothing wrong with going down this path, btw. Don't get me wrong.)

hth,
Quote:Original post by Yann L
By default, OpenGL expects RGBA. Using vertex shaders and/or extensions, you can make it accept pretty much any format you want.


I don't remember what the problem was off the top of my head. I'll check and get back to you.

Quote:Original post by Yann L
The video card doesn't care about the vertex layout. Whether it is interleaved or in separate streams, it will render it. Performance is usually better with interleaved data though, so most people use that. But you can very well interleave part of your vertex data (for example XYZU0V0) and use a separate stream for other parts (e.g. colours).


Right, for PCs. That's exactly what I was saying. But the GameCube does care. It is specifically designed to be similar to the N64 in that respect. Even the docs recommend that you don't interleave the data.

Quote:Original post by Yann L
Making an engine run on PC and consoles is an entirely different world than a simple D3D/OGL renderer, due to the fact that current console hardware is very different to PC 3D hardware. But I don't think this matters to the OP. Both D3D and OGL access the same hardware, making this task much easier.


Agreed.

Quote:Original post by Yann L
Very true. But don't abstract too much. As I said, D3D and OGL are very, very similar - or can be made similar to each other using the appropriate extensions in OpenGL (matrix transpose, BGRA formats, etc). Try to keep as much code API independent, and then route the rest through abstracted adapter objects.


Again, agreed. The Deus Ex example I gave is due to the Unreal Engine not being originally designed with consoles in mind. The console port is in turn not as good. I think you should definitely get a clear definition of what you want your engine to do. If it doesn't have to port to consoles, don't worry about them.

This topic is closed to new replies.
