Writing an OpenGL/DirectX render...

7 comments, last by gameplayprogammer 15 years, 4 months ago
I want to create a Renderer class that I can then inherit from to write OpenGLRenderer and DirectXRenderer classes. I want the finished classes to be able to draw 2D as well as 3D. I dunno where to start with this; I just have the base class and the OpenGL and DirectX derived classes, with virtual Draw3D and Draw2D functions. Do I put the window creation code (the WinAPI stuff) into the Renderer class in a function called CreateWindow(), or is it kept separate from the renderer? Here is my rough code structure so far...

// In the window application's update loop
stateMachine.Draw(&renderer);

// In my state machine class
void StateMachine::Draw(Renderer* r)
{
    state->Draw(r);
}

// In my state class
// (2DObject/3DObject renamed to Object2D/Object3D: C++ identifiers
// can't start with a digit)
void State::Draw(Renderer* r)
{
    std::vector<Object2D*> objects2D;
    stateGUI->Get2DObjects(objects2D);
    for (std::size_t i = 0; i < objects2D.size(); ++i)
    {
        r->Draw2D(objects2D[i]->x, objects2D[i]->y, objects2D[i]->image);
    }

    std::vector<Object3D*> objects3D;
    stateGUI->Get3DObjects(objects3D);
    for (std::size_t i = 0; i < objects3D.size(); ++i)
    {
        r->Draw3D(objects3D[i]);
    }
}
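
The renderer classes themselves are just empty skeletons right now, roughly like this (Image and Object3D are placeholder names for my own types):

// Abstract base; the API-specific renderers derive from it
class Renderer
{
public:
    virtual ~Renderer() {}
    virtual void Draw2D(int x, int y, Image* image) = 0;
    virtual void Draw3D(Object3D* object) = 0;
};

class OpenGLRenderer : public Renderer
{
public:
    void Draw2D(int x, int y, Image* image);  // OpenGL calls go here
    void Draw3D(Object3D* object);
};

class DirectXRenderer : public Renderer
{
public:
    void Draw2D(int x, int y, Image* image);  // Direct3D calls go here
    void Draw3D(Object3D* object);
};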

Any help on this would be great. (...and yes, I do want to do this, as I want to learn how it's done.)
Due to the single responsibility principle, I would keep the window creation/management out of the renderer class (you could go a step further and split the scene rendering from the UI rendering, if you want).

As for where to start, it might be a good idea to begin by making a list of features your rendering pipeline needs to support. You have one already - wanting to do both OpenGL and DirectX - but there are many more questions you should clear up before starting on the coding part. To get some ideas about features you might or might not want to support, and about possible architectures for the pipeline, you could always take a look at how some of the open source solutions out there do it.

Also, it's a bit high-level and a bit dated (and a lot to read), but there are some cool ideas about renderer architecture in the material / shader implementation thread.
In my render engine I separated the renderer from the window creation system.
To tie the window and the renderer together I use a RenderContext object. The RenderContext is created by passing it a Window object, and is then passed to the Renderer object.
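
Heavily simplified, it looks something like this (the class names come from my own engine; the bodies are just placeholders):

// The window knows nothing about rendering; the renderer knows
// nothing about window creation. The context ties them together.
class Window
{
    // platform-specific creation, message pump, native handle...
};

class RenderContext
{
public:
    explicit RenderContext(Window& window);   // grabs the window's native handle
    // internally owns the GL context or the D3D device/swap chain
};

class Renderer
{
public:
    explicit Renderer(RenderContext& context); // all drawing goes through this context
};

// Usage:
// Window window;
// RenderContext context(window);
// Renderer renderer(context);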


Lightbringer:
Thanks for posting the link about material/shaders. I was just busting my head about this one...
The other design issues are great food for thought as well.
Too much too soon, guys :-)
I would like to do just a simple renderer that can render a 3D and 2D scene.
Instead of having all the WinAPI code in one long file, I just want to break it up into classes and have a simple renderer to do the drawing.
Your thread is all about shaders and it's 5 pages long. It seems advanced to me.
I think I only need to load and store 3D models and textures (in OpenGL and DirectX) for now.

Anything simpler?
The best advice I can probably give is to think about this stuff on a conceptual level, not too much on a code level (yet). What I always start out with is writing down for myself how I ultimately want to USE the system (what calls I want to make and such).

Conceptually, keeping the WinAPI code separate from the renderer seems a good idea. Personally, I usually create a WindowManager class that does all the WinAPI-related stuff for me. As you probably already know, WinAPI can result in long code, which is generally something you write once and don't touch often afterwards. That to me seems like a good reason to tuck it away in a separate class; besides, your renderer has nothing to do with your window management.
You mentioned you want to implement both OpenGL and DirectX, so I assume you would also want to keep the cross-platform option open. That means you could do the same for the WindowManager as for the renderer: create an abstract version of the class that implementation-specific classes can derive from (for example, one WinAPI implementation and one for Linux).
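
As a sketch (the names are just examples, not a fixed API), the hierarchy could look like:

// Abstract interface; each platform implements it.
class WindowManager
{
public:
    virtual ~WindowManager() {}
    // Note: don't name this CreateWindow - WinAPI #defines that to
    // CreateWindowA/CreateWindowW and the macro will bite you.
    virtual void CreateGameWindow(int width, int height, const char* title) = 0;
    virtual void ProcessMessages() = 0;   // pump the OS message/event queue
    virtual void* GetNativeHandle() = 0;  // e.g. HWND on Windows
};

class Win32WindowManager : public WindowManager { /* WinAPI implementation */ };
class X11WindowManager   : public WindowManager { /* Linux/X11 implementation */ };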

Generally I start by letting the window manager create the WinAPI forms, and then pass one control (a window or another control) to the renderer to use as a rendering context.

As for the renderer itself, this can become a bit complicated, but you mentioned you want to keep things simple for now, and you seem to be on the right track. This is what I do:

My whole scene (so all objects, not just renderable ones but also things like lights) is stored in a Scene class. This class gets its info from a loaded map, for example.

It contains a list of all renderable objects, which are kept separately from non-renderable objects (once again, things like lights, although arguably you could render those as well; the camera is another important example).

I use a scene class to have everything in one place, and it also holds my algorithm for culling non-visible objects. Personally I use stuff like octrees and quadtrees, but for simplicity's sake I'm just going to say that for each object, we check whether it can be seen by the (active) camera. If it is visible, the scene class passes the object to the renderer class. Since the object is a renderable object, it derives from a RenderableObject class. I do this to ensure that the renderer has all the information it needs to draw the object, such as vertices (and indices), textures, etc.
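
In very stripped-down form (no octrees here, just a brute-force visibility check; Camera and the member names are made up for the example):

#include <vector>

class RenderableObject
{
public:
    virtual ~RenderableObject() {}
    // vertices, indices, texture identifier, transform, ...
};

class Scene
{
public:
    void Update(const Camera& camera, Renderer& renderer)
    {
        for (std::size_t i = 0; i < renderables.size(); ++i)
        {
            if (camera.CanSee(*renderables[i]))    // the culling test
                renderer.Enqueue(renderables[i]);  // hand it off for drawing
        }
    }

private:
    std::vector<RenderableObject*> renderables;
    // lights, cameras and other non-renderables live in separate lists
};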

So this renderable object arrives at the renderer, which simply adds the object to a queue it keeps.

Later that same frame, a call is made to renderer.Draw(). The renderer now takes the queue, which is filled with ALL visible, renderable objects; each object carries the data required to draw it on screen. The renderer may even sort the queue first, for example to draw all objects that use the same texture together before drawing any objects that require a texture switch. Once the sorting is done, the renderer calls, for each item in the queue, a render function that takes the info from the object and draws it on screen.
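
A stripped-down version of that queue might look like this (sorting by a texture id is just one possible criterion):

#include <algorithm>
#include <vector>

static bool SortByTexture(const RenderableObject* a, const RenderableObject* b)
{
    return a->textureId < b->textureId;  // assumes objects expose a texture id
}

class Renderer
{
public:
    void Enqueue(RenderableObject* object) { queue.push_back(object); }

    void Draw()
    {
        // Sort so objects sharing a texture are drawn back to back,
        // minimizing expensive texture switches.
        std::sort(queue.begin(), queue.end(), SortByTexture);

        for (std::size_t i = 0; i < queue.size(); ++i)
            DrawObject(*queue[i]);  // the API-specific draw call

        queue.clear();  // empty the queue for the next frame
    }

private:
    std::vector<RenderableObject*> queue;
};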

This way, the only thing you need to worry about with your objects is that they contain the necessary information; then you can always do the same thing:

1: Load an object, e.g. a model from a file
2: Insert the object into the scene
3: Update the scene (this determines visible objects and passes them to the renderer)
4: Renderer puts item in queue
5: Renderer renders all objects in the queue (possibly after sorting)
6: Renderer clears the queue. (don't forget this step :))
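
Put together, one frame boils down to something like this (all names illustrative):

// Once, at load time:
RenderableObject* model = LoadModel("crate.3ds");  // step 1
scene.Add(model);                                  // step 2

// Every frame:
scene.Update(camera, renderer);  // steps 3-4: cull, pass visible objects to the queue
renderer.Draw();                 // steps 5-6: sort, draw, then clear the queue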

I hope this answered your question a bit :)

Note that the described design is a bit more than you asked for, but I just wanted to draw you the bigger picture I had in mind when designing this, and show you that such a system can start out really simple, but can easily be extended to include culling, shaders, optimization (by sorting the renderer's queue), etc.

Just an addition: you might wonder what you should do with things like textures (where to store them, etc.).

What I do is store a list somewhere that contains pairs of 1. a string (the key) and 2. the object itself (a texture in this case).

I store all loaded textures in this list, which in C++ would logically be implemented with a std::map, like this:

textures["textureIdentifier"] = LoadTexture("filename");

The renderer should have access to this list, and each renderable object just stores a string that contains the texture identifier, instead of a local copy of the texture itself.

This makes resource management a lot easier: cleaning up, using the same resource multiple times (possibly with reference counting), and similar things all become simpler. You could use a similar approach for shaders, and possibly other data.
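
In C++ that list maps naturally onto a std::map keyed by the identifier string; a minimal sketch (Texture is whatever your API-side texture type is):

#include <map>
#include <string>

class TextureManager
{
public:
    void Add(const std::string& id, Texture* texture)
    {
        textures[id] = texture;
    }

    Texture* Get(const std::string& id)
    {
        std::map<std::string, Texture*>::iterator it = textures.find(id);
        return (it != textures.end()) ? it->second : 0;  // 0 if never loaded
    }

private:
    std::map<std::string, Texture*> textures;  // identifier -> shared texture
};

// Each renderable stores only the identifier string; the renderer
// asks the manager for the actual texture at draw time.
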
Quote: Original post by gameplayprogammer
Too much too soon, guys :-)
...
Your thread is all about shaders and it's 5 pages long. It seems advanced to me.

Well, I did say it's long and high-level :) But if you read it a couple of times, eventually you'll have a lightbulb turn on above your head and you'll come out of it feeling smarter ^_^

One of the takeaways from that thread is the concept of abstracting your renderable data as chunks, which are simply vertex and index streams. So your renderer does not know about renderable game objects, but only about streams provided by chunks. How you create those chunks from your game objects is up to you, but it allows you to present very different objects (mesh, particle system) to the renderer with a common interface and ready for use in vertex arrays or vertex buffer objects (you will want to forget about immediate mode rendering as soon as possible so this is useful). Of course, the real treat is about how that integrates with shaders and effects, but you don't need to go that far just yet.
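
The core of the idea fits in a few lines (a sketch, not the exact interface from that thread):

#include <cstddef>
#include <string>

// A chunk is just the raw streams the renderer needs. It carries no
// game-object logic, so a mesh and a particle system can both produce
// one and the renderer treats them identically.
struct Chunk
{
    const float*        vertices;     // interleaved vertex stream
    const unsigned int* indices;      // index stream
    std::size_t         vertexCount;
    std::size_t         indexCount;
    std::string         textureId;    // looked up in your texture map
};

// The renderer consumes chunks, never game objects:
// void Renderer::DrawChunk(const Chunk& chunk);
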
Excellent post, Subotron. Exactly what I was looking for.
Lightbringer, thanks for the paragraph on "abstracting your renderable data as chunks", that's really helpful. I'll try to have another read of the shader thread once I get through this stage.
