Renderer Design

I'm just beginning work on my 3D renderer (school, work, and family have kept me back), and I'm looking for some insight on how I should go about designing it. I'm attempting to be as cross-platform as possible, so OpenGL is most definitely a must. I've already decided to write my own custom Vector and Matrix objects, but I don't really have a good idea of how to set up the pipeline/renderer. Any suggestions would be welcome!
-John "bKT" Bellone [homepage] [[email=j.bellone@flipsidesoftware.com]email[/email]]
What type of rendering will you be working with mostly: raytraced or projection-based? What kind of stuff do you want it to handle? There are so many ways to do this, so why not share some of your ideas with us so we can comment on them or learn from them. From personal experience, materials and collision detection code seem to be the messiest parts. In my renderer design I tried to keep the material concept as abstract as possible.

Tim
before you think about writing your renderer you should think about how to handle textures in your engine


e.g.: do you define shader files for each texture, or for a group of textures?
shader files may contain information about shadow casting, light emission, and whatever else you associate with a surface for the given texture
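To make that concrete, here is a minimal sketch of what such a per-texture surface description could hold once it has been parsed into the engine; the field names are placeholders I made up, not taken from any particular engine or format:

#include <string>

// everything the engine needs to know about surfaces using a given texture
struct SurfaceShader
{
    std::string textureName;   // texture (or texture group) this entry applies to
    bool        castsShadows;  // does geometry with this surface cast shadows?
    float       lightEmission; // self-illumination; 0 = none
    bool        translucent;   // plus whatever other per-surface flags you need
};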

once this is done, a good mesh class implementation might be helpful,
especially if you want to create triangle strips or do collision detection later on

- also think about how you want to implement lights
- how to do culling, and whether you need spatial partitioning
- define the renderer's role: does it only set states and shaders, or does it access the geometry data directly?
e.g.: model->draw() vs. draw(model);
- i think i will use the second approach to keep the rendering code separate; this makes it easier for me to implement other features later (see the sketch below)

- also think about pre-rendering the depth buffer if you work with a lot of complex pixel shaders, to reduce overhead, and maybe do some hardware occlusion culling
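As a rough sketch of the second, decoupled style (all names here are placeholders, and the actual GL calls are omitted): the mesh is plain data, and only the renderer touches the API.

#include <vector>

// data-only mesh: no rendering code inside, so physics or tools can reuse it
struct Model
{
    std::vector<float>        vertices; // xyz triples
    std::vector<unsigned int> indices;
};

// the renderer owns all API state and reads the model's data;
// the model itself never calls OpenGL
class Renderer
{
public:
    void draw(const Model& model)
    {
        // set states/shaders here, then submit model.vertices / model.indices
        (void)model; // GL calls omitted in this sketch
    }
};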

just some notes i could think of right now
http://www.8ung.at/basiror/theironcross.html
I got to thinking today (it sometimes happens!), and I think I am going to take the approach of having a base class interface from which I can derive the different objects that I want rendered. This was the first way that came to me:

I decided on deriving all objects that wish to be rendered from the IRenderState interface. They will then be passed into the device object, probably in an array, and from there we will make the raw calls to the OpenGL layer.

Anyone have any thoughts on this approach? Thanks again guys!
-John "bKT" Bellone [homepage] [[email=j.bellone@flipsidesoftware.com]email[/email]]
On a conceptual level, don't you think it makes more sense that your renderable objects have an IRenderState than that they are an IRenderState (i.e. composition instead of inheritance)? Of course that depends on what exactly you want your IRenderState to be, but that's what the name sounds like to me.
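In code, the difference being pointed out is roughly this (what IRenderState actually contains is only a guess here):

// whatever state the renderer needs: blend mode, textures, shader, ...
struct IRenderState
{
    // ...
};

// inheritance: the mesh *is* a render state
class CMeshA : public IRenderState
{
    // geometry plus the inherited state
};

// composition: the mesh *has* a render state describing how it should be drawn
class CMeshB
{
public:
    const IRenderState& GetRenderState() const { return m_state; }
private:
    IRenderState m_state;
    // geometry ...
};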
Well the name was just thrown out there. It'll probably be named IRenderProc or something.
-John "bKT" Bellone [homepage] [[email=j.bellone@flipsidesoftware.com]email[/email]]
you could provide a rendering implementation for each object type in your renderer

and you could completely skip the fixed function pipeline, as already mentioned in the other threads, so you can design your renderer to completely fit your needs without the hassle of setting and unsetting tons of states

have a look at the "eliminate fixed pipeline" thread; I will probably go the same way later


the shader approach mentioned in that thread, using DLLs and exported classes, sounds promising

you export a base class with virtual functions

and the DLL implements a class derived from it, defining its properties and functions,
so when you call a function through the base class you automatically call the desired function you wish to use

i have just tested this with a little example, it's pretty easy to use
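A minimal sketch of that pattern, with made-up names and the platform-specific export/load details left out (on Windows you would load the DLL with LoadLibrary/GetProcAddress, on Linux with dlopen/dlsym):

// shared header, seen by both the engine and the DLL
class IShader
{
public:
    virtual ~IShader() {}
    virtual void Bind()   = 0;
    virtual void Unbind() = 0;
};

// factory function the DLL exports; the engine only ever sees IShader*
extern "C" IShader* CreateShader();

// inside the DLL: the concrete implementation the engine never sees directly
class GlslShader : public IShader
{
public:
    void Bind()   { /* glUseProgram(...) etc. */ }
    void Unbind() { /* ... */ }
};

extern "C" IShader* CreateShader() { return new GlslShader(); }

When the engine calls Bind() through the IShader pointer it got from CreateShader(), the virtual call lands in the DLL's implementation, which is the "automatically call the desired function" part.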

[Edited by - Basiror on September 13, 2005 5:39:09 AM]
http://www.8ung.at/basiror/theironcross.html
I read through most of those threads, but they seem to be talking mostly about shader design. I understand that the programmable pipeline refers to shaders, but is that all it refers to? I don't exactly understand what the fixed function pipeline is, besides that it doesn't have shaders. I am most definitely not at the point of implementing shaders yet, but I will want to do that in the future. Here was my basic design idea for the renderable objects:
class IRenderable
{
public:
    virtual ~IRenderable() {}
    virtual void Draw() = 0;
};

class CMesh : public IRenderable
{
public:
    void Draw();
};

void drawScene(IRenderable* const* renderList, int n)
{
    // each object knows how to draw itself
    for (int i = 0; i < n; ++i)
        renderList[i]->Draw();
}

I am new to graphics programming (as you can probably tell from all of the questions). I own a few books on Direct3D programming, but I would like to work with OpenGL; I'm using the books to teach myself the mathematics (and physics class helps with that too). So let me know how that approach would hold up. I am not programming anything yet, just asking questions :).

Thanks a bunch.
-John "bKT" Bellone [homepage] [[email=j.bellone@flipsidesoftware.com]email[/email]]
for a beginner project that should work fine

but for more advanced implementations your aim should be to keep render code separated from the data, e.g. so you can use your mesh class with physics SDKs and with the renderer while keeping only one copy in memory

in general, keeping code for different jobs separated is a good idea for later code reuse


as for the fixed function pipeline: modern high-end engines try to do as many steps as possible in as few passes as possible, so they need to make use of vertex and pixel shaders; thus the fixed function pipeline becomes quite useless, and they design their renderers around shaders instead of designing around the FFP
and the nice side effect is that it makes things easier :)

i would create a basic mesh class

if you have different objects try to represent them with the mesh class

in your renderer, take care of the different states and render your meshes, which are passed in as a list of mesh class instances or pointers to mesh class instances

sort the list you render by texture to keep texture switches at a minimum

for the beginning this is all you need to take care of

later when you design a more complex system you should associate shaders with your mesh class

i for example do it as follows:
shader: { texture name/id, API texture id, shader name/shader id }
i sort by shader id, since it goes hand in hand with the texture id in most cases
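A minimal sketch of that sort, assuming the mesh carries a pointer to the shader record described above (names are placeholders):

#include <algorithm>
#include <vector>

struct Shader
{
    int          textureId; // engine-side texture name/id
    unsigned int apiTexId;  // the GL texture object
    int          shaderId;  // id used as the sort key
};

struct Mesh
{
    const Shader* shader;
    // geometry ...
};

// meshes sharing a shader (and usually a texture) end up adjacent,
// so texture/state switches are kept to a minimum
bool ByShaderId(const Mesh* a, const Mesh* b)
{
    return a->shader->shaderId < b->shader->shaderId;
}

void SortByShader(std::vector<Mesh*>& visible)
{
    std::sort(visible.begin(), visible.end(), ByShaderId);
}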

if you do local lighting with point lights and so on you need to take this into consideration, but i think you will encounter these difficulties later on your own

http://www.8ung.at/basiror/theironcross.html
The idea I had was to collect all patches being rendered into a scenegraph, grouped by shaders, blending, textures, etc., ordered front to back, with invisible patches culled out, and then send that to the graphics card. Keeps it nice and simple, and if you want to implement a new feature, you just add a new node to the scenegraph.
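One way to sketch that kind of batching (the key layout below is just an example, not a prescription): pack the state that is most expensive to change into the high bits of a sort key and a quantised depth into the low bits, so a single sort groups by shader/texture/blend and gives rough front-to-back order within each group.

#include <algorithm>
#include <cstdint>
#include <vector>

struct Patch; // whatever the scenegraph hands to the renderer

struct DrawItem
{
    std::uint64_t key;   // shader | texture | blend | depth, packed
    const Patch*  patch;
};

// hypothetical packing: shader id (16 bits), texture id (16), blend mode (4),
// quantised depth (28); depth01 is 0 at the near plane, 1 at the far plane
std::uint64_t MakeKey(std::uint16_t shader, std::uint16_t texture,
                      std::uint8_t blend, float depth01)
{
    std::uint64_t depth = static_cast<std::uint64_t>(depth01 * ((1u << 28) - 1));
    return (std::uint64_t(shader)      << 48) |
           (std::uint64_t(texture)     << 32) |
           (std::uint64_t(blend & 0xF) << 28) |
           depth;
}

bool ByKey(const DrawItem& a, const DrawItem& b) { return a.key < b.key; }

// ascending sort: grouped by state, nearest-first within each group
void SortDrawList(std::vector<DrawItem>& items)
{
    std::sort(items.begin(), items.end(), ByKey);
}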
Adventures of a Pro & Hobby Games Programmer - http://neilo-gd.blogspot.com/ | Twitter - http://twitter.com/neilogd

