Ainokea

How does your renderer work?


I am having trouble designing mine, so I wanted to see some examples to get some idea of how to make it. So please post yours. *note* By renderer I mean the class/structure (whatever) in charge of rendering your game objects.

Yo tambien quiero el knowledgo.
(I also want t3h knowledge-o. Didn't know the Spanish for knowledge.)

Hey Ainokea, who are you voting for? Is politics different outside the continental US? Which, uh, island are you on?

Yo quiero panqueques. (I want pancakes.)

EDIT: I did respond; I responded by saying that I don't know the answer and would appreciate it if somebody else would answer. Also, I forgot about the PM system.

[Edited by - Boku San on October 3, 2004 1:09:25 AM]

Quote:
Original post by Boku San
Yo tambien quiero el knowledgo. (I also want the knowledge.)

Hey Ainokea, who are you voting for? Is politics different outside the continental US? Which, uh, island are you on?

Yo quiero panqueques. (I want pancakes.)

I'm under 18, so I don't vote. No, politics are about the same, and I am on the Big Island. Now why are you posting instead of PMing the question?

Quote:
Original post by Ainokea
I am having trouble designing mine, so I wanted to see some examples to get some idea of how to make it. So please post yours.

*note* By renderer I mean the class/structure (whatever) in charge of rendering your game objects.


Hmm, well, simply taking a peek at others' renderers might not be what you are after.

Usually you should start out with the features you want to implement on the rendering side (such as what type of game it's going to be for; the renderer will be different for an FPS and an RTS). Then work your way into how the game should talk to the renderer and pass it the objects to render. This will also depend on the game genre. In the case of RTSs you have many little models (units), so passing just one model and using an instancing system would probably be nice. In the case of FPSs your world would be your biggest concern: is it going to be terrain (dynamic data to the renderer, LOD) or is it just indoors? Also consider what effects you are going to need to render: particle systems, etc.
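The RTS instancing idea above could be sketched roughly like this. All names here (Renderer, DrawInstanced, Matrix4, etc.) are illustrative placeholders, not from any actual engine:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical types for illustration -- not names from the thread.
struct Matrix4 { float m[16]; };
struct Mesh    { int id; };

// Instead of submitting each unit's mesh on its own, the game hands the
// renderer one mesh plus a list of per-unit transforms; the renderer can
// then draw the whole batch with a single (instanced) draw call.
class Renderer {
public:
    void DrawInstanced(const Mesh& mesh, const std::vector<Matrix4>& transforms) {
        (void)mesh;                        // the real API draw call would go here
        lastBatchSize = transforms.size(); // record the batch size for the example
        ++drawCalls;
    }
    std::size_t lastBatchSize = 0;
    int drawCalls = 0;
};
```

Two hundred identical tanks then cost one submission instead of two hundred.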

Maybe if you posted some requirements, you would get a better response on how to design an efficient renderer for what you want.

HTH

There's an article here called "Abstracting your renderer", or something quite similar to that.
The author sets up basic requirements and makes them abstract enough to be used with DirectX or OpenGL.
Basically he has an interface for the renderer, textures, and vertex buffers.

I built my new system kind of like this too. I made DX8 and DX9 render modules and have a half-working OpenGL module too. All of those reside in DLLs and can be switched at runtime.

I also added a basic Mesh class (for simple object loading and dynamic modification), which has functions like AddFace and can be used as the source for a vertex buffer. I avoided having a basic vertex structure because of the different vertex formats.

This renderer is just a rendering system for abstracting the API. There's no entity manager on top. If I decide to add one it will be a separate class; you shouldn't mix those two.

The rendering class has functions like initialize, loadtexture, loadfont, createvertexbuffer, and some simple display helpers for fast testing like renderquad and renderquad2d.

You might also want to take a look at existing renderers like OGRE or Irrlicht.
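A minimal sketch of that kind of abstraction, assuming an interface plus per-API implementations behind a factory (the class and function names here are made up, not Endurion's or the article's):

```cpp
#include <cassert>
#include <memory>
#include <string>

// One abstract interface, several API-specific implementations.
class IRenderer {
public:
    virtual ~IRenderer() = default;
    virtual std::string Name() const = 0;
    virtual void RenderQuad() = 0;  // simple display helper for fast testing
};

class DX9Renderer : public IRenderer {
public:
    std::string Name() const override { return "DX9"; }
    void RenderQuad() override { /* IDirect3DDevice9 calls would go here */ }
};

class OGLRenderer : public IRenderer {
public:
    std::string Name() const override { return "OpenGL"; }
    void RenderQuad() override { /* glBegin/glVertex calls would go here */ }
};

// In a DLL-based system like the one described, each implementation would
// live in its own module and be created through an exported factory function.
std::unique_ptr<IRenderer> CreateRenderer(const std::string& api) {
    if (api == "dx9") return std::make_unique<DX9Renderer>();
    return std::make_unique<OGLRenderer>();
}
```

Because callers only ever see IRenderer, swapping the module at runtime never touches game code.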

I have separate renderers for separate types of objects.
For example, I have a renderer for meshes, cameras, and particle engines.


                Graphics
                   |
         -------------------
         |                 |
   DX (for future)        OGL
                           |
                   ----------------
                   |    |    |    |
                  Mesh Cam Part Menu


What happens at start up:

- Graphics: read settings file
- Graphics: create OGL renderer
- OGL: create MD2, Cam, Particle, menu renderer
- Mesh: register with OGL
- Cam: register with OGL
- Particle: register with OGL
- Menu: register with OGL

...

- OGL: render loop
- OGL: fetch object
- OGL: get object render type
- OGL: call specialised renderer
- Renderer: do stuff

In practice the loop looks as follows:

// o is the next drawable; renderers[] maps render types to specialised
// renderers; oldRenderer tracks the active one so renderers are only
// (de)activated when the type changes; ms is the frame time.
while ((o = (IDrawableInterface*)getNext()))
{
    type = o->getRendertype();

    if (type < MAX_RENDERERS)
    {
        r = renderers[type];
        if (r)
        {
            if (oldRenderer != r)
            {
                if (oldRenderer)
                    oldRenderer->deactivate();
                oldRenderer = r;
                r->activate();
            }
            r->render(o, ms);
        }
    }
}



Every object is sorted by the renderer it needs. Cameras are sorted first, then meshes, particles, and menus.
I think this is quite a neat manner of doing this.

For every viewport I have, run this loop (viewports are stored in rendering order):
0. Run this viewport's shader.
1. Render the scene from the viewport's attached camera, using triangle IDs instead of textures.
2. Raytrace every pixel "owned" by this viewport. (The viewport stores a CPixel object for every pixel it owns to eliminate testing.) Use the interface's pixel mask to blend/assign the color of pixels touched by an interface object.
3. (NOT YET IMPLEMENTED) Update global illumination. I'm still thinking of ways to do this fast.

I know this isn't considered real-time, but it won't matter since my engine is designed for "future hardware." I am currently coding my viewport manager (for two-player split screen, little boxes with other info or graphics in them, rear-view mirrors, or anything else game designers see fit for a new viewport), and then I can test the rendering speed. I'll update my website when I have a working demo or screenshots. Right now it only has info on the networking side of my engine. (BTW, I will eventually code a distributed processing system to take some of the load off the server and client.)
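The per-viewport loop above might look like the following skeleton. Everything here (Viewport, RunShader, etc.) is an assumption based on the post's numbered steps, not the poster's actual code:

```cpp
#include <cassert>
#include <vector>

struct CPixel { int x, y; };

struct Viewport {
    std::vector<CPixel> ownedPixels;  // precomputed so no per-pixel ownership test is needed
    int pixelsTraced = 0;

    void RunShader() { /* step 0: viewport-wide shader pass */ }
    void RenderIds() { /* step 1: rasterize triangle IDs instead of textures */ }
    void Raytrace() {
        // Step 2: shade only the pixels this viewport owns.
        for (const CPixel& p : ownedPixels) {
            (void)p;
            ++pixelsTraced;
        }
    }
};

void RenderFrame(std::vector<Viewport>& viewports) {
    // Viewports are stored in rendering order (main view, mirrors, info boxes...).
    for (Viewport& v : viewports) {
        v.RunShader();
        v.RenderIds();
        v.Raytrace();
    }
}
```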

I've found it very nice to think of the whole thing as a pipeline, minimizing the connections between the objects. (OK, that wasn't anything new, I admit, but I find it handy. :))

Quote:
Original post by Ainokea
I am having trouble designing mine, so I wanted to see some examples to get some idea of how to make it. So please post yours.

*note* By renderer I mean the class/structure (whatever) in charge of rendering your game objects.


I have an abstract class CMesh that has a Draw() function; all 3D render objects are derived from it.

To render:
I have a class C3DDevice that wraps all the 3D functionality needed. All I do is call C3DDevice->Draw(CMesh *mesh). The reason behind it is that the device also polls the tri and vertex counts of the mesh and adds them to its private statistics.
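A sketch of that split might look like this. The exact statistics members and the CCube example are illustrative guesses, not the poster's code:

```cpp
#include <cassert>

class CMesh {
public:
    virtual ~CMesh() {}
    virtual void Draw() = 0;              // issues the actual API calls
    virtual int  TriCount() const = 0;
    virtual int  VertexCount() const = 0;
};

class C3DDevice {
public:
    void Draw(CMesh* mesh) {
        // Poll the mesh's counts before drawing so the device can keep
        // running per-frame statistics.
        m_tris  += mesh->TriCount();
        m_verts += mesh->VertexCount();
        mesh->Draw();
    }
    int Tris() const  { return m_tris; }
    int Verts() const { return m_verts; }
private:
    int m_tris = 0, m_verts = 0;
};

// Example derived mesh (a stand-in for a loaded model):
class CCube : public CMesh {
public:
    void Draw() override {}                          // API calls would go here
    int  TriCount() const override { return 12; }    // 6 faces * 2 triangles
    int  VertexCount() const override { return 8; }
};
```

The nice part is that statistics gathering stays in one place instead of being scattered across every mesh type.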

Guest Anonymous Poster
Quote:
Original post by Endurion
There's an article here called "abstracting your renderer" or quite similar to that.


Are you talking about this?

Here's a description of my rendering layer:

I use a material system, with an interface that exposes a list
of typed parameters. The renderer acts as a material instance
factory, and there's a separate XML DOM parser-let that can
populate a material with property values based on text input.
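A minimal sketch of a material exposing typed parameters, which a small XML parser-let could then populate from text. The Material class and method names here are assumptions for illustration, not hplus0603's actual interface:

```cpp
#include <cassert>
#include <map>
#include <string>

// A material is a bag of named, typed parameters (only floats shown here;
// a real version would also expose colors, textures, etc.).
class Material {
public:
    void SetFloat(const std::string& name, float v) { m_floats[name] = v; }

    float GetFloat(const std::string& name) const {
        auto it = m_floats.find(name);
        return it != m_floats.end() ? it->second : 0.0f;  // default when unset
    }
private:
    std::map<std::string, float> m_floats;
};
```

An XML parser-let would then walk something like `<param name="alpha" value="0.5"/>` nodes and call SetFloat for each, keeping text parsing entirely out of the renderer.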

The renderer also acts as a vertex- and index buffer allocator;
the users of the renderer can specify two usage patterns:
"changed very seldom" (for static geometry) and "probably changed
every frame" (for particle systems, CPU skinned geometry, etc).

Last, the renderer can take "transform state" which turns out to
be either a modelview matrix, or a chain of modelview matrices
(for posed skeletons).

The actual renderer call for rendering and state management is
ultra simple:


virtual void begin() = 0;

virtual void drawMeshWithMaterialAndState(
    I_VertexBuffer * vb, I_IndexBuffer * ib, I_Material * m,
    Matrix4 const * transform, size_t transformCount ) = 0;

virtual void present() = 0;


That's it! That's the only way you can render stuff in my
renderer. There are a bunch of separate classes that make certain
kinds of rendering easier, like there's something which
aggregates 2D quads (for UI) and, at the bottom, issues one or
more buffers to the renderer based on how many materials are
used.

There are global functions to allocate a C_DX9Renderer or a
C_OGLRenderer to get it all working; these global functions just
return the abstract I_Renderer interface, so the specific
implementation is hidden inside each subsystem. Each renderer
implementation sorts by material or vertex buffer or distance to
camera or whatever based on what works best for that
implementation. Gross transparency far-to-near sorting, and
occlusion near-to-far sorting, goes here, inside present().

Having to pre-declare the materials you are going to use (with
some tuneable variables such as alpha and color, or shader
uniforms) is a major performance and code clarity win compared to
systems that want to bang every little bit of the renderer state.

I'm working on offscreen render target support; it seems I'll
have to add another function to my renderer interface to allocate
a sub-render-target, and pass the specific render target to
begin(), although it's not all worked out yet.

Note that camera, culling, and a bunch of similar higher-order
functions go outside of the renderer; you can use a portal
system, a scene graph, an octree, or a plain linked list of
everything; that doesn't matter to the renderer!

Quote:
Original post by hplus0603
..
Last, the renderer can take "transform state" which turns out to
be either a modelview matrix, or a chain of modelview matrices
(for posed skeletons).

The actual renderer call for rendering and state management is
ultra simple:


virtual void begin() = 0;

virtual void drawMeshWithMaterialAndState(
    I_VertexBuffer * vb, I_IndexBuffer * ib, I_Material * m,
    Matrix4 const * transform, size_t transformCount ) = 0;

virtual void present() = 0;



Hmmm, interesting. Just a question, though: how do the multiple transform matrices help with skeletal animation (assuming that's what you meant)? Don't you also need the bone data for that? I'm currently doing mine in software, so the renderer doesn't know if it's drawing a model, the world, or a 2D UI. I'd like to move that over to hardware acceleration (vertex shaders, OGL extensions).

BTW, I too have only one function to render geometry and only one exit point out to the graphics card (in OGL that's a call to glDrawElements; everything goes through the same pipeline).
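One common reading of a "chain of matrices" is matrix-palette skinning: the bone data (palette indices and blend weights) lives in each vertex, while the renderer only receives the posed bone matrices. That is an assumption about hplus0603's scheme, not a quote from it. A toy sketch, using 1-D floats in place of real 4x4 matrices to keep the arithmetic obvious:

```cpp
#include <cassert>
#include <cstddef>

// The vertex carries the bone data; the palette carries the posed bones.
struct SkinnedVertex {
    float pos;
    int   bone[2];     // indices into the matrix palette, stored per vertex
    float weight[2];   // blend weights, summing to 1
};

// Transform the position by each referenced bone "matrix" and blend the
// results by weight -- the standard matrix-palette skinning formula.
float SkinPosition(const SkinnedVertex& v, const float* palette, std::size_t count) {
    float out = 0.0f;
    for (int i = 0; i < 2; ++i) {
        assert(static_cast<std::size_t>(v.bone[i]) < count);
        out += palette[v.bone[i]] * v.pos * v.weight[i];
    }
    return out;
}
```

So the renderer never needs to understand the skeleton; it just multiplies by whatever palette the animation system hands it.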

Why not define renderables first? How your render machine handles them is up to you. A surface with a material is all you need. A surface can be a poly, a curve, or even a point; the material, on the other hand, can be a multi-layered color map.
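That definition of a renderable boils down to a very small data structure. The type names and layer contents here are illustrative assumptions:

```cpp
#include <cassert>
#include <vector>

// A surface can be a polygon, a curve, or even a point.
enum class SurfaceType { Polygon, Curve, Point };

// The material can be a multi-layered color map.
struct MaterialLayer { int textureId; };

struct Material {
    std::vector<MaterialLayer> layers;
};

// A renderable is just a surface with a material; how the render
// machine consumes it is a separate concern.
struct Renderable {
    SurfaceType surface = SurfaceType::Polygon;
    Material    material;
};
```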

Quote:
Original post by Shadowdancer
AP above is me. Could someone please fix the farking login?

There was supposed to be a link in there.

Thanks, Shadow, that explains what I needed to know.

I'm no expert, but here's some info about my current renderer.

All rendering goes through the function Renderer::PushTriangles(). The renderer caches the triangles and flushes whenever a) the buffer is full, b) the state changes, or c) the projection or camera matrix changes.
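The cache-and-flush scheme could be sketched like this. The buffer capacity, the integer "state" handle, and the class layout are all assumptions for illustration:

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Triangle { float v[9]; };  // three xyz positions

class Renderer {
public:
    explicit Renderer(std::size_t capacity) : m_capacity(capacity) {}

    // Append triangles to the cache, flushing when the state changes or
    // the buffer fills (matrix changes would be handled the same way).
    void PushTriangles(const Triangle* tris, std::size_t n, int state) {
        if (state != m_state) { Flush(); m_state = state; }
        for (std::size_t i = 0; i < n; ++i) {
            if (m_buffer.size() == m_capacity) Flush();
            m_buffer.push_back(tris[i]);
        }
    }

    void Flush() {
        // One real draw call per non-empty flush.
        if (!m_buffer.empty()) { ++m_flushes; m_buffer.clear(); }
    }

    int Flushes() const { return m_flushes; }
private:
    std::vector<Triangle> m_buffer;
    std::size_t m_capacity;
    int m_state = 0;
    int m_flushes = 0;
};
```

The payoff is that many small PushTriangles() calls collapse into a handful of actual draw calls.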

Renderable objects inherit from Surface and are free to implement PushTriangles() (which calls Renderer::PushTriangles()) in any way they see fit. For example, a Billboard might orient its verts toward the camera first, or a Bezier patch might choose an appropriate level of detail.

The renderer uses OpenGL, and as I have no experience with DirectX, I haven't tried to abstract the renderer to support it. I may or may not do that in the future.

The renderer currently supports a Quake 3-style shader system. I have an older graphics card (GeForce 2) that AFAIK doesn't support vertex or pixel shaders, so I don't have any experience with those. I do hope to add them in the future though.

Quote:
Original post by Shadowdancer
AP above is me. Could someone please fix the farking login?

There was supposed to be a link in there.


Yes, that's the one. Sorry about not linking; I was in a rush.

I abstract my pipeline system into several different rendering entities:

* Buffer - stores an arbitrary buffer descriptor with stride, size, and so on.
* Mesh - contains an array of buffers, each of which maps to a mesh attribute (e.g. vertex, colour, texcoord0, attribute0, ...)

Each mesh will be rendered via a 'pipeline'. There is a general pipeline which handles most rendering, but the idea is that you can define as many pipelines as you want - each to handle a different type of rendering if you need it.

The general pipeline is broken down into 3 phases
* mesh instancing
* material submission
* mesh submission

Mesh instancing uploads or caches the mesh into GPU memory if it can. It skips this phase if the mesh buffers are already cached or the mesh submission mode is non-cacheable (e.g. immediate mode).

Material submission uploads material attributes, vertex shaders, pixel shaders and shader environment parameters in the form of lights, skinning meshes, blending weights or anything else that is required by the shaders.

Mesh submission is the simple transmission of the mesh via any submission path I choose (eg. immediate mode, vertex array (with/without range), vbo etc). If the mesh is already cached, this is simply a request to render the cached geometry and will involve no DMA overhead.
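The three phases above might be sketched like this. The caching check and all names are illustrative assumptions, not the poster's generated code:

```cpp
#include <cassert>
#include <set>

struct Mesh     { int id; };
struct Material { int id; };

class Pipeline {
public:
    void Render(const Mesh& mesh, const Material& mat) {
        // Phase 1: mesh instancing -- upload only if not already cached
        // in GPU memory (insert() reports whether the id was new).
        if (m_cached.insert(mesh.id).second) ++m_uploads;

        // Phase 2: material submission (shaders, lights, parameters).
        (void)mat;
        ++m_materialSubmits;

        // Phase 3: mesh submission -- cached geometry is just a request
        // to render, with no DMA overhead.
        ++m_meshSubmits;
    }
    int Uploads() const     { return m_uploads; }
    int MeshSubmits() const { return m_meshSubmits; }
private:
    std::set<int> m_cached;
    int m_uploads = 0, m_materialSubmits = 0, m_meshSubmits = 0;
};
```

Rendering the same mesh repeatedly then costs one upload but many cheap submissions.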

Phases 1 and 3 are achieved via auto-generated source code that generates hundreds of functions (I call them 'submission paths') for specially submitting and instancing the different combinations of mesh attributes from the abstract buffers.

The entire pipeline is callback based, so it is theoretically possible to completely overload any phase of the operation. Additionally, you can add extra 'nodes' to the pipeline callback chain, just in case you need to handle something at a later phase.

I also don't bother with abstract rendering interfaces (e.g. DX, OpenGL, S/W) - I find it hard enough maintaining one while doing my best to keep it clean. My advice is to stick with one API... OpenGL is good enough for me for now - just choose one for yourself. :)

My renderer uses a vertex cache system; it implements DrawPrimitive, but with some modifications. It takes the textures, world matrix, and shader info as well. Then it sorts the calls into as few actual DrawPrimitive calls as possible. After the user calls Presort(), they can call Render(). This method allows for cross-platform support while removing excess calls to DrawPrimitive, and it also deals with loading duplicate textures and shaders. Mine is kind of hairy right now, but I'm hoping to improve upon it soon.
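The presort idea could look like the following sketch: queue draw requests, sort by state, then walk the queue so identical consecutive states share one real draw call. The class name, the two-key sort, and the counting are assumptions for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

struct DrawCall { int texture; int shader; };

class VertexCacheRenderer {
public:
    void Queue(const DrawCall& c) { m_calls.push_back(c); }

    // Sort queued calls by texture, then shader, so equal states end up adjacent.
    void Presort() {
        std::sort(m_calls.begin(), m_calls.end(),
                  [](const DrawCall& a, const DrawCall& b) {
                      if (a.texture != b.texture) return a.texture < b.texture;
                      return a.shader < b.shader;
                  });
    }

    // Walk the sorted queue; each state break costs one actual DrawPrimitive.
    int Render() {
        int draws = 0;
        for (std::size_t i = 0; i < m_calls.size(); ++i) {
            if (i == 0 || m_calls[i].texture != m_calls[i - 1].texture ||
                m_calls[i].shader != m_calls[i - 1].shader)
                ++draws;
        }
        m_calls.clear();
        return draws;
    }
private:
    std::vector<DrawCall> m_calls;
};
```

Four interleaved calls using two textures collapse into two actual draws after sorting.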

