How does your renderer work?

17 comments, last by HippieHunter 19 years, 6 months ago
AP above is me. Could someone please fix the farking login?

There was supposed to be a link in there.
Here's a description of my rendering layer:

I use a material system, with an interface that exposes a list
of typed parameters. The renderer acts as a material instance
factory, and there's a separate XML DOM parser-let that can
populate a material with property values based on text input.
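A minimal sketch of what such a typed-parameter material might look like (the class and method names here are hypothetical, not hplus0603's actual code; a real system would also support textures, vectors, and matrices):

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical material with typed, named parameters. An XML
// parser-let could walk a DOM and call setFloat() for each entry.
class Material {
public:
    void setFloat(const std::string& name, float value) {
        floats_[name] = value;
    }
    // Returns the stored value, or a fallback if the parameter is unset.
    float getFloat(const std::string& name, float fallback = 0.0f) const {
        std::map<std::string, float>::const_iterator it = floats_.find(name);
        return it != floats_.end() ? it->second : fallback;
    }
private:
    std::map<std::string, float> floats_;
};
```

The point of the typed interface is that the renderer can validate and sort by parameters without knowing what they mean.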

The renderer also acts as a vertex- and index buffer allocator;
the users of the renderer can specify two usage patterns:
"changed very seldom" (for static geometry) and "probably changed
every frame" (for particle systems, CPU skinned geometry, etc).

Last, the renderer can take "transform state" which turns out to
be either a modelview matrix, or a chain of modelview matrices
(for posed skeletons).

The actual renderer call for rendering and state management is
ultra simple:

  virtual void begin() = 0;
  virtual void drawMeshWithMaterialAndState(
      I_VertexBuffer * vb, I_IndexBuffer * ib, I_Material * m,
      Matrix4 const * transform, size_t transformCount ) = 0;
  virtual void present() = 0;


That's it! That's the only way you can render stuff in my
renderer. There are a bunch of separate classes that make certain
kinds of rendering easier, like there's something which
aggregates 2D quads (for UI) and, at the bottom, issues one or
more buffers to the renderer based on how many materials are
used.

There are global functions to allocate a C_DX9Renderer or a
C_OGLRenderer to get it all working; these global functions just
return the abstract I_Renderer interface, so the specific
implementation is hidden inside each subsystem. Each renderer
implementation sorts by material or vertex buffer or distance to
camera or whatever based on what works best for that
implementation. Gross transparency far-to-near sorting, and
occlusion near-to-far sorting, goes here, inside present().
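The factory arrangement described above can be sketched like this (the class bodies here are illustrative; the real C_DX9Renderer/C_OGLRenderer definitions would live hidden in their own translation units):

```cpp
#include <cassert>
#include <memory>
#include <string>

// Abstract interface the rest of the engine sees.
class I_Renderer {
public:
    virtual ~I_Renderer() {}
    virtual std::string name() const = 0;
};

// Concrete implementation; normally defined only inside the GL subsystem.
class C_OGLRenderer : public I_Renderer {
public:
    std::string name() const override { return "OpenGL"; }
};

// Global factory function: callers only ever see I_Renderer.
std::unique_ptr<I_Renderer> createOGLRenderer() {
    return std::unique_ptr<I_Renderer>(new C_OGLRenderer());
}
```

Because callers never name the concrete type, each implementation is free to sort and batch however suits its API best.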

Having to pre-declare the materials you are going to use (with
some tuneable variables such as alpha and color, or shader
uniforms) is a major performance and code clarity win compared to
systems that want to bang every little bit of the renderer state.

I'm working on offscreen render target support; it seems I'll
have to add another function to my renderer interface to allocate
a sub-render-target, and pass the specific render target to
begin(), although it's not all worked out yet.

Note that camera, culling, and a bunch of similar higher-order
functions go outside of the renderer; you can use a portal
system, a scene graph, an octree, or a plain linked list of
everything; that doesn't matter to the renderer!
enum Bool { True, False, FileNotFound };
Quote:Original post by hplus0603
..
Last, the renderer can take "transform state" which turns out to
be either a modelview matrix, or a chain of modelview matrices
(for posed skeletons).

The actual renderer call for rendering and state management is
ultra simple:

  virtual void begin() = 0;
  virtual void drawMeshWithMaterialAndState(
      I_VertexBuffer * vb, I_IndexBuffer * ib, I_Material * m,
      Matrix4 const * transform, size_t transformCount ) = 0;
  virtual void present() = 0;



Hmm, interesting. Just a question, though: how do the multiple transform matrices help for skeletal animation (assuming that's what you meant)? Don't you also need the bone data for that? I'm currently doing mine in software, so the renderer doesn't know if it's drawing a model, the world, or a 2D UI. I'd like to move that to hardware acceleration (vertex shaders, OGL extensions).

BTW, I too have only one function to render geometry and only one exit point out to the graphics card (in OGL that's a call to glDrawElements; everything goes through the same pipeline).
Why not define renderables first? How your render machine handles them is up to you. A surface with a material is all you need. A surface can be a poly or a curve or even a point; the material, on the other hand, can be a multi-layered color map.
Abnormal behaviour of abnormal brain makes me normal...www.zootfly.com
Quote:Original post by Shadowdancer
AP above is me. Could someone please fix the farking login?

There was supposed to be a link in there.

Thanks, Shadow, that explains what I needed to know.
______________________________________________________________________________________With the flesh of a cow.
I'm no expert, but here's some info about my current renderer.

All rendering goes through the function Renderer::PushTriangles(). The renderer caches the triangles and flushes whenever a) the buffer is full, b) the state changes, or c) the projection or camera matrix changes.

Renderable objects inherit from Surface and are free to implement PushTriangles() (which calls Renderer::PushTriangles()) in any way they see fit. For example, a Billboard might orient its verts toward the camera first, or a Bezier patch might choose an appropriate level of detail.
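The cache-and-flush behavior described above might look roughly like this (a sketch with invented names; the real flush would submit the batch through OpenGL):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

struct Triangle { float v[9]; };  // three xyz vertices

// Hypothetical batching renderer: triangles accumulate in a cache and
// are flushed when the buffer fills or the render state changes.
class Renderer {
public:
    explicit Renderer(std::size_t capacity) : capacity_(capacity), state_(0) {}

    void PushTriangles(const std::vector<Triangle>& tris, int state) {
        if (state != state_ || cache_.size() + tris.size() > capacity_)
            Flush();
        state_ = state;
        cache_.insert(cache_.end(), tris.begin(), tris.end());
    }

    void Flush() {
        // Here the real renderer would submit cache_ to the GPU
        // (e.g. via glDrawElements), then clear it.
        if (!cache_.empty())
            ++flushCount_;
        cache_.clear();
    }

    int flushCount() const { return flushCount_; }

private:
    std::size_t capacity_;
    int state_;
    std::vector<Triangle> cache_;
    int flushCount_ = 0;
};
```

A state change mid-frame forces a flush of whatever was cached, which is why sorting by state before submission pays off.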

The renderer uses OpenGL, and as I have no experience with DirectX, I haven't tried to abstract the renderer to support it. I may or may not do that in the future.

The renderer currently supports a Quake 3-style shader system. I have an older graphics card (GeForce 2) that AFAIK doesn't support vertex or pixel shaders, so I don't have any experience with those. I do hope to add them in the future though.
Quote:Original post by Shadowdancer
AP above is me. Could someone please fix the farking login?

There was supposed to be a link in there.


Yes, that's the one. Sorry about not linking, i was in a rush.

Fruny: Ftagn! Ia! Ia! std::time_put_byname! Mglui naflftagn std::codecvt eY'ha-nthlei!,char,mbstate_t>

I abstract my pipeline system into several different rendering entities

* Buffer - stores an arbitrary buffer descriptor, with stride, size, and related information.
* Mesh - contains an array of buffers, each of which maps to a mesh attribute (e.g. vertex, colour, texcoord0, attribute0, ...)

Each mesh will be rendered via a 'pipeline'. There is a general pipeline which handles most rendering, but the idea is that you can define as many pipelines as you want - each to handle a different type of rendering if you need it.

The general pipeline is broken down into 3 phases
* mesh instancing
* material submission
* mesh submission

Mesh instancing uploads or caches the mesh into GPU memory if it can. It skips this phase if the mesh buffers are already cached or the mesh submission mode is non-cacheable (e.g. immediate mode).

Material submission uploads material attributes, vertex shaders, pixel shaders and shader environment parameters in the form of lights, skinning meshes, blending weights or anything else that is required by the shaders.

Mesh submission is the simple transmission of the mesh via any submission path I choose (e.g. immediate mode, vertex arrays (with/without range), VBOs, etc.). If the mesh is already cached, this is simply a request to render the cached geometry and involves no DMA overhead.
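The three phases can be sketched in miniature like this (all names here are hypothetical stand-ins for the poster's auto-generated submission paths):

```cpp
#include <cassert>
#include <set>
#include <string>

struct Mesh { std::string id; };
struct PipelineStats { int uploads = 0; int draws = 0; };

// Hypothetical three-phase pipeline: instancing uploads a mesh only on
// first sight, then material and mesh submission run every call.
class Pipeline {
public:
    void render(const Mesh& mesh, PipelineStats& stats) {
        // Phase 1: mesh instancing -- upload only if not already cached.
        if (cached_.insert(mesh.id).second)
            ++stats.uploads;
        // Phase 2: material submission (shaders, uniforms) would go here.
        // Phase 3: mesh submission -- draw the now-cached geometry.
        ++stats.draws;
    }
private:
    std::set<std::string> cached_;  // ids of meshes already on the GPU
};
```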

Phases 1 and 3 are achieved via auto-generated source code that generates hundreds of functions (I call them 'submission paths') for specially submitting and instancing the different combinations of mesh attributes from the abstract buffers.

The entire pipeline is callback based, so it is theoretically possible to completely overload any phase of the operation; additionally, you can add extra 'nodes' to the pipeline callback chain, just in case you need to handle something at a later phase.

I also don't bother with abstract rendering interfaces (e.g. DX, OpenGL, S/W) - I find it hard enough maintaining one while doing my best to keep it clean. My advice is to stick with one API... OpenGL is good enough for me for now - just choose one for yourself. :)

do unto others... and then run like hell.
My renderer uses a vertex cache system. It implements DrawPrimitive, but with some modifications: it takes the textures, world matrix, and shader info as well, then sorts the calls into as few actual DP calls as possible. After the user calls Presort(), they can call render(). This approach allows for cross-platform support while removing excess DrawPrimitive calls, and it also deals with loading duplicate textures and shaders. Mine is kind of hairy right now, but I'm hoping to improve upon it soon.
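The presort step described above amounts to sorting queued draw requests by state key so that runs with identical state collapse into one call. A sketch (function and struct names are invented for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <cstddef>
#include <vector>

// Hypothetical queued draw request carrying the state it needs.
struct DrawRequest { int textureId; int shaderId; };

// Sorts the queue by (shader, texture) and counts how many
// DrawPrimitive-style batches remain after merging identical runs.
int countBatchesAfterSort(std::vector<DrawRequest> queue) {
    std::sort(queue.begin(), queue.end(),
              [](const DrawRequest& a, const DrawRequest& b) {
                  if (a.shaderId != b.shaderId) return a.shaderId < b.shaderId;
                  return a.textureId < b.textureId;
              });
    int batches = 0;
    for (std::size_t i = 0; i < queue.size(); ++i)
        if (i == 0 || queue[i].shaderId != queue[i - 1].shaderId
                   || queue[i].textureId != queue[i - 1].textureId)
            ++batches;
    return batches;
}
```

Three requests using only two distinct states merge into two batches, regardless of submission order.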

