Data-driven renderer


I have a question regarding data-driven renderer design, and maybe someone can enlighten me:

From what I've read, the renderer design shouldn't make any assumptions about which rendering techniques will be used, so in theory it should be possible to do forward/deferred rendering, cascaded shadow mapping, whatever.

The problem is: how would something like cascaded shadow mapping be defined in simple data? The view frustum has to be split, matrices updated, and all (visible) scene objects drawn for each cascade...
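To be concrete, this is the kind of native-code logic I mean. A minimal sketch (hypothetical function, using the common blend of uniform and logarithmic split schemes):

[code]
#include <cmath>
#include <vector>

// Hypothetical helper: compute the far plane of each cascade between the
// camera's near and far planes, blending the uniform and logarithmic split
// schemes (lambda = 0 -> fully uniform, lambda = 1 -> fully logarithmic).
std::vector<float> computeCascadeSplits(float nearZ, float farZ,
                                        int numCascades, float lambda)
{
    std::vector<float> splits(numCascades);
    for (int i = 1; i <= numCascades; ++i)
    {
        float t           = float(i) / float(numCascades);
        float uniformTerm = nearZ + (farZ - nearZ) * t;
        float logTerm     = nearZ * std::pow(farZ / nearZ, t);
        splits[i - 1]     = lambda * logTerm + (1.0f - lambda) * uniformTerm;
    }
    return splits;
}
[/code]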

This is rather complex to define in XML or whatever format is used... Or is a scripting language used to program this? And is the renderer called data-driven because it uses scripts instead of native code?

By the way, has anyone read GPU Pro 3? Is the chapter about data-driven renderer design any good? I've read the pages available on Google Books, which is most of the article, but I'm wondering if there's any implementation code.

You could break it up using a command list (or a JSON file, like BitSquid does) where you specify the steps in each pass.

You'd split the frustum on the client side, creating a camera for each split. Then you'd build your passes. This could optionally be done on the engine side, but that would limit what the client could do.

For the first pass you'd provide the first frustum in the cascade, specify the RTs and DSTs to draw into and the RTs and DSTs to use as inputs, and a list of geometry. Then you'd create more passes for each subsequent view frustum, rendering into the same target, different targets, or whatnot.
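A rough sketch of what those pass descriptions might look like once deserialized from the data file (all names made up):

[code]
#include <cstdint>
#include <string>
#include <vector>

// Hypothetical data describing one pass of the frame. In a data-driven
// renderer this struct would typically be filled in from a JSON/XML file
// rather than hard-coded.
struct RenderPassDesc
{
    std::string              name;          // e.g. "shadow_cascade_0"
    uint32_t                 cameraId;      // which frustum/camera to render from
    std::vector<std::string> outputTargets; // RTs / DSTs to draw into
    std::vector<std::string> inputTargets;  // RTs / DSTs bound as inputs
    std::vector<uint32_t>    geometryList;  // ids of the (culled) objects to draw
};

// Client-side assembly of the cascaded-shadow-map passes: one camera per
// frustum split, each rendering into its own slice of the shadow map.
std::vector<RenderPassDesc> buildShadowPasses(const std::vector<uint32_t>& cascadeCameras)
{
    std::vector<RenderPassDesc> passes;
    for (size_t i = 0; i < cascadeCameras.size(); ++i)
    {
        RenderPassDesc pass;
        pass.name          = "shadow_cascade_" + std::to_string(i);
        pass.cameraId      = cascadeCameras[i];
        pass.outputTargets = { "shadow_map_slice_" + std::to_string(i) };
        // pass.geometryList would be filled from per-cascade visibility culling.
        passes.push_back(pass);
    }
    return passes;
}
[/code]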

[quote name='phantom' timestamp='1345577183' post='4971938']
[url="http://bitsquid.se/presentations/flexible-rendering-multiple-platforms.pdf"]http://bitsquid.se/p...e-platforms.pdf[/url]

Have a read of that; it goes into some detail about how it can be done.
(We do something very much like it at work, but ours is Lua-based and does some things (subjectively) better and some worse... the 'generators' concept is one we'd like to introduce, for example.)
[/quote]

I've already read that, but they only show that simple fullscreen effect...

Well, while reading it again: "A Modifier can be as simple as a callback function provided with knowledge of when in the frame to render", so they do use scripts in modifiers.
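So something like this minimal sketch, I guess (made-up names, not the actual BitSquid API): callbacks keyed by a named point in the frame, invoked when the renderer reaches that point.

[code]
#include <functional>
#include <map>
#include <string>

struct RenderContext; // stand-in for whatever state the renderer exposes

// Hypothetical modifier registry: callbacks keyed by the named point in the
// frame at which they should run (e.g. "before_post_fx").
class ModifierRegistry
{
public:
    using Modifier = std::function<void(RenderContext&)>;

    void add(const std::string& hookPoint, Modifier fn)
    {
        mModifiers.emplace(hookPoint, std::move(fn));
    }

    // Called by the renderer when it reaches the given point in the frame.
    void run(const std::string& hookPoint, RenderContext& ctx) const
    {
        auto range = mModifiers.equal_range(hookPoint);
        for (auto it = range.first; it != range.second; ++it)
            it->second(ctx);
    }

private:
    std::multimap<std::string, Modifier> mModifiers;
};
[/code]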

Talking about the BitSquid architecture: why do you think they don't use generators to draw objects? I think it's weird to have that "special case"...

The approach my engine takes is to assume that every project's renderer will have different requirements: performance, features, forward vs. deferred, etc. Thus, my engine doesn't consist of a main 'renderer', but just defines a set of classes that make common things like object batching, sorting, culling, shader management, and shader attribute binding really easy. I do provide a basic forward renderer and deferred renderer, but most games would probably want to design something more specific to their needs. The code for each renderer is less than 1000 lines and mostly consists of code that generates standard matrix attributes (model/view/modelview/projection/normal matrices).

So, my engine has a system where each shader attribute is given a usage binding (position, normal, texture coordinate, etc.), plus an optional value. If the value is not set, the renderer automatically looks in a cache of global shader variables for things like viewing matrices, and then also inspects each object's vertex buffer to see if there are any per-vertex attributes that the shader needs.

This system allows pretty much anything you can do with base OpenGL, but with automatic shader attribute management and a data-driven design. The engine as a whole operates on 'mesh chunks': a simple structure containing pointers to a vertex buffer, material object, and index buffer. These chunks are culled and sorted individually.
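Roughly, a sketch of the idea (hypothetical names, not the engine's actual code):

[code]
// Forward declarations standing in for the engine's real types.
struct VertexBuffer;
struct IndexBuffer;
struct Material; // shader + attribute usage bindings

// Hypothetical mirror of the 'mesh chunk' described above: the smallest unit
// that gets culled, sorted, and submitted to the GPU.
struct MeshChunk
{
    const VertexBuffer* vertices = nullptr;
    const IndexBuffer*  indices  = nullptr; // null for non-indexed draws
    const Material*     material = nullptr;
    float               sortKey  = 0.0f;    // e.g. view-space depth or state hash
};

// Attribute resolution order described above, for an attribute with a usage
// binding but no explicit value:
//   1) the cache of global shader variables (viewing matrices, etc.),
//   2) per-vertex data found in the chunk's vertex buffer.
[/code]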

I had thought about doing a command-based rendering system, but that ended up being really overdesigned and inefficient, and it necessitated some state at the renderer level, which is something I want to avoid. I suppose I still have rendering 'commands', but they are just mesh chunks.

Nice one, Hodgman: a good example of how graphs (as a structure/design) can be very powerful and yet so flexible.

Beware of over-engineering.
Example: rendering a shadow map.
My current approach is to be very high-level, with scripts requesting whole "light casting shadow" resources. This is somewhat flexible, but not nearly as much as driving the renderer directly (allocate a depth map, use it as a depth render target from projector X, bind it back to the shaders, render the world...). My scripts don't have access to the renderer and probably won't for quite a while. I'm just throwing it in for your consideration. Although testing has been pretty scarce on my side, I'm already having plenty of trouble, so I'm focusing on the core features I need.
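To illustrate the difference in granularity between the two levels (a completely hypothetical interface, just for contrast):

[code]
struct Light;
struct DepthMap;

// Entirely hypothetical renderer interface.
struct Renderer
{
    virtual void      requestShadowCastingLight(Light& l)          = 0;
    virtual DepthMap* allocateDepthMap(int width, int height)      = 0;
    virtual void      setDepthRenderTarget(DepthMap* m)            = 0;
    virtual void      setProjector(const Light& l)                 = 0;
    virtual void      renderWorld()                                = 0;
    virtual void      bindToShaders(const char* name, DepthMap* m) = 0;
};

// High level: the script asks for a whole resource; the engine decides how
// the shadow map gets allocated and rendered.
void highLevel(Renderer& r, Light& light)
{
    r.requestShadowCastingLight(light);
}

// Low level: the script drives the renderer step by step.
void lowLevel(Renderer& r, Light& light)
{
    DepthMap* map = r.allocateDepthMap(2048, 2048);
    r.setDepthRenderTarget(map);        // render into the depth map...
    r.setProjector(light);              // ...from the light's point of view
    r.renderWorld();                    // draw the shadow casters
    r.bindToShaders("shadow_map", map); // expose it to subsequent passes
}
[/code]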
