Data-driven renderer

I have a question regarding data-driven renderer design, and maybe someone can enlighten me:

From what I've read, the renderer design shouldn't make any assumptions about which rendering techniques will be used, so theoretically it should be possible to do forward/deferred rendering, cascaded shadow mapping, whatever.

The problem is: how will something like cascaded shadow mapping be defined in simple data? The view frustum has to be split, matrices updated, and all (visible) scene objects drawn for each cascade...

This is rather complex to define in XML or whatever format is used... Or is a scripting language used to program this? And is the renderer called data-driven because it uses scripts instead of native code?

By the way, has anyone read GPU Pro 3? Is the chapter about data-driven renderer design any good? I've read the pages available on Google Books, which is most of the article, but I'm wondering if there's any implementation code.
http://bitsquid.se/presentations/flexible-rendering-multiple-platforms.pdf

Have a read of that, it goes into some details about how it can be done.
(We do something very much like it at work, but ours is Lua-based and does some things (subjectively) better and some worse... the 'generators' concept is one we'd like to introduce, for example.)
You could break it up using a command list (or a JSON file, like BitSquid does) where you specify the steps in each pass.

You'd split the frustum on the client side, creating a camera for each slice. Then you'd build your passes. This could optionally be done on the engine side, but that would limit what the client could do.

For the first pass you'd provide the first frustum in the cascade, specify the RTs (render targets) and DSTs (depth-stencil targets) to draw into and the RTs and DSTs to use as inputs, and supply a list of geometry. Then you'd create more passes for each subsequent view frustum, rendering into the same target, different targets, or whatnot.
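
To make that concrete, here's a minimal sketch of what such a pass description could look like as a C++ struct; all the names (RenderPass, TargetHandle, etc.) are made up for illustration, not taken from BitSquid or any particular engine:

[code]
#include <string>
#include <vector>

// Hypothetical handles into engine-side resource pools.
using TargetHandle   = int;  // render target or depth-stencil target
using CameraHandle   = int;  // a frustum/view, one per shadow cascade
using GeometryListId = int;  // list of visible objects for that camera

// One step of the frame: which camera to use, what to read,
// what to write, and which geometry to submit.
struct RenderPass {
    std::string               name;         // e.g. "shadow_cascade_0"
    CameraHandle              camera;
    std::vector<TargetHandle> colorOutputs; // RTs to draw into
    TargetHandle              depthOutput;  // DST to draw into
    std::vector<TargetHandle> inputs;       // RTs/DSTs bound as textures
    GeometryListId            geometry;
};

// A frame is then just an ordered list of passes, which can be filled
// in from JSON/Lua/XML instead of being hard-coded.
using FrameDescription = std::vector<RenderPass>;
[/code]

Cascaded shadow mapping then falls out naturally: split the frustum, make one camera per slice, and emit one RenderPass per cascade, each writing into its own shadow target (or the same one).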

[quote]
http://bitsquid.se/presentations/flexible-rendering-multiple-platforms.pdf

Have a read of that, it goes into some details about how it can be done.
[/quote]


I've already read that, but they only show that simple fullscreen effect...

Well, while reading it again I noticed: "A Modifier can be as simple as a callback function provided with knowledge of when in the frame to render", so they do use scripts in modifiers.

Talking about the BitSquid architecture: why do you think they don't use generators to draw objects? I think it's weird to have that "special case"...
The approach my engine takes is to assume that every project's renderer will have different requirements: performance, features, forward vs. deferred, etc. Thus, my engine doesn't consist of a main 'renderer', but just defines a set of classes that make doing common things like object batching, sorting, culling, shader management, and shader attribute binding really easy. I do provide a basic forward renderer and deferred renderer, but most games would probably want to design something that is more specific to their needs. The code for each renderer is less than 1000 lines and mostly consists of code that generates standard matrix attributes (model/view/modelview/projection/normal matrices).

So, my engine has a system where each shader attribute is given a usage binding (position, normal, texture coordinate, etc.), plus an optional value. If the value is not set, the renderer automatically looks in a cache of global shader variables for things like viewing matrices, and then also inspects each object's vertex buffer to see if there are any per-vertex attributes that the shader needs.
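
That lookup order could be sketched roughly like this (a simplified illustration, not my actual code; the names are made up):

[code]
#include <optional>
#include <unordered_map>

// Hypothetical usage bindings for shader attributes.
enum class Usage { Position, Normal, TexCoord0, ViewMatrix, ProjMatrix };

struct Value { /* matrix, vector, etc. elided for brevity */ };

// Resolve one shader attribute, in priority order:
// 1. an explicit value set on the attribute itself,
// 2. the global shader-variable cache (viewing matrices and the like),
// 3. a matching per-vertex channel in the object's vertex buffer.
std::optional<Value> resolve(
    Usage usage,
    const std::unordered_map<Usage, Value>& explicitValues,
    const std::unordered_map<Usage, Value>& globalCache,
    const std::unordered_map<Usage, Value>& vertexChannels)
{
    if (auto it = explicitValues.find(usage); it != explicitValues.end())
        return it->second;
    if (auto it = globalCache.find(usage); it != globalCache.end())
        return it->second;
    if (auto it = vertexChannels.find(usage); it != vertexChannels.end())
        return it->second;
    return std::nullopt; // unresolved: report a binding error
}
[/code]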

This system allows pretty much anything that you can do with base OpenGL but with automatic shader attribute management and a data-driven design. The engine as a whole operates on 'mesh chunks', a simple structure containing a pointer to a vertex buffer, material object, and index buffer. These chunks are culled and sorted individually.
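
In other words, a mesh chunk is about as small as a draw submission can get; a sketch (again, names made up):

[code]
struct VertexBuffer;  // GPU resources defined elsewhere in the engine
struct IndexBuffer;
struct Material;

// The unit the renderer culls and sorts individually: just the three
// things needed to issue a draw call, with no scene-graph coupling.
struct MeshChunk {
    const VertexBuffer* vertices;
    const IndexBuffer*  indices;
    const Material*     material;
};
[/code]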

I had thought about doing a command-based rendering system, but that ended up being really overdesigned, inefficient, and necessitated some state at the renderer level - something that I want to avoid. I suppose I still have rendering 'commands', but they are just mesh chunks.

[quote]From what I've read, the renderer design shouldn't make any assumptions about which rendering techniques will be used, so theoretically it should be possible to do forward/deferred rendering, cascaded shadow mapping, whatever.[/quote]
This is a separate issue -- your renderer should be constructed in a few tiers/layers. The lower layer deals with abstracting resources, states, and commands across platforms. On top of this layer, you should be able to implement any kind of rendering technique. The game developer should be able to make use of this layer if they need to, but will by default use a higher-level layer.
The game's specific 'high level' renderer is built upon the lower layer, and this is where specific rendering techniques and pipelines will be implemented.
Regardless of whether the higher-level is data-driven or not, it should still have a clear separation from the lower-level (i.e. this level might be hard-coded in C++, or might be described with XML, whatever).
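
As a rough sketch of that separation (hypothetical classes, not any particular engine's API):

[code]
// Lower layer: platform abstraction. Knows about resources, states and
// draw submission, but nothing about forward vs. deferred, shadows, etc.
class RenderDevice {
public:
    virtual ~RenderDevice() = default;
    virtual int  createTarget(int width, int height)           = 0;
    virtual void setTarget(int target)                         = 0;
    virtual void setState(int stateBlock)                      = 0;
    virtual void draw(int vertexBuf, int indexBuf, int shader) = 0;
};

// Higher layer: one specific technique, built entirely on the layer
// below. This is the part that can be hard-coded C++ or described
// with XML/Lua/JSON data.
class DeferredRenderer {
public:
    explicit DeferredRenderer(RenderDevice& device) : device_(device) {}
    void renderFrame() {
        // G-buffer pass, lighting pass, post-processing... all expressed
        // purely as calls into the RenderDevice interface.
    }
private:
    RenderDevice& device_;
};
[/code]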
[quote]The problem is how will something like cascaded shadow mapping be defined in simple data? The view frustum has to be split, matrices updated, and all (visible) scene objects drawn for each cascade...

This is rather complex to define in XML or whatever format is used... Or is a scripting language used to program this? And is the renderer called data-driven because it uses scripts instead of native code?[/quote]
IMHO, if you don't have to recompile the EXE/DLL/etc. to make a change, then it's "data driven"... so this definition includes scripts.
N.B. Lua in particular is often used as a data description language as well as a general scripting language -- many Lua files just contain tables of information and no code, which makes them almost the same as JSON/XML/etc. This kind of Lua code is definitely "data".

As for describing complex rendering pipelines in data, here's the "concept art" from my engine's data-driven render pipeline viewer:
[Image: node-graph of the data-driven render pipeline, as described below]
Dashed & rounded boxes are objects belonging to the game, which it has "exported" to the data-driven renderer by name.
The solid & rounded boxes are temporary data buffers generated during the pipeline -- e.g. the frustum culling function generates an array of indices of visible models.
The dashed rectangle named "Render" is a named entry-point into the data-driven file, called by the game.
Solid rectangles are hard-coded functions that have been exported to the data-driven renderer.
The yellow plugs on top are the function arguments -- e.g. "Draw Models" takes a pool of model instances, an array of indices into that pool, and a render-target to draw to. (The camera should also be plugged into "Draw Models" as an input, but it's missing from this diagram...)
The yellow plugs below are the function return values -- "Draw Models" returns the render-target that it drew to.
The dashed green plugs show the inferred flow of control when this graph is compiled into a linear sequence of commands.
The large grey box denotes a data-driven function, made up of a "sub-graph", which you can see inside of (a bloom post-process effect).

The above diagram could be represented in XML, Lua, hard-coded C++, text, etc... as it's just a big bag of nodes and links.
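
In hard-coded C++ form, for example, that "bag of nodes and links" could be as simple as (illustrative names only):

[code]
#include <string>
#include <vector>

// One node in the pipeline graph: either a hard-coded function that has
// been exported to the data-driven renderer, or a data-driven sub-graph.
struct Node {
    std::string name;        // e.g. "Frustum Cull", "Draw Models"
    int         inputCount;  // plugs on top: function arguments
    int         outputCount; // plugs below: function return values
};

// One link: an output plug of one node feeding an input plug of another.
struct Link {
    int fromNode, fromOutput;
    int toNode,   toInput;
};

struct PipelineGraph {
    std::vector<Node> nodes;
    std::vector<Link> links;
    int entryNode;  // e.g. the "Render" entry-point called by the game
};
[/code]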
Another cool feature of this is that you can actually inspect the pipeline data and determine your render-target pool requirements. E.g. the above pipe requires a maximum of one "standard" target and two "1/4th standard" targets at any one time. As you add more complex post-process stages, the tools can automatically tell you what the minimum RT pool requirements are, and can try different ways of linearizing the graph to maximize resource sharing.
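
One way that computation could work, as a sketch: once the graph is linearized into an ordered list of commands, each temporary target has a lifetime from its first write to its last read, and the pool requirement (per target format) is just the maximum number of overlapping lifetimes:

[code]
#include <algorithm>
#include <vector>

// Lifetime of one temporary render target, as indices into the
// linearized command sequence.
struct TargetLifetime {
    int firstWrite; // first command that renders into it
    int lastRead;   // last command that samples from it
};

// Minimum pool size for one format class (e.g. "1/4th standard") is
// the maximum number of simultaneously-live targets at any step.
int requiredPoolSize(const std::vector<TargetLifetime>& lifetimes,
                     int commandCount)
{
    int maxLive = 0;
    for (int step = 0; step < commandCount; ++step) {
        int live = 0;
        for (const auto& t : lifetimes)
            if (t.firstWrite <= step && step <= t.lastRead)
                ++live;
        maxLive = std::max(maxLive, live);
    }
    return maxLive;
}
[/code]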
Nice one Hodgman, a good example of how graphs (as a structure / design) can be very powerful and yet so flexible.
Beware of over-engineering.
Example: rendering a shadow map.
My current approach is to be very high-level, with scripts requesting whole "Light casting shadow" resources. This is somewhat flexible, but not nearly as much as driving the renderer directly (allocate a depth map, use it as a depth render target from projector X, bind it back to the shaders, render the world...). My scripts don't have access to the renderer and probably won't for quite a while. I'm just throwing it in for your consideration. Although testing has been pretty scarce on my side, I'm already having plenty of trouble, so I'm focusing on the core features I need.


