Light-weight render queues?

12 comments, last by Hodgman 9 years, 2 months ago

How do you differentiate in rendering code between, for example, normal and diffuse textures?

You typically don't. All that matters to the program is to which texture unit each is bound.

That's what I don't understand. Constant buffers, texture slots, samplers, draw types, depth-stencil buffers, etc. don't sound like "high-level data". A texture unit or slot, for example, sounds like something internal to the renderer rather than part of a high-level scene object. What am I missing?


A model has multiple renderables; each renderable is linked to a material.

Then, based on that material, you set a diffuse map and, if the material uses them, additional maps such as a normal map.
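A minimal sketch of that layering in C++ (the names Model, Renderable, Material and the handle types are illustrative, not from any particular engine):

```cpp
#include <cstdint>
#include <vector>

// Opaque handles into the renderer's resource pools (illustrative).
using TextureHandle = std::uint32_t;
using MeshHandle    = std::uint32_t;
using ShaderHandle  = std::uint32_t;

// A material owns the shader plus whatever maps that shader expects.
struct Material
{
    ShaderHandle  shader     = 0;
    TextureHandle diffuseMap = 0; // 0 = "not present" in this sketch
    TextureHandle normalMap  = 0; // only bound if the material/shader uses it
};

// One drawable piece of geometry, linked to exactly one material.
struct Renderable
{
    MeshHandle      mesh     = 0;
    const Material* material = nullptr;
};

// A model is just a collection of renderables (e.g. one per sub-mesh).
struct Model
{
    std::vector<Renderable> renderables;
};
```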

Crealysm game & engine development: http://www.crealysm.com

Looking for a passionate, disciplined and structured producer? PM me


That's what I don't understand. Constant buffers, texture slots, samplers, draw types, depth-stencil buffers, etc. don't sound like "high-level data". A texture unit or slot, for example, sounds like something internal to the renderer rather than part of a high-level scene object. What am I missing?

Constant buffers, texture slots, depth-stencil buffers, ... are operating resources (resources in the operational sense, not in the sense of assets). If you have "high-level data" like material parameters, viewing parameters, or whatever else is constant for a draw call, it can be stored in a constant buffer to provide it to the GPU. From a high-level view it's the data within the buffer that is interesting, not the buffer, which is only the mechanism that transports it. From a performance point of view, it's the transport mechanism that is interesting, not the data within. The same goes for textures.
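For example, the per-draw "high-level data" can live in a plain struct whose layout mirrors a cbuffer; the constant buffer is just the vehicle that moves those bytes to the GPU. A D3D11-flavoured sketch, assuming a cbuffer in the shader laid out to match the hypothetical PerDrawConstants struct:

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
#include <cstring>

// CPU-side mirror of a cbuffer layout: this is the "high-level data".
struct PerDrawConstants
{
    DirectX::XMFLOAT4X4 worldViewProjection;
    DirectX::XMFLOAT4   materialColor;
};

// The constant buffer itself is only the transport mechanism: copy the
// struct's bytes into it and bind it to a slot (b0 here, by convention).
void UploadPerDrawConstants(ID3D11DeviceContext* context,
                            ID3D11Buffer* cbuffer,
                            const PerDrawConstants& data)
{
    D3D11_MAPPED_SUBRESOURCE mapped = {};
    if (SUCCEEDED(context->Map(cbuffer, 0, D3D11_MAP_WRITE_DISCARD, 0, &mapped)))
    {
        std::memcpy(mapped.pData, &data, sizeof(data));
        context->Unmap(cbuffer, 0);
    }
    context->VSSetConstantBuffers(0, 1, &cbuffer);
}
```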

With programmable shaders, the meaning of vertex attributes, constant parameters, or texture texels is not pre-defined. It is how the data is processed within the shader that gives it its meaning. To give a hint about how it is to be processed, the data is marked with a semantic.

Now, does the renderer code need to know what a vertex normal, a bump map, or a viewport is? In exceptional cases perhaps, but in general it need not. It just needs to know which block of data needs to be provided as which resource (by its binding point / slot / whatever). The renderer code does not deal with high-level data; it deals with the operating resources. That is what swiftcoder is saying.
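In other words, from the renderer's point of view a texture bind is just "put this SRV into slot N"; only the shader reading slot N knows whether that is a normal map or a diffuse map. A tiny D3D11 sketch:

```cpp
#include <d3d11.h>

// The renderer just puts "some SRV" into "some slot". Whether that texture
// is a diffuse map or a normal map is decided by the shader that reads it,
// not by this code.
void BindTexture(ID3D11DeviceContext* context,
                 UINT slot,
                 ID3D11ShaderResourceView* srv)
{
    context->PSSetShaderResources(slot, 1, &srv);
}
```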

State parameters for the fixed parts of the GPU pipeline are different since they need to be set by the renderer code explicitly.

How do you differentiate in rendering code between, for example, normal and diffuse textures?

You typically don't. All that matters to the program is to which texture unit each is bound.


That's what I don't understand. Constant buffers, texture slots, samplers, draw types, depth-stencil buffers, etc. don't sound like "high-level data". A texture unit or slot, for example, sounds like something internal to the renderer rather than part of a high-level scene object. What am I missing?

First up, there are two approaches to render queues. It's perfectly fine to have a high-level one, where you store a mesh/material/instance ID/pointer... I just prefer a low-level one that more closely mimics the device, as this gives me more flexibility.

Assuming a low-level one, the back-end doesn't care what the data is being used for; it just plugs resources into shaders' slots, sets the pipeline state, and issues draw calls.
So for easy high-level programming, you need another layer above the queue, defining concepts like "model" and "material".
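A minimal sketch of what a low-level draw item along these lines might look like (hypothetical layout and handle types, not lifted from any specific engine):

```cpp
#include <cstddef>
#include <cstdint>

// Opaque handles into resource pools; the queue stores no high-level meaning.
using BufferHandle  = std::uint16_t;
using TextureHandle = std::uint16_t;
using SamplerHandle = std::uint16_t;
using PsoHandle     = std::uint16_t; // shaders + raster/blend/depth-stencil state

struct DrawItem
{
    PsoHandle     pipelineState;
    BufferHandle  vertexBuffer;
    BufferHandle  indexBuffer;
    BufferHandle  cbuffers[4];  // bound to b0..b3
    TextureHandle textures[8];  // bound to t0..t7
    SamplerHandle samplers[4];  // bound to s0..s3
    std::uint32_t indexCount;
    std::uint32_t startIndex;
};

// The back-end walks the queue: set state, plug resources into slots, draw.
void Submit(const DrawItem* items, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
    {
        const DrawItem& item = items[i];
        // device->SetPipelineState(item.pipelineState);
        // device->SetConstantBuffers(item.cbuffers);   // by slot index
        // device->SetTextures(item.textures);          // by slot index
        // device->SetSamplers(item.samplers);          // by slot index
        // device->DrawIndexed(item.indexCount, item.startIndex);
        (void)item; // device calls elided in this sketch
    }
}
```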

At the layer above the queue itself, there are two strategies for assigning resources to slots:
1) Convention-based. You simply define a whole bunch of conventions and stick to them everywhere.
e.g.
- The WorldViewProjection matrix is always in a CBuffer in register b0.
- The diffuse texture is always in register t0.
- A linear interpolating sampler is always in register s0.
On the shader side, you can do this by having common include files for your shaders, so that every shader #includes the same resource definitions.
On the material/model side, you then just hard-code these same conventions -- if the material has a diffuse texture, put it into texture slot #0 (sketched below).
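A sketch of how those conventions could be pinned down once on the C++ side (the shaders' common include file would declare the matching b0/t0/s0 registers; the names here are illustrative):

```cpp
#include <d3d11.h>

// One central place that encodes the conventions; the shaders' common
// include file declares the matching registers (b0, t0, s0, ...).
namespace slots
{
    constexpr UINT kPerDrawCBuffer = 0; // b0: WorldViewProjection etc.
    constexpr UINT kDiffuseTexture = 0; // t0
    constexpr UINT kLinearSampler  = 0; // s0
}

// The material code then hard-codes the same convention.
void BindMaterialByConvention(ID3D11DeviceContext* context,
                              ID3D11ShaderResourceView* diffuseSrv,
                              ID3D11SamplerState* linearSampler)
{
    if (diffuseSrv)
        context->PSSetShaderResources(slots::kDiffuseTexture, 1, &diffuseSrv);
    context->PSSetSamplers(slots::kLinearSampler, 1, &linearSampler);
}
```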

2) Reflection-based. Use an API to inspect your shaders and see what the name and slot of each resource is.
Write your shaders however you want (probably still using #includes for common cbuffer structures though!), and then on the material side, when binding a texture, get the name from the user (e.g. "diffuse"), and then use the reflection API to ask the shader which slot this texture should be bound to.
Likewise but slightly more complex for CBuffers. When a user tries to set a constant value (e.g. "color"), search via the reflection API to find which cbuffer contains a "color" member. If the material doesn't already have a clone of that cbuffer, make one now (by copying the default values for that cbuffer). Then use the reflection API to find the offset/size/type of the color variable, and copy the user-supplied value into the material's instance of that cbuffer type. Then make sure the material binds its new cbuffer to the appropriate slot.
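A D3D11-flavoured sketch of the reflection lookup for textures, using D3DReflect on the compiled bytecode (error handling trimmed; treat it as illustrative rather than a drop-in implementation):

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>
#include <string>
#pragma comment(lib, "d3dcompiler.lib")

// Given compiled shader bytecode, ask the reflection API which t# slot a
// texture with the given name (e.g. "diffuse") should be bound to.
// Returns -1 if the shader doesn't use a texture with that name.
int FindTextureSlot(const void* bytecode, size_t bytecodeSize,
                    const std::string& textureName)
{
    ID3D11ShaderReflection* reflector = nullptr;
    if (FAILED(D3DReflect(bytecode, bytecodeSize,
                          IID_ID3D11ShaderReflection,
                          reinterpret_cast<void**>(&reflector))))
        return -1;

    D3D11_SHADER_INPUT_BIND_DESC desc = {};
    int slot = -1;
    if (SUCCEEDED(reflector->GetResourceBindingDescByName(textureName.c_str(), &desc)))
        slot = static_cast<int>(desc.BindPoint); // the t# register index

    reflector->Release();
    return slot;
}

// CBuffers work similarly: iterate reflector->GetConstantBufferByIndex(i),
// call GetVariableByName("color") and check GetDesc() -- if it succeeds, the
// cbuffer's bind point plus the variable's StartOffset/Size tell you where to
// copy the user-supplied value into the material's clone of that cbuffer.
```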

I use #1 for engine-generated data, such as camera matrices, world matrices, etc... and I use #2 for artist-generated data, such as materials.

