Propagating data through an engine to constant buffers


I was hoping to get some advice about engine design, and more specifically, about getting the data that models need into their constant buffers at render time. The below is based on an entity-component framework, in case that helps clarify where data should be coming from.

For probably >90% of the content in my scene (basic models with a vertex shader and a pixel shader that really only need a WVP transform), I have a constant buffer slot reserved (in the context of my engine) for data that comes from the entity's corresponding TransformComponent, combined with the view and projection matrices from the camera currently in use. For such a simple case, this is easy and straightforward enough that I haven't really thought to revisit it.
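
To make that concrete, the data behind that reserved slot is essentially just the combined transform - something along these lines (a sketch only; the struct name and slot index are illustrative, not my exact code):

#include <DirectXMath.h>

// Sketch of the per-object data behind the reserved "standard" slot.
struct PerObjectConstants
{
    DirectX::XMFLOAT4X4 worldViewProj; // TransformComponent world * camera view * projection
};

const unsigned int kPerObjectConstantBufferSlot = 0; // engine-reserved slot in this sketch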

Recently, I've started adding tessellated heightmap-based terrain to the engine, and unlike the common entities, it also requires an additional constant buffer that houses things like min and max tessellation factors, camera position, frustum planes (for culling), and a view matrix (used to move generated normals to view space for the GBuffer). I haven't done a good job of building flexibility into the current pipeline to accommodate the need for anything outside the standard constant buffer described above, which, again, houses mostly the WVP transformation matrix.
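
For a concrete idea of what that extra buffer holds, it's roughly the following (a sketch; field names are illustrative, and the real layout has to mirror the HLSL cbuffer packing, hence the padding):

#include <DirectXMath.h>

// Sketch of the additional per-frame constant buffer the terrain pass needs.
struct TerrainFrameConstants
{
    float               minTessFactor;
    float               maxTessFactor;
    float               padding0[2];       // keep 16-byte alignment to match HLSL packing
    DirectX::XMFLOAT3   cameraPositionWS;
    float               padding1;
    DirectX::XMFLOAT4   frustumPlanes[6];  // used for patch culling
    DirectX::XMFLOAT4X4 viewMatrix;        // moves generated normals to view space for the GBuffer
};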

When I started thinking longer term, I realized I was going to run into the same issues for things like ocean rendering, volumetric fog, or really anything that is "non-standard" in that it's not just a model with a straightforward Vertex Shader -> Pixel Shader -> Done type of setup. I'll go over below what I have right now to band-aid this situation, but I would really appreciate input on how to better get at the data I need for a specific model's constant buffer requirements without the model having to know about upstream objects (for example, without the model having to query the scene for the current camera in order to get the view matrix).

Current solution:

In my render component, which houses a pointer to a model (which contains a vertex buffer, index buffer, and a subset table describing offsets and shaders/textures per subset - ideally where I would like constant buffer data to live, since models depend on it), I have added a std::function member to allow for "extra work" and a boolean flag to acknowledge its presence. The gist is that during setup, if a renderable entity (one with a RenderComponent) needs to perform extra work, it can define that work in the std::function member, and the main render loop will check whether its flag is set during each iteration. So, like below:


// during scene setup - create a render component with the provided model
RenderComponent* pRC = Factory<RenderComponent>::create(pTerrainModel);
pRC->setExtraWork([&](DeviceContext& deviceContext, FrameRenderData& frameRenderData)
{
  // do the additional work here - in the case above, retrieve the extra data needed
  // for the constant buffer from the frameRenderData, then set it to the appropriate
  // shader constant buffer slot
});


/////// later in rendering loop
if(pCurrent->hasExtraWork())
{
  pCurrent->getExtraWork()(deviceContext, frameRenderData);
}


//////// and the way the extra work member is defined in RenderComponent
std::function<void(DeviceContext& deviceContext, FrameRenderData& frameRenderData)> m_extraWork;

The FrameRenderData is just a generated struct of references to the data relevant to any given frame - the current camera, the current entities to be rendered, etc.
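
In case it helps, FrameRenderData is shaped roughly like this (simplified sketch; member names are illustrative):

#include <vector>

// Simplified sketch of the per-frame data handed to the render loop.
struct FrameRenderData
{
    Camera&                        camera;      // current camera for the frame
    std::vector<RenderComponent*>& renderables; // entities to be rendered this frame
    // ... references to any other per-frame data
};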

The other thought I had would be to trigger an event at the start of each frame containing the FrameRenderData and let anything that wants to know about it listen for it, but then I feel like my models or render components would need to have event listeners attached, which also seems like iffy design at best.

While the above technically works, I feel like it's kludgy and was wondering if anyone had thoughts on a better way to get data to dependent constant buffers in a system setup similar to what's above.

Thanks for your time and help.


I have tortured myself over this decision in several of my engines that I've written.

Generally I have a couple of things that I look up. Usually I'll have my materials know what constants are in my shaders, then do a lookup for the constant's size, register location, etc. Then the gameplay side just knows the name of the constant and provides a void* to the material.

Now the downside is that every shader needs to have additional information to tell the game what constants it is looking for and other information regarding it. This can be a pain, and I just do it by hand.
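
As a rough sketch of the idea (all names made up; it's not code from any particular engine):

#include <cstring>
#include <string>
#include <unordered_map>
#include <vector>

// Per-shader metadata describing each constant the material exposes.
// Gameplay code only knows the constant's name and hands the material a void*.
struct ShaderConstantInfo
{
    unsigned int sizeInBytes;
    unsigned int bufferSlot;  // which constant buffer the constant lives in
    unsigned int byteOffset;  // offset inside that buffer
};

class Material
{
public:
    bool setConstant(const std::string& name, const void* data)
    {
        auto it = m_constants.find(name);
        if (it == m_constants.end())
            return false; // this shader doesn't use the constant
        const ShaderConstantInfo& info = it->second;
        std::memcpy(&m_cpuCopies[info.bufferSlot][info.byteOffset], data, info.sizeInBytes);
        return true; // the dirty buffer gets uploaded before the draw call
    }

private:
    std::unordered_map<std::string, ShaderConstantInfo> m_constants;
    std::vector<std::vector<unsigned char>>             m_cpuCopies; // one CPU-side blob per buffer slot
};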

Perception is when one imagination clashes with another

The main problem you're having comes from abstracting everything with the component model; it creates a conflict over what you should pass to the shader manager in order to set the correct data.

Models are not render components.

A model (at run-time) is a renderable model, and you need a specialized library that knows what to pass to the shader - this is the only way to avoid mixing the game engine's high-level architecture with its low-level architecture (models are lower level than a render component).

A model library knows what a graphics library is. Thus, to the game simulation, a model that can be rendered is just a bunch of buffers and shader information such as textures and a local matrix, and it has no loading information (once loaded, you can't change what you're rendering, because it isn't an animation).

The graphics library (which supplies slots to bind a matrix, an option to set the current view matrix, projection matrix, shadow-map matrix, etc.) is what a model and a terrain both need in order to be rendered (they play together).

Terrains aren't models.

A terrain library knows what the graphics library is. A scene doesn't have terrains (a terrain isn't an actor, a camera, or a light), so you need to reconsider the statement that a render component can have a pointer to the terrain.

After separating models and terrains, you need a way to pass the information to the shader. That's when you see what you can reduce so that the graphics library doesn't depend on any other module.

There is no 100% default shader data.

Beginner graphics programmers trying to abstract shaders 100% are like physics programmers trying to abstract constraints: they're only asking for pain (at least at the beginning).

Need some vegetation effects in the game? Create the vegetation effects in your shader and make sure your shader manager (at the software level) can supply that to the class that manages vegetation above the graphics module - this just means "create a simple slot in the shader and a function that passes the data (time, wind characteristics, etc.) to the shader at run-time".
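
A minimal sketch of what I mean (names are illustrative only):

// A reserved slot plus one function that the vegetation manager, sitting above
// the graphics module, calls each frame.
struct VegetationShaderData
{
    float time;             // accumulated time driving the sway animation
    float windStrength;
    float windDirection[2];
};

class ShaderManager
{
public:
    void setVegetationData(const VegetationShaderData& data); // uploads to the reserved slot
};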

An entity-component system sits at the highest possible level of an engine (if you define the engine as the module that uses everything in the lower-level modules).

That said, you should be able to create some kind of shader parser in order to give the artists a high-level view of your rendering architecture (fog, vegetation parameters, lighting models, shadow parameters, etc.). If you're not using any shader parser, you can at least re-define what models, terrains, and game render components are before using them.

IMHO that's one flaw of an entity-component architecture: you keep getting stuck on "I can't do that, I need an abstraction layer", etc., instead of defining only one thing:

"Use componentization over inheritance, and make sure that you keep everything inside the engine module (the highest-level module on the game engine)".

@Seabolt Thanks for the input. It's definitely a challenging design to get just right :)

@Irlan Also, thank you for your comments. I think you and I are actually on the same page in a lot of places, and maybe something like the shader manager is the missing piece for me.

To clarify a few things that may help the discussion:

Models are not render components.

Correct! Render components own a pointer to a model instance, that's all. A model actually knows nothing about render components or anything in a higher level system, and that's the aim of the post - to keep it that way :)

Your concept of models and graphics libraries - and the separation between the two - matches mine as well. My graphics library has no concept of a model. In fact, it's mostly just a thin wrapper around DirectX 11 functions so hopefully, when the time comes, porting to other API backends (OpenGL/DirectX 12) isn't too much of a hassle. The graphics library can be used to create various buffers (vertex, index, constant), textures, device contexts, etc. (you get the picture), and has the ability to set data to the actual pipeline slots, but that's about the extent of it.

A model simply contains a mesh and the data needed to render that mesh (save for the constant buffers). A mesh just contains a vertex buffer, an index buffer, and topology descriptions, but has no knowledge of anything higher level than that.

Terrains aren't models.

True. My terrain class has all sorts of data about the terrain instance, including methods for getting the height at a specific coordinate, etc., that have nothing to do with actually rendering the terrain. But it also necessarily contains a model as described above (with a mesh containing a vertex and index buffer), and that model can be added to an entity as its render component so it will get drawn in the same rendering loop as every other entity.

And that's where the current hiccup in my design is, and possibly something the shader manager will solve for after I've thought it through a little more. A model is as simple as any simple thing can be, so when it's any particular model's turn to be rendered (be it a house, a wizard, or a terrain), I need a way upstream of the model to ensure that it's getting everything it needs to draw itself. For the most part, it already contains this - a model has a mesh, which has the vertex and index buffers and topology requirements to send to the graphics context, and its shaders and shader resources are stored as part of its subset descriptions. It's just the dynamically updating things that I'm still trying to figure in, like a constant buffer that holds a view matrix, a camera position, and frustum planes, which depend on the specific frame.

Does that help clarify where I currently am with this?

Thanks,

WFP

But it also necessarily contains a model

Terrain does not necessarily contain a model/mesh. It constructs primitives through basic math for the X and Z, and via a heightmap for the Y. It also has specialized LOD methods, and if you are using GeoClipmap terrain there is a special update required for the heightmap texture itself.

Vegetation does not necessarily contain a model or mesh, either.
Volumetric fog. Water.


Your problem is that you have designed a system that acts like a funnel. At the end of the day, your system requires a certain type of data in a certain format to create a render, which is a design flaw.
Things can render vastly differently from each other, which is why you always let the objects render themselves. Parts of the pipeline can be shared, such as data for sorting in a render-queue, but at the end of the day it needs to be the actual object that does the render. This allows full flexibility and also solves your problem, since objects will be able to upload to shaders whatever data they need, and only that data.
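
A bare-bones sketch of that split (names are illustrative): the sort key is the shared part; the render itself belongs to the object.

// Only the render-queue sort key is shared; each object does its own render
// and uploads exactly the shader data it needs.
class Renderable
{
public:
    virtual ~Renderable() {}
    virtual unsigned long long sortKey() const = 0;    // e.g. packed shader/material/depth bits
    virtual void render(GraphicsContext& context) = 0; // binds and uploads its own data
};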


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

But it also necessarily contains a model

...
Volumetric fog. Water.
...

Actually any volumetric effect doesn't really need to contain a model.

Also, I'd like to note that you might be trying to generalize materials, possibly into a single shader. As long as we have hacks in computer graphics (which means as long as we are unable to run a real-time bidirectional path tracer without any noise in the output image and generate those images at 100 fps with highly complex BSDF-based materials), you shouldn't do that.

Just a little explanation: there is a generalized mathematical model describing how light scatters at a surface (incoming light vs. outgoing light), called the BSDF (Bidirectional Scattering Distribution Function). There are even some generic BSDFs already implemented in some offline rendering packages (these materials are the most generic - you can describe almost any material with them - and they are terribly slow). The BSDF is a superset of simpler, faster families of functions, like the BRDF (I guess you've heard of the BRDF already). A correct, physically based implementation of the BRDF is still not possible right now (you need a ray tracer for that ~ although I get quite good results with my ray tracer; for simple scenes (Sponza and such) it even runs in real time on solid hardware).

The BSDF is quite far in the future (for interactive rendering), because I don't see how you could work with it inside a rasterizer-based renderer and still produce at least semi-accurate results.

My current blog on programming, linux and stuff - http://gameprogrammerdiary.blogspot.com

Actually any volumetric effect doesn't really need to contain a model.

Nor does terrain or water or vegetation. That’s why I listed them there together.
Hard to tell if your emphasis is on “any”, which would in that case mean that you are just expanding upon my list.
And indeed, no volumetric effect needs a model/mesh.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid


Also, thank you for your comments. I think you and I are actually on the same page in a lot of places, and maybe something like the shader manager is the missing piece for me.

Just remember the SRP (Single Responsibility Principle): if you're doing something that is unrelated to a class, you need to create another class and give it that responsibility. So dividing tasks can be a great start.


But it also necessarily contains a model as described above (with a mesh containing a vertex and index buffer), and that model can be added to an entity as its render component so it will get drawn in the same rendering loop as every other entity.

The more you abstract things that don't necessarily need to be abstracted, the more you tend to write unmaintainable code. A terrain is so specialized that you find books out there like "how to render terrains", "shader guide for terrains", "practical rendering with terrains", etc. A terrain doesn't have a mesh. It has buffers and a lot of texture layers (for texture blending, etc.); don't mix it up with the models.


Correct! Render components own a pointer to a model instance, that's all.

Looks like your "render component" is a model instance. If it is, then that isn't a problem, but a render component can be anything that can be rendered on the game side.

If you're not comfortable with the entity-component system, I'd recommend creating a simple hierarchy such as game entities (used in CryEngine, id Tech, etc.), and keep sub-dividing the entities' responsibilities, and likewise the components'. I don't use ECS myself because I think that for a single person to manage that in an engine is asking for unmaintainable code.

Hi all,

Thanks for the additional comments. I think it's starting to become more clear how I'm going to need to restructure a few things, but I have a few more questions that might help me out a little.

But it also necessarily contains a model

This was mainly in the context of how I have my heightmap-based terrain set up. As stated, in my engine a model is just a vertex buffer, index buffer, and a subset table. With my approach, I'm storing all of these in a single model that the terrain class builds and owns during initialization, and submitting it to the input assembler as control-point patches (D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST). Are you saying instead that the Terrain class itself should just directly own the necessary buffers and subset table? This isn't a huge refactoring effort by any means to pull out of the model class and give to the terrain directly, it just seems like it's duplicating code.
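
For reference, the submission side is just the usual D3D11 calls, sketched here against the raw API rather than my wrappers (pPatchVB, pPatchIB, pTerrainCB, and TerrainVertex are assumed to have been created during initialization, and 'context' is the ID3D11DeviceContext):

// Sketch of the terrain draw submission using raw D3D11.
UINT stride = sizeof(TerrainVertex);
UINT offset = 0;
context->IASetPrimitiveTopology(D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST);
context->IASetVertexBuffers(0, 1, &pPatchVB, &stride, &offset);
context->IASetIndexBuffer(pPatchIB, DXGI_FORMAT_R32_UINT, 0);
context->HSSetConstantBuffers(1, 1, &pTerrainCB); // tess factors, frustum planes, etc.
context->DSSetConstantBuffers(1, 1, &pTerrainCB);
context->DrawIndexed(patchIndexCount, 0, 0);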

I guess I see two possible approaches emerging. One is to loosen up exactly what a render component owns, and instead have it be more like below:


// anything that a scene needs to render must implement this interface
class IRenderable
{
public:
  virtual ~IRenderable() = default;
  virtual void render(GraphicsContext& deviceContext, FrameRenderData& frameRenderData) = 0;
};

class RenderComponent: public BaseComponent
{
public:
  // basic interface
private:
  IRenderable* pRenderable;
};

In this setup, whether I'm using my current simple heightmap terrain or later on adopt geoclipmapping, for example, as long as the terrain class implements IRenderable, it can be used as a render component in the scene and drawn during the main render loop to the GBuffer (or whatever render target it needs). This would also mean that a model class would need to implement the IRenderable interface, and therefore know how to draw itself. I don't hate the idea too much, but it seems like a model should contain data about what's needed to draw itself, but not actually do the drawing. Maybe I'm just trying to over-complicate and over-abstract here, though.

The other approach would be to have specialized renderers for different types of objects. One would exist for models in their current context, and there would also be a specialized one for terrain, water, vegetation, etc. For example:


class IRenderer
{
public:
  virtual ~IRenderer() = default;
  virtual void drawScene(GraphicsContext& context, FrameRenderData& renderData) = 0;
};

class ModelRenderer : public IRenderer
{
public:
  void drawScene(GraphicsContext& context, FrameRenderData& renderData) override;
};

class TerrainRenderer : public IRenderer
{
public:
  void drawScene(GraphicsContext& context, FrameRenderData& renderData) override;
};
// more specialized renderers
...
//
class RenderSystem
{
public:
  // scenes can add only the renderers they need
  void addRenderer(IRenderer* pRenderer);
private:
  std::vector<IRenderer*> m_renderers;
};

With this setup, the renderers themselves are specialized and know only how to draw objects that "match" them. So for example, if I were to later down the road create a new terrain class based on geoclipmapping, I would create a specialized renderer class for it and register that with the render system during scene setup. Regardless of whether or not it has a model, the specialized renderer will know enough about whatever it is it's supposed to be drawing to set the appropriate data to its constant buffers and get it into the GBuffer or whatever other render target it may happen to need.
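
So scene setup would look something like this (sketch):

// During scene setup - register only the renderers this scene needs.
renderSystem.addRenderer(new ModelRenderer());
renderSystem.addRenderer(new TerrainRenderer());

// Each frame, the render system just fans out:
// for (IRenderer* pRenderer : m_renderers)
//     pRenderer->drawScene(context, frameRenderData);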

Does this make sense, or am I still missing it?

Thanks for your help, all.

My approach is a bit different in that low-level rendering is not directly related to an entity.

Low-level rendering means to execute the stages in a graphic pipeline. Each stage has its specific task within the pipeline. A stage that uses the GPU requires a GPU program. From all available GPU programs left after filtering by the platform and user settings, at most a few of them are suitable for a specific stage because they implement the solution to the stage's task.

A renderer as a component of an entity (be it a game object in ECS or an entity on its own) specifies which one of the remaining GPU programs is to be used. In the end it is the interface of the GPU program that needs to be fulfilled, e.g. setting all constant blocks as expected and streaming in all vertex attributes as expected. Some of this data is provided by the entity via its renderer, probably stored in some other components. Some comes from elsewhere, e.g. the camera or view settings from the viewing system, or stage-specific settings from the stage.

Collecting this data is the task of the stage when it builds the rendering jobs. In this sense a renderer component is not an active component. Maybe this gives a bit of flexibility away (although I have not hit such a problem yet), but it keeps the responsibility for what happens within stage processing with the stage itself.
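
Sketched roughly (the type names are only illustrative of the idea):

#include <vector>

// The stage gathers what the chosen GPU program's interface requires and emits
// self-contained jobs; the renderer component itself stays passive.
struct RenderJob
{
    const GpuProgram* program;       // selected via the entity's renderer component
    const void*       perObjectData; // e.g. world matrix pulled from another component
    const void*       perStageData;  // e.g. camera / view settings from the viewing system
};

class Stage
{
public:
    void buildJobs(const Scene& scene, std::vector<RenderJob>& outJobs);
};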
