WFP

Propagating data through an engine to constant buffers


I was hoping to get some advice about engine design, and more specifically, about getting the data that models need into a constant buffer at render time.  The below is based on an entity-component framework, in case that helps clarify where data should be coming from.

 

For probably >90% of the content in my scene (basic models with a vertex shader and pixel shader that really only need a WVP transform), I have a reserved (in the context of my engine) constant buffer slot that I fill with data from the entity's corresponding TransformComponent combined with the view and projection matrices from the current camera.  For such a simple case, this is easy and straightforward enough that I haven't really thought to revisit it.
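To make the "standard" case concrete, here's a minimal sketch of what the CPU-side mirror of that reserved per-object constant buffer might look like. The names are hypothetical, not from the actual engine:

```cpp
#include <cstddef>

// Hypothetical CPU-side mirror of the reserved per-object constant buffer
// slot described above; names are illustrative stand-ins.
struct Float4x4 { float m[16]; };

// D3D11 requires constant buffer sizes to be multiples of 16 bytes, which a
// single 4x4 matrix already satisfies.
struct alignas(16) PerObjectConstants
{
    Float4x4 worldViewProj; // TransformComponent world * camera view * projection
};
```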

 

Recently, I've started adding tessellated heightmap-based terrain to the engine, and unlike the common entities, it also requires an additional constant buffer that houses things like min and max tessellation factors, camera position, frustum planes (for culling), and a view matrix (used to move generated normals to view space for the GBuffer).  I haven't built enough flexibility into the current pipeline to accommodate anything outside of the standard constant buffer described above which, again, houses mostly the WVP transformation matrix.
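For contrast with the per-object buffer, a hypothetical CPU-side layout for that extra terrain buffer might look like the sketch below, packed to follow HLSL rules (vectors must not straddle 16-byte boundaries). Field names are illustrative:

```cpp
#include <cstddef>

// Hypothetical CPU-side layout for the terrain's extra constant buffer,
// matching the contents described above. Not from the actual engine.
struct Float4   { float x, y, z, w; };
struct Float4x4 { float m[16]; };

struct alignas(16) TerrainConstants
{
    Float4x4 view;             // moves generated normals to view space for the GBuffer
    Float4   frustumPlanes[6]; // for patch culling
    float    cameraPos[3];
    float    minTessFactor;    // packed into the float3's padding slot
    float    maxTessFactor;
    float    pad[3];           // rounds the struct up to a 16-byte multiple
};
```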

 

When I started thinking longer term, I realized I was going to run into the same issues for things like ocean rendering, volumetric fog, or really anything "non-standard" that isn't just a model with a straightforward Vertex Shader -> Pixel Shader -> Done setup.  I'll go over below what I have right now to band-aid this situation, but I would really appreciate input on how to get at the data a specific model needs for its constant buffers without the model having to know about upstream objects (for example, without the model querying the scene for the current camera to get the view matrix).

 

Current solution:

In my render component, which houses a pointer to a model (which contains a vertex buffer, index buffer, and subset table describing offsets and shaders/textures per subset - ideally where I would like constant buffer data to live, since models depend on it), I have added a std::function member to allow for "extra work" and a boolean flag to acknowledge its presence.  The gist is that during setup, if a renderable entity (one with a RenderComponent) needs to perform extra setup work, it can define that work in the std::function member, and the main render loop will check whether its flag is set during each iteration.  So, like below:

// during scene setup - create a render component with the provided model
RenderComponent* pRC = Factory<RenderComponent>::create(pTerrainModel);
pRC->setExtraWork([&](DeviceContext& deviceContext, FrameRenderData& frameRenderData)
{
  // do the additional work here - in the case above, retrieve the extra data
  // needed for the constant buffer from the frameRenderData, then store and
  // bind it to a shader constant buffer slot
});


/////// later in rendering loop
if(pCurrent->hasExtraWork())
{
  pCurrent->getExtraWork()(deviceContext, frameRenderData);
}


//////// and the way the extra work member is defined in RenderComponent
std::function<void(DeviceContext& deviceContext, FrameRenderData& frameRenderData)> m_extraWork;

The FrameRenderData is just a generated struct of references to the data relevant to any given frame - the current camera, the current entities to be rendered, etc.
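A sketch of what that struct of references might look like, based on the description above (the types here are hypothetical stand-ins):

```cpp
#include <vector>

// Hypothetical stand-ins for the real engine types.
struct Camera { /* view/projection matrices, position, frustum planes... */ };
struct Entity { /* component handles... */ };

// Assembled once per frame; holds references rather than copies so it stays
// cheap to build and always reflects current state.
struct FrameRenderData
{
    Camera&               currentCamera;    // source of per-frame view data
    std::vector<Entity*>& entitiesToRender; // culled list for this frame
};
```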

 

The other thought I had would be to trigger an event at the start of each frame containing the FrameRenderData and let anything that wants to know about it listen for it, but then I feel like my models or render components would need to have event listeners attached, which also seems like iffy design at best.

 

While the above technically works, I feel like it's kludgy and was wondering if anyone had thoughts on a better way to get data to dependent constant buffers in a system setup similar to what's above.

 

Thanks for your time and help.


I have tortured myself over this decision in several of my engines that I've written.

 

Generally I have a couple of things that I look up.  Usually my materials have some way of knowing what constants are in my shaders, and they do a lookup for each constant's size, register location, etc.  The gameplay side then just knows the name of the constant and provides a void* to the material.

 

Now the downside is that every shader needs to have additional information to tell the game what constants it is looking for and other information regarding it. This can be a pain, and I just do it by hand. 
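A minimal sketch of that name-based lookup, assuming a hand-maintained table per material (all names and field choices here are hypothetical):

```cpp
#include <cstddef>
#include <cstring>
#include <string>
#include <unordered_map>

// Per-constant metadata the material knows about each shader constant.
struct ConstantDesc
{
    size_t size;       // bytes
    int    bufferSlot; // constant buffer register (b#)
    size_t offset;     // byte offset within that buffer
};

class Material
{
public:
    // Filled by hand (or by shader reflection) when the shader is loaded.
    void declareConstant(const std::string& name, ConstantDesc desc)
    {
        m_constants[name] = desc;
    }

    // Gameplay side: "here's a blob for this name, you figure out where it goes."
    bool setConstant(const std::string& name, const void* data, size_t size)
    {
        auto it = m_constants.find(name);
        if (it == m_constants.end() || it->second.size != size)
            return false; // unknown constant or size mismatch
        if (it->second.offset + size > sizeof(m_cpuShadow))
            return false; // out of range
        std::memcpy(m_cpuShadow + it->second.offset, data, size);
        return true; // real code would also mark the buffer slot dirty for upload
    }

private:
    std::unordered_map<std::string, ConstantDesc> m_constants;
    unsigned char m_cpuShadow[256] = {}; // CPU-side copy, uploaded at draw time
};
```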


@Seabolt Thanks for the input.  It's definitely a challenging design to get just right :)

 

@Irlan Also, thank you for your comments.  I think you and I are actually on the same page in a lot of places, and maybe something like the shader manager is the missing piece for me.

 

To clarify a few things that may help the discussion:

 

 

Models are not render components.

 

Correct!  Render components own a pointer to a model instance, that's all.  A model actually knows nothing about render components or anything in a higher level system, and that's the aim of the post - to keep it that way :)

 

Our concepts of models and graphics libraries, and the separation between the two, are matched as well.  My graphics library has no concept of a model.  In fact, it's mostly just a thin wrapper around DirectX 11 functions, so hopefully when the time comes, porting to other API backends (OpenGL/DirectX 12) won't be too much of a hassle.  The graphics library can be used to create various buffers (vertex, index, constant), textures, device contexts, etc. (you get the picture), and it can set data to the actual pipeline slots, but that's about the extent of it.

 

A model simply contains a mesh and the data needed to render that mesh (save for the constant buffers).  A mesh just contains a vertex buffer, an index buffer, and topology descriptions, but has no knowledge of anything higher level than that.

 

 

 

Terrains aren't models.

 

True.  My terrain class has all sorts of data about the terrain instance, including methods for getting the height at a specific coordinate, etc., that have nothing to do with actually rendering the terrain.  But it also necessarily contains a model as described above (with a mesh containing a vertex and index buffer), and that model can be added to an entity as its render component so it will get drawn in the same rendering loop as every other entity.

 

And that's where the current hiccup in my design is, and possibly something the shader manager will solve once I've thought it through a little more.  A model is as simple as any simple thing can be, so when it's any particular model's turn to be rendered (be it a house, a wizard, or a terrain), I need a way, upstream of the model, to ensure that it's getting everything it needs to draw itself.  For the most part, it already contains this - a model has a mesh with the vertex and index buffers and topology requirements to send to the graphics context, and its shaders and shader resources are stored as part of its subset descriptions.  It's just the dynamically updating data that I'm still trying to figure in, like a constant buffer with a view matrix, a camera position, and frustum planes, which depends on the specific frame.

 

Does that help clarify where I currently am with this?

 

Thanks,

WFP


But it also necessarily contains a model

Terrain does not necessarily contain a model/mesh. It constructs primitives through basic math for the X and Z, and via a heightmap for the Y. It also has specialized LOD methods, and if you are using GeoClipmap terrain there is a special update required for the heightmap texture itself.

Vegetation does not necessarily contain a model or mesh.
Volumetric fog. Water.


Your problem is that you have designed a system that acts like a funnel: at the end of the day, it requires a certain type of data in a certain format to produce a render, which is a design flaw.
Things can render vastly differently from each other, which is why you should always let the objects render themselves. Parts of the pipeline can be shared, such as the data used for sorting in a render queue, but at the end of the day it needs to be the actual object that does the render. This allows full flexibility and also solves your problem, since objects will be able to upload to shaders whatever data they need, and only that data.
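The split described above might be sketched like this: the queue owns only the shared, sortable data (a packed key), while each object performs its own draw. The key layout and names here are hypothetical:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Each renderable object implements its own draw; the queue never needs to
// know whether it is a model, terrain, water, or fog.
struct IRenderItem
{
    virtual ~IRenderItem() = default;
    virtual void render() = 0; // object uploads its own constants and draws
};

// The shared part: a sort key packed from e.g. pass, shader, and depth bits.
struct QueueEntry
{
    uint64_t     sortKey;
    IRenderItem* item;
};

// Sort by the shared key, then let each object render itself.
inline void flushQueue(std::vector<QueueEntry>& queue)
{
    std::sort(queue.begin(), queue.end(),
              [](const QueueEntry& a, const QueueEntry& b)
              { return a.sortKey < b.sortKey; });
    for (auto& e : queue)
        e.item->render();
}
```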


L. Spiro


 

But it also necessarily contains a model

...
Volumetric fog. Water.
...

 

Actually any volumetric effect doesn't really need to contain a model.

 

Also, I'd like to note that you might be trying to generalize materials into a single shader.  As long as we have hacks in computer graphics (which means as long as we are not able to run a real-time bidirectional path tracer with no noise in the output image, at 100 fps, with highly complex BSDF-based materials), you shouldn't do that.

 

Just a little explanation - there is a generalized mathematical model describing how light scatters at a surface (i.e. incoming light vs. outgoing light) called the BSDF (Bidirectional Scattering Distribution Function).  Some generic BSDFs are already implemented in offline rendering packages - these materials are the most generic, you can describe almost any material with them, and they are terribly slow.  The BSDF is a superset of less complicated, faster functions, like the BRDF (I guess you've heard of the BRDF already).  A fully correct, physically based implementation of a BRDF is still not practical in real time (you need a ray tracer for that - although I get quite good results with my ray tracer; for simple scenes like Sponza it even runs in real time on solid hardware).

 

The BSDF is quite far in the future (for interactive rendering), because I don't see how you could work with it inside a rasterizer-based renderer and still produce at least semi-accurate results.
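For reference, the BRDF mentioned above is the term $f_r$ in the standard rendering equation, which relates outgoing to incoming radiance at a surface point:

```latex
L_o(\mathbf{x}, \omega_o) = L_e(\mathbf{x}, \omega_o)
  + \int_{\Omega} f_r(\mathbf{x}, \omega_i, \omega_o)\,
    L_i(\mathbf{x}, \omega_i)\, (\omega_i \cdot \mathbf{n})\, \mathrm{d}\omega_i
```

Real-time renderers approximate this integral with hacks; a path tracer evaluates it by sampling, which is why accurate general BSDFs remain an offline-rendering feature.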


Actually any volumetric effect doesn't really need to contain a model.

Nor does terrain or water or vegetation. That’s why I listed them there together.
Hard to tell if your emphasis is on “any”, which would in that case mean that you are just expanding upon my list.
And indeed, no volumetric effect needs a model/mesh.


L. Spiro



Also, thank you for your comments.  I think you and I are actually on the same page in a lot of places, and maybe something like the shader manager is the missing piece for me.

 

Just remember the SRP (Single Responsibility Principle): if you're doing something that is unrelated to a class, you need to create another class and give it that responsibility.  So dividing tasks can be a great start.

 


But it also necessarily contains a model as described above (with a mesh containing a vertex and index buffer), and that model can be added to an entity as its render component so it will get drawn in the same rendering loop as every other entity.

 

The more you abstract things that don't necessarily need to be abstracted, the more you tend to write unmaintainable code.  A terrain is so specialized that you find books out there like "how to render terrains", "shader guide for terrains", "practical rendering with terrains", etc.  A terrain doesn't have a mesh.  It has buffers, a lot of texture layers (for texture blending, etc.); do not mix it up with the models.

 


Correct!  Render components own a pointer to a model instance, that's all.

 

Looks like your "render component" is a model instance.  If it is, then that isn't a problem, but a render component can be anything that can be rendered on the game side.

 

If you're not comfortable with the entity-component-system approach, I'd recommend creating a simple hierarchy such as game entities (used in CryEngine, id Tech, etc.) and subdividing the entities' responsibilities, and likewise the components'.  I don't use ECS myself because I think that for a single person managing an engine, it's asking for unmaintainable code.


Hi all,

Thanks for the additional comments.  I think it's starting to become more clear how I'm going to need to restructure a few things, but I have a few more questions that might help me out a little.

 

 

But it also necessarily contains a model

This was mainly in the context of how I have my heightmap-based terrain set up.  As stated, in my engine a model is just a vertex buffer, index buffer, and a subset table.  With my approach, I'm storing all of these in a single model that the terrain class builds during initialization and owns, submitting it to the input assembler as a control patch (D3D11_PRIMITIVE_TOPOLOGY_4_CONTROL_POINT_PATCHLIST).  Are you saying instead that the Terrain class itself should directly own the necessary buffers and subset table?  Pulling this out of the model class and giving it to the terrain directly isn't a huge refactoring effort by any means, it just seems like it's duplicating code.

 

I guess I see two possible approaches emerging.  One is to loosen up exactly what a render component owns, and instead have it be more like below:

// anything that a scene needs to render must implement this interface
class IRenderable
{
public:
  virtual ~IRenderable() = default;
  virtual void render(GraphicsContext& deviceContext, FrameRenderData& frameRenderData) = 0;
};

class RenderComponent : public BaseComponent
{
public:
  // basic interface
private:
  IRenderable* m_pRenderable;
};

In this setup, whether I'm using my current simple heightmap terrain or later adopt geoclipmapping, for example, as long as the terrain class implements IRenderable, it can be used as a render component in the scene and drawn during the main render loop to the GBuffer (or whatever render target it needs).  This would also mean that the model class would need to implement the IRenderable interface, and therefore know how to draw itself.  I don't hate the idea too much, but it seems like a model should contain the data needed to draw itself, not actually do the drawing.  Maybe I'm just over-complicating and over-abstracting here, though.

 

The other approach would be to have specialized renderers for different types of objects.  One would exist for models in their current context, and there would also be specialized ones for terrain, water, vegetation, etc.  For example:

class IRenderer
{
public:
  virtual ~IRenderer() = default;
  virtual void drawScene(GraphicsContext& context, FrameRenderData& renderData) = 0;
};

class ModelRenderer : public IRenderer
{
public:
  void drawScene(GraphicsContext& context, FrameRenderData& renderData) override;
};

class TerrainRenderer : public IRenderer
{
public:
  void drawScene(GraphicsContext& context, FrameRenderData& renderData) override;
};
// more specialized renderers
...
//
class RenderSystem
{
  std::vector<IRenderer*> m_renderers;
public:
  // scenes can add only the renderers they need
  void addRenderer(IRenderer* pRenderer);
};

With this setup, the renderers themselves are specialized and know only how to draw objects that "match" them.  So for example, if I were to later down the road create a new terrain class based on geoclipmapping, I would create a specialized renderer class for it and register that with the render system during scene setup.  Regardless of whether or not it has a model, the specialized renderer will know enough about whatever it is it's supposed to be drawing to set the appropriate data to its constant buffers and get it into the GBuffer or whatever other render target it may happen to need.

 

Does this make sense, or am I still missing it?

 

Thanks for your help, all.


My approach is a bit different in that low-level rendering is not directly related to an entity.

 

Low-level rendering means executing the stages in a graphics pipeline.  Each stage has its specific task within the pipeline.  A stage that uses the GPU requires a GPU program.  Of all the GPU programs left after filtering by platform and user settings, at most a few are suitable for a specific stage, because they implement the solution to that stage's task.

 

A renderer as a component of an entity (be it a game object in ECS or an entity in its own right) specifies which one of the remaining GPU programs is to be used.  In the end it is the interface of the GPU program that needs to be fulfilled, e.g. all constant blocks set as expected and all vertex attributes streamed in as expected.  Some of this data is provided by the entity via its renderer, probably stored in some other components.  Some comes from elsewhere, e.g. the camera or view settings from the viewing system, or stage-specific settings from the stage.

 

Collecting this data is the task of the stage when building the rendering jobs.  In this sense a renderer component is not an active component.  Maybe this gives away a bit of flexibility (although I have not hit such a problem yet), but it keeps the responsibility for what happens within stage processing with the stage itself.
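A rough sketch of that stage-centric collection, with all types and names as hypothetical placeholders: the stage, not the renderer component, assembles the constant data from the entity, the camera, and its own settings when building a render job.

```cpp
#include <vector>

// Hypothetical placeholders for the concepts described above.
struct GpuProgram    { /* shaders + expected constant block interface */ };
struct ConstantBlock { /* name, slot, data blob */ };

// A fully resolved unit of work: program plus every constant block it expects.
struct RenderJob
{
    const GpuProgram* program;
    std::vector<ConstantBlock> constants;
};

// Passive component: it only names the program and holds per-entity data.
struct RendererComponent
{
    const GpuProgram* program;
    ConstantBlock perEntityConstants; // e.g. world transform
};

struct Stage
{
    ConstantBlock stageConstants;  // stage-specific settings
    ConstantBlock cameraConstants; // view/projection from the viewing system

    // The stage gathers everything the program's interface requires.
    RenderJob buildJob(const RendererComponent& rc) const
    {
        return RenderJob{rc.program,
                         {rc.perEntityConstants, cameraConstants, stageConstants}};
    }
};
```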
