Design: Managers VS Single Entities


I see this differently; maybe I have been thinking of this concept wrong, but to me a renderable is an object such as a Quad.

Well, you can see it however you want; what I'm saying is that your current definition, as implied by the sample code you showed and the problems you're having with it, is part of what is limiting your design. Sure, you can define a renderable as a single quad, but that's not really a useful abstraction in general.

For an instance of SpriteBatcher, they have:

So here you have a vertex buffer with a particular format. You can describe the format with data as well, as simple as some flags or as complex as a class with offsets and attribute types inside. You have a VAO, buffer objects for the index and vertex buffers, a shader program. All data.


struct RenderCommand { // "Renderable," or "RenderObject," or "DrawCommand," or "Drawable," or "RenderJob," or...
  VertexFormatFlags vertexFormatFlags;
  void* vertexData;
  void* indexData;
  int vertexBufferId;
  int indexBufferId;
  int arrayObjectId;
  int programId;
  std::map<std::string, Uniform> programUniforms; // including world matrix, et cetera
  std::vector<Texture*> textures;
  // ...other stuff...
};

You can extend this with whatever other data you find yourself needing; often this might include state information, such as whether culling should be enabled. Whatever.

This structure represents a logical draw call. It's all the data necessary for you to make that call, all the state you must set. The system that consumes a big list of these can use that information to sort the calls appropriately: sort anything flagged as transparent from back to front, for example, and ensure it's all rendered after solid objects. It can sort by shared textures or shared shaders, so as to avoid making excess state changes. This is both more optimal and completely circumvents the problem you initially illustrated where things go all pear-shaped if you forget to call your "end()" method. You don't need any sort of "end" method any more; the RenderSystem has all the requested draw calls as a big list of these RenderCommand objects, and all the logic to make sure they're ordered and sequenced properly, along with their appropriate API state changes, is in there, in one place.
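A minimal sketch of what that consuming side could look like, under the assumption that it exposes something like submit() and flush() (names invented here, not anything from the posted code):

// Hypothetical sketch: a RenderSystem that collects commands and sorts
// them once per frame before touching any GL state.
#include <algorithm>
#include <vector>

class RenderSystem {
public:
    void submit(const RenderCommand& cmd) { commands.push_back(cmd); }

    void flush() {
        // Sort so shared programs/VAOs end up adjacent, minimizing state
        // changes; transparent geometry could be partitioned out and
        // sorted back-to-front here as well.
        std::sort(commands.begin(), commands.end(),
                  [](const RenderCommand& a, const RenderCommand& b) {
                      if (a.programId != b.programId) return a.programId < b.programId;
                      return a.arrayObjectId < b.arrayObjectId;
                  });
        for (const RenderCommand& cmd : commands) {
            // bind cmd.programId / cmd.arrayObjectId, upload uniforms,
            // bind textures, then issue the actual draw call
        }
        commands.clear(); // the list is rebuilt every frame
    }

private:
    std::vector<RenderCommand> commands;
};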

Continuing on to your particle emitters: you point out a need for two VAOs and two shaders. You can't bind two shaders at once, so a particle emitter will naturally involve two RenderCommands, one for each VAO/shader pair. One is for the "update particles" call, the other for the "render particles" call. Here you will need to come up with a means of ensuring that those two commands always execute in the right order, and add that to your RenderCommand. One way to do that is to give each command a unique (per-frame) "job ID" and a "dependency job ID," which allow the sorting routines in RenderSystem to ensure one always executes first, if this isn't already implicit in the sorting based on your shaders. Another way is to allow a "bucket" to be specified when you insert a RenderCommand into the RenderSystem, prevent the RenderSystem from reordering the buckets relative to each other (bucket 0 always executes before bucket 1), and simply insert the two RenderCommands into different buckets. Et cetera.
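For the bucket approach specifically, one common trick (an assumption on my part, not something from this thread) is to fold the bucket into the high bits of a single integer sort key, so an ordinary sort can never reorder commands across buckets:

// Hypothetical sketch: the bucket in the top bits dominates the sort;
// shader and texture grouping happens within each bucket for free.
#include <cstdint>

uint64_t makeSortKey(uint8_t bucket, uint16_t programId, uint16_t textureId) {
    return (uint64_t(bucket)    << 56) | // highest priority: bucket order
           (uint64_t(programId) << 40) | // then group by shader program
           (uint64_t(textureId) << 24);  // then group by texture
}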

In the case of the first command, it's configured such that its geometry shader is writing to an output buffer, and the ID of that output buffer is used for the ID of an input buffer in the second command.
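In terms of the RenderCommand struct above, that wiring might look something like this. Note this is only a sketch: the struct as posted has no field for naming a transform-feedback output buffer, so that part is flagged in a comment rather than invented.

// Hypothetical sketch: building the chained pair of particle commands.
// particleBufA holds the current particle state; particleBufB receives
// the update pass's output and is then drawn from.
#include <utility>

std::pair<RenderCommand, RenderCommand> makeEmitterCommands(
    int updateProgram, int renderProgram,
    int particleBufA, int particleBufB)
{
    RenderCommand update{};                // pass 1: advance the simulation
    update.programId      = updateProgram;
    update.vertexBufferId = particleBufA;  // reads current state
    // a real version would also need a field naming particleBufB as the
    // transform-feedback target; the posted struct doesn't have one yet

    RenderCommand render{};                // pass 2: draw the results
    render.programId      = renderProgram;
    render.vertexBufferId = particleBufB;  // pass 1's output is pass 2's input
    return { update, render };
}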

That's it. All of the stuff you're doing in these disparate sprite and particle systems is capturable as data: as values of fields on a generic structure (or set of structures) that can be generically processed, avoiding repetition, increasing maintainability, probably improving performance, and eliminating the possibility of the sort of state-coordination errors you encounter with a system where every kind of thing manages its own entire render path.

Now your SpriteBatcher still needs to manage a set of vertex buffers, but whenever you ask it to give you a new sprite, it modifies the buffer and returns a RenderCommand pertaining to that buffer. If it happens to be unable to make a sprite with the existing buffer, it creates a new one and returns a RenderCommand for that one instead. Either way, you simply take all of its render commands and insert them into the RenderSystem. Similarly for the particle system: you ask it for a new particle emitter, it allocates some new buffers or resources if it needs them, but ultimately just returns two RenderCommands appropriately constructed for insertion into the RenderSystem.
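As a usage sketch, in the same loose style as the other snippets in this thread (addSprite and createEmitter are names I'm assuming for illustration; the interface hasn't been pinned down here):

// Hypothetical sketch: per-frame flow once both systems hand out commands.
RenderCommand spriteCmd = spriteBatcher.addSprite(textureAtlas, clipRect1);
renderSystem.submit(spriteCmd);

auto [updateCmd, renderCmd] = particleSystem.createEmitter();
renderSystem.submit(updateCmd); // both go into the same list, so the
renderSystem.submit(renderCmd); // ordering logic lives in exactly one place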

The idea is simply to take all the low-level OpenGL state manipulation that you used to be doing in both places, factor it out by representing the differences as data, and move it into a single uniform place. Build on top of that single uniform place for the parts that need to be specific, such as the fact that certain shaders must be used for rendering or updating the particles.


Overall, things are starting to click, but I've still got some rocky parts.

You don't need any sort of "end" method any more; the RenderSystem has all the requested draw calls as a big list of these RenderCommand objects, and all the logic to make sure they're ordered and sequenced properly, along with their appropriate API state changes, is in there, in one place.


This does not make sense to me. How does my batcher know when a batch is done (assuming it's not forced)?
I know you say to just have the batcher return a RenderCommand, but the issue I have here is that I map directly into the batcher's VBO.
This requires an initial MapBuffer call, followed by an UnmapBuffer call when I am done. If I do these calls every single time I add a sprite, it's going to kill my performance; also, if I have an intermediate data spot, it defeats the purpose of even mapping directly into a buffer.


//Breakdown of the current SpriteBatcher

batch.BeginBatch(); //Call MapBuffer, get a pointer into the buffer for direct placement of data
batch.DrawSprite(); //Check if the buffer is full / texture swap. If the check passes: UnmapBuffer, render the batch, and call MapBuffer again
batch.DrawSprite(); //Same as above
batch.EndBatch();   //Call UnmapBuffer, render the batch
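One way to square this with the RenderCommand approach (my assumption; nothing in the replies so far confirms this is what's intended) is to tie the map/unmap pair to the frame rather than to each sprite: map once up front, let DrawSprite write through the mapped pointer and record a command holding the range it wrote, and unmap a single time when everything is flushed.

//Hypothetical sketch: one MapBuffer/UnmapBuffer pair per frame
batch.BeginFrame();                        //single MapBuffer call
batch.DrawSprite(textureAtlas, clipRect1); //writes through the mapped pointer,
                                           //records a RenderCommand with its vertex range
batch.DrawSprite(textureAtlas, clipRect2); //same; no unmap needed between sprites
renderer.FlushRenderCommandPipeline();     //single UnmapBuffer, then the sorted draw calls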

This is both more optimal and completely circumvents the problem you initially illustrated where things go all pear-shaped if you forget to call your "end()" method


If the high-level systems (sprite batcher, particle emitter, etc.) defer their draw calls to a Render System, I kind of don't see how the original problem is fixed. Don't get me wrong, I like the idea of having everything in one spot, and I understand that the Render System will/should sort all the queued draw calls for best performance, but in the case I presented I don't think that will ever fix the original problem. The particle emitter [focusing only on the render portion of the emitter, because the update portion does not matter for this case] has its own separate vertex shader and has no tie to the sprite batcher in any way. The buffer object used for the vertex data is the TBO that is bound as a uniform, and the true draw call here (the one that uses the IBO) does not use a traditional VBO.

I guess what I'm trying to say is that the particle emitter can be seen as its own batch. In terms of the Render System, it should be treatable as a normal batch, but I would also think that any batch from any system would need to be able to trigger the "end batch" for all the other systems. Otherwise the original problem still exists. This may come down to my above question about batch finalization, as the following could be a fix:


 
//Uses the painter's algorithm; a texture atlas is used in this example

//Because of the EndBatch calls the original problem is fixed: every batch is ended explicitly,
//so all batches are rendered in the proper order
//Draw the background
batch.BeginBatch();
batch.DrawSprite(textureAtlas, clipRect1);
batch.EndBatch();

//Draw fire particles for the campfire
emitter.DrawParticles();

//Draw the campfire site
batch.BeginBatch();
batch.DrawSprite(textureAtlas, clipRect2);
batch.EndBatch();


//Breakdown of the broken version (the original issue):

//The drawing order is the same as above: background, then particles, then the campfire site on top of the particles.
//Problem: the batch does not end until the EndBatch call (the buffer has plenty of space and there is no texture swap),
//so nothing is rendered until then, messing up the draw order, which can lead to sprites being covered up when they should not be.
batch.BeginBatch();
batch.DrawSprite(textureAtlas, clipRect1); //Draw the background
emitter.DrawParticles();                   //Draw fire particles for the campfire
batch.DrawSprite(textureAtlas, clipRect2); //Draw the campfire site
batch.EndBatch();

 

When I imagine the Render System I think of something like:


 
//FlushRenderCommandPipeline handles everything automatically in terms of draw calls (additional sorting for best performance, etc.)
//Preservation of the draw order is done automatically
batcher.DrawSprite(textureAtlas, clipRect1);
emitter.DrawParticles();
batcher.DrawSprite(textureAtlas, clipRect2);

renderer.FlushRenderCommandPipeline();
 

Maybe I am wrong and it's more of a combination of the two:


 
//FlushRenderCommandPipeline handles everything automatically in terms of draw calls (additional sorting for best performance, etc.)
//Preservation of the draw order is done automatically

//Draw the background
batch.BeginBatch();
batch.DrawSprite(textureAtlas, clipRect1);
batch.EndBatch();

//Draw fire particles for the campfire
emitter.DrawParticles();

//Draw the campfire site
batch.BeginBatch();
batch.DrawSprite(textureAtlas, clipRect2);
batch.EndBatch();

renderer.FlushRenderCommandPipeline();
 

What Josh is describing is a retained-mode API, where the graphics library (RenderSystem) maintains its own internal model of the scene. The application adds and removes "renderables" from that scene; beyond that, the graphics library handles how and when everything is drawn. What you have right now is an immediate-mode API, where the application maintains the model and issues drawing commands to the graphics library.

In the retained-mode API, the application builds the scene using the aforementioned renderable primitives, but that's it. The graphics library does the rest. There are no batchers issuing draw commands to the API. If you want to add and remove a sprite to and from the scene, it would look something like this:


// when the game object represented by the sprite is created
Sprite* sprite = new Sprite("Foo.png", Vector2(100, 100));
scene->addRenderable(sprite);
 
// ...
 
// when the game object represented by the sprite is destroyed
scene->removeRenderable(sprite);

That's it. As far as batching is concerned, that can and should be automatically detected by the renderer. If all your sprites use the same vertex format, same shader, etc., the renderer will know and can combine those vertices into the same VBO and collapse the draw calls. Same thing with particle systems, or 3D meshes, or anything else that can be rendered in your scene.
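A sketch of what that detection could look like internally (Renderable and materialKey are names assumed here for illustration, not an established API):

// Hypothetical sketch: group renderables by "material" (shader + texture)
// so each group can share one VBO and be drawn with one call.
#include <map>
#include <tuple>
#include <vector>

struct MaterialKey {
    int programId;  // shader program
    int textureId;  // texture (or atlas page)
    bool operator<(const MaterialKey& o) const {
        return std::tie(programId, textureId) <
               std::tie(o.programId, o.textureId);
    }
};

struct Renderable {                            // assumed minimal interface
    MaterialKey key;
    MaterialKey materialKey() const { return key; }
};

void buildBatches(const std::vector<Renderable*>& scene) {
    std::map<MaterialKey, std::vector<Renderable*>> batches;
    for (Renderable* r : scene)
        batches[r->materialKey()].push_back(r);
    // for each entry: fill or reuse one VBO with every member's vertices,
    // then issue a single draw call for the whole group
}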

This leads to the last point, which is that the graphics library should own and manage the low-level graphics resources like vertex buffers. The application should only deal in abstractions, and let the graphics library determine what physical resources are required and how to manage those resources.



Ok, so we really are talking about something like my original pseudocode (second post by me).
It's just that instead of the Render System being a wrapper for everything else, everything else contributes to the Render System,
where the Render System only sees "batches" and handles those accordingly based on some criteria. And while a Render System won't fix the issue I had originally (they rely on separate shaders, etc.), it does have the benefit of having one uniform spot to handle anything that is fed to the system.

This chunk actually helped me out a lot:

If you want to add and remove a sprite to and from the scene, it would look something like this:


// when the game object represented by the sprite is created
Sprite* sprite = new Sprite("Foo.png", Vector2(100, 100));
scene->addRenderable(sprite);
 
// ...
 
// when the game object represented by the sprite is destroyed
scene->removeRenderable(sprite);

Ok, so we really are talking about something like my original pseudocode (second post by me).

Not quite, since methods like DrawSprite and DrawParticles (if I'm referring to the correct post) imply that calling code tells the renderer what to render, and when to render it, based on the order and presence of draw commands. Rendering is guided by procedures, which is at the core of an immediate-mode API.

In a retained-mode API, calling code only manipulates a model, i.e. data, and the render system handles what should be rendered and when. You're not calling into the render system every frame telling it to render something. It already knows about your sprite because you added it to a list (via addRenderable), and it will stay there until you remove it (removeRenderable). The correct render order of the sprites, both among themselves and versus other renderables like particles, can be entirely data-driven. Provided that you give the render system enough information to perform the sorting (Z-order, transparency, etc.), it should solve any of the issues you have.
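Concretely, that "enough information" can be a few plain fields on each renderable (the names here are my assumption, for illustration only):

// Hypothetical sketch: per-renderable sort data; the render system sorts
// by layer first, then handles transparency ordering within each layer.
struct SortInfo {
    int   layer;        // explicit draw layer (background, particles, foreground...)
    bool  transparent;  // transparent renderables get back-to-front ordering
    float depth;        // view-space depth, used only for that back-to-front sort
};

With something like this, the background/particles/campfire ordering from the earlier example becomes three layer values instead of three carefully sequenced Begin/End calls.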



When we talk about addRenderable, would a renderable carry the data for the VBO/IBO, where this data is actually put into a buffer and the render system keeps a giant VBO/IBO that grows/shrinks as renderables are added? Or would each renderable have its own individual VBO/IBO (the actual GPU buffers) that is used by the render system?


For sprite-based rendering, you shouldn't need to worry about vertex data at all. The render system can trivially generate vertex data for sprites because they're just unit quads centered at the origin, and since the render system is also managing texture resources, it would need to "own" the UV data for the vertices anyway. All the renderable would need to specify is translation, rotation, scale, and image resource of the sprite. Multiple renderables that reference the same image resource can reuse the same vertices. A clever render system could atlas multiple sprites into the same texture resource, so that the only state that needs to change between rendering individual sprites is the transform.
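In that world, the sprite renderable reduces to something like this (ImageHandle is a placeholder type; Vector2 is borrowed from the earlier snippet):

// Hypothetical sketch: a sprite renderable as pure data. The render
// system owns the unit quad, the UVs, and any atlasing behind ImageHandle.
struct SpriteRenderable {
    Vector2     position;  // translation
    float       rotation;  // in radians
    Vector2     scale;     // sprite size in world units
    ImageHandle image;     // which image/atlas region to draw
};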

