Design: Managers VS Single Entities


The other day I finally got to the point where I can say my two main systems (a sprite batcher and full GPU particle emitters) are working!

So I decided it would be a good idea to test them both... at the same time, which is where I ran into a minor glitch. Turns out if I do


 
batch.Begin(projectionMatrix);
batch.Draw(sprite1, 150.0f, 150.0f);
emitter.DrawParticles(projectionMatrix, 150.0f, 150.0f);
batch.End();

The emitter's particles will always get covered, regardless of the order. I soon found out it came down to the current batch of my sprite batcher not ending at the point of the emitter's draw call. Which makes sense, because they are separate systems and handle drawing separately.

Then I pondered a solution: if I had a render manager, I could do things like flag every Draw**** method for each subsystem. Then, when Draw**** calls are made, I could flush and render whatever I need.

So this brings me down to my questions:

Is this something I should even consider / what's the best approach?

Should I have done this in the first place?

Would it just be better to have the separate entities and "flush/draw" when I know I need to?

Does the Singleton pattern apply here / will I fall into its "trap" (I kind of don't really understand the Singleton pattern trap)?

Is this something I should even consider / what's the best approach?

"Best" is subjective, but having a renderer interface responsible for properly sequencing all draw requests is a pretty common pattern. I'd say it's worth looking in to, as if nothing else is does afford you the opportunity to make it harder to misuse your rendering architecture in the way you've currently demonstrated. It also implies that your rendering code (the stuff that actually deals with OpenGL or Direct3D underneath your API layers) will be more centralized in this rendering interface rather than scattered about over "sprite batches" and "particle emitters."
You can still have higher-level render objects like that, they can just be written in terms of your rendering interface so the onus of the sequencing (and begin/end pairing) is not on the user of the code any longer.
Should I have done this in the first place?

If you had, you might never have stumbled across the flaw in your initial design that taught you the alternative you're thinking about might be better, and thus you would not have understood the relative pros and cons as well.
Would it just be better to have the separate entities and "flush/draw" when I know I need to?

It's generally better to have a single renderer that knows how to draw "renderable objects" rather than spread rendering code all over other entities. It centralizes the functionality for extension and optimization and removes responsibilities from other interfaces that may not need them.
Does the Singleton pattern apply here / will I fall into its "trap" (I kind of don't really understand the Singleton pattern trap)?

You can trivially avoid the pitfalls of singletons by simply never considering them as a design option. You'll be perfectly fine doing so; there is no need for a rendering singleton here.
If I had a render manager

"Manager" is often a bad word to use. It's too broad, implies too much fuzziness in the individual responsibilities of an interface. There are usually better, more-specific words you can use to describe the name of such a thing, and you should use those if possible. In this case what you are describing is an interface which takes draw requests and then flushes them to the card in the proper order with the proper set up and tear down. It would not be out of place to simply call this a "renderer," but if you are using that name already, other common names for the kind of thing you're talking about are "render queue," "render command stream," and "render sequencer."

having a renderer interface responsible for properly sequencing all draw requests is a pretty common pattern.


Are we talking about a literal interface here that each "sub system" implements?
Or are we talking about some kind of larger "render system object", where it has all the "sub systems" as members and we have wrapper functions?

E.g. something like:



class RenderSystem
{

    public:
    void DrawSprite(const Sprite &sprite, float posX, float posY);
    void DrawParticles(const ParticleEmitter &emitter, float posX, float posY);

    private:
    void FlushRenderPipe();

    SpriteBatcher batcher;
    bool pipeHasItems = false;

    /* Other members */
};

//=====================================================================================

void RenderSystem::DrawSprite(const Sprite &sprite, float posX, float posY)
{  
    // Flush whatever is already queued before this draw is recorded
    if (pipeHasItems)
        FlushRenderPipe();

    pipeHasItems = true;
    batcher.Draw(sprite, posX, posY);
}


/* Similar method for particle emitter */

/* -- Other render system methods -- */
No, more like a "RenderSystem" object that has methods to "enqueue" and "flush" objects of type "Renderable." A renderable object is a simple structure (not the base of some class hierarchy) that has references to geometry (vertex and index data), textures, shaders, and whatever other metadata you need to render (such as some flags to indicate sort-order requirements, and so on). The render system simply enqueues these, sorting them as appropriate based on the data, and flushes them when needed (or demanded). This puts all the logic for dealing with batching, sorting, et cetera in one place.

Higher-level systems (like a "particle visualizer") are built on top of that medium-level system. The ParticleVisualizer interface can take a *logical* particle system description and produce the appropriate renderable instance or instances, which can then be fed to the RenderSystem.
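
A minimal C++ sketch of that shape, purely illustrative (the field names, the sort key, and the Submit helper are assumptions made for the example, not a prescription):

#include <algorithm>
#include <cstdint>
#include <vector>

// Plain data: everything needed to issue one draw call, nothing more.
struct Renderable
{
    std::uint32_t vao        = 0;  // geometry (VBO/IBO, or TBO-backed)
    std::uint32_t shader     = 0;  // program to bind
    std::uint32_t texture    = 0;  // texture to bind
    std::uint32_t indexCount = 0;  // how much to draw
    std::uint64_t sortKey    = 0;  // encodes layer/material/depth for ordering
};

class RenderSystem
{
public:
    void Enqueue(const Renderable &r) { queue.push_back(r); }

    void Flush()
    {
        // Sort by key so state changes (shader/texture swaps) are minimized
        std::sort(queue.begin(), queue.end(),
                  [](const Renderable &a, const Renderable &b) { return a.sortKey < b.sortKey; });

        for (const Renderable &r : queue)
            Submit(r);
        queue.clear();
    }

private:
    void Submit(const Renderable &r)
    {
        // The only place that touches OpenGL/D3D: bind r.shader, r.texture,
        // r.vao, then issue the draw call (omitted here).
        (void)r;
    }

    std::vector<Renderable> queue;
};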


This sounds a lot like my sprite batcher in a sense. It takes in all my data, manipulates it, gets it all ready for the render pipe, etc. The only unfit puzzle piece is my emitters, since I created them with a dramatically different rendering method in mind. I guess it's just a matter of fusing them together / building a system that can handle them both properly.

Leaving me with a few questions, but mainly: does this mean my "render system" is going to be tightly coupled to my current rendering objects or any future ones I create? Or am I missing the point here? And does it mean I should be rewriting my current systems into one giant one instead of trying to keep them separate?

The way I see it is that my render system is the highest level and the subsystems (sprite batchers, emitter, light, etc) are the medium level. Maybe I'm thinking of this wrong?

I think you did not understand: he wants you to ditch the idea of a huge RenderSystem with dependencies on "everything" and make the most basic one, avoiding any dependence on game objects, that just takes simple structures which are still descriptive enough to render everything, because they contain only low-level data.

Then you can have any number of independent higher-level systems, each of which knows only one type of game object and the RenderSystem interface, and does the job of translating that data into the mentioned simple structures and feeding them into the RenderSystem.

but mainly: does this mean my "render system" is going to be tightly coupled to my current rendering objects or any future ones I create?

No, exactly the opposite. Creating new things -- text, sprites, particle emitters, whatever -- involves appropriately crafting the data of a Renderable instance or instances and sending them to the RenderSystem. The RenderSystem only understands Renderables, and thus is not coupled to any specific use of them.

And does it mean I should be rewriting my current systems into one giant one instead of trying to keep them separate?

Yes, you should strive to have your rendering pipeline unified. Differences should be handled by data, not code, for as deep into the pipeline as you can. Your approach right now appears to have each type of visual thing directly know how to render itself through manipulation of the appropriate D3D interfaces, et cetera. Not only does this create the sort of state synchronization problems you originally encountered, it's usually less maintainable because it involves a lot more repetitive boilerplate code, and changing or adding features can require touching a significant amount of that code.

The way I see it is that my render system is the highest level and the subsystems (sprite batchers, emitter, light, etc) are the medium level. Maybe I'm thinking of this wrong?

In the approach I am describing, you have OpenGL or D3D at the bottom of the stack. On top of that is the "RenderSystem," which abstracts away those APIs, allowing you to speak in terms of "draw this geometry with these flags at the correct time." On top of that you build the highest-level visualizers for transforming your logical game data (tile maps, sprites, particle emitters) into appropriate commands or objects for the RenderSystem to consume.
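
As a rough sketch of that layering (every name here is hypothetical, and Renderable is abbreviated to keep it short):

#include <vector>

struct Renderable { unsigned vao = 0, shader = 0, texture = 0, indexCount = 0; };

// Logical game data: no GL handles, just what the game knows about an emitter.
struct EmitterDesc { float x = 0, y = 0; int maxParticles = 0; };

// Highest level: knows one kind of game object and how to express it as Renderables.
class ParticleVisualizer
{
public:
    std::vector<Renderable> Build(const EmitterDesc &desc) const
    {
        Renderable r;
        r.vao        = particleVao;            // TBO-backed geometry this visualizer owns
        r.shader     = renderProgram;          // the emitter's render program
        r.indexCount = desc.maxParticles * 6;  // six indices per expanded quad
        return { r };
    }

private:
    unsigned particleVao = 0, renderProgram = 0;
};

// Per frame (renderSystem being the medium-level queue from the earlier sketch):
//   for (const Renderable &r : particleVisualizer.Build(emitterDesc))
//       renderSystem.Enqueue(r);
//   renderSystem.Flush();   // the only code that talks to the graphics API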



I do understand what Josh means, but I think the thing that is tripping me up is the way I currently have it VS the way I should have it.

The "Render System" Josh describes actually truly is my sprite batcher, it handles all my intended render calls for my sprites, tiles, etc.

But here is where the issue, the trip-up I have, comes into play. It's that the system I wanted afterwards, my particle emitter, has been built on a whole different way of rendering.

It is so drastically different that it did not/does not make sense to make it a part of my sprite batcher.

The reason these systems are so different / hard to fit together is what each of them is when it's boiled down.

The sprite batcher takes vertex data, manipulates it, fills a VBO, and then, if the batch needs to end because of a texture swap or because we ran out of space, it is ended and a new batch is started. Then you have my emitters, which are full-on GPU emitters. In a nutshell, they get configured once on creation and you let them go. The GPU handles everything: it is responsible for creating new particles, killing particles, etc. The "renderable object" in turn gets handled internally by the hardware.

The two systems have very different vertex data footprints and ways of rendering. The batcher has its "normal" use of VBOs / a few uniforms and makes quads, whereas the emitter uses TBOs filled with point / particle property data, and the emitter's vertex shader "expands them" into quads.

That is why I am falling apart here :(

I can't make sense of having something that already handles itself and forcing it to be handled by something else that really does nothing for it.

But here is where the issue, the trip-up I have, comes into play. It's that the system I wanted afterwards, my particle emitter, has been built on a whole different way of rendering.

Which is generally bad. Now you have divergent render paths that must communicate to ensure reasonable state assumptions are consistent.

The sprite batcher takes vertex data, manipulates it, fills a VBO, and then, if the batch needs to end because of a texture swap or because we ran out of space, it is ended and a new batch is started.

A "batch" is a "renderable." Geometry, associated textures, flags (et cetera). A sprite batcher produces n renderables.

Then you have my emitters, which are full-on GPU emitters. In a nutshell, they get configured once on creation and you let them go. The GPU handles everything: it is responsible for creating new particles, killing particles, etc. The "renderable object" in turn gets handled internally by the hardware.

The "renderable object" cannot be "handled internally by the hardware" unless there is some kind of draw call. Otherwise, nothing happens. So then, a "renderable" is that draw call. You must set up some kind of buffer, shader and texture state before you emit that draw call. Factor that out with the state that the sprite rendering is using. It's not going to be possible to give you further advice without more specifics about what you find so different between these two things.

The two systems have very different vertex data footprints and ways of rendering. The batcher has its "normal" use of VBOs / a few uniforms and makes quads, whereas the emitter uses TBOs filled with point / particle property data, and the emitter's vertex shader "expands them" into quads.

A "renderable" has buffers (VBOs, TBOs), properties (uniforms) and shaders...

A "batch" is a "renderable." Geometry, associated textures, flags (et cetera). A sprite batcher produces n renderables.

A "renderable" has buffers (VBOs, TBOs), properties (uniforms) and shaders..


I see this differently; maybe I have been thinking of this concept wrong, but to me a renderable is an object such as a quad.
Just the data that would be in a VBO (the XYZ positions, texture coords, assigned texture, etc.); the VBO is just the medium to get it to the GPU.

Maybe this is because my focus is 2D.

It's not going to be possible to give you further advice without more specifics about what you find so different between these two things.

I hope this will explain things better

For an instance of SpriteBatcher they have:

Renders Quads. Single vertex Data is:

XYZ position

UV tex coords

Single VAO:

Contains a VBO and IBO

Single Shader Program:

Contains a Vertex Shader and Fragment Shader

The VBO is filled using Draw methods that map directly into the buffer. Vertices are pre-transformed.

Drawing [when a batch's end has been signaled using the End() method, or forced (ran out of space) by the VBO space check]; a rough sketch of this sequence follows the list:

Bind the shader program

Bind the batch's texture

upload uniforms (MVP matrix / texture)

Bind the VAO

True draw call that uses the IBO to render all the data that was placed in the VBO
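
Roughly, that flush looks like this (a simplified sketch only; the GL loader, the GLM math type, the GL_UNSIGNED_SHORT index type, and all the parameter names here are illustrative, not my exact code):

#include <GL/glew.h>   // any GL loader works; GLEW is assumed here
#include <glm/glm.hpp>

// Sketch of flushing one finished sprite batch, following the steps listed above.
void FlushSpriteBatch(GLuint program, GLuint texture, GLuint vao,
                      GLint mvpLocation, GLsizei indexCount, const glm::mat4 &mvp)
{
    glUseProgram(program);                                      // bind the shader program
    glBindTexture(GL_TEXTURE_2D, texture);                      // bind the batch's texture
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]);   // upload the MVP matrix
    glBindVertexArray(vao);                                     // VAO carries the VBO + IBO bindings
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, nullptr); // draw via the IBO
    glBindVertexArray(0);
}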

For an instance of Particle Emitter they have:

Renders quads transformed from point data. TBO Data:

XY position

XY acceleration

XY Velocity...

all use vec2 in shaders. 8 vec2s total

Two VAOs

VAO 1: contains only a TBO

VAO 2: contains only an IBO

Two Shader Programs.

Shader Program 1 (used for updating particles):

Contains a Vertex Shader and Geometry Shader

Shader Program 2 (used for rendering particles):

Contains a Vertex Shader and Fragment Shader

Pertains to Updating [thought I would include just in case]:

TBOs are initially given one "generator" particle. This particle creates more particles based on uniforms that describe the properties a particle can have (velocity, acceleration, etc.). Particles are generated over time in the geometry shader, where they are also discarded if they are "dead". The vertex shader is the one that actually updates the positions, etc.

Pertains to Rendering [explicitly done through a DrawParticles call]; a rough sketch of this sequence follows the list:

Bind Shader Program 2 (render program)

upload uniforms (MVP matrix / texture for particles / bind texture for TBO use)

Bind VAO 2 (one with IBO only)

True draw call that uses the IBO. The magic happens in the vertex shader, where the TBO is iterated over to take each piece of point data (the single vertex and its data that make up a particle) and expand it into a quad. The number of particles is based on the actual number of point-data entries in the TBO.
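
Roughly, that draw path looks like this (again a simplified sketch with illustrative names; it assumes the two sampler uniforms were set once at init and six indices per particle quad):

#include <GL/glew.h>   // any GL loader works; GLEW is assumed here
#include <glm/glm.hpp>

// Sketch of the emitter's render pass, following the steps listed above.
void DrawParticlesSketch(GLuint renderProgram, GLuint particleTexture, GLuint tboTexture,
                         GLuint renderVao, GLint mvpLocation, GLsizei particleCount,
                         const glm::mat4 &mvp)
{
    glUseProgram(renderProgram);                                // Shader Program 2 (render)
    glUniformMatrix4fv(mvpLocation, 1, GL_FALSE, &mvp[0][0]);   // upload the MVP matrix

    glActiveTexture(GL_TEXTURE0);                               // particle sprite texture
    glBindTexture(GL_TEXTURE_2D, particleTexture);
    glActiveTexture(GL_TEXTURE1);                               // TBO exposed as a buffer texture
    glBindTexture(GL_TEXTURE_BUFFER, tboTexture);

    glBindVertexArray(renderVao);                               // VAO 2: IBO only
    // The vertex shader reads the TBO and expands each particle's point data into a quad
    glDrawElements(GL_TRIANGLES, particleCount * 6, GL_UNSIGNED_INT, nullptr);
    glBindVertexArray(0);
}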

If you have any questions let me know
