Rendering and Resource Ownership

In looking at how to design a 2D renderer, I've done a fair amount of research, and everything I've found seems to coalesce into two points:

1. There's a lot of ways to do it.

2. Objects shouldn't know how to draw themselves.

This is about all I've found on the subject. Most other articles/tutorials/books on rendering center on the whole graphics pipeline shtick, which is of course important to know but not what I'm after.

Obviously, at some level the renderer needs a list of sprites' texture data and metadata on how to draw them (for instance, position data and texture format). Who owns that data? A resource manager god object? Where do I stage what needs to be rendered at a given time before actually submitting it to the renderer? It certainly doesn't make sense to stage that sort of information in a resource manager.

Furthermore, what's the ideal way to lay out memory for something like this? Should I allocate a sprite's metadata contiguously with its texture data, or does that not make sense for this kind of application? Or is memory layout of renderables not usually a significant bottleneck?

Obviously, at some level the renderer needs a list of sprites' texture data and metadata on how to draw them (for instance, position data and texture format). Who owns that data?

A sprite object?

Should I allocate a sprite's metadata contiguously with its texture data?

No. If you avoid pessimization, you probably won't have any trouble. If you start to see performance issues, you can vectorize your sprite data inside the renderer and use the external sprite class as an interface, which lets you streamline your sorting and culling.
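For what it's worth, a minimal sketch of what that might look like if you ever do reach that point: the renderer keeps the hot per-sprite data in flat, contiguous arrays and the outward-facing sprite class is just a thin handle into them. All of the names here (SpriteRenderer, the handle-holding Sprite) are invented for illustration.

#include <cstdint>
#include <vector>

// Hypothetical renderer-owned storage: the hot per-sprite data lives in
// contiguous arrays (structure-of-arrays), which keeps sorting and culling
// cache-friendly. Texture pixels live elsewhere.
class SpriteRenderer {
public:
    // Returns an index that the external sprite class keeps as its handle.
    std::uint32_t CreateSprite(float x, float y, std::uint32_t textureId) {
        xs_.push_back(x);
        ys_.push_back(y);
        textureIds_.push_back(textureId);
        return static_cast<std::uint32_t>(xs_.size() - 1);
    }
    void SetPosition(std::uint32_t handle, float x, float y) {
        xs_[handle] = x;
        ys_[handle] = y;
    }
private:
    std::vector<float> xs_, ys_;
    std::vector<std::uint32_t> textureIds_;
};

// The external sprite class gameplay code sees: just an interface over the handle.
class Sprite {
public:
    Sprite(SpriteRenderer& renderer, std::uint32_t handle)
        : renderer_(&renderer), handle_(handle) {}
    void SetPosition(float x, float y) { renderer_->SetPosition(handle_, x, y); }
private:
    SpriteRenderer* renderer_;
    std::uint32_t handle_;
};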

A sprite object?

Sure, but who owns the sprite? Does every object own its sprite data and pass that to the renderer? That would mean all of my game logic can talk to the renderer, which seems a bit ridiculous.

Obviously, if I don't have to worry about memory layout, it makes the most sense for the resource system to own the texture data, and for the instance data (e.g. position) to be stored elsewhere and just opaquely reference the texture. What I'm having trouble wrapping my head around is the high-level API between the game logic and the renderer. If Render just looks like this:

void Render(Sprite[]);

How do I get a sprite array set up in the first place? Is there some intermediate "gather" layer where you queue up things to be rendered, or should I just make the renderer stateful and defer calls to Render() to happen all at once?
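(For concreteness, the ownership split described above -- the resource system owns the texture, instances just hold an opaque reference to it -- might look something like this. These names are invented for illustration, not a proposal for the actual API.)

#include <cstddef>
#include <cstdint>

// Opaque reference into the resource system; only the resource system knows
// what it actually maps to.
struct TextureHandle { std::uint32_t id = 0; };

// Instance data that gets passed around; the texture itself lives in the
// resource system and is only referenced here.
struct Sprite {
    float x = 0, y = 0;
    TextureHandle texture;
};

// The signature from the post, spelled out a little more explicitly.
void Render(const Sprite* sprites, std::size_t count);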

In nearly every game I've worked on, multiple systems are involved and each has its own ownership.

A common pattern (under various names) I've seen in all the big games I've worked on is to have a store, a proxy, a cache, and a loader. The store serves as the hub, the controller, the thing many code bases call a "manager" (although I hate that name). The proxy is what most people act on, and it is what the code thinks of as the actual resource; this is what would be called a Sprite class or Texture class or Model class or AudioClip class or whatever. The cache decides how items are actually loaded in memory, decides what is loaded, and helps with loading resources over time so the system doesn't block execution. The loader is what pulls the resources in at any time, and might have different ways of loading data out of multiple types of files, or monitoring file changes, or streaming across networks, etc.

Some piece of code requests GetWhatever() from the store. The store creates a proxy and sends it over to the cache for data. The cache might point it at live data or at a placeholder object, and a proxy filled with some valid data gets returned to the calling system. The gameplay code doesn't know (nor does it care) what is inside the proxy object; it is told the object is a Sprite, and as far as it cares it is a fully functional sprite. In the background the cache uses a placeholder object, such as an alphaed-out single-pixel sprite in a release build or a big red square in a debug build, and eventually the loader loads the resource and the placeholder is replaced with the actual object. This also allows multiple instances to share the same data, because the cache can decide to share it among as many systems as it wants.

The rendering system works with the store to figure out all the instances that are active. The store owns all the proxy objects; it knows how many instances are around and controls when they are created and destroyed. The rendering system works with the store -- in this case a sprite store -- to draw all the sprites that need to be drawn.
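To make that concrete, here is one rough sketch of how those four pieces and the render path might hang together. Every name and signature below is invented for illustration; real implementations differ a lot (asynchronous loading, handles instead of shared_ptr, and so on).

#include <memory>
#include <string>
#include <unordered_map>
#include <vector>

struct TextureData { int width = 1, height = 1; std::vector<unsigned char> pixels; };

// Loader: pulls raw data in from files, pack archives, the network, etc.
class SpriteLoader {
public:
    TextureData LoadFromFile(const std::string& path) {
        (void)path;
        return TextureData{};  // stub; a real loader would read or stream the file here
    }
};

// Cache: decides what is resident in memory and can hand out a placeholder
// until the real data arrives.
class SpriteCache {
public:
    std::shared_ptr<TextureData> Acquire(const std::string& path) {
        if (auto existing = resident_[path].lock())
            return existing;                          // share already-loaded data
        auto data = std::make_shared<TextureData>();  // placeholder (e.g. 1x1 red square in debug)
        *data = loader_.LoadFromFile(path);           // synchronous here only for brevity
        resident_[path] = data;
        return data;
    }
private:
    SpriteLoader loader_;
    std::unordered_map<std::string, std::weak_ptr<TextureData>> resident_;
};

// Proxy: what gameplay code calls "the sprite". It never exposes how the data got there.
class Sprite {
public:
    void SetPosition(float x, float y) { x_ = x; y_ = y; }
    float x() const { return x_; }
    float y() const { return y_; }
    const TextureData* texture() const { return texture_.get(); }
private:
    friend class SpriteStore;
    std::shared_ptr<TextureData> texture_;  // may point at a placeholder for a while
    float x_ = 0, y_ = 0;
};

// Store: the hub. Owns the proxies, controls their lifetime, and is what the
// rendering system talks to.
class SpriteStore {
public:
    std::shared_ptr<Sprite> GetSprite(const std::string& path) {
        auto sprite = std::make_shared<Sprite>();
        sprite->texture_ = cache_.Acquire(path);  // live data or placeholder; the caller can't tell
        active_.push_back(sprite);
        return sprite;
    }
    const std::vector<std::shared_ptr<Sprite>>& ActiveSprites() const { return active_; }
private:
    SpriteCache cache_;
    std::vector<std::shared_ptr<Sprite>> active_;
};

// The rendering system only ever talks to the store.
void RenderAllSprites(const SpriteStore& store) {
    for (const auto& sprite : store.ActiveSprites()) {
        const TextureData* tex = sprite->texture();  // placeholder or real data
        (void)tex;  // bind tex, submit a quad at (sprite->x(), sprite->y()), etc.
    }
}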

There are naturally many more ways to do it, but this one works extremely well.

There are naturally many more ways to do it, but this one works extremely well.

I had a similar system in mind for my resource management since I want to be able to load things asynchronously (and thus need a proxy or handle anyway).

So in the system that you're describing, the renderer doesn't actually take a list of things to render as a parameter, but is tightly coupled with the resource system to derive what to render? Does that mean you're storing instance data (for example, the position of each sprite) in the resource store as well?

The two tend to grow at the same time as the systems develop.

The store is the owner of the resources, the store is responsible for creation, destruction, and other lifetime ownership tasks. The loader, cache, and proxy objects are all inner details that external systems don't see.

The gameplay code requests instances from the store.

The rendering code has some functions to request collections of all active objects. The information the renderer needs to render the resources gets placed in the store's collection details. The two have inter-related behavior, but should still be independent of each other, each with its own single responsibility. Exactly what information gets stored is up to the system, whatever makes sense for it. Generally the proxy objects have information about their position, but they don't have to.
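As a sketch of what that could mean concretely, the "collection" handed to the renderer might just be a flat list of per-instance records; the names and fields here are invented, and what actually goes into them is whatever your renderer needs.

#include <vector>

// Hypothetical per-instance record the store builds for the renderer.
struct SpriteInstance {
    float x = 0, y = 0;      // position, if the proxies carry one
    unsigned textureId = 0;  // which texture to bind
    int sortKey = 0;         // e.g. layer or material, for sorting
};

// Renderer-facing request for all active objects; how the store assembles it
// is an internal detail.
std::vector<SpriteInstance> CollectActiveSprites();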

I see. I had always considered resource management independent of gameplay (and therefore of things like position and liveness), but it looks like you're abstracting on top of the normal loading and caching behavior to make it a sort of general renderable database.

If I'm understanding all this correctly, what you're describing at a high level might work out something like this:

1. A gameplay object requests a sprite resource from the resource system and gets a proxy. This proxy internally stores the texture data and necessary state changes of the sprite in question (though the gameplay code doesn't know that), along with the world transformations.

2. The gameplay object updates its own position, either directly via a pointer or indirectly via an opaque handle and a function call. In either case, this update is reflected inside the resource system as well.

3. Eventually, the renderer runs, talks directly to the resource system, and renders the live sprite proxies that pass its basic occlusion test.

Is that right?

Yes, correct as a high level view.

The implementation details have been different on every system I've worked on. Exactly where the information is stored, how it is stored, what code updates what other code, those all may vary from game system to game system.

Also, while I called it a proxy above, the gameplay code doesn't usually call it a proxy, they call it the resource itself. As far as gameplay is concerned, it is the model, it is the sprite, it is the texture, it is the audio clip. The object doesn't actually contain the actual sprite data, or model data, or texture, or audio, but that detail is irrelevant. It contains everything the gameplay code needs to work with the resource, so it is the resource for those purposes.

Yes, correct as a high level view. The implementation details have been different on every system I've worked on. Exactly where the information is stored, how it is stored, what code updates what other code, those all may vary from game system to game system.

That's fine. I'm exclusively concerned with the high level here. I expect I'll have to fiddle with things a lot to figure them out, but as long as I get the high-level separations figured out it's not too hard.

Thanks for the information.

Game objects own simple renderable objects, e.g. sprites. The scene of game objects can be observed by a render queue to collect a list of sprites. The renderer can then consume the sprite queue.
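A rough sketch of that first arrangement (all names invented): the queue is rebuilt by walking the scene, and the renderer only ever sees the queue.

#include <vector>

struct Sprite { float x = 0, y = 0; unsigned textureId = 0; };  // simple renderable

struct GameObject {
    Sprite sprite;  // the game object owns its renderable
    // ... gameplay state ...
};

using Scene = std::vector<GameObject>;
using RenderQueue = std::vector<const Sprite*>;

// Observe the scene and collect the sprites to draw this frame.
RenderQueue BuildRenderQueue(const Scene& scene) {
    RenderQueue queue;
    for (const GameObject& go : scene)
        queue.push_back(&go.sprite);
    return queue;
}

// The renderer consumes the queue and knows nothing about GameObject.
void Render(const RenderQueue& queue);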

Alternatively: Game objects own simple renderable objects and bounding shapes. A sprite-batch renderer maintains weak ownership of the same, and an association of bounding shape/sprite pairs. A bounding shape collection maintains weak ownership of bounding shapes. With no knowledge of the game objects or renderer, the bounding shape system can cull the shapes and produce a list of visible shapes. You can pass this list to the sprite batching system, which can convert it into a list of sprites, and then use that to populate a render queue (perhaps by sorting sprites by material and creating batched draw calls). You can then pass that queue to the renderer.
The renderer, sprite batcher, bounds collection, and gameplay objects have no knowledge of each other's existence. Your high-level main loop constructs this data flow between them with a half-dozen-line procedure.
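And a sketch of what that half-dozen-line procedure could look like for the second arrangement; every type and function below is a stand-in for whatever your real systems expose.

#include <vector>

struct Camera {};
using ShapeId  = unsigned;
using SpriteId = unsigned;
struct DrawList { std::vector<SpriteId> sorted; };

struct BoundsCollection {
    // Culling knows only about bounding shapes; returns the visible ones.
    std::vector<ShapeId> Cull(const Camera&) const { return {}; }
};

struct SpriteBatcher {
    // Maps visible bounding shapes to their paired sprites.
    std::vector<SpriteId> ShapesToSprites(const std::vector<ShapeId>& visible) const {
        return std::vector<SpriteId>(visible.begin(), visible.end());
    }
    // Sorts by material, creates batched draw calls, etc.
    DrawList BuildDrawList(const std::vector<SpriteId>& sprites) const {
        return DrawList{sprites};
    }
};

struct Renderer {
    void Submit(const DrawList&) {}  // issues the actual draw calls
};

// The main loop pipes lists between systems that never reference each other.
void Frame(const BoundsCollection& bounds, const SpriteBatcher& batcher,
           Renderer& renderer, const Camera& camera) {
    auto visible  = bounds.Cull(camera);
    auto sprites  = batcher.ShapesToSprites(visible);
    auto drawList = batcher.BuildDrawList(sprites);
    renderer.Submit(drawList);
}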

