Seer

Object Resource Acquisition


For a sprite-based, 2D RPG-style game, how should assets such as textures, animations, music and sound effects be distributed to and accessed by game objects?

Assuming you have a resource manager dedicated to loading and containing the resources, should this manager be passed indiscriminately to wherever there is a need for a resource?

Should the resource manager segregate various textures and animations from sprite sheets as they are loaded and hold segregated sets (maps/lists) of these textures and animations based on a pre-determined need by different game objects so that they can be quickly set up? By this I mean, where you are certain object A needs only a set of X animations and Y sound effects throughout the lifetime of the program.

If all textures, animations, music and sound effects should not be segregated as they are loaded and should instead each be stored in one big container of their respective type for all objects to freely choose from, how should the objects retrieve the resources they need efficiently and with minimal lines of code? Any object could hold any combination of textures, animations or sound effects held by the resource manager. I believe it would be easy with this approach to have very many lines of code dedicated to retrieving all the resources an object will need, for every object.

Finally, should objects hold resources at all? At this point I don't see why objects shouldn't hold their own graphical resources since the objects contain the data such as position, width and height that the resources would need in order to be rendered correctly. However, I'm not so sure about resources such as sound effects, since these types of resources don't need any information from the object in order to play. Should they be held and controlled elsewhere, by dedicated music and sound effect playing systems?

 

 


Thanks frob.

To see if I understand you correctly:

  • Game objects should not hold references to resources
  • Game objects should never come in contact with the resource manager
  • Systems such as a Renderer can interact with the resource manager to retrieve the resources they need, probably at startup
  • Game objects should send messages to the appropriate systems so that operations such as drawing a texture or animation or playing a sound are performed by those systems

Is that right?

When sending messages to a system, would it be appropriate for the game object sending the message to do so via a held reference to the system? For example:

public class Player
{
	private Renderer renderer;

	public Player(Renderer renderer)
	{
		this.renderer = renderer;
	}

	public void update()
	{
		// If I want to draw something
		renderer.render(Animation.NPC1_WALK_DOWN, position.x, position.y);
	}

	// render() method begone!
}

Here the call to render() would add an animation to a Map of Animations based on the Enum in the first parameter. When all the game objects have finished updating, the Renderer would then render everything the objects have told it to render this frame. Once it's done, it would clear the Map and await further instructions in the next frame. And so on.
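To make that concrete, here is a minimal Java sketch of the idea: the Renderer only records requests during update() and draws everything afterwards. The Animation enum and RenderRequest class are illustrative names of my own, not from any particular engine:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative names, not from any real engine.
enum Animation { NPC1_WALK_DOWN, NPC1_WALK_UP }

class RenderRequest {
    final Animation animation;
    final float x, y;
    RenderRequest(Animation animation, float x, float y) {
        this.animation = animation;
        this.x = x;
        this.y = y;
    }
}

class Renderer {
    private final List<RenderRequest> queue = new ArrayList<>();

    // Called by game objects during update(): just records the request.
    void render(Animation animation, float x, float y) {
        queue.add(new RenderRequest(animation, x, y));
    }

    // Called once per frame after all updates: draw everything, then clear.
    void flush() {
        for (RenderRequest r : queue) {
            // actual draw call would go here
        }
        queue.clear();
    }

    int pendingCount() { return queue.size(); }
}
```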

This is just a rough idea based on my current understanding of what you recommend. Does it have merit or am I thinking about this incorrectly? I know that adding to and then clearing from a Collection of things every frame could become expensive depending on the number of draw calls but it's all I can think of that would probably work at the moment. If it's definitely not suitable, what else would you recommend doing instead?

9 minutes ago, Seer said:

To see if I understand you correctly:

  • Game objects should not hold references to resources
  • Game objects should never come in contact with the resource manager
  • Systems such as a Renderer can interact with the resource manager to retrieve the resources they need, probably at startup
  • Game objects should send messages to the appropriate systems so that operations such as drawing a texture or animation or playing a sound are performed by those systems

Is that right?

Fairly close.

For the first one, game objects generally should not hold resources directly. Holding a reference to a resource is generally fine, but there must be rules in place to ensure the reference (handle, pointer, proxy, whatever) is valid. 

Game objects should rarely directly hit a resource manager. They can do it indirectly, send messages, request handles to items.
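To illustrate what "request handles to items" might look like, here is a hypothetical sketch (all names are mine, not a description of any particular engine): the game object keeps only an opaque handle, and only the systems that own the resources ever resolve it to real data.

```java
import java.util.HashMap;
import java.util.Map;

// An opaque handle: the game object can hold this, never raw data.
final class TextureHandle {
    final int id;
    TextureHandle(int id) { this.id = id; }
}

class TextureManager {
    private final Map<String, TextureHandle> byName = new HashMap<>();
    // byte[] stands in for real texture data on the graphics card.
    private final Map<Integer, byte[]> pixels = new HashMap<>();
    private int nextId = 0;

    TextureHandle load(String name, byte[] data) {
        TextureHandle h = new TextureHandle(nextId++);
        byName.put(name, h);
        pixels.put(h.id, data);
        return h;
    }

    // Only rendering code calls this; game objects never see raw data.
    byte[] resolve(TextureHandle h) {
        return pixels.get(h.id);
    }

    // The "rules to ensure the reference is valid" can live here.
    boolean isValid(TextureHandle h) {
        return pixels.containsKey(h.id);
    }
}
```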

14 minutes ago, Seer said:

public void update() { renderer.render(Animation.NPC1_WALK_DOWN, position.x, position.y); }

Usually not, no.  It certainly isn't something you would want to call every frame like that.  In most games rendering and simulation processing are decoupled, there can be multiple updates for each render.

If you're looking to animate something, you would tell your animation system to animate it. The animation system and rendering system work together.

So you might (completely depending on your game engine) request a handle to an animation, request a handle to the model, and then use the animation system to play the animation on the model. The game object certainly should not be instructing the renderer to render. You might enable or disable rendering occasionally, you might reposition something, but usually the systems should operate completely independently. 

 

On 25.9.2017 at 4:25 AM, frob said:

not a hobby project that doesn't have the manpower

Depends on time and motivation of the hobbyist ;)

 

Maybe I can go into some more detail, from experience both fiddling around with rendering/audio as a hobbyist and working professionally inside a render system. Game objects are mostly instances of a certain combination of mesh and texture, with or without animation attached, depending on how the engine handles them. Unity heavily uses components to stack meshes, materials (not single textures), audio and animation together, and I think Unreal uses a kind of object composition to do something similar. The loading process tells the asset manager to create a certain instance: scene objects are loaded immediately, while prefabs are loaded dynamically when referenced. They are serialized into the scene data/prefab data and reassembled from there by the resource manager, which keeps track of references.

Similar to Unreal, I use a resource handle in my solution that is reference counted in the background and swapped out for other resources when it is no longer used and/or the entire scene is unloaded. I have a resource management layer that works much like memory management, keeping track of objects (where "objects" here means Mesh, Texture and Animation-Data instances) and interacting with the graphics unit's storage (and also fences, locks and other things that are out of scope for now). "Game objects" (I dislike this term) then have those resource handles attached for rendering. Game code might trigger resource loading, especially when changing linked instances on demand (swapping textures, materials or meshes), and it immediately gets back an empty handle that becomes ready to use when the loading thread (a Task inside the task system) finishes in the background. Until then the handle contains no/default data or old data (which might be that eye-searing purple texture often seen in games :D ).

I keep this resource handle option open for procedural generation of meshes, like maps or some toy implementation of dynamic vegetation often used in prototypes.

The sound system is a completely different approach (as already mentioned). Sound buffers must be topped up with chunks of data before the playback position catches up with them; otherwise you get nasty audio artifacts. Usually those sound buffers are implemented as individual data streams that the software mixes into one or more concurrent ring buffers, shifting the read pointer around (which is why a crashed game plays the last bit of audio over and over again when the audio thread has not shut down). This is because audio calculations for your scene can be just as complicated a task as rendering is. Audio sources might be static or position based, so the latter must be attenuated depending on the distance to the listener's position, any objects between them, and a couple of other factors. Audio can also be culled, much like meshes are clipped by the frustum: when there are a bunch of louder audio sources, the quiet ones don't need to be mixed at all.
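As a toy illustration of the distance-based attenuation mentioned above (the linear falloff and the constant names are mine, not from any real audio API):

```java
// Linear falloff between a reference distance (full volume) and a
// maximum distance (culled entirely, so it never needs to be mixed).
class AudioAttenuation {
    static float gainFor(float distance, float refDistance, float maxDistance) {
        if (distance <= refDistance) return 1.0f; // full volume up close
        if (distance >= maxDistance) return 0.0f; // too far: skip mixing
        return 1.0f - (distance - refDistance) / (maxDistance - refDistance);
    }
}
```

Real systems use more elaborate curves (inverse or exponential falloff) plus occlusion, but the culling idea is the same: a zero gain means the source can be dropped from the mix.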

Depending on the implementation, these resources might be managed by the audio system or by the resource manager, where asking for an audio file to play is just a message into the audio system. It then enqueues/dequeues data for playback depending on the audio source, or marks buffers as suspended when told to pause a sound or a whole mixed buffer; for example, when switching into the pause menu, all in-game audio stops except some feedback sounds and music.

I personally like the idea of triggering audio from animation FSMs, as this saves extra work for the gameplay programmer.

On 9/26/2017 at 9:08 PM, frob said:

Holding a reference to a resource is generally fine, but there must be rules in place to ensure the reference (handle, pointer, proxy, whatever) is valid. 

Then is something like this actually okay:

public class Player
{
    private Map<AnimationName, Animation> playerAnimations;
    private Map<SoundName, Sound> playerSounds;
    
    public void update()
    {
        // Control resources
    }
}

If that is fine then why not, for a simple game, allow the objects to render themselves? They have all the information they need to do so in this case and so long as the rendering takes place after all the updates I do not see why there should be a problem.

 

On 9/26/2017 at 9:08 PM, frob said:

Game objects should rarely directly hit a resource manager.

Does passing resources from a resource manager when constructing the object count?

 

If objects can hold their own resources (as references like in Java) and receive them from the resource manager at construction, what would be a good way for them to get just what they need from the manager without it costing many lines of code (assuming the manager has not pre-emptively tailored different sets of resources to specific objects, which I'm beginning to think is a silly thing to do)?

 

On 9/26/2017 at 9:08 PM, frob said:

 In most games rendering and simulation processing are decoupled, there can be multiple updates for each render.

That won't be the case for me, not right now anyway. Maybe when I have more experience and I see a need. Right now everything that needs to render renders after everything has updated, in the same frame every frame.

 

On 9/26/2017 at 9:08 PM, frob said:

The game object certainly should not be instructing the renderer to render.

That call 

renderer.render(Animation.NPC1_WALK_DOWN, position.x, position.y);

was just an example of the Player sending a message to the Renderer to add the Animation described by the passed Enum to its Map of Animations to render in its own time, which would be after everything has updated that frame. It's not saying to the Renderer, "render this now while the updates are happening". I could have called it something better, like addToRenderMapToRenderLater(). Well maybe not better, but more explicit anyway :)

If that is still not a good idea, then how should objects send messages to the various systems to tell them to do something?

 


If this were in For Beginners the replies would be a little different.  In those tiny little games you do anything you can to make the game work.  In the bigger gameplay parts of the site, we make the assumption that the game is going for more substance.

 

For the topic of objects rendering themselves ...

When I see the words that "an object should render itself", my impression is that the individual game object knows about its own texture, model, and shaders, will position the various rendering matrices to whatever is needed, switch textures, switch shaders, set the appropriate shader parameters, then render the model.  In your 2D game that model will likely be a textured quad, but it still incurs the costs.

Since you're talking about a 2D based rendering, each object drawing itself ignores depth ordering and again ignores the passes required for effects like transparency / translucence / bloom /etc.  If you draw them in arbitrary order then depth is going to be a serious issue as 2D objects have a critical need for proper ordering to appear correctly.

If you were talking about 3D rendering, it has issues with those same effects and also with situations like reflection and shadow processing, and with performance, as shaders and textures will probably seldom (if ever) be reused.

In either of those cases, those commands will quickly saturate the bus to the card, and those particular commands are slow to run.  You could probably get away with that if you've got a handful of game items and no complex graphics or features like transparency / translucence / bloom / etc. 

In game engines there is a rendering system that controls all the items to be rendered, sorts them based on visual properties like transparency, shaders, textures, models, depth, and so on, then renders them in an order that has minimal draw calls and minimal data transfers to/from the card.  

Do you mean something different with that?

 

The questions regarding resource management primarily boil down to the issues of object lifetimes and of efficient creation time. In games (and in all software, really) managing the lifetime of objects is very important.

Objects need to be created, used, and destroyed in reliable patterns.  When those patterns are violated the resources are leaked, and eventually the program will crash. The larger the resource and the more constrained the system the faster that will happen.  A dedicated system for each type of resource can ensure that each object can be managed through its entire lifetime. 

On the parallel task of creation time, a simple example is the time to create a model object.  In naive code creating a game model will require finding the file, opening it, reading and parsing the model contents, opening one or more texture files, reading them, parsing the images, and so on. These operations are all slow. Opening something on disk might happen to be fast, but it could also take multiple seconds to open, multiple seconds to read, etc.  Rather than waiting for potentially enormous times, the typical pattern is to respond near-instantly with a proxy object which has enough information to be usable, but which has the contents pending.  It works in conjunction with the resource controller to load it in to the correct hardware as it is eventually loaded, and to call back when it is ready.
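A rough sketch of that proxy pattern, under my own illustrative names (the placeholder standing in for the classic "not yet loaded" purple texture):

```java
import java.util.concurrent.CompletableFuture;

// Returned immediately to the caller; real contents arrive later
// from a background loader and swap in via a callback.
class TextureProxy {
    private volatile byte[] pixels;   // null until the load completes
    private final byte[] placeholder; // usable stand-in data

    TextureProxy(byte[] placeholder) { this.placeholder = placeholder; }

    // Non-blocking: wires up the callback and returns at once.
    void beginLoad(CompletableFuture<byte[]> loader) {
        loader.thenAccept(data -> pixels = data);
    }

    // Always usable: placeholder until ready, real data afterwards.
    byte[] current() { return pixels != null ? pixels : placeholder; }

    boolean ready() { return pixels != null; }
}
```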

Again, in the smallest games you don't need that kind of thing.  For your hobby project, certainly you can go parse all the files one at a time as they are loaded, and if loading the game takes a minute or so that is still faster than the major AAA games loading hundreds of thousands of resources.

 

The message you are sending can work fine for a very small game, but will not scale well. Instead of notifying a single system about an event, the typical pattern is a broadcast where you notify all interested listeners about the event. Typically there are very few things registered to respond, so the array of calls may hit zero, one, or some other small number of active listeners, but the purpose is that it extends to any number of interested game systems.


At what point is a game no longer considered small enough to be able to implement simplified patterns, such as objects referencing resources? Would games with the scope of GBA era titles such as Pokemon Emerald, Golden Sun or Dragonball Z: The Legacy of Goku 2 be considered sufficiently small in this regard? It is only up to the scope of games such as these that I am interested in making for now. I have no interest in making 3D games. I care about how best to handle sprite sheets, textures extracted from sprite sheets, animations made using those textures, sound effects, background music and fonts. That's basically it.

I have a few questions related to these:

  1. Should a given sprite sheet containing sets of animations, such as character animations, have an accompanying data file specifying values such as X position, Y position, width, height, X offset from origin and Y offset from origin for every frame in the sheet, so that you can properly extract the frames with necessary data when loading the sprite sheet?
  2. Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas which packs sets of individual sprite sheets into one big sprite sheet and supposedly it is what you should use.
  3. To be absolutely clear, if a resource manager is what loads and initially holds the various resources, should it only hold resources in separate containers of that type of resource? For example, all textures would go in a container holding only textures, all animations would go in a container holding only animations, and so on for every type of resource. Then, when a system needs the resources its domain governs, it just takes a flat container of those resources from the resource manager and works from there.
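By way of illustration for question 3, this is roughly what I mean by separate containers per resource type (placeholder class names, not real resource implementations):

```java
import java.util.HashMap;
import java.util.Map;

// Placeholder resource classes, standing in for real implementations.
class Texture {}
class Animation {}
class Sound {}

class ResourceManager {
    // One strongly-typed container per kind of resource.
    private final Map<String, Texture> textures = new HashMap<>();
    private final Map<String, Animation> animations = new HashMap<>();
    private final Map<String, Sound> sounds = new HashMap<>();

    void addTexture(String name, Texture t) { textures.put(name, t); }
    void addAnimation(String name, Animation a) { animations.put(name, a); }
    void addSound(String name, Sound s) { sounds.put(name, s); }

    // A renderer takes just the textures; an audio system takes just
    // the sounds. No system ever sees a mixed collection.
    Map<String, Texture> allTextures() { return textures; }
    Map<String, Animation> allAnimations() { return animations; }
    Map<String, Sound> allSounds() { return sounds; }
}
```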

 

With resource loading, generally what I do is load all resources at the start of the program so that they are all available and ready to be used. Only once everything has loaded does the program continue. For the scope of the kind of game I am interested in making, is this an acceptable approach? If so, at what scope, if any, does it become unacceptable or problematic?

 

Would you recommend the observer pattern for sending messages from objects to systems? If so, would you recommend implementing it with a kind of one-for-all approach where every observing system acts on the same function call, such as notify(Event, Object), differing in how they respond based only on the Event Enum which is passed? This way, each system's notify() definition would consist of a switch over the Events which are relevant to it. Object would be passed so that any type of data that needs to be sent can be wrapped in an Object and sent that way. Object could be replaced by Data, passing derived Data types instead, I think.

If not that, would you recommend using a set of disparate Listener interfaces instead, where each type of Listener can have more pertinent, sculpted function names and signatures for the relevant system? I have not used this type of observer pattern before, as I have never been able to get my head around it, being so used to doing it the way I described above. If this is how you would recommend implementing the observer pattern, would you mind explaining how?
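To make the question concrete, here is my rough guess at what one such sculpted Listener interface might look like (entirely hypothetical names):

```java
import java.util.ArrayList;
import java.util.List;

// One purpose-built interface instead of a generic notify(Event, Object):
// the method name and signature say exactly what happened.
interface DamageListener {
    void onDamaged(String who, int amount);
}

class HitPoints {
    private final List<DamageListener> listeners = new ArrayList<>();
    private int hp = 100;

    void addListener(DamageListener l) { listeners.add(l); }

    void damage(String who, int amount) {
        hp -= amount;
        // Broadcast to every interested system: a SoundSystem might
        // play a hurt sound, a UISystem might flash the health bar.
        for (DamageListener l : listeners) l.onDamaged(who, amount);
    }

    int hp() { return hp; }
}
```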

 

On the issue of ordering objects for rendering, I can see how a dedicated Renderer system would be good because it can take care of rendering the objects in a certain order as specified by whatever value or values are meaningful to you. For example, in 2D games where sprites don't take up an entire tile for themselves and are able to stand behind one another, you would probably want to order the objects by their Y position value so that those who are "above" are rendered before those who are "below" and appear behind those below them should they enter the same space.

Even in this case however, the Renderer, once finished ordering the objects, can still simply render them by doing something like:

for(GameObject gameObject : gameObjects)
{
	gameObject.render(...);
}

In this case the Renderer has properly ordered all the objects and all it then does is tell them to render themselves. Is that still not a good solution?
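For example, the Y-ordering I described could be as simple as (GameObject here is a bare stand-in for illustration):

```java
import java.util.Comparator;
import java.util.List;

// Minimal stand-in: only the fields needed to show the ordering.
class GameObject {
    final String name;
    final float y;
    GameObject(String name, float y) { this.name = name; this.y = y; }
}

class YSort {
    // Sort by Y so objects "above" draw first and appear behind
    // objects "below" when they overlap.
    static void sortForDrawing(List<GameObject> objects) {
        objects.sort(Comparator.comparingDouble(o -> o.y));
    }
}
```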


@frob knows what he is talking about; I would listen to his experience. You seem set on game objects rendering themselves, but even in small games this is highly inefficient and a poor design, and it can lead to bad habits as a programmer which are difficult to change later. You become limited very quickly with objects rendering themselves, honestly (from past experience here too, which required a complete rewrite of the game and its underlying rendering tech...). If each object renders itself you lose the ability to batch render, which is very good for efficiency. Otherwise you're wasting GPU/CPU power/cycles for zero benefit other than a perceived one which doesn't actually exist.

Even simple games can benefit from stateless rendering systems, see these couple articles and the ones they reference also:

http://realtimecollisiondetection.net/blog/?p=86

This is a 3 part series, the links are at the bottom of this article:

https://blog.molecular-matters.com/2014/11/06/stateless-layered-multi-threaded-rendering-part-1/

5 hours ago, Seer said:

Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas which packs sets of individual sprite sheets into one big sprite sheet and supposedly it is what you should use.

A texture atlas and a sprite sheet are basically the same thing, honestly. They both tightly pack other images/textures, and you need to know where those are from data files. I create the largest texture I can (on the GPU), a bit bigger than I need, and load all my textures/images/sprites into it, so I have zero need to change my working texture. Only the shader needs to change then.
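As a toy example of addressing into such a packed texture, the normalized UV coordinates for a region can be computed from its pixel rectangle like this (all values illustrative):

```java
// Converts a region's pixel rectangle inside the atlas into the
// normalized [0,1] UV coordinates the shader samples with.
class AtlasRegion {
    final float u0, v0, u1, v1;

    AtlasRegion(int x, int y, int w, int h, int atlasW, int atlasH) {
        u0 = (float) x / atlasW;
        v0 = (float) y / atlasH;
        u1 = (float) (x + w) / atlasW;
        v1 = (float) (y + h) / atlasH;
    }
}
```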

 

5 hours ago, Seer said:

Would you recommend the observer pattern for sending messages from objects to systems?

This is what I use, templated for the data being sent to keep type safety.


Going through them:

6 hours ago, Seer said:

Should a given sprite sheet containing sets of animations, such as character animations, have an accompanying data file specifying values such as X position, Y position, width, height, X offset from origin and Y offset from origin for every frame in the sheet, so that you can properly extract the frames with necessary data when loading the sprite sheet?

It needs to have some of that data somewhere, yes.  Exactly where is an implementation detail.

Otherwise how will you know the contents of your sprite sheets?  How will you know if a 2560x2560 sheet contains a 10x10 array of 256x256 images, or if it contains a 20x20 array of 128x128 images?

Usually there is even more meta-information, such as which index to use in an animation sequence, and possibly flags like flip/rotate.  Even more, sprites don't always need to be the same size, you may have a small sprite which includes an offset from the position of the sprite.

On sprite-based hardware there were often flags to flip horizontally and vertically, and to rotate 90, 180, and 270 degrees. This meant that a single knife could have one or two sprites yet still have an animation cycle that was 8 or 16 frames long.

You will need information in the animation to state where the base of the image is located relative to the image data. Allowing the sprites to be different sizes allows for smaller sprite sheets since not everything is the same size. This lets you animate motion without requiring a new sprite, such as an item spiraling around on the screen without changing the sprite, or allow you to have images that go wider or narrower, shorter or taller, without making all sprites expand to the maximum extents.
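A per-frame metadata record along those lines might look like the following sketch (field names are illustrative, not any particular file format):

```java
// Everything needed to extract and place one frame: its rectangle in
// the sheet, an offset from the sprite's base point, and flip flags.
class SpriteFrame {
    final int sheetX, sheetY, width, height;
    final int offsetX, offsetY;  // image position relative to the object's base
    final boolean flipH, flipV;  // lets one knife sprite serve many frames

    SpriteFrame(int sheetX, int sheetY, int width, int height,
                int offsetX, int offsetY, boolean flipH, boolean flipV) {
        this.sheetX = sheetX; this.sheetY = sheetY;
        this.width = width; this.height = height;
        this.offsetX = offsetX; this.offsetY = offsetY;
        this.flipH = flipH; this.flipV = flipV;
    }

    // A mirrored frame reuses the same sheet pixels: no new sprite needed.
    SpriteFrame mirrored() {
        return new SpriteFrame(sheetX, sheetY, width, height,
                               -offsetX, offsetY, !flipH, flipV);
    }
}
```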

6 hours ago, Seer said:

Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas which packs sets of individual sprite sheets into one big sprite sheet and supposedly it is what you should use.

A texture atlas is a solution to a different problem mentioned earlier.  I noted above that switching textures takes some time on the graphics card.  One solution to that is a texture atlas or megatexture or various other names. 

By making one giant texture of 32K pixels square you use a full gigabyte of texture memory but don't need to change textures in your rendering pipeline. 

Larger games will add this as a build step where tools try to pack the giant texture full of all the content. It can add complexity, so based on what you've been discussing so far, it probably is not something you would want to add at this time.

6 hours ago, Seer said:

To be absolutely clear, if a resource manager is what loads and initially holds the various resources, should it only hold resources in separate containers of that type of resource? For example, all textures would go in a container holding only textures, all animations would go in a container holding only animations, and so on for every type of resource. Then, when a system needs the resources its domain governs, it just takes a flat container of those resources from the resource manager and works from there.

Let me reverse the scenario and see if that idea makes sense to you:

Does it make sense to you to have a container that mixes all the resources?  Does it make sense to have an array of 50 textures, then one audio file, then 50 more textures?  To me that is nonsensical. It means you cannot treat the collection as a uniform set of objects.

In my mind, and as I've seen in many game engines, they are each separate. Textures belong as resources managed on the graphics card. Audio clips belong in a sound pool. Shaders belong as compiled buffers bound to the graphics context. Each type of resource has different needs, with different lifetimes, with different expectations. 

 

6 hours ago, Seer said:

For the scope of the kind of game I am interested in making, is this an acceptable approach? If so, at what scope, if any, does it become unacceptable or problematic?

Loading everything at the beginning can work.  It becomes a problem when "everything" is larger than the system can support.

On the PC it is easier to ignore the realities of the hardware since so much has been abstracted away.  You can allocate and load many more gigabytes of data than the physical memory available thanks to virtual memory.  With the right graphics API calls you can load up more textures than you have texture memory and the drivers will magically hide the details from you. 

Even so, there are limits to what the hardware supports. Usually performance plummets once you switch over to the abstracted-away virtualized version. Reading from active memory versus reading from something swapped to disk has many orders of magnitude different performance. You'll notice because the game goes from running quickly to barely crawling. 

On systems and resources that don't support that kind of abstract virtualization, you'll know because it crashes for being out of resources. 

6 hours ago, Seer said:

In this case the Renderer has properly ordered all the objects and all it then does is tell them to render themselves. Is that still not a good solution?

That gets back to the question about what it means for objects to render themselves.

If each object causes its own texture swap, causes its own shader swap, then draws the quad unique to it, you're going to hit performance limits very quickly. A large screen filled with many small tiles will become a slide show as you saturate your data bus to the graphics card with draw calls.  A small screen with only a small number of very large tiles may be able to render rapidly enough. 

If instead you mean that you have an independent rendering system, and you notify the system that when it comes time to drawing, it should draw the specific sprite at those locations and that it should draw it at a proper ordering to preserve z-order and transparency, that is something that could be worked with.  The rendering system could accumulate all the calls, figure out the order that preserves rendering order, eliminates/minimized z fighting, and ensures transparency, translucency, and holes are all handled properly with a minimum number of state changes and rendering calls.
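The accumulate-then-sort idea could be sketched like this (the key layout is illustrative; a real renderer sorts on many more properties):

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

class DrawCall {
    final int textureId;
    final int depth;
    DrawCall(int textureId, int depth) {
        this.textureId = textureId;
        this.depth = depth;
    }
}

class RenderQueue {
    private final List<DrawCall> calls = new ArrayList<>();

    void submit(DrawCall c) { calls.add(c); }

    // Order by depth first (to preserve 2D layering), then by texture
    // id (so draws at the same depth batch with fewer texture switches).
    List<DrawCall> sorted() {
        List<DrawCall> out = new ArrayList<>(calls);
        out.sort(Comparator.comparingInt((DrawCall c) -> c.depth)
                           .thenComparingInt(c -> c.textureId));
        return out;
    }
}
```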

7 hours ago, Seer said:

Would you recommend the observer pattern for sending messages from objects to systems? If so, would you recommend implementing it with a kind of one for all approach where every observing system acts on the same function call, such as notify(Event, Object), differing in how they respond based only on the Event Enum which is passed?

It is commonly called an "event bus". Google finds lots of resources.

Effectively it serves as a broadcast/notification system. The key is that it needs to be used carefully, use it for events to broadcast and notify that things have happened so listeners can respond.  

Rendering does not typically use that interface, but many events do.  Rendering is usually completely decoupled from the game object's code.  Sometimes a game object will request something different to happen, such as jumping to a different render state, but that's not the typical code path.
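A minimal event-bus sketch of this kind might look as follows (the design and names are illustrative, not a reference implementation): listeners register per event type, and publishing broadcasts to however many, possibly zero, are interested.

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

class EventBus {
    private final Map<Class<?>, List<Consumer<Object>>> listeners = new HashMap<>();

    // Register a typed handler for one event class.
    <T> void subscribe(Class<T> type, Consumer<T> handler) {
        listeners.computeIfAbsent(type, k -> new ArrayList<>())
                 .add(e -> handler.accept(type.cast(e)));
    }

    // Broadcast: zero listeners is perfectly fine.
    void publish(Object event) {
        List<Consumer<Object>> subs = listeners.get(event.getClass());
        if (subs == null) return;
        for (Consumer<Object> s : subs) s.accept(event);
    }
}

// An example event: a plain data carrier describing what happened.
class SoundRequested {
    final String soundName;
    SoundRequested(String soundName) { this.soundName = soundName; }
}
```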


Thank you for addressing my questions frob.

 

However, I am still curious to know; could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

I am not saying that I would want to willfully program a game poorly or advocate doing so if you have the capabilities not to, but if your skill is such that you don't know how to properly decouple rendering from game objects or implement an event bus (like me), would it be possible?

 

At this point what I have mainly taken away from this discussion is to not allow objects to hold references to resources and instead have systems hold the resources, to not give game objects their own render function, and to set up some kind of messaging system using an event bus or observer-like pattern to keep objects decoupled from systems. If all of that is correct, then what I now need is to know how to do all of that. This means having implementation details be explained to me.

If you don't want to demonstrate how such an architecture would be implemented I understand as I'm sure it would be quite a lot to go through. In that case, can you suggest any tutorials, books or other material that go through developing a quality architecture for 2D games such as the kind I am interested in? I would like to stress if I may that I am only interested in 2D, not 3D. There may be resources out there which explain and step you through implementing fine architectures for 3D games which may be applicable to 2D games, but I would like the focus to be exclusively on 2D if possible. The reason is because learning these concepts and how to implement them is difficult enough and I don't want the added complexities of 3D on top of that. I hope that's not being too demanding. Thank you.

 

21 minutes ago, Seer said:

However, I am still curious to know; could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

I am not saying that I would want to willfully program a game poorly or advocate doing so if you have the capabilities not to, but if your skill is such that you don't know how to properly decouple rendering from game objects or implement an event bus (like me), would it be possible?

It's certainly possible, for whatever that metric is worth. People have written successful 3D games as one file of horrible spaghetti code.

So you could certainly write such a type of RPG without decoupled rendering, etc. Some of the problems that usually come with suboptimal designs wouldn't matter for a 1990s-era-style 2D game. On the other hand, since such an RPG has a large scale in terms of content, certain suboptimal design choices can bite you in the ass later on. I had to abandon one of my first C++ games (a Super Mario World clone) because at a certain point it became impossible to add new features without hours of mind-numbing, unnecessary work.

So yeah, what you ask is certainly possible - but I'd say the same if you asked "Can I get to work without walking on my feet?" Sure, if you don't care to use your legs and would rather crawl, but why not just try to do it the right way? Unless you're very experienced, you won't have a perfect design anyway, but if you at least try to take the advice given, you'll learn and see for yourself why it's better to do it that way.

6 hours ago, Seer said:

could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

A key thing to note is that the GBA hardware was designed for that type of game.  The APIs were tile based, and as I mentioned, the API included bits to automatically rotate and flip the tiles.  In several systems the sprites were loaded and indexed through the system, and rendering the entire screen meant passing a buffer of sprite indices and locations. You could pass a single buffer about 800 bytes long that would describe sprite placement and the hardware would do the rest.
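(As a rough, purely illustrative sketch of what such an index-and-placement buffer might look like - the real GBA OAM layout differs in its exact bit packing, so treat every field here as hypothetical:)

```cpp
#include <cstddef>
#include <cstdint>

// Hypothetical sprite-placement entry, loosely modeled on old tile-based
// hardware: the whole screen is described by a small array of these,
// and the hardware does the actual pixel work (including flips/rotation).
struct SpriteEntry {
    std::uint8_t tileIndex;  // which tile in the loaded tile set
    std::uint8_t x;          // screen position, horizontal
    std::uint8_t y;          // screen position, vertical
    std::uint8_t flags;      // e.g. horizontal/vertical flip, rotation bits
};

// 128 entries * 4 bytes = 512 bytes describing every sprite on screen,
// in the same spirit as the ~800-byte buffer mentioned above.
constexpr std::size_t kMaxSprites = 128;
static_assert(sizeof(SpriteEntry) * kMaxSprites == 512,
              "the whole screen's sprites fit in half a kilobyte");
```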

In many ways it is harder to develop that style of tile-based, sprite-based game on a modern PC. The graphics APIs of today are designed around a radically different style of rendering. The seemingly direct replacement of drawing textured quads for all the sprites (with one texture change and one individual quad per tile) can easily hit bottlenecks. Cards that can handle tens of millions of polygons in modern games can struggle to draw thousands of square sprites when implemented that way.

You can read up on techniques to merge those calls. Megatextures / texture atlases work well with command batching, where the rendering of quads is accumulated in a VBO and then issued as a single call; this can drop the number of draw calls tremendously. Hardware instancing can also work if it matches your actual rendering bottlenecks.
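(A minimal, API-agnostic sketch of that batching idea: quads are accumulated into one vertex array and flushed in a single "draw call". The Vertex layout is an assumption, and flush() stands in for a real submission such as uploading to a VBO and issuing one glDrawArrays.)

```cpp
#include <cstddef>
#include <vector>

// One vertex of a textured quad; (u, v) indexes into a texture atlas.
struct Vertex { float x, y, u, v; };

class SpriteBatch {
public:
    // Append one quad (two triangles, six vertices) for a sprite whose
    // atlas region is (u0, v0)-(u1, v1).
    void add(float x, float y, float w, float h,
             float u0, float v0, float u1, float v1) {
        const Vertex quad[6] = {
            {x,     y,     u0, v0}, {x + w, y,     u1, v0}, {x + w, y + h, u1, v1},
            {x,     y,     u0, v0}, {x + w, y + h, u1, v1}, {x,     y + h, u0, v1},
        };
        vertices_.insert(vertices_.end(), quad, quad + 6);
    }

    // One "draw call" for everything accumulated since the last flush.
    // Returns how many vertices were submitted.
    std::size_t flush() {
        const std::size_t submitted = vertices_.size();
        vertices_.clear();  // a real renderer would upload + draw here
        ++drawCalls_;
        return submitted;
    }

    std::size_t drawCalls() const { return drawCalls_; }

private:
    std::vector<Vertex> vertices_;
    std::size_t drawCalls_ = 0;
};
```

With this approach a 100x100 tile map is 10,000 add() calls but still only one flush() per frame, instead of 10,000 individual draw calls.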

Sometimes you can even work like the oldest sprite-based systems by creating all your sprites as a font, and rendering text in exactly the same way old games used an index buffer. Thus "A" becomes one sprite, "B" and "C" additional sprites, and the entire screen can be rendered as one very large wide-character string.

 

Drawing on a 240x160 screen with hardware support for fast sprite rendering is different from writing to a 1920x1080 or 3840x2160 display with hardware support focused on vertex clouds.

And as for drawing that type of game during that era: yes, it was still difficult. It was a different kind of difficulty, mostly due to that era's constraints on space and speed, but difficult nonetheless.


What books, tutorials or other resources would you recommend for learning good architecture design for 2D games, where the concepts are well explained with clear implementation details and, if possible, where an actual game is developed using these design principles, built up gradually in a step-by-step manner?


One fundamental concept is the "single responsibility principle", which means that a piece of software should fulfill just one specific task. For example, animation is one task and rendering is another. Having those two (and several more) implemented in a single object violates the single responsibility principle, which generally leads to maintenance problems.

Here is one way to decouple things:

Think of the actual game object as a container for data, without it doing anything at all with that data. Then a separate piece of software that has access to the game objects can iterate over them and (e.g.) alter their placement data (position + orientation) according to their movement data. Voila, we have a motion sub-system (which can be seen as part of the animation sub-system). After the motion sub-system has worked on all movable objects, they are all up to date w.r.t. their placement. Now, in this state, another piece of software, a.k.a. the rendering sub-system, can just use the current placement when drawing each object.

Notice that neither sub-system needs to know anything about the other; each has its own responsibility. The "glue" between them is just the data stored in the game objects. On the other hand, game objects are no longer "active"; they are just data holders. Of course, because the rendering sub-system assumes that all placements are up to date ("ready to render", so to say), a specific order of running the sub-systems is needed - here, the motion sub-system before the rendering sub-system. This is organized by placing the sub-systems accordingly into the game's run loop.
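(A tiny sketch of that split - the field names and the drawSprite placeholder are invented for illustration, not prescribed:)

```cpp
#include <vector>

// Game object as a pure data holder: no update(), no render().
struct GameObject {
    float x = 0, y = 0;    // placement data
    float vx = 0, vy = 0;  // movement data
    int   spriteId = 0;    // which sprite the renderer should use
};

// Motion sub-system: its single responsibility is advancing placement
// from movement data.
void motionSystem(std::vector<GameObject>& objects, float dt) {
    for (auto& obj : objects) {
        obj.x += obj.vx * dt;
        obj.y += obj.vy * dt;
    }
}

// Rendering sub-system: reads the now up-to-date placement; it knows
// nothing about how the objects got there.
void renderSystem(const std::vector<GameObject>& objects) {
    for (const auto& obj : objects) {
        (void)obj;  // placeholder for e.g. drawSprite(obj.spriteId, obj.x, obj.y);
    }
}

// Run loop fragment - the order matters, motion before rendering:
//   motionSystem(objects, dt);
//   renderSystem(objects);
```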

Things can be driven further. For example, the rendering sub-system's core need not know about movement data, because it deals only with the current state of the placement. So we are able to manage the different data blocks of a game object separately. This leads to splitting the data and storing the pieces in individual places, usually the sub-systems themselves. However, this is perhaps too much for now. But you'll notice that different sub-systems have different requirements for game object data. One of these is even the selection of which game objects participate. For example, an early step in rendering is to iterate over all active game objects and perform view culling, i.e. selecting just those game objects that are visible in the coming visualization. This step results in a subset of game objects, perhaps organized as a list of references to the selected game objects. Well, we can store more than just the references in the list's nodes. For example, we can store information that allows for the grouping/batching mentioned by frob above. Such a list can then (after collection has finished) be sorted on some criteria based on said information, and the subsequent actual rendering works not on the game objects per se but on the sorted selection list.

The essence of the paragraph above is that different sub-systems usually work best with their own data structures. This again is a kind of decoupling: the rendering core need not know how the scene is organized, because it just works on a list of drawable objects.
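(A sketch of such a per-frame render list - the sort-key packing shown here, texture id in the high bits and layer in the low bits, is just one hypothetical batching criterion:)

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The renderer's private data structure: a flat list of draw items built
// each frame from the visible subset of game objects. The sort key packs
// batching criteria so a single sort groups compatible draws together.
struct DrawItem {
    std::uint32_t sortKey;      // e.g. (textureId << 16) | layer
    int           objectIndex;  // reference back to the selected game object
};

std::uint32_t makeSortKey(std::uint32_t textureId, std::uint32_t layer) {
    return (textureId << 16) | (layer & 0xFFFFu);
}

// After view culling has filled `items`, sort once, then draw in order:
// all draws sharing a texture end up adjacent, minimizing state changes.
void sortRenderList(std::vector<DrawItem>& items) {
    std::sort(items.begin(), items.end(),
              [](const DrawItem& a, const DrawItem& b) {
                  return a.sortKey < b.sortKey;
              });
}
```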


The boiled-down point of what frob is saying (as far as I can see) isn't necessarily that it is inherently wrong for things to draw themselves, but that doing so has serious performance issues on modern hardware. Draw calls can be a big bottleneck with tiles because, from the perspective of the software, a draw call is designed to render a large amount of geometry at once. For instance, Unreal in most cases uses one draw call per material on an object, so a 20k-triangle weapon with two materials costs only 2 draw calls. Flipping that around, when a sprite (a tile, say) draws itself, it uses essentially one draw call per tile, which can easily end up in the range of thousands of draw calls, per frame, just to render a tile map.

Of course, that assumes you are doing everything the most primitive way. You could certainly create objects that draw themselves but batch more content into each draw call; in that case you would gain significant performance without changing the architecture of the system. In general, though, if objects draw themselves they operate in a bubble: each draws as if it were the only thing in the system. That's why it helps to have a middleman object that sorts information about what needs to be rendered during the frame, so it can make more intelligent decisions.

You definitely CAN make a game of relative complexity with objects drawing themselves, but you're going to hit performance issues eventually; it doesn't scale well, and you'll end up adding patchy changes whenever you hit a performance wall. So the issue isn't that you are somehow violating a law of the architecture police, but that the system isn't conducive to knowing the information it needs to render things intelligently.

