Object Resource Acquisition

16 comments, last by Satharis 6 years, 6 months ago

Going through them:

6 hours ago, Seer said:

Should a given sprite sheet containing sets of animations, such as character animations, have an accompanying data file specifying values such as X position, Y position, width, height, X offset from origin and Y offset from origin for every frame in the sheet, so that you can properly extract the frames with necessary data when loading the sprite sheet?

It needs to have some of that data somewhere, yes.  Exactly where is an implementation detail.

Otherwise how will you know the contents of your sprite sheets?  How will you know if a 2560x2560 sheet contains a 10x10 array of 256x256 images, or if it contains a 20x20 array of 128x128 images?

Usually there is even more meta-information, such as which index to use in an animation sequence, and possibly flags like flip/rotate.  Even more, sprites don't always need to be the same size, you may have a small sprite which includes an offset from the position of the sprite.

On sprite-based hardware there were often flags to flip horizontally and vertically, and to rotate 90, 180, and 270 degrees. This meant that a single knife could have one or two sprites yet still have an animation cycle that was 8 or 16 frames long.

You will need information in the animation to state where the base of the image is located relative to the image data. Allowing the sprites to be different sizes allows for smaller sprite sheets since not everything is the same size. This lets you animate motion without requiring a new sprite, such as an item spiraling around on the screen without changing the sprite, or allow you to have images that go wider or narrower, shorter or taller, without making all sprites expand to the maximum extents.
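The per-frame data described above can be sketched as a plain record. This is only an illustration of the fields under discussion; the names and layout are hypothetical, not a standard format:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-frame record matching the fields discussed above.
struct FrameInfo {
    int x, y;             // top-left corner of the frame within the sheet
    int width, height;    // frame dimensions (frames need not be uniform)
    int offsetX, offsetY; // offset of the image from the sprite's logical origin
    bool flipH, flipV;    // optional flags, as on old sprite hardware
};

// An animation sequence is then just an ordered list of frame indices
// plus timing, so one or two frames can drive a longer cycle via flips.
struct Animation {
    std::vector<int> frameIndices; // indices into the sheet's frame table
    float secondsPerFrame;
};
```

A data file (JSON, XML, or a binary blob) would simply be a serialized list of these records, one per frame in the sheet.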

6 hours ago, Seer said:

Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas which packs sets of individual sprite sheets into one big sprite sheet and supposedly it is what you should use.

A texture atlas is a solution to a different problem mentioned earlier.  I noted above that switching textures takes some time on the graphics card.  One solution to that is a texture atlas or megatexture or various other names. 

By making one giant texture of 32K pixels square you use roughly a gigapixel of texture memory (about four gigabytes at 32 bits per pixel) but don't need to change textures in your rendering pipeline. 

Larger games will add this as a build step where tools try to pack the giant texture full of all the content. It can add complexity, so based on what you've been discussing so far, it probably is not something you would want to add at this time.
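For the simple uniform-grid case mentioned above (e.g. a 10x10 array of 256x256 cells), looking up a cell's normalized texture coordinates is just arithmetic. A minimal sketch, assuming a uniform grid with no padding between cells:

```cpp
// Normalized texture coordinates (0..1) for one cell of a uniform atlas.
// Assumes a simple grid layout with no padding, as in the 2560x2560
// example above; real packers produce irregular rectangles instead.
struct UVRect { float u0, v0, u1, v1; };

UVRect atlasCell(int index, int cols, int rows) {
    const int cx = index % cols;   // column of the cell
    const int cy = index / cols;   // row of the cell
    const float cw = 1.0f / cols;  // cell width in UV space
    const float ch = 1.0f / rows;  // cell height in UV space
    return { cx * cw, cy * ch, (cx + 1) * cw, (cy + 1) * ch };
}
```

A non-uniform atlas replaces this arithmetic with a lookup table generated by the packing tool, which is exactly the build-step complexity mentioned above.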

6 hours ago, Seer said:

To be absolutely clear, if a resource manager is what loads and initially holds the various resources, should it only hold resources in separate containers of that type of resource? For example, all textures would go in a container holding only textures, all animations would go in a container holding only animations, and so on for every type of resource. Then, when a system needs the resources its domain governs, it just takes a flat container of those resources from the resource manager and works from there.

Let me reverse the scenario and see if that idea makes sense to you:

Does it make sense to you to have a container that mixes all the resources?  Does it make sense to have an array of 50 textures, then one audio file, then 50 more textures?  To me that is nonsensical. It means you cannot treat the collection as a uniform set of objects.

In my mind, and as I've seen in many game engines, they are each separate. Textures belong as resources managed on the graphics card. Audio clips belong in a sound pool. Shaders belong as compiled buffers bound to the graphics context. Each type of resource has different needs, with different lifetimes, with different expectations. 
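One way to sketch that separation, with one typed container per resource kind; the types here are illustrative stand-ins (a real `Texture` would wrap a GPU handle, a real `AudioClip` a sound-pool entry):

```cpp
#include <string>
#include <unordered_map>

// Illustrative resource types; real ones would wrap GPU/audio handles.
struct Texture   { int handle = 0; };
struct AudioClip { int handle = 0; };

// One homogeneous container per resource type, as described above.
// Each system asks only for the resources its domain governs.
class ResourceManager {
public:
    Texture&   texture(const std::string& name) { return textures_[name]; }
    AudioClip& audio(const std::string& name)   { return audio_[name]; }
private:
    std::unordered_map<std::string, Texture>   textures_;
    std::unordered_map<std::string, AudioClip> audio_;
};
```

Because each map is homogeneous, the renderer can iterate textures as a uniform set, the sound system can iterate clips, and their differing lifetimes never interfere with each other.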

 

6 hours ago, Seer said:

For the scope of the kind of game I am interested in making, is this an acceptable approach? If so, at what scope, if any, does it become unacceptable or problematic?

Loading everything at the beginning can work.  It becomes a problem when "everything" is larger than the system can support.

On the PC it is easier to ignore the realities of the hardware since so much has been abstracted away.  You can allocate and load many more gigabytes of data than the physical memory available thanks to virtual memory.  With the right graphics API calls you can load up more textures than you have texture memory and the drivers will magically hide the details from you. 

Even so, there are limits to what the hardware supports. Usually performance plummets once you switch over to the abstracted-away virtualized version. Reading from active memory versus reading from something swapped to disk has many orders of magnitude different performance. You'll notice because the game goes from running quickly to barely crawling. 

On systems and resources that don't support that kind of abstract virtualization, you'll know because it crashes for being out of resources. 

6 hours ago, Seer said:

In this case the Renderer has properly ordered all the objects and all it then does is tell them to render themselves. Is that still not a good solution?

That gets back to the question about what it means for objects to render themselves.

If each object causes its own texture swap, causes its own shader swap, then draws the quad unique to it, you're going to hit performance limits very quickly. A large screen filled with many small tiles will become a slide show as you saturate your data bus to the graphics card with draw calls.  A small screen with only a small number of very large tiles may be able to render rapidly enough. 

If instead you mean that you have an independent rendering system, and you notify that system that when it comes time to draw, it should draw the specific sprite at those locations and in a proper ordering to preserve z-order and transparency, that is something that could be worked with.  The rendering system could accumulate all the calls, figure out the order that preserves rendering order, eliminates/minimizes z-fighting, and ensures transparency, translucency, and holes are all handled properly with a minimum number of state changes and rendering calls.
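The accumulate-then-sort renderer described above can be sketched as a queue of submitted draws. This is a minimal illustration of the ordering logic only; fields and names are assumptions:

```cpp
#include <algorithm>
#include <vector>

// One submitted draw. The renderer accumulates these for the frame,
// then sorts so z-order is preserved and, within a layer, draws that
// share a texture sit next to each other (fewest state changes).
struct DrawCmd {
    int   textureId;
    float z;        // depth / layer; lower draws first
    float x, y;     // screen placement
};

class RenderQueue {
public:
    void submit(DrawCmd cmd) { cmds_.push_back(cmd); }

    // Layer first (correct transparency ordering), texture second
    // (minimize texture swaps inside a layer).
    void sortForDrawing() {
        std::stable_sort(cmds_.begin(), cmds_.end(),
            [](const DrawCmd& a, const DrawCmd& b) {
                if (a.z != b.z) return a.z < b.z;
                return a.textureId < b.textureId;
            });
    }

    const std::vector<DrawCmd>& commands() const { return cmds_; }
private:
    std::vector<DrawCmd> cmds_;
};
```

After sorting, the renderer walks the list, changing texture only when `textureId` actually changes, instead of once per object.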

7 hours ago, Seer said:

Would you recommend the observer pattern for sending messages from objects to systems? If so, would you recommend implementing it with a kind of one for all approach where every observing system acts on the same function call, such as notify(Event, Object), differing in how they respond based only on the Event Enum which is passed?

It is commonly called an "event bus". Google finds lots of resources.

Effectively it serves as a broadcast/notification system. The key is that it needs to be used carefully: use it for events, to broadcast and notify that things have happened so listeners can respond.  

Rendering does not typically use that interface, but many events do.  Rendering is usually completely decoupled from the game object's code.  Sometimes a game object will request something different to happen, such as jumping to a different render state, but that's not the typical code path.
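A minimal event bus along the lines the question suggests (an Event enum plus a shared notify call) might look like this; the event names and the `void*` sender are placeholders, not a prescribed API:

```cpp
#include <functional>
#include <map>
#include <vector>

// Event types as an enum, as suggested in the question above.
// These names are purely illustrative.
enum class Event { PlayerDied, ItemPickedUp, DoorOpened };

// Minimal event bus: listeners subscribe per event type, and
// publishers broadcast without knowing who (if anyone) is listening.
class EventBus {
public:
    using Handler = std::function<void(void* sender)>;

    void subscribe(Event e, Handler h) {
        handlers_[e].push_back(std::move(h));
    }

    void notify(Event e, void* sender = nullptr) {
        for (auto& h : handlers_[e]) h(sender);
    }
private:
    std::map<Event, std::vector<Handler>> handlers_;
};
```

Each listening system registers its own handler, so systems differ in how they respond without the sender ever referencing them directly, which is exactly the decoupling being asked about.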


Thank you for addressing my questions frob.

 

However, I am still curious to know; could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

I am not saying that I would want to willfully program a game poorly or advocate doing so if you have the capabilities not to, but if your skill is such that you don't know how to properly decouple rendering from game objects or implement an event bus (like me), would it be possible?

 

At this point what I have mainly taken away from this discussion is to not allow objects to hold references to resources and instead have systems hold the resources, to not give game objects their own render function, and to set up some kind of messaging system using an event bus or observer-like pattern to keep objects decoupled from systems. If all of that is correct, then what I now need is to know how to do all of that. This means having implementation details be explained to me.

If you don't want to demonstrate how such an architecture would be implemented I understand as I'm sure it would be quite a lot to go through. In that case, can you suggest any tutorials, books or other material that go through developing a quality architecture for 2D games such as the kind I am interested in? I would like to stress if I may that I am only interested in 2D, not 3D. There may be resources out there which explain and step you through implementing fine architectures for 3D games which may be applicable to 2D games, but I would like the focus to be exclusively on 2D if possible. The reason is because learning these concepts and how to implement them is difficult enough and I don't want the added complexities of 3D on top of that. I hope that's not being too demanding. Thank you.

 

21 minutes ago, Seer said:

However, I am still curious to know; could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

I am not saying that I would want to willfully program a game poorly or advocate doing so if you have the capabilities not to, but if your skill is such that you don't know how to properly decouple rendering from game objects or implement an event bus (like me), would it be possible?

It's certainly possible, for whatever that metric is worth. People have written successful 3D games all in one file with horrible spaghetti code.

So you could certainly write that type of RPG without decoupled rendering and so on. Some of the problems that usually come with suboptimal designs wouldn't matter for a 1990s-era-style 2D game. On the other hand, since such an RPG has a large scale in terms of content, certain suboptimal design choices can bite you in the ass later on. I had to abandon one of my first C++ games (a Super Mario World clone) because at a certain point it became impossible to add new features without hours of mind-numbing, unnecessary work.

So yeah, what you ask is certainly possible - but I'd say the same if you asked "Can I get to work without walking on my feet?" Sure, if you don't care to use your legs and would rather crawl, but why not just try to do it the right way? Unless you're very experienced, you won't have a perfect design anyway, but if you at least try to take the advice given, you'll learn and see for yourself why it's better to do it that way.

6 hours ago, Seer said:

could you successfully develop games with the scope of those I suggested above, such as Pokemon or Golden Sun or some typical GBA level 2D RPG, using sub optimal or plain bad design practices such as having objects refer to resources and having their own render function? Would it be possible? Would it be painful?

A key thing to note is that the GBA hardware was designed for that type of game.  The APIs were tile based, and as I mentioned, the API included bits to automatically rotate and flip the tiles.  In several systems the sprites were loaded and indexed through the system, and rendering the entire screen meant passing a buffer of sprite indices and locations. You could pass a single buffer about 800 bytes long that would describe sprite placement and the hardware would do the rest.

In many ways it is harder to develop that style of tile-based and sprite-based game on a modern PC. The graphics APIs of today are designed around a radically different style of rendering. The seemingly direct replacement of drawing textured quads for all the sprites (with one texture change and one individual quad per tile) can easily hit bottlenecks.  Cards that can handle tens of millions of polygons in modern games can struggle to draw thousands of square sprites when implemented that way.

You can read up on techniques to merge those calls.  Megatextures / texture atlases can work well with command batching, where the rendering of quads is accumulated in a VBO and then issued as a single call; this can drop the number of calls tremendously. Hardware instancing can work if it matches your actual rendering bottlenecks. 

Sometimes you can even work similar to the oldest sprite-based systems by creating all your sprites as a font, and using the text to be rendered in exactly the same way old games used an index buffer to render.  Thus "A" becomes one sprite, "B" and "C" additional sprites, and the entire screen can be rendered as a very large wide-character string.

 

Drawing on a 240x160 screen and with hardware support for fast sprite rendering is different than writing to a 1920x1080 or 3840x2160 display with hardware support focusing on vertex clouds.

And as for drawing that type of game during that era, yes, it was still difficult.  It was a different kind of difficulty, mostly due to the constraints of the era of space and speed, but still difficult.

What books, tutorials or other resources would you recommend for learning good architecture design for 2D games, where the concepts are well explained with clear implementation details and if possible where an actual game is developed using these design principles, being gradually built up in a step-by-step manner?

One fundamental concept is the "single responsibility" principle, which means that a piece of software should fulfill just one specific task. For example, animation is one task and rendering is another. Having those two (and several more) implemented in a single object violates the single responsibility principle. This generally leads to maintenance problems.

Here is one way to decouple things:

Think of the actual game object as a container for data, without it doing anything at all with that data. Then a separate piece of software that has access to the game objects can iterate over them and (e.g.) alter their placement data (position + orientation) according to their movement data. Voila, we have a motion sub-system (which can be seen as part of the animation sub-system). After the motion sub-system has worked on all movable objects, they are all up to date with respect to their placement. Now, in this state, another piece of software, a.k.a. the rendering sub-system, can just use the current placement when drawing each object.

Notice that neither sub-system needs to know anything about the other; each has its own responsibility. The "glue" between them is just the data stored in the game objects. On the other hand, game objects are no longer "active"; they are just data holders. Of course, because the rendering sub-system simply assumes that all placements are up to date ("ready to render", so to say), a specific order of running the sub-systems is needed, here the motion sub-system before the rendering sub-system. This is organized by placing the sub-systems accordingly into the game's run loop.
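The motion-then-render split described above can be sketched in a few lines. The structs are minimal stand-ins, and the "render" step here only counts objects rather than issuing real draw calls:

```cpp
#include <vector>

// Game objects are plain data holders, as described above.
struct GameObject {
    float x = 0, y = 0;    // placement data
    float vx = 0, vy = 0;  // movement data
};

// Motion sub-system: updates placement from movement data.
void motionSystem(std::vector<GameObject>& objects, float dt) {
    for (auto& o : objects) {
        o.x += o.vx * dt;
        o.y += o.vy * dt;
    }
}

// Rendering sub-system: reads only the now-current placement.
// (Here it just counts draws; a real one would issue draw calls.)
int renderSystem(const std::vector<GameObject>& objects) {
    int drawn = 0;
    for (const auto& o : objects) { (void)o.x; ++drawn; }
    return drawn;
}

// The run loop then simply calls motionSystem(...) before
// renderSystem(...), enforcing the required ordering.
```

Neither function knows the other exists; the only coupling is the data in `GameObject` and the order of the two calls in the run loop.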

Things can be driven further. For example, the rendering sub-system's core need not know about movement data, because it deals only with the current state of the placement. So we are able to manage the different data blocks of game objects separately. This leads to splitting the data and storing the pieces in individual places, usually the sub-systems themselves. However, this is perhaps too much for now. But you'll notice that different sub-systems have different requirements for data from game objects. One of these is even the selection of which game objects participate. For example, an early step in rendering is to iterate over all active game objects and perform view culling, i.e. selecting just those game objects that are visible in the coming visualization. This step results in a subset of game objects, perhaps organized as a list of references to the selected game objects. Well, we can store more information than just the references in the list's nodes. For example, we can store information that allows for the grouping/batching mentioned by frob above. Such a list can then (after collection has finished) be sorted on criteria based on said information, and the subsequent actual rendering then works not on the game objects per se but on the sorted selection list.

The essence of the paragraph above is that different sub-systems usually work best with their own data structure. This again is a kind of decoupling, because here the rendering core need not know how the scene is organized because it just works on a list of drawable objects.
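A minimal sketch of that cull-then-sort step, using a 1D view range for brevity and sorting the surviving references by texture to enable batching (the `Sprite` fields are illustrative):

```cpp
#include <algorithm>
#include <vector>

struct Sprite { float x; int textureId; };

// View culling: collect references (indices here) to only the sprites
// inside the view, then sort that selection by texture for batching.
// The later rendering works on this list, not on the scene directly.
std::vector<int> cullAndSort(const std::vector<Sprite>& scene,
                             float viewLeft, float viewRight) {
    std::vector<int> visible;
    for (int i = 0; i < static_cast<int>(scene.size()); ++i)
        if (scene[i].x >= viewLeft && scene[i].x <= viewRight)
            visible.push_back(i);
    std::sort(visible.begin(), visible.end(), [&](int a, int b) {
        return scene[a].textureId < scene[b].textureId;
    });
    return visible;
}
```

The renderer's core then only ever sees this list of drawable references, which is exactly the decoupling described: it doesn't need to know how the scene itself is organized.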

The boiled-down point of what frob is saying (as far as I see) isn't necessarily that it is inherently wrong for things to draw themselves, but that doing so has serious performance issues on modern hardware. Draw calls can be a big bottleneck with tiles because, from the perspective of the software, draw calls are designed to render a ton of stuff at once. For instance, Unreal in most cases uses one draw call per material on an object, so if you have a 20k triangle weapon with two materials, that is barely 2 draw calls. Flipping that around, when a sprite (a tile, maybe) is drawing itself, it is essentially using one draw call per tile, which can easily end up in the thousands of draw calls range, per frame, just to render a tile map.

Of course that is assuming you are doing everything the extremely primitive way, you could certainly create objects that draw themselves, but batch more content into the draw calls, in that case you would gain significant performance without changing the architecture of the system. But in general, if objects draw themselves, they are operating in a bubble, they draw like they are the only thing that exists in the system. That's why it helps to have a middleman object that actually handles sorting information on what needs to be rendered during the frame, so it can make more intelligent decisions.
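The "batch more content into the draw calls" idea mentioned above boils down to appending every tile's quad to one vertex array and submitting it with a single call. A sketch of the accumulation side, with an assumed vertex layout:

```cpp
#include <vector>

// Assumed vertex layout: screen position plus atlas UVs.
struct Vertex { float x, y, u, v; };

// Append one tile's quad (two triangles, six vertices) to the batch.
// After all tiles are appended, the whole batch is uploaded and drawn
// with ONE call, instead of one call per tile.
void appendTileQuad(std::vector<Vertex>& batch,
                    float x, float y, float size,
                    float u0, float v0, float u1, float v1) {
    batch.push_back({x,        y,        u0, v0});
    batch.push_back({x + size, y,        u1, v0});
    batch.push_back({x + size, y + size, u1, v1});
    batch.push_back({x,        y,        u0, v0});
    batch.push_back({x + size, y + size, u1, v1});
    batch.push_back({x,        y + size, u0, v1});
}
```

A tile map of thousands of tiles then costs one buffer upload and one draw call per texture atlas, rather than thousands of individual calls, which is where the "middleman" object earns its keep.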

You definitely CAN make a game of relative complexity with objects drawing themselves, but you're going to hit performance issues eventually, it doesn't scale well, and you'll essentially be adding patchy changes to make it work whenever you hit a performance wall. So the issue isn't that you are somehow violating a law of the architecture police, but that the system isn't conducive to knowing the information it needs to intelligently render things.

This topic is closed to new replies.
