Going through them:
6 hours ago, Seer said: Should a given sprite sheet containing sets of animations, such as character animations, have an accompanying data file specifying values such as X position, Y position, width, height, X offset from origin and Y offset from origin for every frame in the sheet, so that you can properly extract the frames with necessary data when loading the sprite sheet?
It needs to have some of that data somewhere, yes. Exactly where is an implementation detail.
Otherwise how will you know the contents of your sprite sheets? How will you know if a 2560x2560 sheet contains a 10x10 array of 256x256 images, or if it contains a 20x20 array of 128x128 images?
Usually there is even more meta-information, such as which index to use in an animation sequence, and possibly flags like flip/rotate. Sprites also don't all need to be the same size; a small sprite may include an offset from the sprite's position rather than padding out to a common frame size.
On sprite-based hardware there were often flags to flip horizontally and vertically, and to rotate 90, 180, and 270 degrees. This meant that a single knife could have one or two sprites yet still have an animation cycle that was 8 or 16 frames long.
You will need information in the animation data stating where the base of the image sits relative to the image data. Allowing sprites to be different sizes makes for smaller sprite sheets, since not everything is the same size. It also lets you animate motion without requiring new image data, such as an item spiraling around the screen without changing the sprite, or lets you have images that go wider or narrower, shorter or taller, without making every sprite expand to the maximum extents.
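As a sketch of what that per-frame data might look like (the field names here are hypothetical, and Python is used purely for illustration), note that the 2560x2560 example above is only decodable once you know the frame size:

```python
from dataclasses import dataclass

# Hypothetical per-frame metadata, mirroring the fields discussed above.
@dataclass
class Frame:
    x: int       # left edge of the frame within the sheet, in pixels
    y: int       # top edge of the frame within the sheet
    w: int       # frame width
    h: int       # frame height
    ox: int = 0  # X offset of the sprite's origin relative to the frame
    oy: int = 0  # Y offset of the sprite's origin relative to the frame

def frames_for_uniform_sheet(sheet_w, sheet_h, frame_w, frame_h):
    """Derive frame rectangles for a sheet laid out as a uniform grid."""
    cols = sheet_w // frame_w
    rows = sheet_h // frame_h
    return [Frame(c * frame_w, r * frame_h, frame_w, frame_h)
            for r in range(rows) for c in range(cols)]

# A 2560x2560 sheet of 256x256 frames yields a 10x10 grid of 100 frames.
frames = frames_for_uniform_sheet(2560, 2560, 256, 256)
```

For uniformly sized frames the data file could be as small as one frame size; once sprites vary in size or carry per-frame offsets, you need a full record per frame like the one above.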
6 hours ago, Seer said: Should sprite sheets be loaded individually? I ask because I have heard of something called an atlas which packs sets of individual sprite sheets into one big sprite sheet and supposedly it is what you should use.
A texture atlas is a solution to a different problem mentioned earlier. I noted above that switching textures takes some time on the graphics card. One solution to that is a texture atlas or megatexture or various other names.
By making one giant texture, say 32K pixels square, you use four gigabytes of texture memory at 32 bits per pixel, but you don't need to change textures in your rendering pipeline.
Larger games will add this as a build step where tools try to pack the giant texture full of all the content. It can add complexity, so based on what you've been discussing so far, it probably is not something you would want to add at this time.
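To give a sense of what that build step does, here is a minimal shelf-packing sketch. This is a deliberately naive heuristic for illustration; real atlas packers sort inputs by height and try several placement strategies to waste less space.

```python
def shelf_pack(sizes, atlas_w):
    """Place (w, h) rectangles left to right in horizontal shelves.

    Returns ([(x, y), ...], total_height). A bare sketch of the idea:
    fill a row until it overflows, then start a new shelf below it.
    """
    placements = []
    x = y = shelf_h = 0
    for w, h in sizes:
        if x + w > atlas_w:          # row is full: start a new shelf below
            y += shelf_h
            x = shelf_h = 0
        placements.append((x, y))
        x += w
        shelf_h = max(shelf_h, h)
    return placements, y + shelf_h

# Three 64x64 sprites into a 128-wide atlas: two fit per row, one wraps.
placements, atlas_h = shelf_pack([(64, 64)] * 3, 128)
```

The packer's output is exactly the kind of per-frame metadata discussed earlier: each source image ends up with an (x, y) rectangle inside the one big texture.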
6 hours ago, Seer said: To be absolutely clear, if a resource manager is what loads and initially holds the various resources, should it only hold resources in separate containers of that type of resource? For example, all textures would go in a container holding only textures, all animations would go in a container holding only animations, and so on for every type of resource. Then, when a system needs the resources its domain governs, it just takes a flat container of those resources from the resource manager and works from there.
Let me reverse the scenario and see if that idea makes sense to you:
Does it make sense to you to have a container that mixes all the resources? Does it make sense to have an array of 50 textures, then one audio file, then 50 more textures? To me that is nonsensical. It means you cannot treat the collection as a uniform set of objects.
In my mind, and as I've seen in many game engines, they are each separate. Textures belong as resources managed on the graphics card. Audio clips belong in a sound pool. Shaders belong as compiled buffers bound to the graphics context. Each type of resource has different needs, with different lifetimes, with different expectations.
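A bare sketch of that shape, with one container per resource type (all names here are hypothetical; real handles would be graphics/audio objects rather than strings):

```python
class ResourceManager:
    """Holds each resource type in its own container, so a system can
    take just the flat, uniform collection it cares about."""
    def __init__(self):
        self.textures = {}    # name -> texture handle
        self.animations = {}  # name -> animation data
        self.sounds = {}      # name -> audio clip

    def texture(self, name):
        # A missing asset fails loudly with KeyError rather than
        # silently handing back something of the wrong type.
        return self.textures[name]

manager = ResourceManager()
manager.textures["player"] = "player-texture-handle"  # stand-in value
# The renderer would receive only manager.textures; the audio
# system only manager.sounds. Neither ever sees a mixed collection.
```

Because each container is homogeneous, each system can iterate, load, and release its resources on its own schedule, which is exactly the "different lifetimes, different expectations" point above.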
6 hours ago, Seer said: For the scope of the kind of game I am interested in making, is this an acceptable approach? If so, at what scope, if any, does it become unacceptable or problematic?
Loading everything at the beginning can work. It becomes a problem when "everything" is larger than the system can support.
On the PC it is easier to ignore the realities of the hardware since so much has been abstracted away. You can allocate and load many more gigabytes of data than the physical memory available thanks to virtual memory. With the right graphics API calls you can load up more textures than you have texture memory and the drivers will magically hide the details from you.
Even so, there are limits to what the hardware supports. Performance usually plummets once you cross over into the abstracted-away virtualized version: reading from active memory versus reading from something swapped out to disk differs by many orders of magnitude. You'll notice because the game goes from running quickly to barely crawling.
On systems and resources that don't support that kind of virtualization, you'll know because the program crashes when it runs out of resources.
6 hours ago, Seer said: In this case the Renderer has properly ordered all the objects and all it then does is tell them to render themselves. Is that still not a good solution?
That gets back to the question about what it means for objects to render themselves.
If each object causes its own texture swap, causes its own shader swap, then draws the quad unique to it, you're going to hit performance limits very quickly. A large screen filled with many small tiles will become a slide show as you saturate your data bus to the graphics card with draw calls. A small screen with only a small number of very large tiles may be able to render rapidly enough.
If instead you mean that you have an independent rendering system, and you notify that system that when it comes time to draw, it should draw a specific sprite at a specific location, in the proper order to preserve z-order and transparency, that is something that could work. The rendering system can accumulate all the calls, figure out an order that preserves rendering order, eliminates or minimizes z-fighting, and ensures transparency, translucency, and holes are all handled properly, with a minimum number of state changes and draw calls.
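One way to sketch that accumulate-then-order idea (the tuple layout is hypothetical): sort by z first so layering and transparency come out right, then by texture within each layer so consecutive draws share state.

```python
def build_draw_order(draws):
    """draws: list of (z, texture_id, sprite_id) submitted by game objects.

    Back-to-front by z preserves transparency; within a z layer,
    grouping by texture minimizes texture switches on the graphics card.
    """
    return sorted(draws, key=lambda d: (d[0], d[1]))

def count_texture_switches(ordered):
    """How many times a renderer walking this list must bind a new texture."""
    switches, last = 0, None
    for _z, tex, _sprite in ordered:
        if tex != last:
            switches += 1
            last = tex
    return switches

# In submission order these three draws would switch textures three
# times; after sorting, the two "grass" draws batch together.
submitted = [(0, "grass", 1), (0, "water", 2), (0, "grass", 3)]
ordered = build_draw_order(submitted)
```

With thousands of tiles the same grouping turns thousands of state changes into a handful, which is the difference between the slide show and the fast path described above.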
7 hours ago, Seer said: Would you recommend the observer pattern for sending messages from objects to systems? If so, would you recommend implementing it with a kind of one for all approach where every observing system acts on the same function call, such as notify(Event, Object), differing in how they respond based only on the Event Enum which is passed?
It is commonly called an "event bus". Google finds lots of resources.
Effectively it serves as a broadcast/notification system. The key is that it needs to be used carefully: use it to broadcast and notify that things have happened, so listeners can respond.
Rendering does not typically use that interface, but many events do. Rendering is usually completely decoupled from the game object's code. Sometimes a game object will request something different to happen, such as jumping to a different render state, but that's not the typical code path.
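A minimal sketch of the notify(Event, Object) style from the question (the event names and the listening system are hypothetical):

```python
from enum import Enum, auto

class Event(Enum):
    ITEM_PICKED_UP = auto()
    PLAYER_DIED = auto()

class EventBus:
    """Broadcasts events to every subscribed system (the observers)."""
    def __init__(self):
        self._listeners = []

    def subscribe(self, listener):
        self._listeners.append(listener)

    def notify(self, event, obj):
        # Every listener receives the same call; each decides from the
        # Event enum whether and how to respond.
        for listener in self._listeners:
            listener.notify(event, obj)

class AudioSystem:
    def __init__(self):
        self.played = []

    def notify(self, event, obj):
        if event is Event.ITEM_PICKED_UP:
            self.played.append("pickup.wav")  # stand-in for real playback

bus = EventBus()
audio = AudioSystem()
bus.subscribe(audio)
bus.notify(Event.ITEM_PICKED_UP, None)
```

Note that, per the point above, the rendering system would usually not be one of the subscribers; rendering stays decoupled from this notification path.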