12 hours ago, Seer said:
Assuming you have a resource manager dedicated to loading and containing the resources, should this manager be passed indiscriminately to wherever there is a need for a resource(s)?
No, do not pass it around indiscriminately. Few systems should need to know about the resource manager, and game objects in particular have no need to know about it.
Game objects rarely render themselves. They usually hold handles to their models, textures, or animations, and they can ask the various graphics systems to do something different, but it is usually extremely inefficient for the game objects themselves to be involved with the rendering or the resource details.
But that doesn't directly answer your question...
Among the best models is to only pass the data to systems on a "need to know" basis. It takes some discipline both in design and implementation, but it can be done. Again, since the game objects themselves don't do the drawing, they should generally have no need to know the details of how they are rendered. They might need to switch to different models or textures, such as a "damaged" or "inactive" version, but that should generally be handled as a message to the other subsystems and not through direct manipulation.
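As a rough sketch of that message-based approach: the game object posts a request to switch to a "damaged" variant, and the rendering subsystem swaps its own internal handles when it processes messages. All names here (RenderSystem, SetVariantMessage) are made up for illustration, not any engine's API.

```cpp
#include <queue>
#include <string>
#include <unordered_map>

// Hypothetical message: the game object asks the render subsystem to switch
// its visual to a named variant ("damaged", "inactive", ...). The object
// never touches texture or model data directly.
struct SetVariantMessage {
    int objectId;
    std::string variant;
};

class RenderSystem {
public:
    // Called from game-object code: just queues the request.
    void Post(const SetVariantMessage& msg) { pending_.push(msg); }

    // Called once per frame, at a point the render system chooses.
    void ProcessMessages() {
        while (!pending_.empty()) {
            const SetVariantMessage& msg = pending_.front();
            // Internally the subsystem would swap model/texture handles here.
            activeVariant_[msg.objectId] = msg.variant;
            pending_.pop();
        }
    }

    const std::string& VariantOf(int objectId) { return activeVariant_[objectId]; }

private:
    std::queue<SetVariantMessage> pending_;
    std::unordered_map<int, std::string> activeVariant_;
};
```

The key property is that the game object depends only on the message type, not on textures, models, or the resource manager.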
Unfortunately in many games there is less discipline, people make sloppy decisions, and time is more critical than implementation quality. When you must make this sad choice, the typical model is a "well-known instance". The typical implementation is a global structure containing pointers to the active instances of some key libraries, such as logging, audio, rendering, and a few others. The instances themselves are modified only at well-defined times, such as during game initialization or while the entire game is outside the simulation loop.
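A minimal sketch of such a "well-known instance" might look like this; the subsystem types and the InitServices function are illustrative assumptions, not any particular engine's API.

```cpp
// Hypothetical subsystem interfaces; empty stand-ins for real libraries.
struct Logger   { /* ... */ };
struct Audio    { /* ... */ };
struct Renderer { /* ... */ };

// The "well-known instance" structure: one global holding pointers to the
// active subsystems. The pointers are assigned only at well-defined times
// (initialization, or outside the simulation loop), never mid-frame.
struct Services {
    Logger*   log      = nullptr;
    Audio*    audio    = nullptr;
    Renderer* renderer = nullptr;
};

Services g_services;  // the single well-known instance

// Called once during game initialization, before the simulation loop starts.
void InitServices(Logger* log, Audio* audio, Renderer* renderer) {
    g_services.log      = log;
    g_services.audio    = audio;
    g_services.renderer = renderer;
}
```

It is still a global, with all the coupling that implies, but restricting when the pointers may change keeps it from becoming a free-for-all.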
13 hours ago, Seer said:
Should the resource manager segregate various textures and animations from sprite sheets as they are loaded and hold segregated sets (maps/lists) of these textures and animations based on a pre-determined need by different game objects so that they can be quickly set up? By this I mean, where you are certain object A needs only a set of X animations and Y sound effects throughout the lifetime of the program.
Things that are not the same should not be treated the same. Things that are the same should be treated the same. A texture is not a model. A sprite-sheet is not a regular texture. Animations and sounds are not the same as the others on the list.
As for loading the things being needed, it is moderately common to have a prefetching system, depending on the game. Elements can have data that says "I need this at startup", and other data that says "I need this eventually". The first needs to get loaded up front, but the rest can wait until the main game is running.
The exact details of such a system depend on the game and its needs. For example, in a major game (as opposed to a hobby project without the manpower), all the expected audio can be pulled from the animation events that trigger it, and the build tools extract the list of audio that can be played. Since audio needs to be instantly responsive, it is generally best loaded from disk in advance. A smarter system can pull the rest in after the main load, continuing a background load as the level becomes playable.
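A prefetching split like the one described could be sketched as below; LoadPolicy, ResourceRequest, and PlanLoads are hypothetical names for illustration, with the tags standing in for data authored on each resource.

```cpp
#include <string>
#include <vector>

// Hypothetical per-resource tag: "I need this at startup" versus
// "I need this eventually".
enum class LoadPolicy { AtStartup, Eventually };

struct ResourceRequest {
    std::string name;
    LoadPolicy  policy;
};

// The first list blocks the loading screen; the second is handed to a
// background loader once the main game is running.
struct LoadPlan {
    std::vector<ResourceRequest> upFront;
    std::vector<ResourceRequest> background;
};

LoadPlan PlanLoads(const std::vector<ResourceRequest>& requests) {
    LoadPlan plan;
    for (const ResourceRequest& r : requests) {
        if (r.policy == LoadPolicy::AtStartup)
            plan.upFront.push_back(r);
        else
            plan.background.push_back(r);
    }
    return plan;
}
```

In a real pipeline the tags would come from build tooling rather than being hand-written, but the split itself stays this simple.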
13 hours ago, Seer said:
Any object could hold any combination of textures, animations or sound effects held by the resource manager.
Generally no. The individual game object does not hold any of those.
A subsystem that controls rendering controls the textures and models, both as sub-subsystems, and it does so in a way that best fits how they will be used: with different-sized buffers and resource caches, different rules for resource proxies as needed, and resources generally stored directly on the video card, all focused on rendering quickly. A subsystem that controls animations is quite separate, with completely different access patterns, a different buffering system, different resource caches, and different proxy services, designed around quickly processing animations for the rapid series of matrix multiplies and the transfer to the card that must take place. Sound effects are likewise handled differently, kept in different areas of memory for fast audio playback.
An individual game object may hold a handle or proxy given to it by the subsystem, but the game objects themselves typically don't own those resources at all.
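The handle idea can be sketched like this, with a hypothetical TextureSystem that owns the data (here just a path string standing in for real GPU-resident data) while the game object stores only an opaque ID.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Opaque handle: the game object stores only this small ID.
struct TextureHandle {
    std::uint32_t id = 0;
};

// The subsystem owns the resource data; in a real engine the bytes would
// live in GPU memory and this class would manage uploads and caching.
class TextureSystem {
public:
    TextureHandle Acquire(const std::string& path) {
        paths_.push_back(path);  // stand-in for the actual load/upload
        return TextureHandle{ static_cast<std::uint32_t>(paths_.size() - 1) };
    }
    const std::string& PathOf(TextureHandle h) const { return paths_[h.id]; }

private:
    std::vector<std::string> paths_;  // owned entirely by the subsystem
};

// The game object holds handles, never the resources themselves.
struct GameObject {
    TextureHandle texture;
};
```

Because the handle is just an integer, the subsystem is free to move, evict, or re-load the underlying data without the game object ever noticing.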
13 hours ago, Seer said:
At this point I don't see why objects shouldn't hold their own graphical resources since the objects contain the data such as position, width and height that the resources would need in order to be rendered correctly. However, I'm not so sure about resources such as sound effects, since these types of resources don't need any information from the object in order to play.
Mostly covered above.
The rendering system can be built to handle rendering. It can sort objects based on how they must be rendered. Commonly this means based on material orders, shaders, transparency/translucency, and other factors. The rendering system can also use the information to render multiple instances of the data with a single call. Such systems can build and maintain a collection of rendering order keys using bitmasks which are trivially sorted to greatly improve rendering speed.
If each game object itself owned this then drawing would require much more processing, visiting every game object to discover all those properties, and re-sorting instead of keeping cached sort keys.
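Those bitmask sort keys might be packed roughly as follows; the field layout (translucency in the top bit, then shader, then material, then depth) and the field widths are illustrative, not any engine's actual format.

```cpp
#include <cstdint>

// Pack rendering state into one 64-bit key. Higher bits dominate the sort,
// so translucency sorts last, then draws group by shader, then material,
// then depth. Field widths here are arbitrary illustrative choices:
//   bit 63      : translucent flag
//   bits 48..62 : shader id   (15 bits)
//   bits 32..47 : material id (16 bits)
//   bits  0..31 : depth       (32 bits)
constexpr std::uint64_t MakeSortKey(std::uint64_t translucent,
                                    std::uint64_t shaderId,
                                    std::uint64_t materialId,
                                    std::uint64_t depth)
{
    return (translucent << 63) | (shaderId << 48) | (materialId << 32) | depth;
}
```

Sorting a flat array of these keys with std::sort is trivial and cache-friendly, and the keys can be cached frame to frame instead of re-derived by visiting every game object.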
Audio is problematic when it must be mixed or processed. If you're mixing together a bunch of positional audio, it is horribly inefficient to query each object every frame to find out whether its audio has changed, to mix only the audio for a single simulation step, and so on. Audio processing is handled radically differently from graphics, and differently from the simulation. Say you've got some 44.1 kHz audio and some 192 kHz audio and you want to play them together. If you're mixing them per graphics frame at 60 fps, that's 735 samples of one and 3200 samples of the other each frame, and things get difficult. Similar issues happen if you try to mix per simulation step, since simulation often happens at irregular times, with many rapid steps to catch up followed by a delay waiting for rendering or other processing. It's even worse when something causes graphics frames to stall or drop.
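The per-frame sample counts fall straight out of the arithmetic; a one-line helper makes the mismatch concrete (assuming a perfectly steady 60 fps, which real frames never honor).

```cpp
// Samples one graphics frame covers at a given sample rate, assuming a
// perfectly regular frame rate. Real frame times drift, so per-frame
// mixing buckets never line up cleanly with the audio hardware's buffers.
constexpr int SamplesPerFrame(int sampleRate, int fps) {
    return sampleRate / fps;
}
```

At 60 fps a 44.1 kHz stream advances 735 samples per frame while a 192 kHz stream advances 3200, and any frame hitch changes both counts at once.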
Instead, an audio processing system can handle all the work in a separate processing thread, often on its own CPU core quietly humming away. The system can operate as best fits the audio hardware, keeping the audio buffers fed completely independently of what the game objects and rendering systems are doing. The audio system can listen for events that impact audio and handle them when it makes best sense for that subsystem, such as when the next round of audio buffer updates takes place.
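One common shape for that, sketched with illustrative names: the game thread posts events into a lock-protected queue, and the audio side drains and applies them only when it refills its buffers. In a real system RefillBuffers would run on the dedicated audio thread and would also do the actual mixing.

```cpp
#include <mutex>
#include <vector>

// Hypothetical event the game posts toward the audio system.
struct AudioEvent {
    int   soundId;
    float gain;
};

class AudioSystem {
public:
    // Called from the game thread: cheap, just queues the event.
    void PostEvent(const AudioEvent& e) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push_back(e);
    }

    // In a real system this runs on the audio thread, once per buffer
    // refill. Events are applied here, at the moment that suits audio.
    void RefillBuffers() {
        std::vector<AudioEvent> events;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            events.swap(pending_);  // grab the batch, release the lock fast
        }
        for (const AudioEvent& e : events)
            applied_.push_back(e);
        // ...mix and submit the next audio buffer here...
    }

    std::size_t AppliedCount() const { return applied_.size(); }

private:
    std::mutex mutex_;
    std::vector<AudioEvent> pending_;  // written by game thread
    std::vector<AudioEvent> applied_;  // consumed on the audio side
};
```

The swap-under-lock keeps the game thread from ever blocking on audio work: the lock is held only long enough to exchange two vector headers.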
However, for a small hobby game, you're unlikely to have the manpower to do most of that stuff. You'll get better performance with such systems, but they take time and effort to develop. If you're using an established game engine or good middleware tools they'll handle it for you, and that's highly recommended if your goal is to complete an actual game.