CombatWombat

Rendering Architecture


I think this is more a software engineering type problem. I don't know how the overall structure of my rendering bits should function. I don't know that I even have a specific question to ask, but sometimes I find typing this nonsense out here will help me wrap my own head around it... so...

What is a good way to organize your drawing code? There are a few key bits I think of when I say "drawing code":
1) There's the actual "draw" call.
2) There's the setup before the draw call: select the proper texture, select the right shaders, etc.
3) There's *what* to draw: models, vertex data, index buffers, etc.

One option is for game (or UI, or particle, or whatever) objects to keep their own Render function. They could also know about their own model data, texture, etc. This does not lend itself to sorting and batching of draw calls. Additionally, it puts too much responsibility on the game objects. Should they *care* what their model is? Should they care about index buffers, etc.? Should that not be delegated elsewhere?

So, I could do a centralized rendering function/object/whatever. The problem here is that it now needs to know about all these object types in order to draw them. It needs their position, what texture to use, what render states to set, etc. Modularity is doomed here, I think.

Right now I have a combination of the two, which is a disaster waiting to happen. I am getting myself confused over where to keep my model data, when to load shaders, etc. How do I manage all of this? I'd like to fix it before I get further along.

So in summary, my question amounts to this:
Who is responsible for (possibly shared) items like model vertex data? My game consists almost entirely of square blocks. So many items share the same vertex buffer / index buffer, but use their own unique textures.
How do I create a rendering function/object/whatever that can draw without having to know the deepest intricate secrets of the objects it must draw?
Alternately, how can I do the above without the objects having to know the details of the rendering function?

I am using DirectX9 / C++, but I think this question is pretty generic.

I'm going to have to read that thread a few more times. Most of it is over my head right now.

The gist seems to be: create an object which encapsulates the data needed to render things. And encode some bits that explain how the rendering should be done. I think. So now the renderer only needs to understand the details of the encapsulating object, and not the many other objects that will pass data into it.
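If I'm understanding it, the encapsulating object might look something like this rough sketch (every name here is my own invention, and the buffer/texture pointers are stand-ins for the real D3D9 types):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// Hypothetical render command: everything the renderer needs to draw one
// thing, and nothing about the game object that submitted it.
struct RenderCommand {
    std::uint64_t sortKey;      // packs shader/texture/depth bits for sorting
    const void*   vertexBuffer; // shared geometry (stand-in for a D3D9 VB)
    const void*   indexBuffer;
    const void*   texture;
    float         world[16];    // per-object transform
};

class Renderer {
public:
    void submit(const RenderCommand& cmd) { queue_.push_back(cmd); }
    std::size_t pending() const { return queue_.size(); }
    void flush() {
        // Sort by key so state changes are minimized, then draw and clear.
        std::sort(queue_.begin(), queue_.end(),
                  [](const RenderCommand& a, const RenderCommand& b) {
                      return a.sortKey < b.sortKey;
                  });
        // ... set device states and issue draw calls here ...
        queue_.clear();
    }
private:
    std::vector<RenderCommand> queue_;
};
```

That way the renderer only ever sees RenderCommands, and sorting/batching falls out of the key.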

I'm still not sure on how to handle having many textures, models, etc. Up to now, I've only really done simple demos or followed tutorials which typically just explicitly load one or two textures.
Is there a point where just outright loading every texture into a ginormous array of texture pointers becomes unworkable? How do I know when I hit that point, and what is the next stage of development?

I share textures depending on the sub-system. Models share textures between themselves, sprites have their own shared textures (even if they happen to be duplicates of those in the model pool), etc.
The reason is that it is too sweeping to have a system that shares all textures loaded anywhere. Textures loaded by the sprites might be subsequently modified by those sprites, which would then cause changes on the models.

Additionally, I have a copy-on-write system. If I load a model and see that its diffuse color comes from an RGB texture, and that its transparency comes from a separate image with alpha, I could keep them as two images and take the RGB from texture 0 and the A from texture 1, but that wastes a texture slot and requires more bandwidth. So I create an RGBA texture by combining the two into one, which causes a copy to be made. The combined texture also goes into the pool and can be shared, so that the same copy is not made many times.
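As a rough illustration of the combine step (CPU-side only, with raw byte arrays standing in for real texture objects):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Illustration only: take RGB from one image and A from another, producing a
// single RGBA image that can go into the texture pool and be shared.
std::vector<std::uint8_t> combineRgbA(const std::vector<std::uint8_t>& rgb,   // 3 bytes per texel
                                      const std::vector<std::uint8_t>& alpha, // 1 byte per texel
                                      std::size_t texels)
{
    std::vector<std::uint8_t> rgba(texels * 4);
    for (std::size_t i = 0; i < texels; ++i) {
        rgba[i * 4 + 0] = rgb[i * 3 + 0];
        rgba[i * 4 + 1] = rgb[i * 3 + 1];
        rgba[i * 4 + 2] = rgb[i * 3 + 2];
        rgba[i * 4 + 3] = alpha[i];      // alpha rides along in the same slot
    }
    return rgba;
}
```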


You can identify textures by file name and by checksum. For files that were loaded from disk, the full path is enough to determine copies. Files that are embedded into model files will not have file names, so you use checksums for those. Textures with filenames also have checksums, so if the same texture is loaded once from disk and then again as embedded media, it will still be stored in memory only once.

Creating a stable yet sturdy resource manager can be less than trivial. Think hard about your implementation.


L. Spiro

One option is for game (Or UI, or particle, or whatever) objects to keep their own Render function. ... So, I could do a centralized rendering function/object/whatever.
Yes, this option is generally pursued by "simple" applications. Based on my experience (this is opinion), very few objects have the right to actually own their own Render functions. Most of the time, this is much better handled in a subsystem aggregating the various components. It's not necessarily "everyone by itself" or "fully centralized"; there's a whole range of possibilities. I'm currently fine with a semi-centralized approach (there are currently 4 "centers").

No need to know about the specific object type. It's not the rendering routines that sync; it's the object's responsibility (see this thread for a specific example).

Syncing the other way around is possible, but doing that without going crazy will require you to go through multiple-inheritance trees, mixins, templating, or component-based design, none of which are trivial. I'm going with the latter currently, and I can tell that it's very clean, but the engineering effort involved is pretty high.

Who is responsible for (possibly shared) items like model vertex data? ... So many items share the same vertex buffer / index buffer, but use their own unique textures.

Sharing must happen through an external manager. A concept which might help you is the difference between the "shader" and the "shader + parameters". Sharing involves separating out the parameters so you can tell whether two objects are "compatible" or not. I'm not sure I am being clear enough here.

There was a thread some months ago about managing the "shader parameters". Unfortunately, it seems I cannot find it anymore, so I'll try to sum it up. The point is that, unless you're dealing with very specific shaders, the format of the parameters is implied by the shader itself. If a shader uses a uniform, you'll have to provide it; if it needs a texture, you'll need to bind it. Therefore, it is possible to treat every shader as a (shader, opaqueBlob) pair. The opaque blob is unpacked by some subcomponent of the material system, which produces the resulting D3D9 setup. It makes sharing way easier.
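A tiny sketch of the (shader, opaqueBlob) idea (names invented for illustration; a real blob would carry uniforms and texture handles in a shader-defined layout):

```cpp
#include <cstdint>
#include <vector>

// Hypothetical (shader, opaqueBlob) pair: the blob's layout is implied by the
// shader itself, so only the material system needs to know how to unpack it.
struct ShaderInstance {
    std::uint32_t             shaderId; // which shader program to bind
    std::vector<std::uint8_t> params;   // opaque blob: uniforms, texture ids...
};

// Two instances are "compatible" (candidates for batching) when they use the
// same shader; the blobs decide whether the full setup is identical.
bool compatible(const ShaderInstance& a, const ShaderInstance& b) {
    return a.shaderId == b.shaderId;
}
bool identicalSetup(const ShaderInstance& a, const ShaderInstance& b) {
    return compatible(a, b) && a.params == b.params;
}
```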
Models share textures between themselves
You mean instances of the same model? Or different models? I'd consider that quite smart...


Additionally, I have a copy-on-write system. If I load a model and see that its diffuse color comes from an RGB texture, and that its transparency comes from a separate image with alpha, I could keep them as two images and take the RGB from texture 0 and the A from texture 1, but that wastes a texture slot and requires more bandwidth. So I create an RGBA texture by combining the two into one, which causes a copy to be made. The combined texture also goes into the pool and can be shared, so that the same copy is not made many times.


Sounds like the sort of thing you'd want to do in your content pipeline, not at runtime...

Models share textures between themselves

You mean instances of the same model? Or different models? I'd consider that quite smart...
Instances share the master models, and the master models share textures between each other.
The master models also have the copy-on-write feature so that if the engine user wants to change one instance substantially without changing the others, the master model will be duplicated and allow for full modification without changing any instance but one.
Naturally, users of my engine should try to find a way around this when they can, but I am creating a fully featured next-generation commercial game engine, so I still need to support even the rare cases. Think about that new game where the player can slash across the screen and cut the enemies in half along any axis.
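The copy-on-write part can be sketched with shared pointers (heavily simplified, and the names are mine, not the engine's):

```cpp
#include <memory>

// Simplified copy-on-write: instances share one master ModelData until an
// instance writes, at which point that instance gets a private duplicate.
struct ModelData { int vertexCount = 0; };

class ModelInstance {
public:
    explicit ModelInstance(std::shared_ptr<ModelData> master)
        : data_(std::move(master)) {}
    const ModelData& read() const { return *data_; }
    ModelData& write() {
        if (data_.use_count() > 1)                      // still shared? split off
            data_ = std::make_shared<ModelData>(*data_); // private duplicate
        return *data_;                                   // now safe to modify
    }
private:
    std::shared_ptr<ModelData> data_;
};
```

Writing through one instance duplicates the master, so all other instances keep rendering the original data.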


Sounds like the sort of thing you'd want to do in your content pipeline, not at runtime...

It is and it isn’t.
My pipeline is not done yet, but I plan to have an option to move it offline.
But the engine still needs to have run-time support for it because engine users are free to swap diffuse (and any other) textures out at run-time. If they swap the diffuse or alpha, but not the other, I still need to have run-time recombining.
If this causes jitter, I also plan to provide the option to keep the textures separate and have them uploaded in two texture slots.


L. Spiro

Woosh Kersplat: Is the sound of much of this flying over my head and sticking to a wall somewhere. I do appreciate the help. I'm going to sit down and draw some diagrams to try and suss this out better.


I share textures depending on the sub-system. Models share textures between themselves, sprites have their own shared textures (even if they happen to be duplicates of those in the model pool), etc.


So you'd have some kind of structure like "Model Textures Pool" that is an array/vector of pointers to textures? Individual Models keep a pointer to the texture in that pool that they actually use?
What triggers the loading of textures into this pool? I'm not doing a generic engine, but rather a single game, so I (will) have a pretty good handle on what objects are used in it.

Additionally, you have all mentioned concepts like Materials, State-Groups, and grouping of shaders with "shader parameters". Are these all different names for roughly the same idea? So a model might contain a Material structure, which says things like "use this shader, with these textures, and set up render states like so..."
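So, something like this maybe (all names made up on my end)?

```cpp
#include <vector>

// All names invented; stand-ins for whatever the real renderer uses.
struct Shader  {};
struct Texture {};
struct RenderStateBlock { bool alphaBlend = false; bool cullBackfaces = true; };

// A Material is the part the renderer understands: which shader, which
// textures in which slots, and which device states to set.
struct Material {
    Shader*               shader = nullptr;
    std::vector<Texture*> textures; // slot i -> bind at sampler i
    RenderStateBlock      states;   // blend mode, cull mode, etc.
};
```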



No need to know about the specific object type. It's not the rendering routines that sync; it's the object's responsibility (see this thread for a specific example).

What do you mean by "sync"? Do you mean things like filling cbuffers with the latest and greatest modelspace matrix? So you are saying each game object is responsible for dumping its own position into a State-Group-Thing's cbuffer? That makes sense to me, since the object knows best when its own position needs to be updated.

For the sake of simplicity you can probably get away with using a map. You should prefer a map over a vector because it allows quick searching for matches.

I use a multi-map because my keys are file names and checksums. Matching file names are considered immediate matching criteria, but matching checksums alone don't prove two textures are the same (differing checksums do prove they are not). To handle the 0.00001% case where two textures have the same checksum but are not the same texture, I use a multi-map to allow multiple textures with the same checksum when the final memory compare reveals that they are indeed not matches.
Sounds like a hassle, but only the worst case is slow. Most textures will have file names associated with them, so I can usually prove a texture to be unique or not immediately.
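In code, the lookup might look roughly like this (simplified; the checksum type and texture contents are stand-ins):

```cpp
#include <cstring>
#include <map>
#include <vector>

struct Texture { std::vector<unsigned char> pixels; }; // stand-in

// Checksum -> texture multimap: equal checksums are only candidates for a
// match; a final memory compare settles the rare collision.
using TexturePool = std::multimap<unsigned long, Texture*>;

Texture* findExisting(const TexturePool& pool, unsigned long checksum,
                      const std::vector<unsigned char>& pixels) {
    auto range = pool.equal_range(checksum);
    for (auto it = range.first; it != range.second; ++it) {
        const auto& existing = it->second->pixels;
        if (existing.size() == pixels.size() &&
            std::memcmp(existing.data(), pixels.data(), pixels.size()) == 0)
            return it->second; // true duplicate: share the existing texture
    }
    return nullptr; // no entry, or same checksum but different data: load new
}
```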


Models keep shared pointers to their textures. My custom shared pointers do not delete the textures as soon as they reach a 0 reference count, because they are still known to the model texture manager. I allow textures to sit at a 0 reference count during run-time so that deleting them does not cause jitter mid-game. 0-reference textures can be released at any time by the user, and the engine does it automatically between game states (where the main menu, options menu, bonus round, game screen, etc., are all different states).
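A stripped-down sketch of that manager idea (the real one obviously tracks API texture handles, not empty records, and the path below is hypothetical):

```cpp
#include <cstddef>
#include <iterator>
#include <map>
#include <string>

// Stripped-down record: the real thing would hold the API texture handle.
struct TextureRecord { int refCount = 0; };

class ModelTextureManager {
public:
    TextureRecord* acquire(const std::string& path) {
        TextureRecord& rec = pool_[path]; // "load" on first use (stubbed out)
        ++rec.refCount;
        return &rec;
    }
    // Releasing does NOT delete: the manager still knows the texture exists.
    void release(TextureRecord* rec) { --rec->refCount; }
    // Called between game states (or by the user): drop only 0-ref textures.
    void purgeUnreferenced() {
        for (auto it = pool_.begin(); it != pool_.end(); )
            it = (it->second.refCount == 0) ? pool_.erase(it) : std::next(it);
    }
    std::size_t size() const { return pool_.size(); }
private:
    std::map<std::string, TextureRecord> pool_;
};
```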


What causes them to be added to the model texture manager/pool? Simply being loaded by a model, whether as an embedded resource or from disk. Any model that loads any texture will have that texture sent through the texture manager for models.
And again, sprites have another texture manager.


L. Spiro

Instances share the master models, and the master models share textures between each other.

Like a texture atlas of some sort?


What do you mean by "sync"? Do you mean things like filling cbuffers with the latest and greatest modelspace matrix? So you are saying each game object is responsible for dumping its own position into a State-Group-Thing's cbuffer? That makes sense to me, since the object knows best when its own position needs to be updated.

It seems you are correct, but let me elaborate for extra safety. Consider physics APIs, for example: they don't give you the chance to pull a simulated object's world transform out of the simulation (at least Bullet won't; you can pull out last tick's transform at best), but they provide callbacks so that they push the new transform to you.
The main problem in terms of design is figuring out a way to group the various updates in a temporally coherent, meaningful manner. Hopefully you won't need to deal with this for a while...
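A sketch of that callback direction (stand-in types, not the real Bullet API; the point is only the direction of the data flow):

```cpp
// Stand-in types, not the real Bullet API: the physics engine calls
// setWorldTransform on your object; you never reach into the simulation
// to pull the transform out yourself.
struct Transform { float x = 0, y = 0, z = 0; };

struct MotionState {
    virtual ~MotionState() = default;
    virtual void setWorldTransform(const Transform& t) = 0; // physics -> you
};

struct RenderableMotionState : MotionState {
    Transform renderTransform; // what the renderer reads next frame
    void setWorldTransform(const Transform& t) override {
        renderTransform = t;   // object syncs itself; the renderer stays generic
    }
};
```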
