
Rendering Architecture


I think this is more a software engineering type problem. I don't know how the overall structure of my rendering bits should function. I don't know that I even have a specific question to ask, but sometimes I find typing this nonsense out here will help me wrap my own head around it... so...

What is a good way to organize your drawing code? There are a few key bits I think of when I say "drawing code":
1) There's the actual "draw" call.
2) There is setup before the draw call. Select the proper texture. Select the right shaders. Etc
3) There is *what* to draw. Model. Vertex Data. Index Buffers, etc.

One option is for game (Or UI, or particle, or whatever) objects to keep their own Render function. They could also know of their own model data, texture, etc. This does not lend itself to sorting and batching of draw calls. Additionally, it puts too much responsibility on the game objects. Should they *care* what their model is? Should they care about index buffers, etc. Should that not be delegated elsewhere?

So, I could do a centralized rendering function/object/whatever. The problem here is that it now needs to know about all these object types in order to draw them. It needs their position, what texture to use, what render states to set, etc. Modularity is doomed here, I think.

Right now I have a combination of the two, which is a disaster waiting to happen. I am getting myself confused over where to keep my model data, when to load shaders, etc. How do I manage all of this? I'd like to fix it before I get further along.

So in summary, my question amounts to this:
Who is responsible for (possibly shared) items like model vertex data? My game consists almost entirely of square blocks. So many items share the same vertex buffer / index buffer, but use their own unique textures.
How do I create a rendering function/object/whatever that can draw without having to know the deepest intricate secrets of the objects it must draw?
Alternately, how can I do the above without the objects having to know the details of the rendering function?

I am using DirectX9 / C++, but I think this question is pretty generic.

I'm going to have to read that thread a few more times. Most of it is over my head right now.

The gist seems to be: create an object which encapsulates the data needed to render things. And encode some bits that explain how the rendering should be done. I think. So now the renderer only needs to understand the details of the encapsulating object, and not the many other objects that will pass data into it.

I'm still not sure on how to handle having many textures, models, etc. Up to now, I've only really done simple demos or followed tutorials which typically just explicitly load one or two textures.
Is there a point where just outright loading every texture into a ginormous array of texture pointers becomes unworkable? How do I know when I hit that point, and what is the next stage of development?

I share textures depending on the sub-system. Models share textures between themselves, sprites have their own shared textures (even if they happen to be duplicates of those in the model pool), etc.
The reason is because it is too sweeping to have a system that shares all textures loaded anywhere. Textures loaded by the sprites might be subsequently modified by those sprites, which would cause changes on models.

Additionally, I have a copy-on-write system. If I load a model, look at its diffuse color and see that it has an RGB texture for its diffuse color, then I look at its transparency color and see that it has an image with alpha, I could keep them as two images and take the RGB from texture 0 and the A from texture 1, but that wastes a texture slot and requires more bandwidth. So I create an RGBA texture by combining the two textures into one, which causes a copy to be made. It also goes into the pool and can be shared, so that the same copy is not made many times.


You can identify textures by file name and by checksum. For files that were loaded from disk, the full path is enough to determine copies. Files that are embedded into model files will not have file names, so you use checksums for those. Textures with filenames also have checksums, so if the same texture is loaded once from disk and then again as embedded media, it will still be stored in memory only once.

Creating a stable-yet-sturdy resource manager can be less than trivial. Think hard about your implementation.


L. Spiro

One option is for game (Or UI, or particle, or whatever) objects to keep their own Render function. ... So, I could do a centralized rendering function/object/whatever.

Yes, this option is generally pursued by "simple" applications. In my experience (this is opinion), very few objects have the right to actually own their own Render functions. Most of the time the work is much better handled by a subsystem aggregating the various components. It's not necessarily "everyone by itself" or "fully centralized"; there's a whole range of possibilities in between. I'm currently fine with a semi-centralized approach (there are currently four "centers").

There's no need to know about the specific object type. It's not the rendering routines that sync; it's the object's responsibility. See this thread for a specific example.

Syncing the other way around is possible, but doing it without going crazy will require multiple-inheritance trees, mixins, templates, or component-based design, none of which are trivial. I'm currently going with the latter, and I can tell you it's very clean, but the engineering effort involved is pretty high.

Who is responsible for (possibly shared) items like model vertex data? ... So many items share the same vertex buffer / index buffer, but use their own unique textures.

Sharing must happen through an external manager. A concept which might help you is the difference between the "shader" and the "shader + parameters". Sharing involves separating out the parameters so you can tell whether two objects are "compatible" or not. I'm not sure I am being clear enough here.

There was a thread some months ago about managing "shader parameters". Unfortunately, it seems I cannot find it anymore, so I'll try to sum it up. The point is that unless you're dealing with very specific shaders, the format of the parameters is implied by the shader itself. If a shader uses a uniform you'll have to provide it; if it needs a texture, you'll need to bind it. Therefore, it is possible to treat every shader as a (shader, opaqueBlob) pair. This opaque blob is unpacked by some subcomponent of the material system, which produces the resulting D3D9 setup. It makes sharing much easier.
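A minimal sketch of that "(shader, opaqueBlob)" pairing, with every name hypothetical (no real engine or API is assumed): the renderable carries only a shader id and an opaque byte blob of parameter values, and two renderables are batchable when both match.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Sketch of the "(shader, opaqueBlob)" idea: the renderable carries a
// shader id plus an opaque byte blob of packed parameter values; only
// the material system knows how to unpack the blob for that shader.
struct ShaderBinding {
    uint32_t shaderId;               // which shader program to bind
    std::vector<uint8_t> paramBlob;  // packed uniform/texture values
};

// Two renderables are "compatible" (batchable) when they use the same
// shader with identical parameter values.
bool Compatible(const ShaderBinding& a, const ShaderBinding& b) {
    return a.shaderId == b.shaderId && a.paramBlob == b.paramBlob;
}
```

Because the blob is opaque, the sorting/batching layer never needs to understand any particular shader's parameter layout; it only compares bytes.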
Models share textures between themselves
You mean instances of the same model? Or different models? I'd consider that quite smart...


Additionally, I have a copy-on-write system. If I load a model, look at its diffuse color and see that it has an RGB texture for its diffuse color, then I look at its transparency color and see that it has an image with alpha, I could keep them as two images and take the RGB from texture 0 and the A from texture 1, but that wastes a texture slot and requires more bandwidth. So I create an RGBA texture by combining the two textures into one, which causes a copy to be made. It also goes into the pool and can be shared, so that the same copy is not made many times.


Sounds like the sort of thing you'd want to do in your content pipeline, not at runtime...

Models share textures between themselves

You mean instances of the same model? Or different models? I'd consider that quite smart...
Instances share the master models, and the master models share textures between each other.
The master models also have the copy-on-write feature so that if the engine user wants to change one instance substantially without changing the others, the master model will be duplicated and allow for full modification without changing any instance but one.
Naturally, users of my engine should try to find a way around this when they can, but I am creating a fully featured next-generation commercial game engine, so I still need to support even the rare cases. Think about that new game where the player can slash across the screen and cut the enemies in half along any axis.


Sounds like the sort of thing you'd want to do in your content pipeline, not at runtime...

It is and it isn’t.
My pipeline is not done yet, but I plan to have an option to move it offline.
But the engine still needs to have run-time support for it because engine users are free to swap diffuse (and any other) textures out at run-time. If they swap the diffuse or alpha, but not the other, I still need to have run-time recombining.
If this causes jitter I also plan to provide the option to keep the textures separate and have them uploaded in two texture slots.


L. Spiro

Woosh Kersplat: Is the sound of much of this flying over my head and sticking to a wall somewhere. I do appreciate the help. I'm going to sit down and draw some diagrams to try and suss this out better.


I share textures depending on the sub-system. Models share textures between themselves, sprites have their own shared textures (even if they happen to be duplicates of those in the model pool), etc.


So you'd have some kind of structure like "Model Textures Pool" that is an array/vector of pointers to textures? Individual Models keep a pointer to the texture in that pool that they actually use?
What triggers the loading of textures into this pool? I'm not doing a generic engine, but rather a single game, so I (will) have a pretty good handle on what objects are used in it.

Additionally, you have all mentioned concepts like Materials, State-Groups, and grouping of shaders and "shader parameters". Are these all different names for roughly the same ideas? So a model might contain a Material structure, which says things like "use this shader, with this texture(s), and setup rendering states as such..."



No need to know about the specific object type. It's not the rendering routines that sync; it's the object's responsibility. See this thread for a specific example.

What do you mean by "sync"? Do you mean things like filling cbuffers with the latest and greatest modelspace matrix? So you are saying each game object is responsible for dumping its own position into a State-Group-Thing's cbuffer? That makes sense to me, since the object knows best when its own position needs to be updated.

For the sake of simplicity you can probably get away with using a map. You should prefer a map over a vector because it allows quick searching for matches.

I use a multi-map because my keys are file names and checksums. Matching file names are considered immediate matching criteria, but checksums only prove textures not to be matches. To handle that 0.00001% case where two textures may have the same checksum but not be the same texture, I use a multi-map to allow multiple textures with the same checksum, if the final memory compare reveals that they are indeed not matches.
Sounds like a hassle, but only the worst case is slow. Most textures will have file names associated with them, so I can usually prove a texture to be unique or not immediately.
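As a rough illustration of that scheme (types and the toy checksum are hypothetical; the real implementation differs): named textures live in a map keyed by path, nameless ones in a multimap keyed by checksum, with a final byte compare to resolve the rare collision.

```cpp
#include <cassert>
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Illustrative texture pool: named textures are matched by full path,
// nameless (embedded) ones by checksum, with a final byte compare to
// guard against checksum collisions.
struct TexImage { std::vector<uint8_t> pixels; };

struct TexturePool {
    std::map<std::string, TexImage> byName;        // full path -> texture
    std::multimap<uint32_t, TexImage> byChecksum;  // checksum -> candidates

    static uint32_t Checksum(const std::vector<uint8_t>& p) {
        uint32_t sum = 0;
        for (uint8_t b : p) sum = sum * 31 + b;    // toy checksum
        return sum;
    }

    // Returns true if an identical image is already pooled.
    bool Contains(const std::vector<uint8_t>& pixels) const {
        auto range = byChecksum.equal_range(Checksum(pixels));
        for (auto it = range.first; it != range.second; ++it)
            if (it->second.pixels == pixels)       // resolve collisions
                return true;
        return false;
    }

    void AddUnnamed(const std::vector<uint8_t>& pixels) {
        if (!Contains(pixels))
            byChecksum.emplace(Checksum(pixels), TexImage{pixels});
    }
};
```

The multimap is what allows two genuinely different textures that happen to share a checksum to coexist; the byte compare is only reached in that worst case.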


Models keep shared pointers to their textures. My custom shared pointers do not delete a texture as soon as it reaches a 0 reference count, because it is still known to exist by the model texture manager. I allow textures to reach a 0 reference count during run-time so that deleting them does not cause jitter. 0-reference textures can be released at any time by the user, and the engine does it automatically between game states (where the main menu, options menu, bonus round, game screen, etc., are all different states).
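The deferred-deletion idea can be sketched like this (entirely illustrative names; not the engine's actual types): the count may legally hit zero without freeing anything, and the manager purges unreferenced entries only at a safe point such as a state transition.

```cpp
#include <cassert>

// Sketch of a handle whose reference count may reach zero without
// freeing the resource: the owning manager keeps it alive until an
// explicit purge, e.g. between game states.
struct TextureEntry {
    int refCount = 0;
    bool alive = true;
};

void AddRef(TextureEntry& t) { ++t.refCount; }

void Release(TextureEntry& t) {
    --t.refCount;          // note: does NOT destroy at zero
}

// Called by the manager at a safe point (state transition, explicit
// user request) so deletion never causes a mid-frame hitch.
void PurgeIfUnreferenced(TextureEntry& t) {
    if (t.refCount == 0) t.alive = false;
}
```

The point of the split is timing: releasing a reference is cheap and can happen any frame, while the actual destruction is batched to a moment where a hitch is invisible.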


What causes them to be added to the model texture manager/pool? Simply being loaded by a model, whether as an embedded resource or from disk. Any model that loads any texture will have that texture sent through the texture manager for models.
And again, sprites have another texture manager.


L. Spiro

Instances share the master models, and the master models share textures between each other.

Like a texture atlas of some sort?

What do you mean by "sync"? Do you mean things like filling cbuffers with the latest and greatest modelspace matrix?

It seems you are correct, but let me elaborate for extra safety. Consider physics APIs, for example: they don't give you the chance to pull a simulated object's world transform out (at least Bullet won't; you can pull out last tick's transform at best), but they provide callbacks through which they upload the new transform for you.
The main problem in terms of design is figuring out how to group the various updates in a temporally coherent, meaningful manner. Hopefully you won't need to deal with this for a while...


Woosh Kersplat: Is the sound of much of this flying over my head and sticking to a wall somewhere.


Well, the good thing about all this is you don't always have to understand it all the first time.

1. Add 'something' that works to your project, and code it as best you understand it.
2. Make it easier to understand and maintain.
3. Rinse and repeat.

At the end of all that, you have a 'rendering architecture' that suits your project. Stuff that's over your head now will inform steps #1 and #2 all along the way.

I have not added anything to this topic yet, as what I would say has already been said, but more eloquently :P .

I just have one query regarding YogurtEmperor's post. I would like to say that I like how your system sounds; you seem to do a lot of things the same way I do, and you have done many things the way I intend to do them, you know, when I get around to it.

But regarding..

I use a multi-map because my keys are file names and checksums. ........stuff.............. Most textures will have file names associated with them, so I can usually prove a texture to be unique or not immediately.


Are you saying you check through the pixels of a texture to determine a match? I'm assuming that the textures without file names are generated or composites. If that is the case, then would you not just give these textures a fake filename based on their construction?
Maybe I'm misreading, but this is the only place where I have disagreed with the specifics of your implementation.
Oh, and yes, I realize that this is a rare case, but it still seems like bad practice to moi.


Instances share the master models, and the master models share textures between each other.

Like a texture atlas of some sort?
Texture atlasing is a different area of optimization apart from the resource manager.
I haven’t had time to add this yet, but the plan is that atlases could be constructed on a per-master-model basis, so the textures within a single model could become atlased. This would always be done offline. That once again creates a nameless texture, which will be referenced by checksum and shared. Another model with the same atlas embedded into it will use the existing resource instead of uploading a duplicate.



Are you saying you check through the pixels of a texture to determine a match? ... If that is the case then would you not just give these textures a fake filename, based on their construction?
Checksums don’t actually verify a match, so I need some way to be sure. Memory compares are a bit slow, but in most scenarios it will never get to that.
However I have been considering an alternative that is still not 100%, but theoretically close enough: Double checksums.
I plan to make a second checksum routine. It is unlikely that any texture that passes both checksums is not a match. If I integrate this, I will keep the final memory compare as an #ifdef option, so that engine users can activate it if they ever encounter a problem with the double checksums.
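The double-checksum match could look something like this (a sketch under stated assumptions: the two hash routines and the build flag `PARANOID_TEXTURE_COMPARE` are illustrative inventions, not the engine's real ones). Two independent checksums must both agree before two images are treated as duplicates, with the exact byte compare available behind a compile-time option.

```cpp
#include <cstdint>
#include <vector>

// Two independent toy hash routines; a collision would have to fool both.
uint32_t SumA(const std::vector<uint8_t>& p) {
    uint32_t h = 2166136261u;                 // FNV-1a style
    for (uint8_t b : p) { h ^= b; h *= 16777619u; }
    return h;
}

uint32_t SumB(const std::vector<uint8_t>& p) {
    uint32_t h = 5381;                        // djb2 style
    for (uint8_t b : p) h = h * 33 + b;
    return h;
}

bool ProbablySame(const std::vector<uint8_t>& a,
                  const std::vector<uint8_t>& b) {
    if (SumA(a) != SumA(b) || SumB(a) != SumB(b)) return false;
#ifdef PARANOID_TEXTURE_COMPARE               // hypothetical build option
    return a == b;                            // exact byte compare
#else
    return true;                              // trust the two checksums
#endif
}
```

Either checksum disagreeing proves the images are different immediately; the expensive byte compare only runs when both agree and paranoia is compiled in.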


L. Spiro

I don't understand this memory compare thing -- if you're able to perform a memory compare, then you've already got two copies of the texture in memory, right? How does that occur in the first place?

Yes, but only once per file (though I agree it is annoying anyway).

#1: Embedded media already have a checksum (soon to be a double checksum) so these do not need to be unpacked unless they are unique.
#2: But files on disk currently need to be loaded if their file names are not duplicates. They have to be checked against unnamed media, so they are loaded, checksums made, and checked against unnamed images. If a match is found, the unnamed media is then given the name of the file so that it cannot cause a useless load more than once, and the file from the disk is thrown away. I believe it was Socrates who said, “It is better to have loaded and lost than to have overloaded.”

When I finish my toolchain I will fix #2 by making each game a project, with the game built entirely (or nearly) through the project tool. The user will add textures to the game through the project tool, which gives me a chance to create a dictionary associating file names with double checksums. This way I can perform the double-checksum test without loading the file.


Luckily it is rare that a file is a duplicate of embedded media, so the problems with #2 actually don’t occur too often. That is, when a file is loaded from disk, it is very rarely immediately tossed out.
In addition, post-load duplicate-discovery can only happen once per file, so although this method can be streamlined by a complete pipeline, it still works well in practice.


L. Spiro

I was meaning that for any resources that do not have real file names, you just give them a fake one. For me, textures without file names are either generated random noise, normal maps generated from height maps, or several source images packed into one. I make up file names for these anyway, as I cache the result (later this will be improved to caching only appropriate files, but currently everything is cached).
I'm assuming your resources without file names are of similar makeup, so if you make up file names for these you can have an exclusive test without a memory comparison.

On a side note, I would be very interested to see what two different textures could actually satisfy two checksums.

That is what I did in my first game engine.
The problem is, how do you uniquely identify the new texture?
Here are the thoughts that lead me to my current implementation:

Let’s say you combine a diffuse texture and an alpha texture on a given model. The media was originally embedded, so there is no file name for either texture.
Even embedded media have names, so you could combine the names of the textures to create the name of the new one.
suit_diffuse + marble_alpha = suit_diffuse|marble_alpha

That makes a unique name within the scope of the textures referenced by that model.
What about another model? It could make the same name using different textures.


So to avoid having another model overwrite your texture, you could add the name of the model to the texture.
So ship0|suit_diffuse|marble_alpha will not be overwritten by ship1|suit_diffuse|marble_alpha.


But then what if ship0|suit_diffuse|marble_alpha and ship1|suit_diffuse|marble_alpha are actually the same? They should be shared, but instead they will be duplicated.

What about a texture that was embedded (no file name), and then that same texture is loaded from disk? Unless my generated name for the unnamed embedded file happened to match the disk file name, I would have a duplicate on my hands.


I realized I needed a naming system that is not based off how the texture came into existence.
What is the only way to really know if two textures are the same?
Memory compares!
*cough*
So what is the next best way?
Checksums!


If all of your files are coming from disk, you can rely on combining file names to create unique names. And noise textures are unlikely to be duplicated so you could just generate random characters for their names.
But if you are using embedded media or for any other reason will not always have a texture file name handy, you need to be careful about how you uniquely identify textures.


L. Spiro

I might... might quite possibly have overlooked textures of the embedded variety; thanks for clearing that up.
I'm going back to my corner now.

P.S. I was rushed in my earlier posting and did not have time to give due kudos: "I believe it was Socrates who said, “It is better to have loaded and lost than to have overloaded.”" was quite good.
P.P.S. @CombatWombat - If you're still here, sorry for derailing the thread; I kind of assumed you resigned from it.


P.P.S. @CombatWombat - If you're still here, sorry for derailing the thread; I kind of assumed you resigned from it.


Yeah... I'm still here. Going to give re-organizing my drawing code a go this weekend. Still coming to grips with where to start...
Right now I just explicitly load individual textures from disc. The path is hard coded right into the source file. Naive, but I don't think it *needs* to be more complicated (yet?)

I'm going to try creating a bit-field which encodes how to draw different things so that I can just pass "Render jobs" to the renderer, then let it sort out the details of draw order and which shaders to choose.

Thinking out loud again...

I've written up a list of things that my game will need to draw. I have...
Blocks. The main game element. Vertex data is hard-coded in the source. Can be opaque or alpha blended. Right now I use hardware instancing and I'd like to keep doing so.
Sky-box. Self explanatory. Just an inside-out cube. I set Z depth to max in the shader.
Player model. Not done yet. Probably will be a model loaded from disc. Will need to support crude animations.
Projectile models. Not done yet. Probably loaded from disc.
Particle effects. Not started yet. Probably will be looking to perform the integration on GPU.
GUI elements. Just textured quads with Z depth set to min. Also Alpha blending.

So at its simplest, my renderer will need to know:
1) What to draw - vertex data. index data. texture selection.
2) Where to draw it - transform/matrix data. special z-depth conditions.
3) How. Lighting? Alpha?

So I could make a render-packet that contains flags to say HOW to draw, and pointers to necessary geometry data to say WHAT to draw. Then populate a list every frame. Then the renderer sorts the packets and selects shaders and groups draw calls based on the input data. True? Or completely off base? Is repopulating and sorting this list absurd in the overhead department?

I don’t think the topic has been derailed. Even if the original poster does not need such a complicated system, there is a lot of useful information here from many people.


CombatWombat, read this.
You should definitely submit information for a render queue to sort, but keep it minimal to avoid transfer overhead. If you use the sorting method I proposed there, sorting will be the same speed regardless of how large the render queue data is, but you still have the initial upload of render data, and in that area bigger = slower.

So you don’t want to send whole matrices to the render queue. Some people offload the entire render to the render queue. It helps them to avoid a virtual function call which is costly on PlayStation 3, but they have to send more data which is slower.
In my implementation, I felt that sending matrices would be slower than invoking the wrath of a virtual function, so the render queue does not receive a matrix and instead delegates the final render task to the object that submitted the render queue packet.
Since the submitter of the packet will be responsible for the final render anyway, I can reduce the amount of data I send to the render queue even further by not sending pointers to index buffers and vertex buffers.


So what do I submit? Well, logically I submit only what is important for the actual sorting process. That is the whole point of the render queue after all.
As mentioned in my article, texture uploads, shader swaps, and render states need to be kept to a minimum, so this is the type of information I send to the render queue. I send a shader ID, a set of texture ID’s, and some render states.

After the render queue sorts these packets, it runs over objects in sorted order and calls a virtual function on the objects that sent the packets to perform the final render. This means of course that I also submit a pointer to the object that sent the packet.
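The flow described above can be sketched roughly as follows (a minimal sketch, not the actual engine code; the packing of the sort key and all names are assumptions for illustration): the queue stores only what the sort needs plus a pointer back to the submitter, whose virtual Render() issues the real draw calls in sorted order.

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// The object that submitted a packet performs the final render itself,
// so the queue never needs to know about vertex or index buffers.
struct Renderable {
    virtual ~Renderable() {}
    virtual void Render() = 0;   // issues the real draw for this object
};

// Only what the sort needs: a key plus a pointer back to the submitter.
struct QueuePacket {
    uint64_t sortKey;            // e.g. shader id high bits, texture id low
    Renderable* submitter;       // who will perform the final render
};

void FlushQueue(std::vector<QueuePacket>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const QueuePacket& a, const QueuePacket& b) {
                  return a.sortKey < b.sortKey;
              });
    for (const QueuePacket& p : queue)
        p.submitter->Render();   // sorted order minimizes state changes
    queue.clear();
}

// Minimal concrete renderable used only for demonstration.
struct CountingRenderable : Renderable {
    int renderedCount = 0;
    void Render() override { ++renderedCount; }
};
```

The trade-off discussed above lives in QueuePacket's size: every extra field (a matrix, buffer pointers) makes submission slower, while the virtual call per packet is the price of keeping it small.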


L. Spiro

@Wombat
I am currently redoing my own rendering kernel, so I won't give specific advice on this till I see how well it works.

However, in general I would say a good approach is to start simple and progressively refactor until you're happy with it.
You seem past this point, but to serve as an example: say you start with a simple hard-coded spinning cube, then you go to having multiple cubes, then you make each cube shaded differently, or instanced, skinned, etc.
When you go to make a very different component (e.g. adding particles or an animated mesh), remove or hide as much of the previous stuff, like the cube rendering code, as you can, then work out how your particles are going to work, most specifically what interaction they require from the rest of your project.
After you have these two components working nicely separately, make them work together; likely the components will have similar areas which you should turn into common functions.
And you continue this until you're happy, or if you're an eternal pessimist like myself, until you get fed up and move on to something else.

This is obviously a lot of work, and it would probably be easier just to copy an existing good-quality design, but I think the above is a better learning experience: you will not only build a good system but also understand why each section is the way it is. I would guess that most of the good designs of, well, everything, are offspring of much poorer designs.

You should also continue sharing your plans and ideas in the forum so people can point out the "obvious when known" flaws.
Also, if you have spare time, I would recommend dismantling various open source scene managers and rendering engines and seeing how they work, to give yourself ideas.

"Sky-box. Self explanatory. Just an inside-out cube. I set Z depth to max in the shader.
....
GUI elements. Just textured quads with Z depth set to min. Also Alpha blending."

Instead of setting Z depth to max or min, just set the Z test to always fail or always pass, respectively.

Oh, and aren't "Projectile models" particles? I assume you mean like bullets and stuff.

Thank you both, Jim & Emperor of Yogurt.


When you go to make a very different component (e.g. adding particles or an animated mesh), remove or hide as much of the previous stuff, like the cube rendering code, as you can, then work out how your particles are going to work, most specifically what interaction they require from the rest of your project.


This is kind of what I am already doing. I started with a box. Then jumped onto hardware instancing to get *many boxes*. Then it was a struggle when I tried to add my skybox. Now another struggle now that I want to have many different textured boxes and GUI elements, etc. I want to create a foundation that is at least based upon proven technique, even if my own implementation won't be the most efficient in the world.


Also, if you have spare time, I would recommend dismantling various open source scene managers and rendering engines and seeing how they work, to give yourself ideas.
I do this as a hobby, and unfortunately I'm not a good enough coder to decipher other people's code, particularly if it's a design I'm not at all familiar with. I get lost in the forest because all the trees are in the way. For instance, the entire idea of a virtual function was new to me until I just googled it... or rather I had been using it, but didn't know what it was called!


Oh and aren't "Projectile models" particles? I assume you mean like bullets and stuff.
Eh, more referring to "larger" projectiles like maybe rockets, missiles, etc. So they would probably have particle effects stuck to their tails.

You're welcome.


Eh, more referring to "larger" projectiles like maybe rockets, missiles, etc. So they would probably have particle effects stuck to their tails.
Yeah, that seems somewhat obvious now.

I'm still not quite there, but getting closer.

Here is a shot of my "game" as it stands, to get some idea of what I'm trying to accomplish.
http://www.cargoclas...at/Untitled.png
The spaceship is made up of blocks.
All blocks share the same geometry.
Blocks can have unique textures, material properties, etc based upon type.

http://www.cargoclas...nderDiagram.pdf
I'm trying to map out where the different structures live. I have a block manager whose job it is to create and destroy blocks, as well as maintain a spatial partition listing all the blocks contained in the game world. So I can pass this master block list to the frustum culler along with the camera's frustum.
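For what it's worth, the culling step described above might boil down to something like this per block. This is only a sketch under my own assumptions: a bounding sphere per block and six inward-facing frustum planes; the names are hypothetical, not from any particular engine.

```cpp
#include <cassert>

// An inward-facing frustum plane: a point P is inside when
// dot(normal, P) + d >= 0.
struct Plane { float nx, ny, nz, d; };

// A block's bounding sphere survives the cull only if it is not
// entirely behind any of the six planes.
bool SphereInFrustum(const Plane planes[6],
                     float cx, float cy, float cz, float r) {
    for (int i = 0; i < 6; ++i) {
        const Plane& p = planes[i];
        // Signed distance from the sphere centre to the plane.
        float dist = p.nx * cx + p.ny * cy + p.nz * cz + p.d;
        if (dist < -r)
            return false;  // entirely outside this plane: culled
    }
    return true;
}
```

The block manager would run each block's bounding sphere through this and hand only the survivors to the render bucket.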

Blocks which survive the cull can be passed into a render bucket.
Question 1: The render bucket should accept anything drawable, correct?
Void pointers, or should all drawable objects inherit from some "drawable" class?

Following L.Spiro's article...
Shaders are assigned some unique identification number/tag. The object-to-be-drawn knows the ID of the shaders it needs to use. That is part of the sort key.
The object-to-be-drawn knows its own textures. A texture ID is part of the sort key.
A pointer to the object-to-be-drawn is kept in the sort key.
All my blocks use the same mesh data. Should that be part of the sort key to allow for hw instancing?
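To make the sort-key idea concrete, here is one way those fields might be packed into a single sortable integer. The bit layout (and the inclusion of a mesh ID field, which would let identically meshed blocks land adjacent after sorting) is purely my own assumption for illustration.

```cpp
#include <cassert>
#include <cstdint>

// Hypothetical 64-bit sort key. Higher-order fields dominate the sort:
//   [63..48] shader ID   (16 bits)
//   [47..32] texture ID  (16 bits)
//   [31..16] mesh ID     (16 bits) -- same-mesh blocks pack together
//   [15..0 ] unused / depth bucket
inline uint64_t MakeSortKey(uint16_t shaderId,
                            uint16_t textureId,
                            uint16_t meshId) {
    return (uint64_t(shaderId)  << 48) |
           (uint64_t(textureId) << 32) |
           (uint64_t(meshId)    << 16);
}
```

Sorting on this key groups draws first by shader, then by texture, then by mesh, which is exactly the state-change ordering the article is after.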

Once sorted, the render function calls the object's (or object manager's) render functions. Is it morally wrong for the object manager (that already has the task of create/destroy/list) to now draw as well? I don't know where else to put that, since the object manager is the only thing that knows about ALL the blocks, and therefore has access to the necessary instance data.

Does it make sense to group shaders with a vertex declaration?


[quote]Instead of setting Z depth to max or min, just set it to always fail or pass, respectively.[/quote]
So in the case of a skybox, I would always set the Z-test to fail? And therefore anywhere there is already any value in the z-buffer, nothing will be drawn?

Apologies for being thick, but again, writing this stuff out really does help me to understand it. Hell. Maybe it'll help someone else along the way.


[quote]Blocks which survive the cull can be passed into a render bucket.
Question 1: The render bucket should accept anything drawable, correct?
Void pointers, or should all drawable objects inherit from some "drawable" class?[/quote]

In my system, renderable objects all inherit from a base class that the render manager understands.
The render manager also defines the types of things it needs to make its sort.

Renderable objects and the renderable manager meet when the scene manager is told to render the scene. It gathers objects within the frustum and sends those objects a request for the data needed by the render manager (specifically, data for the render queue).
Once it has done this it tells the render queue (by the way, send opaque and transparent objects to different render queues) that no more objects will be added, and the render queue(s) sort their contents.
Don’t make the mistake of making the sort and the render of that sorted data part of the same function. In a multi-pass system you will want to sort once, then render, render, render.
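The add/sort/render split described above might be sketched like this. The class and method names are my own invention, not from any particular engine; the point is only that sorting and rendering are separate steps, so multi-pass rendering can reuse one sort.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical interface the render manager understands.
struct IRenderable {
    virtual ~IRenderable() = default;
    virtual uint64_t SortKey() const = 0;  // shader/texture IDs packed by the object
    virtual void Render() = 0;             // object binds its own shader/textures
};

class RenderQueue {
public:
    void Add(IRenderable* r) { m_items.push_back(r); }

    // Sort once...
    void Finalize() {
        std::sort(m_items.begin(), m_items.end(),
                  [](IRenderable* a, IRenderable* b) {
                      return a->SortKey() < b->SortKey();
                  });
    }

    // ...then render as many passes as needed without re-sorting.
    void Render() {
        for (IRenderable* r : m_items) r->Render();
    }

    void Clear() { m_items.clear(); }

private:
    std::vector<IRenderable*> m_items;
};
```

In practice you would keep one of these for opaque objects and one for transparent, as the post says, and call `Render()` once per pass.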

So when the scene manager is content that it has sorted renderables, it calls, at its leisure, the render queue's render method, which runs over the objects in sorted order and tells each object to render itself.
In a fixed-function pipeline, the render queue itself should also activate lighting or fog, etc. These kinds of render states will be needed for the sort, so you will be sending this information.
For shader-based systems you are really only passing the shader ID, some texture ID’s, etc.
Since you are not passing the actual shader pointer, the render queue can’t really set up much for you, so your renderable object will have to do it all when it is told (by the render queue) to make its final render.

So now the renderable object has been told it is time to actually draw. It activates its own shader and textures and renders.
Since each object will activate its own shader, it is imperative that you have a system in place to return immediately if the current shader ID matches the one being set. Don't try to re-apply the same shader.
Same goes for textures.
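That redundant-state filter can be as small as a cached "last applied" ID. A sketch, with the actual API bind left as a placeholder (in D3D9 it would wrap something like `SetVertexShader`):

```cpp
#include <cassert>
#include <cstdint>

// Caches the last-applied shader ID so re-binding the same shader is skipped.
class ShaderCache {
public:
    // Returns true only when a real state change was issued.
    bool Activate(uint32_t shaderId) {
        if (shaderId == m_current)
            return false;          // already bound: do nothing
        m_current = shaderId;
        // ... issue the actual API bind here ...
        return true;
    }

    // Forget the cached state, e.g. after a device reset.
    void Invalidate() { m_current = kNone; }

private:
    static constexpr uint32_t kNone = 0xFFFFFFFFu;
    uint32_t m_current = kNone;
};
```

An identical cache per texture stage covers the "same goes for textures" case.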





[quote]Following L.Spiro's article...
Shaders are assigned some unique identification number/tag. The object-to-be-drawn knows the ID of the shaders it needs to use. That is part of the sort key.
The object-to-be-drawn knows its own textures. A texture ID is part of the sort key.
A pointer to the object-to-be-drawn is kept in the sort key.
All my blocks use the same mesh data. Should that be part of the sort key to allow for hw instancing?[/quote]

So many ways to do this.
I suggest having a second way to submit things to the render queue. By default, objects use Add() to add themselves to the queue. For instances that want to be used with hardware instancing, you could add a function called AddForInstancing() or such. This puts them into a different list within the render queue.
When submitting this way, you must also submit a reference to a structure that contains the translation, scale, and position, etc., and indeed a mesh ID.
When rendering from this list, the render queue can figure out how many instances should be batched into one call on its own. When the list is sorted, objects with the same mesh ID will implicitly be tightly packed together, so it can just count how many objects from its current position in the array have the same mesh ID.
The render queue will create the index buffer needed for indexing on its own. I suggest keeping one per unique mesh ID so that frame coherency can be maintained as much as possible.
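The "count how many objects from the current position share a mesh ID" step is a simple run-length scan over the sorted list. A sketch over bare IDs (the surrounding queue machinery is assumed):

```cpp
#include <cassert>
#include <cstddef>
#include <cstdint>
#include <vector>

// Given a list already sorted by mesh ID, returns how many consecutive
// entries starting at 'pos' share the same mesh ID -- i.e. the size of
// the next hardware-instancing batch.
size_t CountInstanceBatch(const std::vector<uint32_t>& sortedMeshIds,
                          size_t pos) {
    size_t count = 0;
    while (pos + count < sortedMeshIds.size() &&
           sortedMeshIds[pos + count] == sortedMeshIds[pos]) {
        ++count;
    }
    return count;
}
```

The render queue would issue one instanced draw per batch and then advance its position by the returned count.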

The final render still needs to be delegated to the render objects, but of course not to all of the objects in the instanced batch. Since any of them will suffice, you may as well pick the first one.





[quote]Once sorted, the render function calls the object's (or object manager's) render functions. Is it morally wrong for the object manager (that already has the task of create/destroy/list) to now draw as well? I don't know where else to put that, since the object manager is the only thing that knows about ALL the blocks, and therefore has access to the necessary instance data.[/quote]

If I understand what you are asking correctly, look above to my first wall of text.




[quote]Does it make sense to group shaders with a vertex declaration?[/quote]

Why? Where?






[quote]Instead of setting Z depth to max or min, just set it to always fail or pass, respectively.[/quote]
[quote]So in the case of a skybox, I would always set the Z-test to fail? And therefore anywhere there is already any value in the z-buffer, nothing will be drawn?[/quote]
Although I am trying to keep my render data submissions small, I have in the past found it valuable to have a priority value also submitted to the render queue.
This way you can force objects to be drawn before other objects.
With this in place, you can render your skybox before everything else, and have it disable depth compare during its render.
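One way to fold that priority into the sort already described is to make it the most significant bits of the sort key, so the sort alone enforces draw order. The bit widths here are my own assumption, matching the hypothetical 64-bit key layout rather than any specific engine:

```cpp
#include <cassert>
#include <cstdint>

// Priority occupies the top 8 bits, so a lower priority value always
// sorts (and therefore draws) earlier, regardless of the shader and
// texture IDs packed into the bits below it.
inline uint64_t WithPriority(uint64_t baseKey, uint8_t priority) {
    return (uint64_t(priority) << 56) | (baseKey & 0x00FFFFFFFFFFFFFFull);
}
```

With this, the skybox submits itself at priority 0, disables depth compare inside its own render call, and everything else follows at priority 1 or higher.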


L. Spiro
