Code design question

17 comments, last by L. Spiro 12 years, 2 months ago

Do you still consider my design wrong?

I do. You were more careful about not using the terms “mesh” and “model” interchangeably, and while your mesh class has no collision data etc., it is still not really a useful class to have at all.

Mainly your post has made it apparent that you really don’t know the full scope of 3D programming, which is fine. It is a huge subject and it takes many years to learn, and the only way to get there is to try, try, and try again, as long as each time you fail you see why you could have done better.

The first thing that gives me that feeling is that you have a mesh class at all. I will explain why shortly.

The second is that you suggested using a mesh for terrain.
Terrain is not a mesh. A mesh could be suitable for a small area of terrain, but in any area of usefulness terrain is a very specific method of combining index buffers, vertex buffers, shaders, and textures such that detail decreases in the distance. It is a type of renderable object that uses constant modification/swapping of parts/LOD changes in order to maintain a reasonable level of performance. It often requires streaming data and updating in real-time.

A mesh can’t do that (practically).

This is one example of the special ways things draw themselves, and there are many more, which is why a mesh class is useless.
I couldn’t even use one for my own model class. I keep my vertex buffer broken into multiple streams to avoid sending useless data during shadow-map generation. They all share the same index buffer. Sometimes some vertex buffers are enabled and sometimes others are.

A mesh class is restrictive. You will never be able to handle all the cases for ways in which things want to draw themselves, so it is hopeless to even try.
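The multi-stream setup described above can be sketched roughly like this (a toy illustration; all names are hypothetical, not the author's actual code):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Toy sketch of the multi-stream idea: one model keeps several vertex
// streams that all share a single index buffer, and each render pass
// enables only the streams it actually needs.
struct VertexStream {
    std::string name;        // e.g. "positions", "normals", "uvs"
    std::size_t strideBytes; // size of one vertex element
};

enum class Pass { Shadow, Main };

// Pick which streams to bind for a pass: a shadow-map pass only needs
// positions, while the main pass needs everything.
std::vector<std::size_t> streamsForPass(const std::vector<VertexStream>& streams, Pass pass) {
    std::vector<std::size_t> active;
    for (std::size_t i = 0; i < streams.size(); ++i) {
        if (pass == Pass::Main || streams[i].name == "positions")
            active.push_back(i);
    }
    return active;
}
```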


As I said before, the only things a graphics module needs to provide are vertex buffers, index buffers, textures, shaders, and a few helper functions such as a render queue. No meshes.



This eliminates the need for any factories as well.
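The public surface of such a graphics module could be as small as this sketch (all type names are made up for illustration, not from a real engine):

```cpp
#include <cstddef>
#include <cstdint>
#include <vector>

// Sketch of a graphics module whose public surface is only primitive
// resources plus a render queue -- no mesh or model types.
struct VertexBuffer { /* would wrap an API buffer handle */ };
struct IndexBuffer  { /* would wrap an API buffer handle */ };
struct Texture      { /* would wrap an API texture handle */ };
struct Shader       { /* would wrap a compiled program */ };

// One queued draw call: everything the renderer needs, nothing more.
struct RenderItem {
    Shader*       shader   = nullptr;
    Texture*      texture  = nullptr;
    VertexBuffer* vertices = nullptr;
    IndexBuffer*  indices  = nullptr;
    std::uint64_t sortKey  = 0;  // material/depth-derived sort key
};

class RenderQueue {
public:
    void submit(const RenderItem& item) { items_.push_back(item); }
    std::size_t size() const { return items_.size(); }
private:
    std::vector<RenderItem> items_;
};
```

Higher-level objects (models, terrain, sprites) would be built on top of these primitives rather than inside the module.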


You put a lot of emphasis on keeping things easy to use. Again, this is fine, but it is in the wrong place.
A mesh class may be easy to use, but it is as restrictive as can be. With today’s demands on graphics, you simply can’t find a use for such a class. New techniques require all kinds of different combinations of vertex buffers etc.

The graphics library only needs to provide the components. The vertex buffers themselves, not a mesh simplifier.

Move the simplification over to the actual models. And I am talking about the objects that contain physics information as well as graphics data.
A shared model, or master model, is loaded only once. Instances are spawned from it, sharing its graphics data. And the simplification for drawing models happens inside the model class.
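The master-model/instance idea can be sketched like this (names are illustrative assumptions, not the author's classes):

```cpp
#include <memory>
#include <string>

// Sketch: the shared (master) model owns the expensive graphics data,
// loaded only once; instances reference it and add per-object state.
struct SharedModelData {
    std::string meshFile;  // loaded once; buffers/textures would live here
};

struct ModelInstance {
    std::shared_ptr<SharedModelData> shared;   // graphics data, shared
    float position[3] = {0.0f, 0.0f, 0.0f};    // per-instance state
};

ModelInstance spawnInstance(const std::shared_ptr<SharedModelData>& master) {
    return ModelInstance{master, {0.0f, 0.0f, 0.0f}};
}
```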


You say it is simple to call Graphics::DrawMesh().
I think it is simple to call Model::Draw().
You say it is simple to call Graphics::DrawWorld().
I think it is simple to call SceneManager::Draw().

The point is all the same simplifications are there, just moved around.

Your graphics engine is doing too much.


L. Spiro

I restore Nintendo 64 video-game OST’s into HD! https://www.youtube.com/channel/UCCtX_wedtZ5BoyQBXEhnVZw/playlists?view=1&sort=lad&flow=grid

Ok, it's been many years since I did 3D engine things, but I have a question:

You say it is simple to call Graphics::DrawMesh().
I think it is simple to call Model::Draw().
You say it is simple to call Graphics::DrawWorld().
I think it is simple to call SceneManager::Draw().

So a Model is not only responsible for manipulating its underlying data, but it is also responsible for knowing how to draw itself? Is this really a typical design these days?

There are many ways to organize an engine, but there are many more ways that objects may want to be rendered. Trying to make a one-size-fits-all solution is a lesson in futility.
Models know how models need to be drawn.
Terrain knows how terrain needs to be drawn.
Sprites know how sprites need to be drawn.
Particles know how particles need to be drawn.
Volumetric fog knows how volumetric fog needs to be drawn.

And since each of these objects is only using the basic rendering components (index buffers, vertex buffers, etc.), the way in which they are drawn can easily be modified to support new techniques and special effects.
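That split can be sketched as a small polymorphic hierarchy (a toy example; `draw()` returns a label purely for illustration):

```cpp
#include <memory>
#include <string>

// Sketch: there is no common "mesh" class. Each renderable type knows
// its own draw logic, built on the same primitive components (index
// buffers, vertex buffers, shaders, textures) underneath.
struct Renderable {
    virtual ~Renderable() = default;
    virtual std::string draw() const = 0;
};

struct Model     : Renderable { std::string draw() const override { return "model";     } };
struct Terrain   : Renderable { std::string draw() const override { return "terrain";   } };
struct Particles : Renderable { std::string draw() const override { return "particles"; } };
```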

In other words, yes.


L. Spiro



Mainly your post has made it apparent that you really don’t know the full scope of 3D programming, which is fine. It is a huge subject and it takes many years to learn, and the only way to get there is to try, try, and try again, as long as each time you fail you see why you could have done better.


Yes, you are right about that, and I'm not ashamed to admit it. My main focus in game programming was always more on game logic than on graphics. While I got some nice shader effects like HDR, atmospheric scattering, and post-processing working, I didn't do much regarding vertex-data manipulation like the terrain LOD you mentioned. Maybe it's because I'm lacking an artist providing me with up-to-date graphics data, yet I hope to still learn how to handle that properly.


A mesh can’t do that (practically).

This is one example of special ways things draw themselves, and there are many more. Which is why a mesh class is useless.
I couldn’t even use one for my own model class. I keep my vertex buffer broken into multiple streams to avoid sending useless data during shadow-map generation. They all share the same index buffer. Sometimes some vertex buffers are enabled and sometimes others are.

A mesh class is restrictive. You will never be able to handle all the cases for ways in which things want to draw themselves, so it is hopeless to even try.



I believe you are right about that concerning a classic mesh interface like ID3DXMesh. My mesh class, on the other hand, is just a wrapper around what you described: having individual objects render themselves individually. So instead of doing this:


Terrain::Render() {
    // set material(s?)
    // begin effect
    // set vertex/index data/streams
    // draw
}


I have this:


TerrainModel::Render() {
    // set material(s)
    // begin effect
    // draw TerrainMesh
}

TerrainMesh::Draw() {
    // set vertex/index data/streams
    // draw
}


It just hides away the actual rendering from the model class. It might be redundant, but I'm feeling kind of comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrain with that. From what I see, both cases are basically the same, just with different implementations. Is there something I overlook? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

Anyway, just a note: why do you say a terrain is not a mesh anyway? I see that a terrain uses multiple textures, and maybe effects, to achieve what you described. But a mesh, by my definition, is a collection of vertex/index data. And that's part of a terrain too, isn't it?


As I said before, the only things a graphics module needs to provide are vertex buffers, index buffers, textures, shaders, and a few helper functions such as a render queue. No meshes.


Okay, that makes sense to me. I already have wrappers for textures, shaders, and so on; I need to add a render queue and a direct wrapper for vertex/index buffers too. I've separated all that code into a graphics module. My mesh class won't be part of the module; it will rather be part of another layer of my particular engine, as it fits that layer's needs. The factories will also be part of that layer. I am planning to reuse the graphics module in future games (of course enhancing it every time), but the other layers will probably change completely with my needs/knowledge.

Still got any comments to improve the design even further? I'll be drawing out the basic design in some sort of graph when I get time. And even though I'm going to use the mesh interface, separate from the main graphics layer, I'd still be glad to hear what you have to say about it. There might be a huge misunderstanding of things on my side.

EDIT:

Oh, one more thing. If you advise me against factories (well, at least you said they were unnecessary in your implementation), how else should I load e.g. my models? My last game made models load themselves, but that made them somewhat awkward to use. Is there anything really against using factories? I feel like they are very easy to use; for example, I can easily load different file formats, and load specialized models (characters, obstacles, environment) from their own directories without having to write those directories everywhere I use them. Disadvantages/alternatives?

I took so long to reply because I wrote a fully detailed post and had nearly finished it when a friend came over. As we were talking, my PC crashed randomly, and it took a full day to bring myself to even try again; this reply still won't be as detailed.






It just hides away the actual rendering from the model class. It might be redundant, but I'm feeling kind of comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrain with that. From what I see, both cases are basically the same, just with different implementations. Is there something I overlook? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

My implementation involves dynamically combining index buffers and vertex buffers, so that during a normal render vertex buffers A and B are active, and during the creation of a shadow map, only B is active, etc.
Trying to centralize all the different ways in which things can be drawn will just create a mess.
Sure, a mesh class could allow multiple vertex buffers and allow manual setting of different combinations, but:
#1: Since it is a convenience class, that is no more beneficial than just keeping the various index/vertex buffers and setting them manually, as long as your wrappers for index/vertex buffers are also good.
#2: When do you stop adding features to support various new drawing methods and finally just say, “This is just too messy and bloated, and in order to get all this flexibility I have made it either hard to use or so generalized that all render types work but none are particularly fast”?

The reason you would not want to centralize your drawing code will become more apparent when terrain is discussed.



Anyway, just a note, why do you say a Terrain is not a mesh anyway? I see that a terrain uses multiple textures, maybe effects to achieve what you described. But a mesh, in my definition, is a collection of vertex/index-data. And thats a part of a terrain too, isn't it?

A strict definition of a mesh is not particularly useful, since it could encapsulate so many things.
Instead, common themes among meshes include rigid objects that don’t move except by animation.
Terrain, by contrast, is a render form that is constantly changing to support various LODs etc.

To make the point clear, let’s look at one of the most uniquely drawn terrain types: Geo Clipmaps.
It is not just about encapsulating which shader goes with which mesh.
Geo Clipmaps take their height value from a texture which must also be handled in a very special manner on the GPU, updating sections etc.
Then there is the specific arrangement of each of the tile sets, which are meshes.
Arranged in a very specific manner, they provide decreasing level-of-detail in all directions and prevent rendering of data behind the player or otherwise out of view.

All of this data is arranged at a macro level, not at the micro level. The way in which all of the components interact must be gracefully managed by the terrain class.
You aren’t simply going to add a new render method to some convenience class to get this done.
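To give a feel for that macro-level bookkeeping, here is some toy geometry-clipmap arithmetic (my illustration under the usual clipmap assumptions of grid step doubling per level; this is not the author's code):

```cpp
#include <cmath>

// Toy geometry-clipmap arithmetic: level L covers a ring around the
// viewer with a grid step of baseStep * 2^L, so detail halves with
// each level outward.
float clipmapStep(float baseStep, int level) {
    return baseStep * static_cast<float>(1 << level);
}

// Snap a level's origin to its own grid so vertices stay fixed in
// world space as the viewer moves (avoids "vertex swimming").
float snapToGrid(float coord, float step) {
    return std::floor(coord / step) * step;
}
```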

To a lesser degree, the same thing applies to models. There are a lot of macro-level things they can do to orchestrate the way in which they are rendered. They are just less obvious.






Oh, one more thing. If you advice me against factories (well, at least you said they were unneccassary using your implentation), how else should I load e.g. my models? My last game made models load themself, but that made them somewhat awkward to use. Is there anything really against using factories? I feel like they are very easy to use, for example I can easily load different file-formats, load specialiced models (characters, obstacles, environment) in their own directoriers without having to write that directorie everywhere I use it, etc... disadvantages/alternatives?


Factories are useful for creating a subset of types natively supported by the engine, but as I explained, supporting such a small subset of mesh types etc. is fairly useless. It just seems out of place here.

For models, factories could be used to load a subset of model types, as long as custom types are still allowed.

I see no advantage in being able to load multiple types of model files. Just make a format that is designed for your engine and convert to it from FBX or COLLADA.
Of course characters, obstacles, terrain, and buildings may each have their own formats, but only one is necessary for each.


L. Spiro




This is horrible. When you are submitting this from your model, the model is now responsible for the way it is drawn, which means you can only use a graphics wrapper to render it. You can do no more ordering on these render calls from your model either. Say you have a terrain with water on it: you have to render the water last to get the transparency to work properly, so you are forcing a dependency on how these two models are rendered. If they instead give back a render instance that tells the renderer which pieces they would like to use, the renderer can determine that the transparent items need to be rendered last, but you can submit them to the renderer in any order.
There is a reason why you have a render queue and why a renderer performs sorts on that queue: the renderer knows more about the scene it is about to submit to the actual device than the scene manager or model needs to know.

When you come down to it, all the renderer needs to do is set the correct states for rendering: shaders, textures, and all that. The model should not be responsible for making these calls; it should be responsible for telling the renderer how it wants to be rendered, but not by setting this on the device itself.
The renderer is then responsible for going through the render lists and rendering them correctly.
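The render-instance idea can be sketched like this (a hypothetical sort, not code from any engine named in the thread; opaque items are grouped by material, transparent items go last, back to front):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Sketch: objects submit render instances in any order; the renderer
// sorts the queue so opaque items run first (grouped by material) and
// transparent items (e.g. water) run last, back to front.
struct RenderInstance {
    bool          transparent = false;
    float         viewDepth   = 0.0f;  // distance from camera
    std::uint32_t materialId  = 0;
};

void sortQueue(std::vector<RenderInstance>& queue) {
    std::sort(queue.begin(), queue.end(),
              [](const RenderInstance& a, const RenderInstance& b) {
                  if (a.transparent != b.transparent)
                      return !a.transparent;            // opaque before transparent
                  if (a.transparent)
                      return a.viewDepth > b.viewDepth; // transparent back-to-front
                  return a.materialId < b.materialId;   // opaque grouped by material
              });
}
```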

Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, theHunter, theHunter: Primal, Mad Max, Watch Dogs: Legion


I completely agree.
But I did not want to confuse the original poster further.

What I said and what you said are not mutually exclusive. My engine has models/terrain/sprites drawing themselves, but with a render queue provided by the graphics library. The concept of having each type of object render itself remains; they simply do so “in the order specified by the render queue”, which I previously omitted.

And there are even architectural managers above the render queue to perform culling etc., but again these topics were omitted for brevity.


L. Spiro


@YogurtEmperor:

Ok, I think I see the point. Thinking about it these past days, I also realized myself that having this mesh class really isn't helpful in the long run. I haven't spent too much time developing this, so I think it pays off to turn around and re-write most of the code once again. At this point, I would like to completely wrap the whole graphics thing into a module or library, offering just basic functionality like vertex/index buffers, etc., like you suggested. I've got some ideas, but I'd like to hear your opinion on them.

1. I would encapsulate the whole graphics functionality into a class called e.g. Graphics. This class would hold some "modules" like RenderQueue, Textures, etc. The user would then just create an instance of this to use my graphics module.
OR
2. Should I rather have all modules separated entirely, so the user would need to create RenderQueue, Textures, etc. on their own?

Obviously 1 would be easier to use, and I'd have a lot fewer things to pass around separately. From what I know, APIs like XNA do that too. However, 2 would more easily allow the user to use e.g. their own render queue, and they would only have the things where they are really needed. What would you suggest?

If 1, I've got a few more questions; if not, I've got to re-think the whole thing:
1. I would access wrappers like the RenderQueue through getters, like Graphics.RenderQueue, Graphics.Textures, Graphics.Render, etc., and use their functions: Graphics.Render.SetTarget, etc.
or
2. Should I write top-level functions on the Graphics class, e.g. Graphics.SetRenderTarget?

I would tend to use 1, as 2 is more or less wasted effort, but I'm not sure.
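Design 1 with getter-style sub-modules could look like this sketch (all names are hypothetical stand-ins for the modules described above):

```cpp
#include <cstddef>
#include <string>
#include <vector>

// Sketch of design 1: one Graphics facade owns the sub-modules and
// exposes them through getters, so the user creates a single object.
class RenderQueue {
public:
    void submit(const std::string& item) { items_.push_back(item); }
    std::size_t size() const { return items_.size(); }
private:
    std::vector<std::string> items_;
};

class TextureManager {
public:
    // Hypothetical loader; a real one would return a texture handle.
    bool load(const std::string& file) { loaded_.push_back(file); return true; }
    std::size_t count() const { return loaded_.size(); }
private:
    std::vector<std::string> loaded_;
};

class Graphics {
public:
    RenderQueue&    renderQueue() { return queue_; }
    TextureManager& textures()    { return textures_; }
private:
    RenderQueue    queue_;
    TextureManager textures_;
};
```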

I've got some more questions, but they depend on the outcome of these. Thanks in advance!

I tried 3 times to reply to this topic, and thanks to the keyboard shortcuts that are enabled when Num Lock is not active, I not only lost all 3 attempts but also somehow lost the clipboard backup of my half-finished post.
I have never had so much trouble replying to a topic before and it is pissing me off.

The short version of all that I had typed:


This screenshot shows my own organization; I expanded the graphics module to give you more insight into how that could be organized and what it should have/do.
[attachment=7114:Modules.png]

Each project represents a module/library. A single solution binds them all together. While some just make a single project and use folders to separate modules, this organization has served me extremely well.

The Fnd folder contains the CFnd class which acts as the interface to the graphics API as far as setting states, such as the viewport, alpha testing, culling, etc.
In my case, it is not an instance-based class; all methods and members are static. Instance-based is handy when you want to make tools with Qt and have to deal with multiple OpenGL contexts, but static-based is otherwise just a tiny bit faster.
The rest (index buffers, render queues, textures, etc.) are instance-based.
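A static-based state interface in the spirit of the CFnd class described above might look like this (a sketch; the redundant-state check is my own addition, a common companion to such a wrapper, not necessarily part of the original):

```cpp
// Sketch of a static state interface: all members static, acting as
// the single gateway to graphics API state such as culling.
class Gfx {
public:
    static void setCulling(bool enabled) {
        if (enabled == cullingEnabled_) return;  // skip redundant state change
        cullingEnabled_ = enabled;
        ++apiCalls_;  // stands in for the real graphics API call
    }
    static int apiCalls() { return apiCalls_; }
private:
    static bool cullingEnabled_;
    static int  apiCalls_;
};
bool Gfx::cullingEnabled_ = false;
int  Gfx::apiCalls_       = 0;
```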

It doesn’t make sense to make a module specifically for render queues or textures. Those are classes within the graphics module.


L. Spiro


