
Code design question


18 replies to this topic

#1 Juliean   GDNet+   -  Reputation: 2453


Posted 31 January 2012 - 01:16 AM

Hi,

I've got a question about my overall code design. A simple example, where I want to pass a camera variable to my 3D render class:

void Graphics3D::Render(Camera* pCamera)
{
    m_pEffect->SetMatrix("ViewProj", pCamera->ViewProjectionMatrix());
}

vs.

void Graphics3D::Render(D3DXMATRIX mViewProjection)
{
    m_pEffect->SetMatrix("ViewProj", &mViewProjection);
}

Which way would you recommend? This applies to many places in my code; it's basically the
decision: "Should I pass my own classes, or just single variables wherever possible?" Thinking about it, I might want to pass more than one parameter per call: I might add separate view/projection matrices, inverse/transposed versions, etc. Any advice?


#2 RulerOfNothing   Members   -  Reputation: 1160


Posted 31 January 2012 - 01:46 AM

If you are just wrapping the single variable in a class, then I would suggest using the second method, but I think that your Graphics3D class should keep track of camera information rather than the game loop passing it in.

#3 Tribad   Members   -  Reputation: 841


Posted 31 January 2012 - 01:57 AM

I would use the first version because it abstracts the camera from the specific DX implementation.
Makes it easier to expand the code later on.

#4 NightCreature83   Crossbones+   -  Reputation: 2754


Posted 31 January 2012 - 03:23 AM

If you are just wrapping the single variable in a class, then I would suggest using the second method, but I think that your Graphics3D class should keep track of camera information rather than the game loop passing it in.


It could, but that isn't exactly necessary. If you have multiple views in your system, it is easier to have a camera manager cope with the camera, because switching becomes easier. This is also solvable by having a pointer to the camera in the render class and updating that pointer. I usually manage the camera in the render system that also contains the actual renderer, as this manages more than just issuing render calls.

It is also the place in my system that decides whether to pick a D3D (9, 10 or 11) or OpenGL device; these are all decisions that should be taken at a higher level than your renderer.
Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#5 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 31 January 2012 - 05:13 AM

Neither.
A graphics wrapper class is not responsible for drawing the scene and has no idea what a camera even is.
Rendering is, on a high level, governed by the scene manager, which tells objects in the game world they are about to be rendered and each object is allowed to decide how that should happen by using the API created by the graphics wrapper.
Any command, any order (though arbitrary orders may not be very useful, and you will generally want to stick to a limited subset of orders, such as set the shader first, then set its uniforms/data).

A wrapper class does understand what a view matrix is, but a function designed to set the view matrix would not be called “Render()”, it would be called “SetViewMatrix()”.

If this function is really intended to draw the whole game scene and you have left that detail out for the sake of a simple post:
#1: Again, it would be moved to a scene manager.
#2: In that case you would pass the camera, not the matrix.


You are looking for something more along the lines of this:

void CSceneManager::Render( const CCamera &_cCam ) {
	 Graphics3D::SetViewMatrix( _cCam.ViewMatrix() );	 // Assumes the wrapper is a bunch of static function calls.
	 // Other steps needed to render the scene.
}

or this:

void CSceneManager::Render( Graphics3D * pgGraphics, const CCamera &_cCam ) {
	 pgGraphics->SetViewMatrix( _cCam.ViewMatrix() );
	 // Other steps needed to render the scene.
}


Etc.





If you are just wrapping the single variable in a class, then I would suggest using the second method, but I think that your Graphics3D class should keep track of camera information rather than the game loop passing it in.

A graphics class/wrapper has no idea what a camera is. This is a frustratingly common mistake people make early in their game-programming “careers”.
People think that because a camera class provides information so vital to rendering a scene that it must then be part of the rendering system.

Think about what a camera does and what information it has to do that.
It will probably keep a copy of the last field-of-view and aspect ratios it was sent for automatic resizing when the window resolution changes.
It will have a position and an orientation/view direction, which will consist of at minimum a forward and up vector, often a right vector.
It has a view frustum for culling composed of six planes.
Since it can be parented by scene objects and have scene objects as children, it will probably need to inherit from your CNode or CActor or CEntity or whatever base class is used by all of the objects in the scene.

Now how much of that data does the renderer actually need?
It needs the view matrix and the projection matrix, neither of which is actually even used by the camera at all: they have to be generated for the sole purpose of giving the renderer what it needs.

It is already clear that the camera is not actually related to the renderer at all, but wait, there is more.
Not only is there no reason at all for a graphics class to know what a camera is, it is a horribly bad idea for it to know what one is.
By knowing what a camera is, suddenly the graphics engine knows what a frustum is. Suddenly your “matrix library” is no longer enough, and you have to import a full-sized math library to get those CPlane3 and CFrustum classes. The same library may very well have the CAabb and CSphere classes, along with the math for testing those against the frustum.
Suddenly the graphics engine knows what scene nodes are. It knows what actors or entities—whatever you want to call them—are, which is part of the core functionality of managing an entire scene and all of the objects in it.
With all these extra libraries it has to import, it probably learned what octrees are, and probably learned somewhere along the way how to do physics.
Where does the graphics engine end and the game engine begin?

And what did it really want out of all of this?
A view matrix and a projection matrix.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#6 wqking   Members   -  Reputation: 756


Posted 31 January 2012 - 08:23 AM

In my mind
void Graphics3D::Render(D3DXMATRIX mViewProjection)
Win!

Reason: a function should have minimum knowledge to accomplish its function.
The less knowledge the better.
Because it reduces coupling.

If you pass camera to Render, then Render can only serve for your camera (tight coupling).
If you pass only the D3DXMATRIX, then Render can serve for any classes/modules that can provide that matrix (loose coupling).

http://www.cpgf.org/
cpgf library -- free C++ open source library for reflection, serialization, script binding, callbacks, and meta data for OpenGL Box2D, SFML and Irrlicht.
v1.5.5 was released. Now supports tween and timeline for ease animation.


#7 NightCreature83   Crossbones+   -  Reputation: 2754


Posted 31 January 2012 - 08:33 AM

In my mind
void Graphics3D::Render(D3DXMATRIX mViewProjection)
Win!

Reason: a function should have minimum knowledge to accomplish its function.
The less knowledge the better.
Because it reduces coupling.

If you pass camera to Render, then Render can only serve for your camera (tight coupling).
If you pass only the D3DXMATRIX, then Render can serve for any classes/modules that can provide that matrix (loose coupling).


Yes but in this case it's part of that data of a render object and as such the function should read:
void Graphics3D::Render(const RenderObject& object)

But that's wrong, because the graphics layer should only be concerned with submitting vertex buffers and setting up the correct render states, without having to reflect the underlying hardware.

#8 Juliean   GDNet+   -  Reputation: 2453


Posted 31 January 2012 - 10:56 AM

Wow, quite a lot of feedback. Thanks to all of you, I got some good ideas, but I think I need to further explain how Graphics3D and my renderer work.

Graphics3D is basically just the combination of a factory and storage for my models. A model consists of a mesh, a material and an effect. I create models by calling m_Graphics3D.NewXXX, which calls new, returns the model and stores it inside a vector. This class does some nice stuff for me in the first place: it sorts models based on effect, material, and vertex buffer. It will get some more optimized functionality, like automatically handling models with instanced meshes, or static geometry with shared vertex buffers, etc., to allow efficient rendering.
So calling Render() does nothing else than render every 3D model. However, I'd like to be able to easily render everything from different cameras, to different render targets, etc., so that's why I decided to pass either the camera or the view matrix in. I'm working on being able to supply additional parameters, like the render targets, to the Render method. That way I can do stuff like this:

pGraphics3D->Render(CameraMatrix, vRenderTargets, MATERIAL|NORMAL|POSITION); // render material, normals and world position to render targets from camera view
// do some other stuff
pGraphics3D->Render(LightMatrix, lpShadowBuffer, DEPTH); // render depth to shadow buffer from the light's camera
// do some more stuff
for(int i = 0; i < 6; i++)
{
    pGraphics3D->RenderToCube(mCube[i], lpCubeMap, i, MATERIAL); // render material to side i of a cubemap
}

You see? From my point of view, this makes rendering, especially for special effects, very efficient. All that work is done inside a render class. A scene graph will take care of parent<->child dependencies, and view-frustum culling will be done somewhere else, too. What do you think of it now?

#9 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 31 January 2012 - 12:26 PM

What do you think of it now?

It is basically the same situation I explained about why the graphics engine knowing about your camera is bad, but expanded to include models as well.
A model, like a camera, is a higher-level structure than just the graphics data contained within it. The graphics library doesn’t care what an AABB is or whatever physics or collision information may accompany the model, nor does it care about animation data. You may not have that data nor plan to, but considering theoretically that you did, it would help you realize that a model is much more than just graphics data, and that the graphics data wanted by the graphics library is a very small sub-set of the entire object.

So, your graphics engine contains your models.
So how do you plan to handle terrain? Just throw that into the gargantuan graphics library? Duplicate it but specialized for rendering terrain?

You may not even be planning to ever have terrain, but theoretically asking yourself these kinds of questions helps you understand where things really need to be and what they really need to do.
You thought the graphics library was the appropriate place for your models because you considered that they were the only thing you would render (and that a model does nothing else but be rendered).

Then you also need to ask yourself what others would expect from your graphics library should they want to use it (again, only hypothetically). If I use your graphics library, am I required to use your model format? All I want is an easier interface with the hardware. Why do I need to take in this model format? I have my own.
Your graphics library renders every model, but I already have a library whose job is specifically to efficiently manage the scene and all the objects in it, and thanks to some of the spatial partitioning information it keeps (for example) it can cull and render objects more quickly than a graphics library that simply renders everything.
By using your library, I forfeit my efficient rendering?

If you had considered both, “What if I were to theoretically add terrain?”, and, “What do people want by using my library?”, you may have arrived at the more-suitable conclusion: “I could add terrain to the graphics library, but since I myself am not planning to use terrain, it is likely others using my library also probably don’t want terrain. I should make my graphics library more abstract and have it provide an easy-to-use interface that other libraries can use for whatever kind of rendering they want to do. Then I can have models and terrain as separate libraries that can both access the graphics library by themselves without the graphics library knowing about all kinds of things it shouldn’t. Probably the best organization is the following:
#1: Graphics library: Provides a convenient interface with Direct3D, OpenGL, whatever. Has wrappers for textures, vertex buffers, index buffers, and shaders. It is not a crime for it to also link to some kind of vector/quaternion/matrix math library, if necessary.
#2: Model library: Higher-level than the graphics library. Graphics and rendering are only a small subset of what a model actually is and does. It not only includes the graphics library, but also the physics library etc. (if you were ever to add physics).
#3: Terrain library: Same level as the model library. Same exact concept. Just a different way of loading/handling its data, and rendering it.
#4: Engine library: This is the core library that stands above all others. It knows what everything is, and it brings all of these other libraries together in peace and harmony.
Within this library is a scene manager which knows about every single type of entity in the game world, from cameras to models to terrain.
It makes things efficient via spatial partitioning schemes, sweep-and-prune implementation, etc.
It manages the physics engine directly. When it wants to perform a logical update, it uses its efficient management of the game world to efficiently run over the objects in the world and ask for the data needed for use by the physics engine (which it then passes off to the physics engine).
When it is time to render, it efficiently gathers objects in view, informs them they are about to render, handles special requests by the objects to update a reflective cubemap, and finally lets objects render themselves however they decide to do so.
It handles creation of shadow maps, reflection maps, etc.
It is a mastermind that orchestrates everything.”


With such a design, I am free to use your graphics library without needing to use your model format or terrain.
If I want to use your models, I can also use your model library. No harm done.
If I want the scene to be managed efficiently for me, including the generation of those cubemaps you mentioned, I can use your engine library.


The point is that your “convenience”—that is having cubemaps generated automatically, having a way to render all the objects in the scene with one call, etc.—is not the issue.
These things are needed. You just didn’t put them in the right places.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#10 Juliean   GDNet+   -  Reputation: 2453


Posted 02 February 2012 - 11:23 AM

A model, like a camera, is a higher-level structure than just the graphics data contained within it. The graphics library doesn’t care what an AABB is or whatever physics or collision information may accompany the model, nor does it care about animation data. You may not have that data nor plan to, but considering theoretically that you did, it would help you realize that a model is much more than just graphics data, and that the graphics data wanted by the graphics library is a very small sub-set of the entire object.


Obviously, we are not talking about the same thing. Maybe we have a different understanding of what a model is? For me, a model is just a renderable object with a fixed set of properties: a mesh (vertex + index buffer, vertex declaration), a material (material texture, bump map, specular/diffuse components) and an effect (shaders + render states), plus position, rotation and scaling (as well as the matrices). Further, it will get properties like opacity, visibility, etc. However, my models will most certainly not contain anything dedicated to physics or such. What you mean will actually be handled by my entity system. An entity like a character will simply get a pointer to a model for modifying its properties.

You thought the graphics library was the appropriate place for your models because you considered that they were the only thing you would render (and that a model does nothing else but be rendered).

Then you also need to ask yourself what others would expect from your graphics library should they want to use it (again, only hypothetically). If I use your graphics library, am I required to use your model format? All I want is an easier interface with the hardware. Why do I need to take in this model format? I have my own.


As you see, a model does not necessarily mean a character model, so anything made out of vertices can be represented by a model. If you mean "model format" in the sense of a file format, then you can either add a method LoadMeshFromMyFormat(wstring lpFileName) or modify the existing LoadMeshFromFile method. All you have to do is create a vertex buffer as well as an index buffer (later on you won't even need this if you don't want to) and a vertex declaration from your format, and call Mesh* mesh = new Mesh(lpVertexbuffer, lpIndexBuffer, lpDeclaration). In the Graphics3D class, you would just modify the call of NewCharacter (or whatever) to use the new mesh factory method.

On the other hand, if you meant a model format in the sense of your own class you would like to use: this should even be possible. I need to restructure my renderer just a little bit, so you could theoretically skip the Graphics3D interface and use your own. Graphics3D is really just a simplification interface, allowing me to create a renderable and display it right away, without any overhead. Maybe the name Graphics3D is a bit misleading. I hope you get the idea, and that it's by no means the core of my graphics interface. I came up with this because you can basically separate renderables into two categories: 3D models and 2D sprites. So I also have a Graphics2D interface which handles "sprites" the same way my Graphics3D library does.

Your graphics library renders every model, but I already have a library whose job is specifically to efficiently manage the scene and all the objects in it, and thanks to some of the spatial partitioning information it keeps (for example) it can cull and render objects more quickly than a graphics library that simply renders everything.
By using your library, I forfeit my efficient rendering?


Now that you know that graphics and scene objects are more or less separated, it should become clear that you can still use your culling, spatial partitioning, etc., at least in theory: either a function call that tells a model not to render in the next Render() call, or a function that takes a bool array telling which models to render and which not. Obviously it might be a little less performant than if you simply didn't call Render() on anything not on screen. I can accept that for now, and maybe later on there will be a way to completely eliminate this little performance issue.

If you had considered both, “What if I were to theoretically add terrain?”, and, “What do people want by using my library?”, you may have arrived at the more-suitable conclusion: “I could add terrain to the graphics library, but since I myself am not planning to use terrain, it is likely others using my library also probably don’t want terrain. I should make my graphics library more abstract and have it provide an easy-to-use interface that other libraries can use for whatever kind of rendering they want to do. Then I can have models and terrain as separate libraries that can both access the graphics library by themselves without the graphics library knowing about all kinds of things it shouldn’t. Probably the best organization is the following:


Well, if you mean terrain from a heightmap, it's as easy as loading from your own mesh format: just add a factory method that reads the heightmap in, and treat it as a mesh that gets passed to a model. There will be different types for my models, like static/dynamic/etc., so terrain might fit as a static model. I will also provide methods/interfaces to easily manipulate vertex data, if you want dynamic terrain like in a strategy-game editor or such.


With such a design, I am free to use your graphics library without needing to use your model format or terrain.
If I want to use your models, I can also use your model library. No harm done.
If I want the scene to be managed efficiently for me, including the generation of those cubemaps you mentioned, I can use your engine library.


OK, reading what you understand under "graphics library" makes it even clearer to me that I obviously confused the name for my Graphics3D class. I think I described everything more clearly now. Do you still consider my design wrong? I just think it most convenient, because now I have entirely split graphics and game logic, and it allows me to draw renderables without having to add tons of lines of code for every new game entity I create. I think I will present my whole source code as soon as I have all my graphics/rendering stuff done, and ask for some more commentary.

#11 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 03 February 2012 - 07:12 AM

Do you still consider my design wrong?

I do. You were more careful about not using the terms “mesh” and “model” interchangeably, and while your mesh class has no collision data etc., it is still not really a useful class to even have at all.

Mainly your post has made it apparent that you really don’t know the full scope of 3D programming, which is fine. It is a huge subject and it takes many years to learn, and the only way to get there is to try, try, and try again, as long as each time you fail you see why you could have done better.

The first thing that gives me that feeling is that you have a mesh class at all. I will explain why shortly.

The second is that you suggested using a mesh for terrain.
Terrain is not a mesh. A mesh could be suitable for a small area of terrain, but in any area of usefulness terrain is a very specific method of combining index buffers, vertex buffers, shaders, and textures such that detail decreases in the distance. It is a type of renderable object that uses constant modification/swapping of parts/LOD changes in order to maintain a reasonable level of performance. It often requires streaming data and updating in real-time.

A mesh can’t do that (practically).

This is one example of special ways things draw themselves, and there are many more. Which is why a mesh class is useless.
I couldn’t even use one for my own model class. I keep my vertex buffer broken into multiple streams to avoid sending useless data during shadow-map generation. They all share the same index buffer. Sometimes some vertex buffers are enabled and sometimes others are.

A mesh class is restrictive. You will never be able to handle all the cases for ways in which things want to draw themselves, so it is hopeless to even try.


As I said before, the only things a graphics module needs to provide are vertex buffers, index buffers, textures, shaders, and a few helper functions such as a render queue. No meshes.



This eliminates the need for any factories as well.


You put a lot of emphasis on keeping things easy to use. Again, this is fine, but it is in the wrong place.
A mesh class may be easy to use, but it is restrictive as can be. With today’s demands on graphics, you simply can’t find a use for such a class. New techniques require all kinds of different combinations of vertex buffers etc.

The graphics library only needs to provide the components. The vertex buffers themselves, not a mesh simplifier.

Move the simplification over to the actual models. And I am talking about the objects that contain physics information as well as graphics data.
A shared model, or master model, is loaded only once. Instances are spawned from it, sharing its graphics data. And the simplification for drawing models happens inside the model class.


You say it is simple to call Graphics::DrawMesh().
I think it is simple to call Model::Draw().
You say it is simple to call Graphics::DrawWorld().
I think it is simple to call SceneManager::Draw().

The point is all the same simplifications are there, just moved around.

Your graphics engine is doing too much.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#12 achild   Crossbones+   -  Reputation: 1719


Posted 03 February 2012 - 09:07 AM

Ok it's been many years since doing 3d engine things, but I have a question:

You say it is simple to call Graphics::DrawMesh().
I think it is simple to call Model::Draw().
You say it is simple to call Graphics::DrawWorld().
I think it is simple to call SceneManager::Draw().

So a Model is not only responsible for manipulating its underlying data, but it is also responsible for knowing how to draw itself? Is this really a typical design these days?

#13 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 03 February 2012 - 07:28 PM

There are many ways to organize an engine, but there are many more ways that objects may want to be rendered. Trying to make a one-fits-all solution is a lesson in futility.
Models know how models need to be drawn.
Terrain knows how terrain needs to be drawn.
Sprites know how sprites need to be drawn.
Particles know how particles need to be drawn.
Volumetric fog knows how volumetric fog needs to be drawn.

And since each of these objects is only using the basic rendering components (index buffers, vertex buffers, etc.), the way in which they are drawn can easily be modified to support new techniques and special effects.

In other words, yes.


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#14 Juliean   GDNet+   -  Reputation: 2453


Posted 05 February 2012 - 01:37 PM

Mainly your post has made it apparent that you really don’t know the full scope of 3D programming, which is fine. It is a huge subject and it takes many years to learn, and the only way to get there is to try, try, and try again, as long as each time you fail you see why you could have done better.


Yes, you are right about that, and I'm not ashamed to admit it. My main focus in game programming was always more on game logic rather than graphics. While I got some nice shader effects like HDR, atmospheric scattering and post-processing working, I didn't do much regarding vertex-data manipulation like the terrain LOD you mentioned. Maybe it's because I'm lacking an artist providing me with up-to-date graphics data, yet I hope to still learn how to handle that properly.

A mesh can’t do that (practically).

This is one example of special ways things draw themselves, and there are many more. Which is why a mesh class is useless.
I couldn’t even use one for my own model class. I keep my vertex buffer broken into multiple streams to avoid sending useless data during shadow-map generation. They all share the same index buffer. Sometimes some vertex buffers are enabled and sometimes others are.

A mesh class is restrictive. You will never be able to handle all the cases for ways in which things want to draw themselves, so it is hopeless to even try.


I believe you are right about that concerning a classic mesh interface like ID3DXMesh. My mesh class, on the other hand, is just a wrapper around what you described: having individual objects render themselves individually. So instead of doing this:

Terrain::Render()
{
    // set material(s?)
    // begin effect
    // set vertex/index data/streams
    // draw
}

I have this:

TerrainModel::Render()
{
    // set material(s)
    // begin effect
    // draw TerrainMesh
}

TerrainMesh::Draw()
{
    // set vertex/index data/streams
    // draw
}

It just hides away the actual rendering from the model class. It might be redundant, but I'm feeling kind of comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrain with that. From what I see, both cases are basically the same, just with different implementations. Is there something I'm overlooking? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

Anyway, just a note: why do you say a terrain is not a mesh anyway? I see that a terrain uses multiple textures, and maybe effects, to achieve what you described. But a mesh, in my definition, is a collection of vertex/index data. And that's a part of a terrain too, isn't it?

As I said before, the only things a graphics module needs to provide are vertex buffers, index buffers, textures, shaders, and a few helper functions such as a render queue. No meshes.


Okay, that makes sense to me. I already have wrappers for textures, shaders, and so on; I need to add a render queue and a direct wrapper for vertex/index buffers too. I've separated all that code into a graphics module. My mesh class won't be part of the module; it will rather be part of another layer of my particular engine, as it fits its needs. The factories will also be part of that layer. I am planning to reuse the graphics module in future games (of course enhancing it every time), but the other layers will probably change completely with my needs/knowledge.

Still got any comments to improve the design even further? I'll draw out the basic design in some sort of graph when I get time. And even though I'm going to use the mesh interface, separated from the main graphics layer, I'd still be glad to hear what you have to say about it. There might be a huge misunderstanding on my side.

EDIT:

Oh, one more thing. If you advise me against factories (well, at least you said they were unnecessary in your implementation), how else should I load e.g. my models? In my last game the models loaded themselves, but that made them somewhat awkward to use. Is there anything really against using factories? I feel they are very easy to use; for example, I can easily load different file formats, or load specialized models (characters, obstacles, environment) from their own directories without having to write out that directory everywhere I use it. Disadvantages/alternatives?

#15 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 07 February 2012 - 06:33 AM

I took so long to reply because I wrote a fully detailed post and had nearly finished it when a friend came over. As we were talking my PC crashed randomly, and it took a full day to bring myself to even try again; it still won’t be as detailed.





It just hides the actual rendering away from the model class. It might be redundant, but I feel quite comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrains with that. From what I see, both cases are basically the same, just with different implementations. Is there something I'm overlooking? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

My implementation involves dynamically combining index buffers and vertex buffers, so that during a normal render vertex buffers A and B are active, and during the creation of a shadow map, only B is active, etc.
Trying to centralize all the different ways in which things can be drawn will just create a mess.
Sure, a mesh class could allow multiple vertex buffers and allow manual setting of different combinations, but:
#1: Since it is a convenience class, that is no more beneficial than just keeping the various index/vertex buffers and setting them manually, as long as your wrappers for index/vertex buffers are also good.
#2: When do you stop adding features to support various new drawing methods and finally just say, “This is just too messy and bloated and in order to get all this flexibility I have made it either hard to use or generalized so much that all render types work but none are particularly fast.”

The reason you would not want to centralize your drawing code will become more apparent when terrain is discussed.
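A minimal sketch of the idea of per-pass buffer combinations, using purely hypothetical wrapper types (the `Pass`, `VertexBuffer`, and `Renderable` names are illustrative, not any real engine's API): the renderable decides which subset of its buffers is active for a given pass, instead of a central mesh class trying to cover every case.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// Hypothetical pass identifiers and a minimal vertex-buffer wrapper.
enum class Pass { Normal, ShadowMap };

struct VertexBuffer {
    uint32_t id;  // stand-in for the underlying API resource handle
};

// The renderable owns its buffers and decides, per pass, which ones to bind.
struct Renderable {
    VertexBuffer positions{0};   // buffer B: positions only
    VertexBuffer attributes{1};  // buffer A: normals, UVs, ...

    // Returns the buffers that should be active for the given pass.
    std::vector<const VertexBuffer*> buffersFor(Pass pass) const {
        if (pass == Pass::ShadowMap)
            return { &positions };            // only B is active
        return { &attributes, &positions };   // A and B are active
    }
};
```

Each draw then binds only what its pass needs; no generalized mesh class has to anticipate every combination in advance.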


Anyway, just a note: why do you say a terrain is not a mesh anyway? I see that a terrain uses multiple textures, and maybe effects, to achieve what you described. But a mesh, by my definition, is a collection of vertex/index data, and that's part of a terrain too, isn't it?

A strict definition of a mesh is not particularly useful, since it could encapsulate so many things.
Instead, common themes between meshes include rigid objects that don’t move except for by animations.
Terrain is a render form that is constantly changing to support various LOD’s etc.

To make the point clear, let’s look at one of the most uniquely drawn terrain types: Geo Clipmaps.
It is not just about encapsulating which shader goes with which mesh.
Geo Clipmaps take their height value from a texture which must also be handled in a very special manner on the GPU, updating sections etc.
Then there is the specific arrangement of each of the tile sets, which are meshes.
Arranged in a very specific manner, they provide decreasing level-of-detail in all directions and prevent rendering of data behind the player or otherwise out of view.

All of this data is arranged at the macro level, not at the micro level. The way in which all of the components interact must be gracefully managed by the terrain class.
You aren’t simply going to add a new render method to some convenience class to get this done.
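To give a tiny taste of that macro-level bookkeeping: in geometry clipmaps, each level's grid cell size doubles, and each level's origin is snapped to its own grid so vertices stay fixed in world space while the rings slide with the camera. A sketch of just that snapping step (the helper name and parameters are illustrative):

```cpp
#include <cassert>
#include <cmath>

// Snap one axis of the camera position to the grid of clipmap level `level`.
// Level 0 uses `baseCellSize`; each coarser level doubles the cell size.
float snapToLevelGrid(float cameraX, int level, float baseCellSize) {
    const float cell = baseCellSize * std::pow(2.0f, static_cast<float>(level));
    return std::floor(cameraX / cell) * cell;
}
```

The terrain class performs this per level and per axis every frame, alongside streaming updates into the height texture, which is exactly the kind of orchestration that doesn't fit in a generic mesh class.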

To a lesser degree, the same thing applies to models. There are a lot of macro-level things they can do to orchestrate the way in which they are rendered. They are just less obvious.





Oh, one more thing. If you advise me against factories (well, at least you said they were unnecessary in your implementation), how else should I load e.g. my models? In my last game the models loaded themselves, but that made them somewhat awkward to use. Is there anything really against using factories? I feel they are very easy to use; for example, I can easily load different file formats, or load specialized models (characters, obstacles, environment) from their own directories without having to write out that directory everywhere I use it. Disadvantages/alternatives?


Factories are useful for creating a subset of types natively supported by the engine, but as I explained, supporting such a small subset of mesh types etc. is fairly useless. It just seems out of place here.

For models, factories could be used to load a subset of model types, as long as custom types are still allowed.

I see no advantage in being able to load multiple types of model files. Just make a format that is designed for your engine and convert to that from FBX or COLLADA.
Of course characters, obstacles, terrain, and buildings may each have their own formats. But only 1 is necessary for each.
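A sketch of what the start of such an engine-specific format might look like (the layout, field names, and "MODL" magic are invented for illustration, not an existing format): an offline tool converts from FBX/COLLADA and writes a fixed header, so the runtime loader stays trivial.

```cpp
#include <cassert>
#include <cstdint>
#include <cstring>

// Hypothetical header for an engine-specific model file, written by an
// offline converter so the runtime never has to parse FBX/COLLADA itself.
struct ModelHeader {
    char     magic[4];      // file identifier, e.g. "MODL"
    uint32_t version;       // bump whenever the layout changes
    uint32_t vertexCount;
    uint32_t indexCount;
    uint32_t vertexStride;  // bytes per vertex
};

ModelHeader makeHeader(uint32_t verts, uint32_t indices, uint32_t stride) {
    ModelHeader h{};
    std::memcpy(h.magic, "MODL", 4);
    h.version      = 1;
    h.vertexCount  = verts;
    h.indexCount   = indices;
    h.vertexStride = stride;
    return h;
}
```

The loader just reads the header, validates the magic and version, and streams the vertex/index blobs straight into buffers.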


L. Spiro
It is amazing how often people try to be unique, and yet they are always trying to make others be like them. - L. Spiro 2011
I spent most of my life learning the courage it takes to go out and get what I want. Now that I have it, I am not sure exactly what it is that I want. - L. Spiro 2013
I went to my local Subway once to find some guy yelling at the staff. When someone finally came to take my order and asked, “May I help you?”, I replied, “Yeah, I’ll have one asshole to go.”
L. Spiro Engine: http://lspiroengine.com
L. Spiro Engine Forums: http://lspiroengine.com/forums

#16 NightCreature83   Crossbones+   -  Reputation: 2754


Posted 07 February 2012 - 08:15 AM

I took so long to reply because I wrote a fully detailed post and had nearly finished it when a friend came over. As we were talking my PC crashed randomly, and it took a full day to bring myself to even try again; it still won’t be as detailed.






It just hides the actual rendering away from the model class. It might be redundant, but I feel quite comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrains with that. From what I see, both cases are basically the same, just with different implementations. Is there something I'm overlooking? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

My implementation involves dynamically combining index buffers and vertex buffers, so that during a normal render vertex buffers A and B are active, and during the creation of a shadow map, only B is active, etc.
Trying to centralize all the different ways in which things can be drawn will just create a mess.
Sure, a mesh class could allow multiple vertex buffers and allow manual setting of different combinations, but:
#1: Since it is a convenience class, that is no more beneficial than just keeping the various index/vertex buffers and setting them manually, as long as your wrappers for index/vertex buffers are also good.
#2: When do you stop adding features to support various new drawing methods and finally just say, “This is just too messy and bloated and in order to get all this flexibility I have made it either hard to use or generalized so much that all render types work but none are particularly fast.”

The reason you would not want to centralize your drawing code will become more apparent when terrain is discussed.


This is horrible when you are submitting this from your model: the model is now responsible for the way it is drawn, which means you can only use a graphics wrapper to render it. You also can no longer do any ordering on these render calls from your model. Say you have a terrain with water on it: you have to render the water last to get the transparency to work properly, so you are forcing a dependency on how these two models are rendered. If they instead hand back a render instance that tells the renderer which pieces they would like to use, the renderer can determine that the transparent items need to be rendered last, and you can submit them to the renderer in any order.
There is a reason why you have your render queue and why a renderer performs sorts on that queue: the renderer knows more about the scene it is about to submit to the actual device than the scene manager or model need to know.

When you come down to it, all the renderer needs to do is set the correct states for rendering: shaders, textures and all that. But the model should not be responsible for making these calls; it should be responsible for telling the renderer how it wants to be rendered, not for setting this on the device.
The renderer is then responsible for going through the renderlists and rendering them correctly.
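A minimal sketch of that idea, with invented names (`RenderInstance`, `RenderQueue` here are illustrative, not any shipped engine's API): models submit lightweight instances in any order, and the queue sorts them so transparent items, like the water, always draw after opaque ones.

```cpp
#include <algorithm>
#include <cassert>
#include <cstdint>
#include <vector>

// A lightweight submission record; drawId stands in for the real
// shader/texture/buffer bindings the renderer would resolve later.
struct RenderInstance {
    bool     transparent;
    float    depth;    // view-space depth, used to order transparent items
    uint32_t drawId;
};

struct RenderQueue {
    std::vector<RenderInstance> items;

    void submit(const RenderInstance& inst) { items.push_back(inst); }

    // Opaque first (front-to-back for early-Z), then transparent
    // back-to-front so blending composes correctly.
    void sortForDraw() {
        std::sort(items.begin(), items.end(),
                  [](const RenderInstance& a, const RenderInstance& b) {
                      if (a.transparent != b.transparent)
                          return !a.transparent;     // opaque before transparent
                      if (a.transparent)
                          return a.depth > b.depth;  // back-to-front
                      return a.depth < b.depth;      // front-to-back
                  });
    }
};
```

Submission order no longer matters: the water can be submitted before the terrain and the sort still places it last.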
Worked on titles: CMR:DiRT2, DiRT 3, DiRT: Showdown, GRID 2, Mad Max

#17 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 07 February 2012 - 06:39 PM


I took so long to reply because I wrote a fully detailed post and had nearly finished it when a friend came over. As we were talking my PC crashed randomly, and it took a full day to bring myself to even try again; it still won’t be as detailed.






It just hides the actual rendering away from the model class. It might be redundant, but I feel quite comfortable with it. It's interesting, however, that you said it's not possible to achieve things like modern terrains with that. From what I see, both cases are basically the same, just with different implementations. Is there something I'm overlooking? From what I understand, you could do the things you described with both my and your implementation. Well, maybe I am wrong?

My implementation involves dynamically combining index buffers and vertex buffers, so that during a normal render vertex buffers A and B are active, and during the creation of a shadow map, only B is active, etc.
Trying to centralize all the different ways in which things can be drawn will just create a mess.
Sure, a mesh class could allow multiple vertex buffers and allow manual setting of different combinations, but:
#1: Since it is a convenience class, that is no more beneficial than just keeping the various index/vertex buffers and setting them manually, as long as your wrappers for index/vertex buffers are also good.
#2: When do you stop adding features to support various new drawing methods and finally just say, “This is just too messy and bloated and in order to get all this flexibility I have made it either hard to use or generalized so much that all render types work but none are particularly fast.”

The reason you would not want to centralize your drawing code will become more apparent when terrain is discussed.


This is horrible when you are submitting this from your model: the model is now responsible for the way it is drawn, which means you can only use a graphics wrapper to render it. You also can no longer do any ordering on these render calls from your model. Say you have a terrain with water on it: you have to render the water last to get the transparency to work properly, so you are forcing a dependency on how these two models are rendered. If they instead hand back a render instance that tells the renderer which pieces they would like to use, the renderer can determine that the transparent items need to be rendered last, and you can submit them to the renderer in any order.
There is a reason why you have your render queue and why a renderer performs sorts on that queue: the renderer knows more about the scene it is about to submit to the actual device than the scene manager or model need to know.

When you come down to it, all the renderer needs to do is set the correct states for rendering: shaders, textures and all that. But the model should not be responsible for making these calls; it should be responsible for telling the renderer how it wants to be rendered, not for setting this on the device.
The renderer is then responsible for going through the renderlists and rendering them correctly.

I completely agree.
But I did not want to confuse the original poster further.

What I said and what you said are not mutually exclusive. My engine has models/terrain/sprites drawing themselves, but with a render queue provided by the graphics library. However, the concept of having each type of object render itself, “in the order specified by the render queue” (a detail I previously omitted), does not change.

And there are even architectural managers above the render queue to perform culling etc., but again these topics were omitted for brevity.


L. Spiro

#18 Juliean   GDNet+   -  Reputation: 2453


Posted 08 February 2012 - 10:58 AM

@YoghurtEmperor:

Ok, I think I see the point. Thinking about it over the past days, I also realized myself that having this mesh class really isn't helpful in the long run. I haven't spent too much time developing it, so I think it pays off to turn around and rewrite most of the code once again. At this point, I would like to completely wrap the whole graphics side into a module or library, offering just basic functionality like vertex/index buffers, etc., as you suggested. I've got some ideas, but I'd like to hear your opinion on them.

- 1. I would encapsulate the whole graphics functionality into a class called e.g. Graphics. This class would hold some "modules" like RenderQueue, Textures, etc. The user would then just create an instance of this class to use my graphics module.
OR
2. Should I rather have all modules separated entirely, so the user would need to create RenderQueue, Textures, etc. on their own?

Obviously 1 would be easier to use, and I would have a lot fewer things to pass around separately. From what I know, APIs like XNA do that too. However, 2 would make it easier for users to substitute e.g. their own render queue, and they would only have things where they are really needed. What would you suggest?

If 1, I've got a few more questions; if not, I'll have to rethink the whole thing:
- 1. I would access wrappers like the RenderQueue through getters, like Graphics.RenderQueue, Graphics.Textures, Graphics.Render, etc., and use their functions: Graphics.Render.SetTarget, etc.
or
2. Should I write top-level functions on the Graphics class, e.g. Graphics.SetRenderTarget?

I tend toward 1, as 2 seems like more or less wasted effort, but I'm not sure.

I've got some more questions, but they depend on the outcome of these. Thanks in advance!
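A minimal sketch of option 1 (all module names here are purely illustrative): a Graphics facade that owns its sub-modules and exposes them through accessors, rather than duplicating every function at the top level.

```cpp
#include <cassert>
#include <string>
#include <vector>

// Toy stand-ins for the real sub-modules a graphics library might own.
struct RenderQueue {
    std::vector<std::string> submitted;
    void submit(const std::string& name) { submitted.push_back(name); }
};

struct TextureManager {
    int loaded = 0;
    void load(const std::string& /*file*/) { ++loaded; }
};

// Option 1: one object to create and pass around; sub-modules are reached
// through getters instead of top-level forwarding functions.
class Graphics {
public:
    RenderQueue&    renderQueue() { return m_queue; }
    TextureManager& textures()    { return m_textures; }
private:
    RenderQueue    m_queue;
    TextureManager m_textures;
};
```

With this shape, adding a new sub-module means adding one member and one getter; nothing else in the facade has to change, which is why the getter style tends to beat per-function forwarding.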

#19 L. Spiro   Crossbones+   -  Reputation: 13392


Posted 09 February 2012 - 06:20 AM

I tried three times to reply to this topic, and thanks to fucking retarded keyboard shortcuts that are enabled when Num Lock is not active, I not only lost all three attempts but somehow also the clipboard backup of my half-finished post.
I have never had so much trouble replying to a topic before, and it is pissing me off.

The short version of all that I had typed:


This shot shows my own organization; I expanded the graphics module to give you more insight into how it could be organized and what it should have/do.
Modules.png

Each project represents a module/library, and a single solution binds them all together. While some people just make a single project and use folders to separate modules, this organization has served me extremely well.

The Fnd folder contains the CFnd class which acts as the interface to the graphics API as far as setting states, such as the viewport, alpha testing, culling, etc.
In my case, it is not an instance-based class; all methods and members are static. Instance-based is handy when you want to make tools with Qt and have to deal with multiple OpenGL contexts, but static-based is otherwise just a tiny bit faster.
The rest (index buffers, render queues, textures, etc.) are instance-based.

It doesn’t make sense to make a module specifically for render queues or textures. Those are classes within the graphics module.
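A sketch of the static, CFnd-style state interface described above, with invented names (`Gfx`, the culling flag, and the return convention are illustrative): all members are static, and redundant state changes are filtered out before they would reach the device.

```cpp
#include <cassert>

// All-static state interface: no instance to pass around, and repeated
// identical state changes are skipped instead of hitting the API.
class Gfx {
public:
    // Returns true if the state actually changed (and the device was touched).
    static bool setCulling(bool enabled) {
        if (enabled == s_culling) return false;  // redundant, skip device call
        s_culling = enabled;
        // ... forward to the underlying graphics API here ...
        return true;
    }
    static bool cullingEnabled() { return s_culling; }
private:
    static bool s_culling;
};
bool Gfx::s_culling = false;
```

The trade-off is exactly the one noted above: a static class can't support multiple contexts (e.g. several OpenGL views in a Qt tool), but it avoids passing an instance everywhere and is marginally faster.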


L. Spiro



