
suggestions for generic optional drawground routine in game engine


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

7 replies to this topic

#1 Norman Barrows   Crossbones+   -  Reputation: 1968


Posted 02 September 2013 - 01:17 PM

well, my adventures in engine design continue.

 

after inverting my testbed engine into boilerplate code and layered libraries, i re-inverted it back into an engine and callback design.

 

now i'm wondering how to best implement a generic ground drawing routine. i have both dynamic mesh and chunk based code developed. i want the routine to be useful for a wide variety of game types, shooters, flight sims, rts games, etc.

 

there will be a few parts to the routine:

1. the method used to draw: dynamic mesh, dynamic quads, static chunks, etc.

2. the heightmap: procedural, bitmap based, etc.

3. texture info: tiled, blended, splatted, etc.

 

it would be nice if drawing method could be independent of heightmap and texture info.

 

drawing method can just use a float heightmap(x,z) API call to separate the heightmap from the drawing routine.

 

but texturing method will possibly affect the mesh drawing method.

 

for tiled textures, a get_ground_tex(x,z) API call can be used to separate the texture data from the drawing routine.
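for illustration, a tiled lookup behind a `get_ground_tex(x,z)`-style call could be as simple as the sketch below (hypothetical: assumes one texture id per 1x1 ground quad, stored in a flat grid):

```c
#include <assert.h>

/* Hypothetical tiled lookup: one texture id per ground quad. */
static int get_ground_tex(const int *tile_map, int map_w, float x, float z)
{
    int tx = (int)x;   /* quad column */
    int tz = (int)z;   /* quad row    */
    return tile_map[tz * map_w + tx];
}
```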

 

for blended, i suppose it might be something like get_ground_texture_set(x,z), which would return all textures, normal maps, etc. for the quad at x,z.
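one way such a return value might look, as a sketch (the struct layout, layer limit, and plain-int texture handles are all assumptions for illustration):

```c
#include <assert.h>
#include <math.h>

#define MAX_BLEND_LAYERS 4

/* Hypothetical return type for a get_ground_texture_set(x,z)-style call:
   everything the renderer needs to blend one quad. */
typedef struct {
    int   diffuse[MAX_BLEND_LAYERS];   /* texture handles per layer */
    int   normal[MAX_BLEND_LAYERS];    /* matching normal maps      */
    float weight[MAX_BLEND_LAYERS];    /* blend weights, sum to 1.0 */
    int   layer_count;
} ground_texture_set;

/* Illustrative stub: a real implementation would look this up in map data. */
static ground_texture_set get_ground_texture_set(float x, float z)
{
    ground_texture_set s = {0};
    (void)x; (void)z;
    s.layer_count = 2;
    s.diffuse[0] = 1; s.weight[0] = 0.75f;   /* e.g. grass */
    s.diffuse[1] = 2; s.weight[1] = 0.25f;   /* e.g. dirt  */
    return s;
}
```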

 

splatting gets a bit more complex though.

 

i suppose that a texturing method needs to be selected first, as that will tend to define how the rest will work.

 

so, blending and splatting? is that pretty much the state of the art, along with mega textures, perhaps?

 


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 



#2 Jason Z   Crossbones+   -  Reputation: 4858


Posted 02 September 2013 - 07:49 PM

Maybe it is me, but I am having a hard time following your question (it's probably me...).  In general, I would treat your ground rendering methods in the same way that you treat any other object in your scene.  There should be a way to encapsulate one rendering operation into an object, and that object should be able to configure the pipeline for whatever technique will be used - regardless of splatting, or megatexture, or whatever, your scene and your rendering code shouldn't care - all that detail should be held in your terrain object.

 

If you do that, then the implementation details are completely abstracted away and isolated to a single class.  So if you wanted to switch to a brand new terrain rendering technique that you saw at SIGGRAPH 2020, then you just code up a new class that does it, drop it in as a replacement to your existing terrain class and you are done!



#3 Norman Barrows   Crossbones+   -  Reputation: 1968


Posted 02 September 2013 - 09:37 PM


There should be a way to encapsulate one rendering operation into an object, and that object should be able to configure the pipeline for whatever technique will be used - regardless of splatting, or megatexture, or whatever, your scene and your rendering code shouldn't care - all that detail should be held in your terrain object.
 

 

then i guess the questions would be:

 

1. anything new in ground texturing techniques besides mega and splat? that's about all i've heard of recently. well, texture arrays perhaps qualify.

 

2. i suppose the mesh drawing technique will have to be dependent on the texturing method

 

it would be nice to be able to separate ground mesh drawing, ground mesh texturing, and heightmap.

 

heightmap can be separated it seems.    chunk, dynamic, whatever - a heightmap function can connect that to a procedural, bitmap based, or whatever heightmap.

 

but i guess drawing the ground mesh and texturing it are tied together.   tiled textures require individual dynamic quads. i think most any other texturing method can just use a big mesh - a chunk or big dynamic mesh.

 

guess i'll only be able to split it into two components, the heightmap, and the ground renderer.

 

well right now i have tile based dynamic quads, large dynamic meshes, and static chunks to work with. which would be better for general ground drawing?  

 

i'm thinking streamed or procedurally generated static chunks might be the best option.


Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#4 Juliean   GDNet+   -  Reputation: 2363


Posted 03 September 2013 - 06:55 AM


There should be a way to encapsulate one rendering operation into an object, and that object should be able to configure the pipeline for whatever technique will be used - regardless of splatting, or megatexture, or whatever, your scene and your rendering code shouldn't care - all that detail should be held in your terrain object.

If you do that, then the implementation details are completely abstracted away and isolated to a single class.

 

I very highly second that. IMHO, this is the key essence to having high functionality while maintaining both extensibility and decoupling. For example, in my engine, on the lowest level, I have a DX9/11 wrapper. On top of that, I have a rendering architecture implementing render queues, stages, etc. On top of that, I have a graphics layer that utilizes higher-level graphics objects like models, text objects, effects, ... . And then I have different classes like Water and Terrain, which use those graphics objects to implement their functionality.

 

As for your questions, I too am having a bit of a hard time understanding what exactly you want. The mesh drawing method will depend on both the texturing method and the kind of mesh you are loading (heightmap, procedural). As far as I'm concerned, you can't really hide this away, since for example (what you didn't mention) you'll need some sort of LOD configuring the rendering settings (what index buffers to use, ...), depending on the source of terrain information. The exact type of rendering will be determined by your shader anyway, so if you support e.g. shader permutations, you can ship one big terrain shader which chooses texturing methods etc. by whatever option the user selects.



#5 Norman Barrows   Crossbones+   -  Reputation: 1968


Posted 03 September 2013 - 09:23 AM

i guess what i'm looking for is a way to decouple generic terrain mesh drawing from game specific heightmap info, and game specific texturing info.

 

with a static chunk, the "heightmap" is predefined in the y values of the vertices.   so a generic chunk drawer, perhaps generic to the point of user defined chunk size, can be pretty much game data independent as far as the heightmap is concerned. it doesn't care about height maps,  it just draws meshes passed to it.

 

with a dynamic quad or large dynamic mesh, a function that returns y altitude for a given x,z location can be used to decouple the heightmap source data structure (procedural, bitmap based, whatever) from the mesh drawing routine. this way you could swap in and out different heightmap systems, and the ground drawing routine wouldn't care. so it would be suitable for both bitmap height mapped levels, and procedurally generated open worlds.    the idea would be to provide a few built in optional heightmap systems, say, bitmap, procedural Perlin, and procedural sinusoidal. or the user could hook up their own through the "float user_heightmap(float x,float z)"   API.

 

but when it comes to de-coupling the game specific texturing from the mesh drawing, i'm not so sure it can be done, or is practical.  

 

so this seems to have evolved into a discussion on how to best couple ground mesh drawing and texturing. 

 

well, lets see...

 

you'd choose some texturing method, lets say splatted mega texture with all the usual "map" channels (normal, displacement, etc).

 

and some mesh drawing method, static chunks perhaps.

 

then you'd combine them to make a terrain drawing routine.

 

game specific heightmap data is implicit in the y values of the vertices of a chunk, so your drawing routine doesn't deal with game specific heightmap info. it just draws meshes.

 

generally speaking, heightmap info can be in any form, and the drawing routine can use it.

 

but game specific texturing info will need to be in a format usable by the texturing method.

 

so i guess it can only be split into two decouple-able parts:   the heightmap, and the actual mesh drawing and texturing.

 

so you'd have

 

[ chunks | dynamic mesh | dynamic quads | etc ] combined with [ mega &| splat &| other effects ] coupled to [ perlin | bitmap | other heightmap ]

 

i suppose the thing to do would be to provide a few different combos of mesh/texture drawing routines for the engine / library. as well as a couple different heightmap systems to choose from.

 

so for me, the next question would be: how is outdoor terrain typically drawn in a shooter?     i use chunks in my open world fps/rpg, but a pure shooter might tend to do things more level oriented.     then again, i suppose a level is just a "chunk of one".   for RTS, probably one big mesh, or chunks, depending on map size.   in fact, that's probably the general rule of thumb for all games:  "one big mesh, or chunks, depending on map size".    flight sims would almost definitely need chunks due to the large size of their "game worlds".

 

static could be used, except where chunks are modifiable. in that case, dynamic could be used, either permanently, or temporarily while the terrain is changing, then copied to a static buffer for future drawing.   temporary dynamic sounds most efficient.

 

so it would seem chunks may be the best general option. and user defined chunk size shouldn't be too hard. the user (gamedev) would specify chunk size and clip range from the camera, and the drawing routine would use the game's chunk and texture data to draw chunks around the camera.

 

sound good?


Edited by Norman Barrows, 03 September 2013 - 10:08 AM.

Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 


#6 Juliean   GDNet+   -  Reputation: 2363


Posted 03 September 2013 - 01:17 PM


a function that returns y altitude for a given x,z location can be used to decouple the heightmap source data structure (procedural, bitmap based, whatever) from the mesh drawing routine.

 

I still fail to see the necessity. If your underlying resource system is well designed, and your loading routines are decoupled, then you don't need any specific function. In my system, I store textures (as well as anything else) in a ResourceCache class with a string as key. The TerrainLoader class gets a reference to that class passed in, and the actual function for loading a terrain gets the terrain texture's name passed in. It just accesses that very texture and calls its "GetPixel(x,y)" method. Where the texture came from, whether heightmap-loaded, procedurally generated, or randomly generated, doesn't matter; it's a texture. It's only used once for loading, though. After that, even if you are modifying the terrain, you'll modify it directly; you don't want to modify the texture and reconstruct the terrain data, that would be inefficient. Is there anything else that tells you the need for a separate abstraction class?
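a C sketch of that loading path, with plain stand-ins for the poster's ResourceCache / GetPixel idea (the `texture` struct, `get_pixel`, and `load_terrain_heights` names are invented for illustration):

```c
#include <assert.h>

/* Stand-in texture: the terrain loader only sees "a texture",
   never where it came from. */
typedef struct {
    int w, h;
    float *pixels;   /* one height value per texel */
} texture;

static float get_pixel(const texture *t, int x, int y)
{
    return t->pixels[y * t->w + x];
}

/* Loader: builds a vertex height array from any texture, whether it was
   file-loaded or procedurally generated. */
static void load_terrain_heights(const texture *t, float *out_heights)
{
    for (int y = 0; y < t->h; y++)
        for (int x = 0; x < t->w; x++)
            out_heights[y * t->w + x] = get_pixel(t, x, y);
}
```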

 


the idea would be to provide a few built in optional heightmap systems, say, bitmap, procedural Perlin, and procedural sinusoidal. or the user could hook up their own through the "float user_heightmap(float x,float z)" API.

 

As said, this shouldn't matter at that point. Your terrain chunk takes a texture for construction. Additionally, you can build whatever system you want to generate the height info into a texture. This makes it even easier for users to choose what they want, and most importantly, it removes all coupling. The terrain can be used without your generation methods, and the generation methods can be used for any other terrain system. That way, the terrain system is completely decoupled from its data initialization, while maintaining all the advantages you listed before.



#7 Jason Z   Crossbones+   -  Reputation: 4858


Posted 03 September 2013 - 06:59 PM

 


There should be a way to encapsulate one rendering operation into an object, and that object should be able to configure the pipeline for whatever technique will be used - regardless of splatting, or megatexture, or whatever, your scene and your rendering code shouldn't care - all that detail should be held in your terrain object.

If you do that, then the implementation details are completely abstracted away and isolated to a single class.

 

I very highly second that. IMHO, this is the key essence to having high functionality while maintaining both extensibility and decoupling. For example, in my engine, on the lowest level, I have a DX9/11 wrapper. On top of that, I have a rendering architecture implementing render queues, stages, etc. On top of that, I have a graphics layer that utilizes higher-level graphics objects like models, text objects, effects, ... . And then I have different classes like Water and Terrain, which use those graphics objects to implement their functionality.

 

As for your questions, I too am having a bit of a hard time understanding what exactly you want. The mesh drawing method will depend on both the texturing method and the kind of mesh you are loading (heightmap, procedural). As far as I'm concerned, you can't really hide this away, since for example (what you didn't mention) you'll need some sort of LOD configuring the rendering settings (what index buffers to use, ...), depending on the source of terrain information. The exact type of rendering will be determined by your shader anyway, so if you support e.g. shader permutations, you can ship one big terrain shader which chooses texturing methods etc. by whatever option the user selects.

 

 

It sounds to me like your design is what I'm talking about.  I don't mean that an entire rendering framework will fit into each individual object, but rather that the code that your objects use to do the rendering (using your rendering framework) should be encapsulated and interchangeable.  I wouldn't do it any other way...  If you would like to check out Hieroglyph 3, there are plenty of examples of this at work throughout the code base.



#8 Norman Barrows   Crossbones+   -  Reputation: 1968


Posted 07 September 2013 - 08:47 PM

well, i wasn't really looking to decouple loading.   once stuff is loaded,  i was trying to come up with a way that a ground drawing routine would only need to know y given x,z for procedural or lookup mesh displacement, and texture for a given quad. that would allow the user to use any data structures they wanted. 

 

but i've come to the conclusion that it's probably both inefficient and not easily done.

 

so it looks like a chunk based system is the way to go. for level type games, you can just have one big chunk. for big levels and open worlds, you have multiple chunks.

 

and then you simply implement however killer a texturing method you want.

 

user defined chunk size should be doable. so they still have flexibility there.

 

but the texturing method will dictate the texture data required (splat vs tiled vs mega, etc).


Edited by Norman Barrows, 07 September 2013 - 08:49 PM.

Norm Barrows

Rockland Software Productions

"Building PC games since 1988"

 

rocklandsoftware.net

 




