
divide into subsystems


11 replies to this topic

#1 fir   Members   -  Reputation: -456


Posted 20 March 2014 - 04:47 AM

Do you divide your game project/sources into subsystems?

I found it useful to divide my code into some subsystems,

mainly a window subsystem and a graphics subsystem

(and a game subsystem); those would be the three most logical.

But some things I don't know where to put (like camera code,

or math routines).

 

 

 



#2 Ashaman73   Crossbones+   -  Reputation: 8001


Posted 20 March 2014 - 06:19 AM

Here is a list of the modules/libs/subsystems in my game:

1. base (math routines, utilities, factories, state machines, dispatcher)

2. game/ai (game entities, pathfinding, scanning, steering)

3. script (script engines for UI, behavior, AI)

4. physics engine

5. network

6. rendering

7. audio

8. tools
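In lists like this, the base module's "dispatcher" usually means an event dispatcher: subsystems subscribe to named events and post them without referencing each other directly. A minimal sketch of that idea (my own illustration, not Ashaman73's actual code; all names invented):

```cpp
#include <cassert>
#include <functional>
#include <string>
#include <unordered_map>
#include <vector>

// Minimal event dispatcher: subsystems subscribe to named events and
// post events without knowing about each other.
class Dispatcher {
public:
    using Handler = std::function<void(const std::string&)>;

    void subscribe(const std::string& event, Handler h) {
        handlers_[event].push_back(std::move(h));
    }

    void dispatch(const std::string& event, const std::string& payload) {
        auto it = handlers_.find(event);
        if (it == handlers_.end()) return;  // nobody listening is fine
        for (auto& h : it->second) h(payload);
    }

private:
    std::unordered_map<std::string, std::vector<Handler>> handlers_;
};
```

The point of the pattern is decoupling: the AI module can react to "entity_died" without the physics or game-logic code ever including an AI header.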



#3 haegarr   Crossbones+   -  Reputation: 4602


Posted 20 March 2014 - 08:54 AM

I don't assign math routines to a sub-system; like many other things, they form the foundation framework used by the sub-systems.

 

The explicit sub-systems I use are

* Input (to abstract input raw handling, HID, ...)

* Video (to abstract display related stuff: monitor + graphics card)

* Audio (to abstract audio hardware related stuff)

* Graphic (to abstract graphic rendering; implemented in 3 layers)

* Sound (to abstract sound rendering; implemented in 3 layers)

* Storage (for VFS and the like)

* Netz (for networking)

 

There are many more sub-systems that handle aspects of the CES; I call this kind of sub-system "services", e.g. SpatialServices, which manages placement, proximity, collision detection, and similar things. Following this, NPC control (AI, steering, …), physics, and similar things are implemented as services, too.
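The thread never shows haegarr's Services in code; one way the idea is commonly realized is a typed registry that owns one instance per service type. A minimal sketch (all names besides SpatialServices are invented, and this toy SpatialServices only answers 1-D proximity queries):

```cpp
#include <cassert>
#include <memory>
#include <typeindex>
#include <unordered_map>

// Common base class, so the registry can own all services uniformly.
struct Service { virtual ~Service() = default; };

// Registry holding one instance per concrete service type.
class ServiceRegistry {
public:
    template <class T>
    void install(std::unique_ptr<T> s) {
        services_[std::type_index(typeid(T))] = std::move(s);
    }

    template <class T>
    T* get() {
        auto it = services_.find(std::type_index(typeid(T)));
        return it == services_.end() ? nullptr
                                     : static_cast<T*>(it->second.get());
    }

private:
    std::unordered_map<std::type_index, std::unique_ptr<Service>> services_;
};

// Toy stand-in for the SpatialServices mentioned above.
struct SpatialServices : Service {
    bool isNear(float a, float b, float radius) const {
        float d = a - b;
        return (d < 0 ? -d : d) < radius;
    }
};
```

Client code then asks the registry for `SpatialServices` by type instead of holding global pointers, which keeps the coupling between services one-directional.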



#4 Norman Barrows   Crossbones+   -  Reputation: 2357


Posted 20 March 2014 - 03:20 PM

I used to go crazy with modules: one for graphics, one for mesh and texture databases, one for the animation engine, one for audio, etc.

 

now i keep it simple: one for the audio library (Zaudio), one for the "graphics and everything else" library (Z3d), and one for game-specific code (Caveman, Airships!, SIMSpace, etc.).

 

the graphics and everything else library is graphics, math, file i/o, timers, dice - pretty much all the generic low-level stuff for games, except audio. when i implemented the audio library i made it separate, as that seemed the cleaner way to do things. good module separation and all that jazz


Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

 

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

 

 


#5 BGB   Crossbones+   -  Reputation: 1554


Posted 20 March 2014 - 06:56 PM

in my case, my engine is basically divided up somewhat...

 

renderer stuff:

    "lbxgl": high-level renderer, ex: scene/model/animation/materials/...

    "pdgl": low-level renderer, ex: texture loading, OS interfaces (WGL, GLX, ...), various utility stuff.

common:

    "btgecm": lots of stuff needed by both client and server, ex: voxel terrain, map loading, some file-format code, ...

    "bgbbtjpg": graphics library, deals with image loading/saving, video codecs / AVI / ..., compressed-texture formats, ...

    "bgbmid1": audio library, deals with mixing, MIDI, text-to-speech, several audio codecs, ...

client:

    "btgecl": deals with "client side stuff", mostly sending/receiving messages from the server, updating the scene-graph, ...

server:

    "btgesv": server-side stuff (game logic, simple "Quake-like" physics, mobs / AI, ...).

    "libbsde": rigid-body physics simulation stuff, largely not used at present in favor of simpler physics.

script / infrastructure (back-end):

    "bgbgc": garbage collector

    "bgbdy": dynamic types, object-system stuff, VFS (virtual filesystem), ...

    "bgbsvm": script VM

    ...

 

generally I split a library up when it gets sufficiently big that it is either unwieldy or takes a long time to rebuild.

in the past, that threshold has generally been somewhere around 50 kLOC.

 

this is not always the case; for example, my high-level renderer ("lbxgl") is ~111 kLOC and is not split up.

adding up a few renderer-related libraries works out to around 303 kLOC.

 

current project line-count: 879 kLOC.

 

granted, this project has kind of been ongoing for a while...



#6 fir   Members   -  Reputation: -456


Posted 21 March 2014 - 01:13 AM

And how do you (asking all answerers) divide into these subsystems?

 

Do you make a DLL, group into folders, or is it only by some prefixes?

 

I previously used prefixes but now I'm changing to folders,

though I'm not yet sure I'm quite happy with that. (I've always had

real problems with choosing names; my code itself is all

clean and good, but the module (file) and subsystem naming and

grouping makes me angry.)



#7 fir   Members   -  Reputation: -456


Posted 21 March 2014 - 01:15 AM

Ashaman73, on 20 March 2014 - 06:19 AM, said:
Here is a list of the modules/libs/subsystems in my game: 1. base (math routines, utilities, factories, state machines, dispatcher); 2. game/ai (game entities, pathfinding, scanning, steering); 3. script; 4. physics engine; 5. network; 6. rendering; 7. audio; 8. tools.

 

Could you maybe tell me what you mean by "dispatcher"?

Also, what do you mean by "scanning"?



#8 fir   Members   -  Reputation: -456


Posted 21 March 2014 - 01:17 AM

haegarr, on 20 March 2014 - 08:54 AM, said:
The explicit sub-systems I use are Input, Video, Audio, Graphic (implemented in 3 layers), Sound (implemented in 3 layers), Storage, and Netz; many more sub-systems ("services") handle aspects of the CES.

 

Could you explain how you make such layers?



#9 fir   Members   -  Reputation: -456


Posted 21 March 2014 - 01:23 AM

BGB, on 20 March 2014 - 06:56 PM, said:
in my case, my engine is basically divided up somewhat... current project line-count: 879 kLOC. granted, this project has kind of been ongoing for a while...

 

Do you need it that wide and heavy? (800k lines is big; all my own personal code from the last 5 years, framework plus prototypes, is only 100k lines.)



#10 Hodgman   Moderators   -  Reputation: 31939


Posted 21 March 2014 - 01:40 AM

Here's the solution explorer for a project I'm involved with (blurred due to NDA) - each folder icon is a "module". Cross-module dependencies are tracked and enforced -- if I committed code that made module A depend on B while B already depends on A, it would be detected and I'd be told off for being an idiot and breaking the architecture.

[Screenshot: solution explorer showing the project's module folders, blurred due to NDA]

Inside many of those folders there are internal layers -- e.g. the public cross-platform interface and several platform-dependent implementations.
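The enforcement Hodgman describes boils down to cycle detection over each module's declared dependencies. A minimal sketch of such a check (the `DepGraph` type and module names are invented for illustration; his project's actual tooling is not shown):

```cpp
#include <cassert>
#include <map>
#include <set>
#include <string>
#include <vector>

// Each module maps to the list of modules it depends on.
using DepGraph = std::map<std::string, std::vector<std::string>>;

// Depth-first search: a node revisited while still on the current
// path ("visiting") means a back-edge, i.e. a dependency cycle.
static bool hasCycleFrom(const DepGraph& g, const std::string& node,
                         std::set<std::string>& visiting,
                         std::set<std::string>& done) {
    if (done.count(node)) return false;           // already proven acyclic
    if (!visiting.insert(node).second) return true;  // back-edge => cycle
    auto it = g.find(node);
    if (it != g.end())
        for (const auto& dep : it->second)
            if (hasCycleFrom(g, dep, visiting, done)) return true;
    visiting.erase(node);
    done.insert(node);
    return false;
}

bool hasCycle(const DepGraph& g) {
    std::set<std::string> visiting, done;
    for (const auto& kv : g)
        if (hasCycleFrom(g, kv.first, visiting, done)) return true;
    return false;
}
```

A commit hook or build step can run this over the declared dependency lists and reject the commit (or "tell the author off") when `hasCycle` returns true.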



#11 haegarr   Crossbones+   -  Reputation: 4602


Posted 21 March 2014 - 02:51 AM

fir, on 21 March 2014, said:
Could you explain how you make such layers?

Layered software is a common approach in which higher layers use lower layers to do their job; a layer usually provides support for the layers above it, but without knowing anything about them. In the case of graphic rendering, the lowest layer is the 3rd-party graphics API: OpenGL v4.x or v3.x, OpenGL ES v3.x or v2.x, D3D v11 or v10, whatever.

 

The next higher layer, the lowest with respect to my own implementation, is an abstraction of said graphics API, called GraphicDevice in my case. A GraphicDevice abstracts the API and gives it a unified interface. This interface is data driven: a given list of graphic rendering jobs is processed, where each job consists of state-setting commands and a draw call (you have probably/hopefully seen the threads that deal with such a rendering approach).
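A data-driven job interface of the kind described here might look like the sketch below. All structure and member names are invented; haegarr's actual GraphicDevice interface is not shown in the thread.

```cpp
#include <cassert>
#include <cstdint>
#include <vector>

// One state-setting command, e.g. "bind texture slot 0 to handle 7",
// encoded as plain ids so the job list stays pure data.
struct StateCommand { uint32_t stateId; uint32_t value; };

// One rendering job: state commands followed by a single draw call.
struct RenderJob {
    std::vector<StateCommand> states;
    uint32_t meshHandle = 0;    // index into a resource service
    uint32_t instanceCount = 1;
};

// The device just walks the queue; a real implementation would
// translate each command and draw into OpenGL/D3D calls.
class GraphicDevice {
public:
    int submit(const std::vector<RenderJob>& jobs) {
        int drawCalls = 0;
        for (const auto& job : jobs) {
            for (const auto& cmd : job.states) applyState(cmd);
            ++drawCalls;  // exactly one draw call per job
        }
        return drawCalls;
    }
private:
    void applyState(const StateCommand&) { /* set pipeline state here */ }
};
```

Because the jobs are plain data, the layer above can build, sort, and cull them without touching the graphics API at all.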

 

The next higher level obviously generates such rendering jobs. It does so by processing the scene, determining which objects need to be rendered, requesting the necessary parameter sets, generating the mentioned rendering jobs, and pushing them into the current job queue. This layer is mainly given in the form of a graphic rendering pipeline (i.e. the place where the distinction between forward rendering, deferred rendering, and such is implemented; I hope some day it will be programmable by a node system, but for now it is modularized but hard-coded :) ).

On the other hand, this layer of Graphic is coupled to some Services instances. A specific Services is GraphicServices, where abstracted graphic rendering resources like rendering targets, "code units" (actually shader script snippets), textures, and similar things are managed. The rendering jobs often contain simple indices that refer to an abstraction stored within the GraphicServices; the GraphicDevice is then able to get information about the abstracted resources and deal with them as needed for the underlying 3rd-party graphics API. Other Services used by the graphic rendering pipeline provide the placements or materials to be used.

 

One may say that above the graphics pipeline layer there is a producer layer. Its job is to generate resources like meshes for freshly spawned particles or the skinning of skeletons. But that does not really fit the pattern, because such a layer would not need the next lower one (the graphic rendering pipeline) to do its job. Moreover, from an optimization point of view, generating meshes for objects that are not visible is a waste of time, and culling is done in the layer below, so to speak. So producing does not really get its own layer in this sense.

 

 

From an implementation point of view, layers are collections of instances that together provide an API for a specific, well-defined task. A layer often uses another layer, but this is (mostly) not visible to its own clients. The higher a layer, the higher the level at which it deals with a problem; e.g. the graphic rendering pipeline layer looks at the scene and picks what to render, but it isn't interested in details like sorting the jobs by material, script, or whatever.


Edited by haegarr, 21 March 2014 - 03:29 AM.


#12 BGB   Crossbones+   -  Reputation: 1554


Posted 21 March 2014 - 03:20 AM

 

fir, on 21 March 2014 - 01:23 AM, said:
Do you need it that wide and heavy? (800k lines is big; all my own personal code from the last 5 years, framework plus prototypes, is only 100k lines.)

 

 

it is big mostly because it does a lot of stuff.

 

often, stuff seems simple until one needs code to do it, and more code to do it reasonably efficiently (scaling things up to more realistic world sizes and workloads, ...).

the code then has a way of getting bigger.

 

sometimes things also expand when they need to be generalized: what could otherwise have been a single big function-of-doom gets broken up into smaller general-purpose sub-functions, potentially resulting in a higher overall line count than a one-off function would have had, ...

 

periodically I prune things down to try to keep things from getting too far out of control though.

also, I do a lot of stuff myself without much use of 3rd-party libraries (I am not really much of a fan of having lots of external dependencies).

 

some things can also get a little hairy though.

video stuff, texture compression, getting stuff packed into large combined VBOs to avoid killing performance with endless small draw calls, making a script interpreter perform acceptably (and do useful levels of stuff), ... it all takes code.
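As an illustration of the combined-VBO point: instead of one buffer (and one bind) per small mesh, many meshes can be appended into a single shared vertex array, with each mesh remembering only its offset. A minimal sketch (types and names invented, not BGB's code):

```cpp
#include <cassert>
#include <cstddef>
#include <vector>

// A small mesh: xyz triples in one flat array.
struct Mesh { std::vector<float> vertices; };

// Where a mesh ended up inside the shared buffer.
struct Placement { std::size_t firstVertex; std::size_t vertexCount; };

// Append a mesh into one big shared vertex array and record its
// offset, so many meshes can live in a single VBO and be drawn
// without rebinding buffers between draw calls.
Placement packInto(std::vector<float>& sharedVbo, const Mesh& m) {
    Placement p{ sharedVbo.size() / 3, m.vertices.size() / 3 };
    sharedVbo.insert(sharedVbo.end(), m.vertices.begin(), m.vertices.end());
    return p;
}
```

The renderer would upload `sharedVbo` once and issue draws using each mesh's `firstVertex`/`vertexCount` (e.g. via glDrawArrays offsets), which is where the saving over endless tiny per-mesh buffers comes from.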

 

nothing in particular is huge; it's mostly just lots of little things.

 

 

in comparison to a lot of other codebases, for what it does, it doesn't really seem to be doing all that terribly though.


Edited by BGB, 21 March 2014 - 11:22 AM.




