Engine Planning!


Hey, I'm trying to wrap my head around everything I need to have. I wouldn't even ATTEMPT to list everything, or to ask for a list of everything I need to consider in creating an efficient and good-looking engine, but I wonder if anyone has some tips for planning out what kind of things I need to consider?

Like, for instance, I'm trying to get my head around these vertex arrays. Apparently it's more efficient to have everything in one huge vertex array, and have every entity store its own range of indices into that huge array. Along with the vertex array you need the texture array, so you concatenate all the textures in the scene into one big texture and add offsets to the texture coordinates accordingly. ...but man, that's one HUUGE array! Especially if there's animation involved, because you would need to include every frame in the array unless you want to copy it into the space each time.

Anyway, that was sort of just a stream of consciousness, but it's an example of what I'm going through in trying to plan out the engine. I don't want specific help, really; there's too much for me to expect help on everything. I just want some general tips on how to go about planning it. Like, should I start from the ground up, or from the top down? Should I have only one vertex array or VBO for the entire range of vertices, or would that be a bad idea? Does an additional pass over the geometry divide the fill rate in half or not? Just so many questions, and I want to build a strong basic framework so I don't have to change it later.

Oh, and on another note, I have used things like DarkBasic in the past. I used to think I knew, but now I'm wondering: what are the advantages of OpenGL over something like that? DarkBasic is basically a pre-made engine that can load and display models and textures, do collision detection, and so on, all controllable through a simple BASIC-like scripting language. Isn't that the ideal of what an OpenGL engine should be like? Why doesn't everyone use something like DarkBasic? And are my eyes deceiving me, or do consoles push an amazing number of polygons compared to PCs? Thanks...
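To make the "one big vertex array with per-entity index ranges" idea concrete, here is a minimal C++ sketch (hypothetical structures, not from the post): each entity only remembers where its indices start and how many it owns, and meshes are appended into shared arrays with their indices rebased.

    #include <vector>
    #include <cstddef>

    struct Vertex { float x, y, z, u, v; };

    struct SharedGeometry {
        std::vector<Vertex>       vertices;  // every mesh's vertices, concatenated
        std::vector<unsigned int> indices;   // every mesh's indices, concatenated
    };

    struct EntityRange {
        std::size_t firstIndex;  // offset into SharedGeometry::indices
        std::size_t indexCount;  // how many indices this entity draws
    };

    // Append a mesh into the shared arrays and return the range the entity keeps.
    EntityRange addMesh(SharedGeometry& geo,
                        const std::vector<Vertex>& verts,
                        const std::vector<unsigned int>& inds)
    {
        const unsigned int baseVertex = (unsigned int)geo.vertices.size();
        const std::size_t  firstIndex = geo.indices.size();

        geo.vertices.insert(geo.vertices.end(), verts.begin(), verts.end());
        for (std::size_t i = 0; i < inds.size(); ++i)
            geo.indices.push_back(baseVertex + inds[i]);  // rebase into the big array

        EntityRange range = { firstIndex, inds.size() };
        return range;
    }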

Using skeletal animation, you need only one instance of the mesh. Of course, skeletal animation is a major topic by itself...

Anyway, which use of VBOs is faster probably depends more on the implementation, not to mention the drivers. I've heard of major slowdowns in some NVidia drivers when a single vertex buffer of more than 6 MB is used. Last I checked, ATI drivers don't allow more than 32 MB in a single buffer (though that is admittedly a heck of a lot of data). I think the only really big expense in using multiple buffers lies in state changes: binding new buffers. Same as with textures, from what I've read on various forums. Of course, there certainly are optimizations to get around this, such as batch rendering.
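For reference, a minimal sketch of that pattern (hypothetical helper names, written against the core GL 1.5 buffer-object entry points; 2003-era code would use the ARB-suffixed equivalents and probably an extension loader): the buffers are bound once, and each entity is then just one glDrawElements call over its index sub-range.

    #include <GL/gl.h>   // may need glext.h or an extension loader for buffer objects
    #include <cstddef>
    #include <vector>

    struct DrawRange { std::size_t firstIndex; std::size_t indexCount; };

    // Upload interleaved (x,y,z,u,v) floats and a shared index list once at load time.
    void uploadSharedGeometry(GLuint vbo, GLuint ibo,
                              const std::vector<float>& interleaved,
                              const std::vector<unsigned int>& indices)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, interleaved.size() * sizeof(float),
                     &interleaved[0], GL_STATIC_DRAW);

        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);
        glBufferData(GL_ELEMENT_ARRAY_BUFFER, indices.size() * sizeof(unsigned int),
                     &indices[0], GL_STATIC_DRAW);
    }

    // Bind once, then issue one draw call per entity's index range.
    void drawRanges(GLuint vbo, GLuint ibo, const std::vector<DrawRange>& ranges)
    {
        const GLsizei stride = 5 * sizeof(float);   // x,y,z,u,v

        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);

        glEnableClientState(GL_VERTEX_ARRAY);
        glEnableClientState(GL_TEXTURE_COORD_ARRAY);
        glVertexPointer(3, GL_FLOAT, stride, (const void*)0);
        glTexCoordPointer(2, GL_FLOAT, stride, (const void*)(3 * sizeof(float)));

        for (std::size_t i = 0; i < ranges.size(); ++i)
            glDrawElements(GL_TRIANGLES, (GLsizei)ranges[i].indexCount, GL_UNSIGNED_INT,
                           (const void*)(ranges[i].firstIndex * sizeof(unsigned int)));

        glDisableClientState(GL_TEXTURE_COORD_ARRAY);
        glDisableClientState(GL_VERTEX_ARRAY);
    }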

Textures can technically be packed together into individual large textures (depending on the maximum texture size the video card supports), accessing each one with a different set of texture coordinates, but there are some huge limitations to this. For example, you will have problems trying to use the various texture wrap modes. Usually, multiple textures are packed into a larger texture only when they are not going to be wrapped. Fonts are a good example.
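A tiny sketch of the texture-atlas coordinate remapping being described (hypothetical names): each packed image gets a sub-rectangle of the atlas, and the original [0,1] coordinates are scaled and offset into it. Note this only behaves for coordinates that stay inside [0,1]; GL_REPEAT-style wrapping would bleed into neighbouring sub-images, which is the limitation mentioned above.

    // Sub-rectangle of one packed image inside the big atlas texture.
    struct AtlasRegion {
        float uOffset, vOffset;  // lower-left corner of the sub-image, in atlas units
        float uScale,  vScale;   // sub-image size relative to the whole atlas
    };

    inline void remapToAtlas(const AtlasRegion& r, float u, float v,
                             float& atlasU, float& atlasV)
    {
        atlasU = r.uOffset + u * r.uScale;
        atlasV = r.vOffset + v * r.vScale;
    }

    // Example: a 64x64 glyph stored at pixel (128, 0) of a 512x512 font atlas:
    //   AtlasRegion glyph = { 128.0f/512.0f, 0.0f, 64.0f/512.0f, 64.0f/512.0f };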

Rendering geometry a second time may divide fillrate in half if you are rendering it in exactly the same way. This isn't always the case, though, as there are some instances where you may want to render the scene multiple times in different ways. Doom 3, for example, renders the scene first to the z-buffer before rendering it again for each light that must be added. It does this to actually minimize fillrate usage, as it prevents pixels that will be overwritten from being shaded. This is especially important with long shader programs, as you'll want to prevent as much wasteful rendering as possible. Overall, whether to render geometry multiple times or not is purely up to what you're trying to accomplish.
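A rough sketch of that Doom 3-style approach in fixed-function-era OpenGL (drawScene() and setupLight() are assumed helpers, not real API): lay down depth first with colour writes off, then add one pass per light with depth writes off and an equality depth test, so only the finally visible surface is shaded.

    #include <GL/gl.h>

    void drawScene();        // assumed: submits the scene's geometry
    void setupLight(int i);  // assumed: binds light i's parameters / fragment program

    void renderFrame(int lightCount)
    {
        // Pass 1: depth only, cheapest possible state, no colour writes.
        glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
        drawScene();

        // Passes 2..N: one additive pass per light; the depth buffer is already final.
        glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
        glDepthMask(GL_FALSE);
        glDepthFunc(GL_EQUAL);          // shade only the surviving (visible) pixels
        glEnable(GL_BLEND);
        glBlendFunc(GL_ONE, GL_ONE);    // accumulate each light's contribution

        for (int i = 0; i < lightCount; ++i) {
            setupLight(i);
            drawScene();
        }

        glDisable(GL_BLEND);
        glDepthMask(GL_TRUE);
        glDepthFunc(GL_LESS);
    }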

Not everyone uses DarkBASIC because a lot of people like to have more in-depth control over engine design. They also like to be able to upgrade their renderers themselves, rather than wait for the engine to be patched by its developers.

Consoles are more specialized machines than PCs. Everything about them is made to most efficiently render graphics, resulting in an inevitably better machine for games than PCs. Also, console graphics tend to take shortcuts with some things. I noticed in Soul Calibur 2 the environments really didn't have very high polygon counts as compared to the characters, for example.


The Xbox, which is the most powerful console out there, uses a modified GF3, which is something like a GF3 Ti500. How many PC games out there have a minimum requirement of a GF3 Ti500? Most PC games will run on anything above a TNT, though not at maximum detail or smooth framerates. Many features of a GF3-class card cannot be used, in order to cater to owners of less powerful PCs like me.

Some of the first games which will take full advantage of a GF3-class card are Doom 3 and HL2. These are the PC games which will finally be able to match the current batch of console games, and it won't be long before newer games start looking better.

The second reason is that since PCs are more powerful, developers tend to be lazy and don't optimise PC games as much as they do console games, for several reasons:

1) People will upgrade their PCs sooner or later, unlike consoles.

2) If the game is going to be multiplatform, they will optimise the code for the consoles. For example, when Metal Gear was ported to PC, a PSX graphics emulation engine was written so that the PSX version's code could work on PCs. This pushed the minimum requirements up to almost 300 MHz, as opposed to the PlayStation's 33 MHz processor. The requirements would have been much lower had they rewritten the code for the PC, but it did not make financial sense to do so.

3) PC games run at higher resolutions, so fillrate becomes a major bottleneck for PC games. This is set to grow worse as pixel shaders are used more and more in games, and as games like Doom 3 start using multiple passes for each frame. Since TVs only display 640 x 480, consoles will have a huge advantage.

However, there are a number of very well optimised PC games which deserve mention.

1) Top of the list is StarCraft: 8-player games online with up to 1600 units in a game and no lag on a 90 MHz Pentium, and of course the number of sprites on screen at once (mass Carriers, anyone?). This game still amazes me. Handling collision for 1600 units is no easy task; add to that pathfinding, which is usually very expensive, and tons of on-screen sprites, all with no slowdown on a Pentium!

2) For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 or a TNT.

quote:


1) Top of the list is StarCraft: 8-player games online with up to 1600 units in a game and no lag on a 90 MHz Pentium, and of course the number of sprites on screen at once (mass Carriers, anyone?). This game still amazes me. Handling collision for 1600 units is no easy task; add to that pathfinding, which is usually very expensive, and tons of on-screen sprites, all with no slowdown on a Pentium!



I believe most of StarCraft was programmed in assembly; try that one out!

quote:


2) For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 or a TNT.




Carmack is god. Quake 2 runs on ANYTHING!

Oh, and one more thing.

It's never about the polygon counts; it's always the fillrate. A GF2 can theoretically render 20 million polygons per second, but it rarely goes above 2-4 million. The PS2 was supposed to do 20 million, and to this day I don't think there is a game that goes above 4-5 million on the PS2.

Also, while the newer cards can reach 80 million polygons per second in real tests, I don't think any game will use models of more than 5k polygons any time soon, because with shadow volumes and other fancy effects the same model has to be rendered multiple times per frame. On top of that, skeletal animation will also slow things down on high-poly models.

I'm thinking about the same things myself; I want to make a 3D FPS engine. Sorry for the banality.

I haven't written any engine code yet, just small apps exercising this or that particular technique, but I've got some ideas that may be worth consideration.

Take the Serious Sam example: one game, two renderers (D3D and OpenGL), and the same goes for UT2003. Carmack uses solely OpenGL (just like me =)), but that doesn't matter because of the Quake structure: Quake has a renderer, and then everything else. People can make mods because the engine is modular. So I thought that when I start coding an engine I will make it as modular as possible, and a multithreaded app.

For example, one thread does the I/O, another does the physics calculations, a third and a fourth do the video and audio output, and so on.

I don't know if it's right or wrong, but that's the way I plan to go, so I thought I'd share the thought.
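For what it's worth, a minimal sketch of the "renderer behind an interface" part of that idea (hypothetical names, not any particular engine's API): the rest of the code only ever talks to the abstract class, and the OpenGL or D3D back-end is chosen at startup.

    #include <memory>

    struct Mesh;      // engine-side resource handles, defined elsewhere
    struct Texture;

    class IRenderer {
    public:
        virtual ~IRenderer() {}
        virtual bool init(int width, int height, bool fullscreen) = 0;
        virtual void beginFrame() = 0;
        virtual void drawMesh(const Mesh& mesh, const Texture& tex) = 0;
        virtual void endFrame() = 0;
    };

    class GLRenderer  : public IRenderer { /* OpenGL implementation */ };
    class D3DRenderer : public IRenderer { /* Direct3D implementation */ };

    // Picked from a config file or command line, so game code never hard-codes the API.
    std::unique_ptr<IRenderer> createRenderer(bool useOpenGL);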

Quake 2 was modular (yucky).
Quake 3 was Carmack's first OOP engine (look at the results).

Doom 3, I can assure you, is OOP, and probably pretty damn advanced.

Right now I'm designing my engine as a DLL, so it can be reused all over the place (editors, updating the engine without messing with the game, etc.).


quote:
For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 or a TNT.


It's the Quake 1 engine, actually, and it's obvious since the textures are still stored in .wad files, with the rest in .pak files.

In single player, Half-Life ran quite well, due to the VERY restricted size of the maps. Each map was only a tiny part of the whole game and used lots of tricks to reduce the polygon count (e.g. lights floating in the air). In multiplayer it was another story: playing TFC on the Hunted map on my 500 MHz K6-2 with a Voodoo3, I would still get framerates as low as 10 FPS in some areas. The network traffic was also quite a bit worse than Quake 2's.

I will always remember Half-Life as the buggiest game I have ever played, with still at least a dozen important bugs after the 25th patch release (I don't know the exact number ;D). I heard Valve had declared they didn't have time to fix memory leaks, yet they still had time to add in that camera proxy which serves no purpose (since they could simply have created a standard spectator mode) and make that DMC mod (oh jeez).

Just to name a few bugs which I still experience:
- Out of sfx_t
- Loss of sound when alt-tab
- Not being able to alt-tab back
- Numerous 2D menu glitches
- Sounds that never stop
- 2D sprites that don't disappear
- AI bugs in the monsters
- Half-life "crashing" when it doesnt have a wad

Yes, Quake 2 ran on almost everything.




All this stuff is interesting, thanks! I can't really go through everything and comment on it, but just know that I read all that all of you said, and I appreciate the posts in this thread.

In skeletal animation, doesn't each bone have its own matrix, multiplied by its parent's matrix? How do you get all the vertices and coordinates, from everything currently being processed, into one common space, so you can do collision detection and stuff like that?

I have been doing skeletal animation for quite some time now...

Yes, the transforms in a skeleton are propagated through a hierarchy. Think of it like a tree with the root being the pelvis. The pelvis node would have three child nodes: the lower spine, the left thigh, and the right thigh. Those child nodes would have child nodes of their own, and so on. A recursive function fits it rather nicely, and if you're one of those anti-recursion people I'm sure it can be made iterative.

When the model (model = mesh plus corresponding skeleton) is loaded, it is in a 'reference' pose and each skeleton joint and mesh point has its 'original' transform. As the skeleton moves and the model has to be drawn, calculate each joint's current transform using the recursion, and subtract the 'original' transform from it. This transform difference gets stored for each joint.

Next, loop through all the points/normals of the mesh. Each point will have a (possibly hard-coded) number of weights associated with it; e.g. points on the elbow will be weighted 50% to the upper arm and 50% to the lower arm. For each point, multiply its original position by each of the influencing transform differences (the difference calculated in the last paragraph), then by that transform's weight value for that point. Sum these point*transform*weight calculations to get the current point position. If you are doing lighting you must transform the normal also, but of course normals only get affected by rotation.
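A rough sketch of the above in C++ (hypothetical structures, not the poster's code; the "subtract the original transform" step is written here in its usual matrix form, multiplying by the inverse of the reference pose):

    #include <cstddef>
    #include <vector>

    struct Mat4 {
        float m[16];
        Mat4 operator*(const Mat4& rhs) const;   // assumed: matrix multiply
    };
    void transformPoint(const Mat4& m, const float in[3], float out[3]);  // assumed

    struct Joint {
        int  parent;        // -1 for the root (the pelvis)
        Mat4 localPose;     // current animated transform relative to the parent
        Mat4 invReference;  // inverse of the joint's reference pose, precomputed at load
        Mat4 skinning;      // world pose * invReference: the "transform difference"
    };

    struct Weight { int joint; float value; };    // weights for one point sum to 1

    // Walk the hierarchy root-first (parents stored before children) and build
    // each joint's skinning matrix for this frame.
    void updateSkeleton(std::vector<Joint>& joints)
    {
        std::vector<Mat4> world(joints.size());
        for (std::size_t i = 0; i < joints.size(); ++i) {
            world[i] = (joints[i].parent < 0)
                     ? joints[i].localPose
                     : world[joints[i].parent] * joints[i].localPose;
            joints[i].skinning = world[i] * joints[i].invReference;
        }
    }

    // Blend one mesh point: sum of (skinning matrix * reference position) * weight.
    void skinPoint(const std::vector<Joint>& joints,
                   const Weight* weights, int weightCount,
                   const float refPos[3], float outPos[3])
    {
        outPos[0] = outPos[1] = outPos[2] = 0.0f;
        for (int w = 0; w < weightCount; ++w) {
            float p[3];
            transformPoint(joints[weights[w].joint].skinning, refPos, p);
            outPos[0] += p[0] * weights[w].value;
            outPos[1] += p[1] * weights[w].value;
            outPos[2] += p[2] * weights[w].value;
        }
    }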

You can see why people are trying to move soft skinning to the hardware (or writing vertex shaders for it), since there are so many calculations involved. For a non-skinned model, all you do is load its transform and draw the vertex buffer, but for skinning you must transform all the points manually, recopy them to the vertex buffer, then draw the buffer. It helps to code some of it in assembler, possibly using SSE/3DNow!. Also, it might help to keep the rotations as quaternions, for transforming the normals.

I have a REALLY different view of engine programming from many other people here on GameDev.Net.

For one, I see the engine in layers. There is the OS layer, which is the very first layer you should code, and it has nothing to do with graphics.

In the OS (Operating System) layer you should code some of the following:
- Managed memory system (so you don't have to ask Windows for a block of memory each time you need one; just allocate a big one and manage it).
- Resource management system (almost everything an engine has to deal with is a resource. Sounds, textures, meshes: those are all resources. Each resource is different, but they all have things in common: they require RAM, and they have to be loaded/saved. If done correctly, the load/save functions should be plug-in overloadable, so that you can start your engine by loading 3D Studio mesh files, and later on someone can write a plugin that loads Maya files instead).
- Input (yes, the keyboard, mouse and joystick devices).
- Sound (use a wrapper here, OpenAL or FMOD).
- Scripts (I used Lua and I am pretty happy with it. Scripts can be as powerful as you make them; by allowing them access to the core of the engine, you can do pretty amazing things).
- Plugins (plugins that can alter the engine from within, and add functionality to it).
- Window and dialog factory (a sub-system for dealing with the creation and handling of windows and sub-components, like textboxes, listboxes, etc.).
- Netcode (start with Winsock).
- System detection (you always need to implement some sort of system detection, to know if the computer has the required graphic extensions, the minimal amount of RAM, etc., not only to warn the user that the game might not run adequately on his system, but also to adapt the engine to the machine, by using another sub-set of extensions, decreasing graphic quality, and so on).
- Event hooking system (I'm not going to give elaborate examples of this, but an event hooking system is quite a critical piece of an engine: if something generates events, other parts of the engine can "attach" themselves to the event and respond in their own fashion to it; a minimal sketch follows this list).
- Error system (most people leave this to the end. If your engine is well designed, and you have good and stable netcode in place, you can easily implement a remote debugger, so that when you're working in fullscreen you can still send/receive debug messages from another computer. The error system is also very useful in a critical crash, saving important debug data to disk).
- A "screwdriver": a companion program or set of programs that create the basic resources the engine relies upon, like converting 3D files to your own file format, compressing them, archiving all the resources into a single file, etc.

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
As you can see, this is a pretty extensive list, and you won't find a single thing in it that has anything to do with graphics or games.

The game-specific code sits on top of the "OS" you just designed, and is polymorphic enough to handle any kind of game, from Pong to Quake 3, OpenGL or DirectX.

This is how I see engine design: a collection of sub-systems that allows me great freedom and flexibility when actually sitting down and writing the game code.

To sit down and think about whether I'm going to use BSP trees or quadtrees... that's nonsense to me.

Good luck with all your future projects...



quote:
Original post by pentium3id
For one, I see the engine in layers. There is the OS layer, which is the very first layer you should code, and it has nothing to do with graphics. [full post quoted above]


Actually, your design is very similar to mine. We share the same idea of how an engine should work, I guess.




James Simmons
MindEngine Development
http://medev.sourceforge.net

quote:
Original post by pentium3id
For one, I see the engine in layers. There is the OS layer, which is the very first layer you should code, and it has nothing to do with graphics.

I organize my engine in layers, too! I organize it according to complexity, though. For example, the first level has the really basic subsystems like error logging, the memory manager, the settings loader, and the virtual file system. Since every other part of the engine could depend on these, I group them all together. Next comes the OS-specific level, with classes like Graphics, Input, etc. Then (last?) comes the abstract level: classes for a scene graph, for game logic, and so on.

The whole "layering" thing sounds sort of like what I''m planning to do, but then I decided not to do it. =P maybe I will see that I need it once I start; but for now, what I really need to do is start, to see how far I get. I''m just afraid of starting before I know exactly what to do, because I don''t want to leave anything out, or do anything the wrong way, and then have to start over or anything. But if I don''t start at all, that''s even worse than having to start over; so I think I''ll be getting started =). My original idea was using two layers, a "Clean" layer and a "Dirty" layer, but now I''m thinking that''s just sort of stupid, because nothing is really going to be all that clean, unless I decide to use scripting with Lua or Python, which I will attempt to make immaculately intuitive. I guess I will reserve a special, deep, dark layer for DirectInput and possible DirectSound, though =P, I don''t know exactly what it is, but what I do know is that I HATE COM and I HATE DIRECTX. Now that that''s out of my system ! Is there something to fill in for DirectInput like FMod is a good match for replacing DirectSound?

Oh, and jorgander, what you said was very interesting to me :D. I could try to continue in a direct discussion of your information, but that would be beyond the scope of this thread. I''ll just say that originally, I was planning on just having one bone of influence per set of vertices, with no skin movement or whatever; I guess now that I think of it, it would not look so great. You managed to fit a lot of information in your post =) I''ll have to keep rereading it, though; animation is probably going to be my biggest adversary besides fill rate in the building of my engine.

quote:
Original post by Rellik
Is there something to fill in for DirectInput, the way FMOD is a good match for replacing DirectSound?



For sound you might also want to look at OpenAL as an alternative sound system, as it's very close to OpenGL in syntax and so on.
As for input, it's probably best to write (or get) a DirectInput wrapper, plug it in, and forget about it. There isn't really a saner way to do it on Windows, and that layer of abstraction would make it easier to port (if you ever felt the need one day).
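A minimal sketch of what such a wrapper's public face might look like (a hypothetical interface, not a real library): game code polls an abstract input class, and only the Windows implementation ever touches DirectInput.

    // Hypothetical input abstraction: only the DirectInput-backed implementation
    // includes <dinput.h>, so the rest of the engine stays platform-agnostic.
    class IInput {
    public:
        virtual ~IInput() {}
        virtual void poll() = 0;                          // refresh device state once per frame
        virtual bool keyDown(int keyCode) const = 0;      // engine-defined key codes
        virtual void mouseDelta(int& dx, int& dy) const = 0;
        virtual bool mouseButton(int button) const = 0;
    };

    // Windows build:  class DirectInputBackend : public IInput { ... };
    // Other backends (SDL, etc.) would implement the same interface.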

Is a custom memory manager really useful in general, or mainly when there's lots of dynamic content? My old terrain renderer, for example, was constantly creating and deleting terrain segments as one moved through the world.

Personally, I'd use normal 'heap' memory for objects which aren't going to be created or destroyed during the game loop (such as player models, sounds, etc.) and some kind of 'memory pool' system for objects that are; that way I could implement my own system for dealing with the memory and not worry about it becoming fragmented.
So I might have a pool from which I draw missile objects, another for cute furry creatures, and so on.
The reason for different pools comes down to size: if you delete a cute furry creature and then create two missile objects in the space, you might have some free space left over which isn't big enough for anything else to fit in. By drawing the same type from each pool you remove that risk and nicely pack your memory usage.
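A minimal sketch of such a per-type pool (hypothetical and untuned): every slot has the same size, so freeing and reallocating can never fragment the pool, and free slots are tracked with a simple free list.

    #include <cstddef>
    #include <vector>

    // One pool per object type: every slot is sizeof(T), so fragmentation is impossible.
    template <typename T>
    class ObjectPool {
    public:
        explicit ObjectPool(std::size_t capacity)
            : storage(capacity * sizeof(T)), freeSlots(capacity)
        {
            for (std::size_t i = 0; i < capacity; ++i)
                freeSlots[i] = i;                     // every slot starts out free
        }

        void* allocate() {
            if (freeSlots.empty()) return nullptr;    // pool exhausted; caller decides
            std::size_t slot = freeSlots.back();
            freeSlots.pop_back();
            return &storage[slot * sizeof(T)];
        }

        void release(void* p) {
            std::size_t slot =
                std::size_t(static_cast<unsigned char*>(p) - &storage[0]) / sizeof(T);
            freeSlots.push_back(slot);
        }

    private:
        std::vector<unsigned char> storage;           // one contiguous block, allocated once
        std::vector<std::size_t>   freeSlots;         // indices of unused slots
    };

    // Usage sketch (placement new and explicit destructor calls are the caller's job):
    //   ObjectPool<Missile> missiles(256);
    //   Missile* m = new (missiles.allocate()) Missile();
    //   m->~Missile(); missiles.release(m);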

A memory pool is always a good thing, but you shouldn't use several pools; you should use one.

How you then partition that memory pool is another thing altogether.

You can assign an area for objects smaller than 1 KB, another for objects smaller than 10 KB, and so on down the list.

Another excellent thing is that when you handle memory like this you know everything about what is going on, and you can gather statistics: what the average allocated block size is, how many allocations happen per second, etc.

You can then use that data to tweak the assigned areas, so that memory fragmentation becomes rarer and rarer.

You can also create real-time analysers that predict allocations and do that tweaking in real time...



Another thing I forgot to mention before is a file system that supports:
- Archives
- Compression

Archives are files that contain other files; let's call them .BIG files. You can read a .big file's header and map all the files inside it into a virtual folder, so to speak.

The game doesn't see the data folder as it actually is; instead it sees all the files inside the archives, and doesn't see the archives themselves. (This is just a design option; you can revert to normal folder access.)

Seamless compression/decompression is also a huge plus. Imagine having data blocks that always carry a 2-byte header telling the File.Read class whether the next block is compressed or not. This way the class can decide by itself to load the data and decompress it automatically.

For the programmer, this is seamless. He doesn't have to know whether the file is compressed or not, as long as the tools used to create the file wrote the appropriate headers.

Such a file system allows us to have compressed resources inside archives, creating a neatly packed data folder:
Sounds.big, Meshes.big, Textures.big, Scripts.big, etc.
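A loose sketch of reading such a block-compressed archive (a hypothetical format following the 2-byte-header idea above; the decompress() helper and the exact field sizes are assumptions): the reader checks the flag in front of each block, so calling code always gets plain data back and never knows which blocks were packed.

    #include <cstdint>
    #include <cstdio>
    #include <vector>

    // Hypothetical block layout: [uint16 flags][uint32 storedSize][payload bytes...]
    // Bit 0 of flags set means the payload is compressed. Endianness ignored for brevity.

    // Assumed to exist elsewhere in the engine (a zlib wrapper or similar).
    std::vector<std::uint8_t> decompress(const std::vector<std::uint8_t>& packed);

    // Read the next block from an open archive; callers always receive plain data.
    std::vector<std::uint8_t> readBlock(std::FILE* archive)
    {
        std::uint16_t flags = 0;
        std::uint32_t storedSize = 0;
        if (std::fread(&flags, sizeof(flags), 1, archive) != 1) return {};
        if (std::fread(&storedSize, sizeof(storedSize), 1, archive) != 1) return {};

        std::vector<std::uint8_t> data(storedSize);
        if (std::fread(data.data(), 1, data.size(), archive) != data.size()) return {};

        return (flags & 0x1) ? decompress(data) : data;   // transparent decompression
    }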



Just be aware that there is overhead with compression and decompression. If load times are a concern, this may either help or hinder, depending on the bottleneck.

Just saying it's a good option, but not a requirement by any means.

Well, it's not a requirement, of course, but it doesn't hurt if it's there. The compression subsystem can be turned off entirely, and it isn't even invoked if your files don't contain compressed blocks.

If the game is 200 MB compressed then there is little gain; you might as well leave it uncompressed and fill an entire CD.

I imagine you could easily implement an option for file "mounting", in which compressed files are decompressed ahead of time, saving on loading times if the user is "rich" in disk space.

No one sins by adding features...


