

Rellik

Engine Planning!


Recommended Posts

Hey, I'm trying to wrap my head around everything I need to have. I wouldn't even attempt to list everything, or to ask for a list of everything I need to consider in creating an efficient and good-looking engine, but I wonder if anyone has some tips for planning out what kind of things I need to consider?

For instance, I'm trying to get my head around these vertex arrays. Apparently it's more efficient to have everything in one huge vertex array and have every entity store its own range of indices into that array. Along with the vertex array you need the texture array, so you need to concatenate all the textures in the scene into one big texture and offset the texture coordinates accordingly. But man, that's one HUGE array! Especially if there's animation involved, because you would need every frame included in the array unless you want to copy it into the space each time.

Anyway, that was sort of a stream of consciousness, but it's an example of what I'm going through in trying to plan out the engine. I don't want specific help, really; there's too much for me to expect help on everything. I just want some general tips on how to go about planning it. Should I start from the ground up, or from the top down? Should I have only one vertex array or VBO for the entire range of vertices, or would that be a bad idea? Does an additional pass over the geometry divide the fill rate in half? There are just so many questions, and I want to build a strong basic framework so I don't have to change it later.

Oh, and on another note, I have used things like DarkBasic in the past. I used to think I knew, but now I'm wondering: what are the advantages of OpenGL over something like that? DarkBasic is basically a pre-made engine that can load and display models and textures, do collision detection, and so on, all controllable through a simple BASIC-like scripting language. Isn't that the ideal of what an OpenGL engine should be like? Why doesn't everyone use something like DarkBasic? And are my eyes deceiving me, or do consoles push an amazing number of polygons compared to PCs? Thanks...
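A minimal sketch of the one-big-vertex-array idea described above, using fixed-function OpenGL client arrays; the Vertex and Entity structures and the global array names are invented purely for illustration, not a recommended layout:

```cpp
#include <vector>
#include <GL/gl.h>   // on Windows, include <windows.h> before this

// Every mesh's vertices and indices are appended to one shared pair of
// arrays; each entity only remembers which slice of the index array is its.
struct Vertex { float x, y, z; float u, v; };

struct Entity {
    unsigned firstIndex;   // offset into the shared index array
    unsigned indexCount;   // number of indices belonging to this entity
};

std::vector<Vertex> gVertices;   // one vertex array for the whole scene
std::vector<GLuint> gIndices;
std::vector<Entity> gEntities;

void DrawScene()
{
    glEnableClientState(GL_VERTEX_ARRAY);
    glEnableClientState(GL_TEXTURE_COORD_ARRAY);
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), &gVertices[0].x);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex), &gVertices[0].u);

    // One array setup, many small draws: each entity draws only its range.
    for (size_t i = 0; i < gEntities.size(); ++i) {
        const Entity& e = gEntities[i];
        glDrawElements(GL_TRIANGLES, e.indexCount, GL_UNSIGNED_INT,
                       &gIndices[e.firstIndex]);
    }
}
```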

Using skeletal animation, you need only one instance of the mesh. Of course, skeletal animation is a major topic by itself...
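Since skeletal animation came up, here is a bare-bones CPU skinning sketch, purely illustrative (the struct and function names are made up): each vertex keeps its bind-pose position plus bone indices and weights, and its animated position is the weighted sum of the bone transforms applied to it.

```cpp
struct Vec3 { float x, y, z; };
struct Mat4 { float m[16]; };   // column-major 4x4 matrix, OpenGL style

// Transform a point by a matrix, assuming w = 1.
Vec3 Transform(const Mat4& M, const Vec3& p)
{
    Vec3 r;
    r.x = M.m[0]*p.x + M.m[4]*p.y + M.m[8]*p.z  + M.m[12];
    r.y = M.m[1]*p.x + M.m[5]*p.y + M.m[9]*p.z  + M.m[13];
    r.z = M.m[2]*p.x + M.m[6]*p.y + M.m[10]*p.z + M.m[14];
    return r;
}

struct SkinnedVertex {
    Vec3  bindPos;     // position in the bind pose
    int   bone[4];     // indices into the current bone-matrix palette
    float weight[4];   // influence weights, expected to sum to 1
};

// The bone palette is assumed to already combine each bone's current pose
// with the inverse of its bind pose, so only one mesh copy is ever needed.
Vec3 SkinVertex(const SkinnedVertex& v, const Mat4* bonePalette)
{
    Vec3 out = { 0.0f, 0.0f, 0.0f };
    for (int i = 0; i < 4; ++i) {
        Vec3 p = Transform(bonePalette[v.bone[i]], v.bindPos);
        out.x += v.weight[i] * p.x;
        out.y += v.weight[i] * p.y;
        out.z += v.weight[i] * p.z;
    }
    return out;
}
```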

Anyway, it probably depends more on the implementation as to which use of VBOs is faster -- not to mention the drivers. I've heard of major slowdowns in some NVidia drivers when a single vertex buffer of more than 6 MB is used. Last I checked, ATI drivers don't allow more than 32 MB in a single buffer (though that admittedly is a heck of a lot of data). I think the only big expense in using multiple buffers lies in state changes -- binding new buffers. Same as with textures, from what I've read on various forums. Of course, there certainly are optimizations to get around this, such as batch rendering.
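For reference, a minimal sketch of uploading one mesh into its own VBO and drawing it, assuming OpenGL 1.5-style buffer functions (or the equivalent ARB_vertex_buffer_object entry points) are available; binding a different buffer per mesh is exactly the state change mentioned above.

```cpp
// Interleaved position + texture coordinate, as in the earlier sketch.
struct Vertex { float x, y, z; float u, v; };

GLuint CreateMeshVBO(const Vertex* verts, unsigned count)
{
    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    // GL_STATIC_DRAW hints that the data is uploaded once and drawn often.
    glBufferData(GL_ARRAY_BUFFER, count * sizeof(Vertex), verts, GL_STATIC_DRAW);
    return vbo;
}

// Assumes the vertex and texcoord client states are already enabled.
void DrawMesh(GLuint vbo, const GLuint* indices, GLsizei indexCount)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);   // the per-mesh state change
    glVertexPointer(3, GL_FLOAT, sizeof(Vertex), (const void*)0);
    glTexCoordPointer(2, GL_FLOAT, sizeof(Vertex),
                      (const void*)(3 * sizeof(float)));
    glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_INT, indices);
}
```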

Multiple textures can technically be packed into one large texture (up to the maximum texture size the video card supports), with each one accessed through a different range of texture coordinates, but there are some huge limitations to this. For example, one will have problems trying to use the various texture wrap modes. Usually, multiple textures are packed into a larger texture only when the texture is not going to be wrapped. Fonts are a good example.
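A tiny sketch of the texture-coordinate remapping that packing implies (the names are invented for illustration); it also shows why GL_REPEAT-style wrapping breaks, since a coordinate outside the region would land in a neighbouring packed texture rather than wrap within its own.

```cpp
// One packed texture's sub-rectangle inside the big atlas texture.
struct AtlasRegion {
    float u0, v0;   // lower-left corner in atlas space
    float u1, v1;   // upper-right corner in atlas space
};

// Map a coordinate meant for the stand-alone texture (0..1) into the atlas.
void RemapUV(const AtlasRegion& r, float u, float v, float& outU, float& outV)
{
    outU = r.u0 + u * (r.u1 - r.u0);
    outV = r.v0 + v * (r.v1 - r.v0);
    // Anything outside 0..1 in u or v now samples a neighbouring texture,
    // which is why wrap modes are the main limitation of this approach.
}
```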

Rendering geometry a second time may divide fillrate in half if you are rendering it in exactly the same way. This isn't always the case, though, as there are some instances where you may want to render the scene multiple times in different ways. Doom 3, for example, renders the scene to the z-buffer first before rendering it again for each light that must be added. It does this to minimize fillrate usage, as it prevents pixels that would later be overwritten from being shaded at all. This is especially important with long shader programs, as you'll want to prevent as much wasteful rendering as possible. Overall, whether to render geometry multiple times is purely up to what you're trying to accomplish.
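A rough sketch of that depth-pre-pass idea in fixed-function OpenGL terms (this illustrates the technique in general, not Doom 3's actual code; DrawSceneGeometry, DrawSceneGeometryLit, and NumLights are hypothetical helpers):

```cpp
void RenderFrame()
{
    // Pass 1: depth only. Disable colour writes and just fill the z-buffer.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_TRUE);
    glDepthFunc(GL_LESS);
    DrawSceneGeometry();

    // Passes 2..N: one pass per light, added onto the framebuffer. Only
    // pixels that survived the depth pass run the expensive shading.
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_FALSE);           // depth buffer is already correct
    glDepthFunc(GL_EQUAL);           // shade only the visible surface
    glEnable(GL_BLEND);
    glBlendFunc(GL_ONE, GL_ONE);     // additive accumulation of each light
    for (int i = 0; i < NumLights(); ++i)
        DrawSceneGeometryLit(i);

    glDisable(GL_BLEND);
    glDepthMask(GL_TRUE);
}
```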

Not everyone uses DarkBASIC because a lot of people like to have more in-depth control over engine design. They also like to be able to upgrade their renderers themselves, rather than wait for the engine to be patched by its developers.

Consoles are more specialized machines than PCs. Everything about them is built to render graphics as efficiently as possible, resulting in machines that are inevitably better suited for games than PCs. Also, console graphics tend to take shortcuts with some things. I noticed in Soul Calibur 2, for example, that the environments didn't have very high polygon counts compared to the characters.

[edited by - Ostsol on November 30, 2003 11:59:30 PM]

The Xbox, which is the most powerful console out there, uses a modified GF3, roughly a GF3 Ti500. How many PC games out there have a minimum requirement of a GF3 Ti500? Most PC games will run on anything above a TNT, though not at maximum detail or with smooth framerates. Many features of a GF3-class card cannot be used, in order to cater to owners of less powerful PCs like me.

Among the first games that will take full advantage of a GF3-class card are Doom 3 and HL2. These are the PC games that will finally be able to match the current batch of console games, and it won't be long before newer games start looking better.

The second reason is that, since PCs are more powerful, developers tend to be lazy and don't optimise PC games as much as they do console games, for several reasons:

1) People will upgrade their PCs in a matter of time, unlike consoles.

2) If the game is going to be multiplatform, they will optimise the code for the consoles. For example, when Metal Gear was ported to the PC, a PSX graphics emulation engine was written so that the PSX version's code could work on PCs. This pushed the minimum requirements up to almost 300 MHz, as opposed to the PlayStation's 33 MHz processor. The requirements would have been much lower had they rewritten the code for the PC, but it did not make financial sense to do so.

3) PC games run at higher resolutions, so fillrate becomes a major bottleneck for PC games. This is set to grow worse as pixel shaders are used more and more in games, and as games like Doom 3 start using multiple passes per frame. Since TVs only display 640 x 480, consoles will have a huge advantage.

However, there are a number of very well optimised PC games which deserve mention.

1) Top of the list is StarCraft: playing 8-player games online with up to 1600 units in a game and no lag on a 90 MHz Pentium. And of course the number of sprites onscreen at once (mass Carriers, anyone?). This game still amazes me. Handling the collision for 1600 units is no easy task; add to that pathfinding, which is usually very expensive, and tons of onscreen sprites, with no slowdown on a Pentium!

2) For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 and TNT.

quote:


1) Top of the list is StarCraft: playing 8-player games online with up to 1600 units in a game and no lag on a 90 MHz Pentium. And of course the number of sprites onscreen at once (mass Carriers, anyone?). This game still amazes me. Handling the collision for 1600 units is no easy task; add to that pathfinding, which is usually very expensive, and tons of onscreen sprites, with no slowdown on a Pentium!



I believe most of StarCraft was programmed in assembly; try that one out!

quote:


2) For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 and TNT.




Carmack is god. Quake 2 runs on ANYTHING!

Oh, and one more thing.

It's never about the polygon counts; it's always the fillrate. A GF2 can render 20 million polygons per second, but in practice it rarely goes above 2-4 million. The PS2 was supposed to do 20 million, and to this day I don't think there is a PS2 game that goes above 4-5 million.

Also, while the newer cards can reach 80 million polygons per second in real tests, I don't think any game will use models of more than 5k polygons any time soon, because with shadow volumes and other fancy effects the same model has to be rendered multiple times per frame. On top of that, skeletal animation will also slow things down on high-poly models.

I'm thinking about the same things myself: I want to make a 3D FPS engine. Sorry for the banality.

I haven't written any engine code yet; I write small apps with this or that particular technique, but I've got some ideas that may be worth considering.

Take the Serious Sam example: one game, two renderers (D3D and OpenGL); the same goes for UT2003. Carmack uses solely OpenGL (just like me =)), but that doesn't matter because of the Quake structure: Quake has a renderer, and then everything else. People can make mods because the engine is modular. So I thought that when I start coding an engine I will make it as modular as possible, and multithreaded.
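A rough sketch of that kind of modular split (the interface and class names here are invented for illustration): the rest of the engine talks to an abstract renderer, and the OpenGL and D3D back ends each implement it.

```cpp
// API-agnostic interface the rest of the engine codes against.
class IRenderer {
public:
    virtual ~IRenderer() {}
    virtual void BeginFrame() = 0;
    virtual void DrawMesh(int meshId) = 0;
    virtual void EndFrame() = 0;
};

// The OpenGL back end; a D3DRenderer would implement the same interface
// with Direct3D calls, and the game never needs to know which one it got.
class GLRenderer : public IRenderer {
public:
    void BeginFrame()         { /* glClear, set up matrices */ }
    void DrawMesh(int meshId) { /* bind buffers, glDrawElements */ }
    void EndFrame()           { /* SwapBuffers */ }
};
```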

For example, one thread does the I/O, another does the physics calculations, a third and fourth thread do the video and audio output, etc.
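A minimal sketch of that thread split, using std::thread purely for illustration (in 2003 this would have been Win32 threads or pthreads); all synchronisation of shared state between the loops is left out, and whether this split actually helps is exactly the open question.

```cpp
#include <atomic>
#include <thread>

std::atomic<bool> gRunning(true);

void InputLoop()   { while (gRunning) { /* poll devices, queue events   */ } }
void PhysicsLoop() { while (gRunning) { /* step the simulation          */ } }
void AudioLoop()   { while (gRunning) { /* mix and submit sound buffers */ } }

int main()
{
    std::thread input(InputLoop);
    std::thread physics(PhysicsLoop);
    std::thread audio(AudioLoop);

    // Rendering stays on the main thread: GL/D3D contexts are normally
    // bound to the thread that created them.
    for (int frame = 0; frame < 1000 && gRunning; ++frame) {
        /* render one frame; on a quit request set gRunning = false */
    }
    gRunning = false;

    input.join();
    physics.join();
    audio.join();
    return 0;
}
```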

I don't know if it's right or wrong, but that's the way I plan to go, so I thought I'd share the thought.

Quake 2 was modular (yucky).
Quake 3 was Carmack's first OOP engine (look at the results).

Doom 3, I can assure you, is OOP, and probably pretty damn advanced.

Right now, I'm designing my engine in a DLL, so it can be reused all over the place (editors, updating the engine without messing with the game, etc.).
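A minimal sketch of one way to do the DLL split on Windows (the names here are invented, not from the post): the DLL exports a single factory function that returns an abstract interface, so the game, editors, and other tools all work off the same header and the engine binary can be swapped without rebuilding them.

```cpp
// engine.h -- shared between the engine DLL and everything that uses it.
class IEngine {
public:
    virtual ~IEngine() {}
    virtual bool Init(int width, int height) = 0;
    virtual void RunFrame(float dt) = 0;
    virtual void Shutdown() = 0;
};

// The only symbol the DLL exports; all implementation details stay inside.
extern "C" __declspec(dllexport) IEngine* CreateEngine();
```

A tool would then load the DLL with LoadLibrary, look up CreateEngine with GetProcAddress, and work purely through IEngine, which is what makes updating the engine without touching the game possible.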

[edited by - fireking on December 1, 2003 11:52:29 AM]

quote:
For 3D, it has to be Half-Life (Quake 2 engine); it ran smoothly on a 233 MHz machine with a Riva 128 and TNT.


Quake 1 engine, actually, and it's obvious since the textures are still stored in .wad files, with the rest in .pak files.

In single player, Half-Life ran quite well, due to the VERY restricted size of the maps. Each map was only a tiny part of the whole game and used lots of tricks to reduce the polygon count (e.g. lights floating in the air). In multiplayer it was another story. When playing TFC on the Hunted map on my K6-2 500 MHz with a Voodoo3, I would still get framerates as low as 10 FPS in some areas. The network traffic was also quite a bit worse than Quake 2's.

I will always remember Half-Life as the buggiest game I have ever played, with at least a dozen important bugs still present after the 25th patch release (I don't know the exact number ;D). I heard Valve had declared they didn't have time to fix memory leaks, yet they still had time to add that camera proxy which serves no purpose (since they could simply have created a standard spectator mode) and to make that DMC mod (oh jeez).

Just to name a few bugs which I still experience:
- Out of sfx_t
- Loss of sound when alt-tab
- Not being able to alt-tab back
- Numerous 2D menu glitches
- Sounds that never stop
- 2D sprites that don't disappear
- AI bugs in the monsters
- Half-life "crashing" when it doesnt have a wad

Yes, Quake 2 ran on almost everything.



Looking for a serious game project?
www.xgameproject.com
