
"Next Gen" game engine design


Recommended Posts

So a lot of you may own a 360 (I don't, lol) and you may have seen all the videos for the PS3. One thing I notice about these consoles is the amazing amount of detail in the game environments. Take Gears of War for example: the ground may have cracked bricks with moss growing on them, plants and weeds coming out everywhere, even lots of bricks and rocks just lying around (I'm not sure if these are independent objects or part of the level's static geometry). The walls can be broken and smashed up and have vegetation growing over them. The architecture of the buildings can be massive and is extremely detailed, and the levels may contain thousands of interactive objects: cars, bins, smashed wood, and so on.

Now I was just wondering if anyone knows what kinds of techniques these games use to implement all this in the engine. I don't mean fancy particle effects or per-pixel lighting, just the detail. It seems to me like the artists are just making much more complex models and textures. Would they still be using classical methods like octrees and occlusion culling? I know the hardware is so much more powerful and can throw far more polygons around while also doing fancy lighting. What about creation? Do you think these are still made in Max or Maya, and what about the levels?

Now, for a new project I am starting (notice how people start another engine all the time without finishing the old one?), I was thinking of just letting the artists create the whole level geometry in Max/Maya and use a plugin that allows them to apply the materials that will be used in game. They would then import this into a custom level editor where they can place all the dynamic objects that the player can interact with, including enemies, lights, triggers, and so on.

The engine would use normal methods for space partitioning and culling, and with today's hardware it would probably render it all with no problem (yeah, I know I made that sound easy and I am aware of the REAL problems, but for now just work with me, lol). So the point I am making is: is there anything really new here, or are the artists just extremely talented and getting paid really well? I have other questions about animation and rendering methods, but I'll let this discussion run first.
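To make "normal methods for space partitioning and culling" concrete, here is a minimal sketch of per-chunk view-frustum culling. This is purely a hypothetical illustration (Python for brevity; a real engine would do this in C++ over its own scene structures, and all names here are made up): each level chunk keeps an axis-aligned bounding box, and a chunk is skipped if its box lies entirely behind any frustum plane.

```python
# Hypothetical sketch of classical view-frustum culling over level chunks.
# A frustum is a list of (normal, d) planes; a point p is "inside" a
# plane when normal . p + d >= 0.

def aabb_outside_plane(mn, mx, normal, d):
    # Pick the corner of the box furthest along the plane normal;
    # if even that corner is behind the plane, the whole box is outside.
    px = mx[0] if normal[0] >= 0 else mn[0]
    py = mx[1] if normal[1] >= 0 else mn[1]
    pz = mx[2] if normal[2] >= 0 else mn[2]
    return normal[0] * px + normal[1] * py + normal[2] * pz + d < 0

def visible_chunks(chunks, frustum):
    # chunks: list of (name, mn, mx) AABBs; frustum: list of (normal, d).
    out = []
    for name, mn, mx in chunks:
        if not any(aabb_outside_plane(mn, mx, n, d) for n, d in frustum):
            out.append(name)
    return out
```

For example, against a single plane x >= 0, a chunk entirely on the negative side is culled while one straddling the plane is kept.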

You might check out the 3D Engine Design sections of the six ShaderX books that are available. Many of them show how game engines are built. Another good book is 3D Game Engine Design by Dave Eberly.

- Wolfgang

P.S.: this sounds more like a beginner question; maybe it would be better to move it to / ask it in the beginner forum?

I wasn't meaning it as a beginner's question. I do know what goes into engine development; I also have the 3D Game Engine Programming book and have developed a few engines (not all finished, lol).

I just meant it as a discussion of any new techniques people are aware of in next-gen engines.

Guest Anonymous Poster
Destructible environments are nothing new, they've been around for years now.

I can tell you that most _really_ good "next gen" engines use the tricks of old engines.

Basic stuff like good scene management and smart use of BSPs or quadtrees, but also things like a task-based design, so multi-threaded job processing can be done.

Tricks like offline processing of meshes into platform-specific formats, i.e. building vertex buffers in a tool.

But to make it really awesome, it's about giving power to the people who can use it, i.e. designers, artists, and level builders doing most of the work.
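The task-based design mentioned above can be sketched as a tiny job system: independent jobs go into a shared queue and a pool of worker threads drains it. This is a hedged, minimal illustration (Python threads stand in for a real engine's workers; `run_jobs` and the job closures are invented names):

```python
# Hypothetical sketch of a task-based job system: independent jobs are
# pushed to a queue and drained by a pool of worker threads.
import queue
import threading

def run_jobs(jobs, workers=4):
    q = queue.Queue()
    results = []
    lock = threading.Lock()
    for job in jobs:
        q.put(job)

    def worker():
        while True:
            try:
                job = q.get_nowait()
            except queue.Empty:
                return
            r = job()              # e.g. cull a region, skin a mesh, ...
            with lock:
                results.append(r)

    threads = [threading.Thread(target=worker) for _ in range(workers)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A real engine would keep the workers alive across frames and add job dependencies, but the shape is the same.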

I don't think this is a "beginner" question. My thoughts on modern engine design:

1) No BSPs or portals: all geometry is managed by octrees, with occlusion culling. No precalculated visibility.
2) No lightmapping: all lighting is done per-pixel with shadow mapping.
3) Instancing plays a big part in mesh rendering.
4) Aggressive LOD systems using billboard impostors.
5) Game-specific editors are limited to placing meshes and such; all geometry is created in external 3D modelling applications like Max or Maya.
6) General-purpose algorithms are employed where possible and special cases are reduced; surface shading is done via shader fragments, allowing complex variations of effects.
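Point 1 (octree-managed geometry) can be sketched roughly as follows. This is a hypothetical toy (Python, invented names), not any shipping engine's code: objects sink to a fixed depth, and a conservative sphere query walks only nodes whose cubes could intersect the query volume.

```python
# Hypothetical sketch of octree scene management: insertion by position
# and a coarse-then-exact sphere query.

class Octree:
    def __init__(self, center, half, depth=4):
        self.center, self.half, self.depth = center, half, depth
        self.objects = []      # objects stored at this (leaf) node
        self.children = {}     # octant index -> child node, made lazily

    def insert(self, name, pos):
        if self.depth == 0:
            self.objects.append((name, pos))
            return
        oct_i = sum((pos[k] >= self.center[k]) << k for k in range(3))
        if oct_i not in self.children:
            h = self.half / 2
            c = tuple(self.center[k] + (h if pos[k] >= self.center[k] else -h)
                      for k in range(3))
            self.children[oct_i] = Octree(c, h, self.depth - 1)
        self.children[oct_i].insert(name, pos)

    def query_sphere(self, center, radius):
        # Coarse prune: skip nodes whose cube cannot touch the sphere.
        reach = radius + self.half * 3 ** 0.5
        if sum((self.center[k] - center[k]) ** 2 for k in range(3)) > reach * reach:
            return []
        found = [n for n, p in self.objects
                 if sum((p[k] - center[k]) ** 2 for k in range(3)) <= radius * radius]
        for ch in self.children.values():
            found += ch.query_sphere(center, radius)
        return found
```

A production octree would store AABBs rather than points and handle objects straddling child boundaries, but the prune-then-test structure is the essence.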

The biggest difference in 'next gen' games is the content. While Unreal Engine 3 looks fantastic and has nice features from an engineering standpoint, Gears of War is mostly the art. One could use Ogre3D and create visuals as good as that, but that level of art takes a lot of time and talent. The art requirements are the part of 'next gen' that has taken a huge leap in time, cost, and effort.

BSP, lightmapping, and precalculated visibility are alive and well in modern engines.

Shaders are the keyword for next-gen engines. Lots of game companies have started using specialist artists to get the shader code and values right. There are also some optimization differences from old engines: because texture sizes, polygon counts, etc. keep increasing while the content gets bigger and more dynamic, some algorithms are becoming less useful (and I think BSP for rendering is one of them). And of course CPU-GPU(-PPU) usage is becoming more balanced, much like GPGPU.

Yep, it's mostly some really good art, with enough support from the engine to avoid overly constraining the art.

Although there are lots of new tricks, the old ones are still alive and kicking. It might surprise you, for example, to find out that certain "next gen" titles are using static vertex colors for lighting of some geometry.

Guest Anonymous Poster
I think games are starting to make a big move from precalculated to calculated. The textures that satisfy on the latest engines are huge; they used to be relatively small and gave results that looked better than procedural techniques. Now we're moving towards having procedural textures that look as good as those created by artists:
http://www.bit-tech.net/gaming/2006/11/09/Procedural_Textures_Future_Gam/1.html

Artists used to have to create every animation for their 'actors'; now we have physics that calculates some of these, especially deaths with ragdolls.

I think as games move on we'll see more and more dynamic procedural content, from textures to maybe even certain sounds. Perhaps one day we'll be able to generate voices that don't sound like, well, crap. I think models will start to be animated procedurally, and perhaps we'll even see an element of dynamic story generation (though not too much; that would make the whole exercise pointless, IMHO).
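As a concrete (and heavily simplified) taste of procedural texturing, here is classic 2D value noise: smooth interpolation between pseudo-random values on an integer lattice. All names here are illustrative; real procedural texture systems layer many octaves of functions like this one.

```python
# Hypothetical sketch of a tiny procedural texture primitive: 2D value
# noise, i.e. smooth interpolation of a grid of pseudo-random values.
import math

def hash01(x, y, seed=0):
    # Deterministic pseudo-random value in [0, 1) per lattice point.
    n = (x * 374761393 + y * 668265263 + seed * 1013904223) & 0xFFFFFFFF
    n = ((n ^ (n >> 13)) * 1274126177) & 0xFFFFFFFF
    return (n & 0xFFFF) / 65536.0

def smoothstep(t):
    # Hermite fade curve; removes grid artifacts from plain lerp.
    return t * t * (3 - 2 * t)

def value_noise(u, v, seed=0):
    x0, y0 = math.floor(u), math.floor(v)
    fu, fv = smoothstep(u - x0), smoothstep(v - y0)
    a = hash01(x0, y0, seed);     b = hash01(x0 + 1, y0, seed)
    c = hash01(x0, y0 + 1, seed); d = hash01(x0 + 1, y0 + 1, seed)
    top = a + (b - a) * fu
    bot = c + (d - c) * fu
    return top + (bot - top) * fv
```

Evaluated per pixel (on GPU or CPU), this produces the same image every time for a given seed, which is exactly the appeal: the "texture" is a few bytes of code and parameters instead of megabytes of bitmap.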

Quote:
Original post by DrEvil
The biggest difference in 'next gen' games is the content.
BSP, lightmapping, and precalculated visibility are alive and well in modern engines.


I think that while the bar for content is always being raised, the original question in this thread was about engine design, so this comment is largely irrelevant. The artistic quality of content has nothing to do with engine design.

BSP (as I understand it) is not a particularly useful algorithm anymore with complex and dynamic environments, and neither is lightmapping, so I question the relevance of these old methods to modern and future game engine design.

By modern I mean state-of-the-art, not revamped old engines. I don't pretend to be an expert, but even a cursory look at the literature and various upcoming titles shows a very noticeable decline in the use of precalculated visibility, lighting, and so on.

Dynamic lighting and geometry are the key in modern engines. Every object should be able to be illuminated in one way, all collision should act the same without differentiating "models" from "static geometry", and special cases in rendering should be reduced.

Also, the "megatexture" concept is very interesting and certainly ought to be explored by engine developers, even if it is ultimately rejected in favour of procedural texturing (which still has a very long way to go if it is ever to become reasonable as a general-purpose solution).

I certainly think procedural generation of natural features like terrain, vegetation, weather, etc. is becoming a real possibility, and is certainly the way things will go in the future; natural phenomena are too complex to be hand-modelled entirely.
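The procedural-terrain idea can be illustrated with the oldest trick in the book, midpoint displacement: repeatedly subdivide a line (or grid) and jitter each new midpoint by a shrinking random amount. A minimal 1D sketch (illustrative names, fixed seed for reproducibility):

```python
# Hypothetical sketch of fractal terrain via 1D midpoint displacement.
# Each pass doubles the resolution and halves (by 'roughness') the
# random amplitude, giving self-similar detail at every scale.
import random

def midpoint_displacement(levels, roughness=0.5, seed=42):
    rng = random.Random(seed)
    heights = [0.0, 0.0]   # flat endpoints to start
    amp = 1.0
    for _ in range(levels):
        out = []
        for a, b in zip(heights, heights[1:]):
            out.append(a)
            out.append((a + b) / 2 + rng.uniform(-amp, amp))
        out.append(heights[-1])
        heights = out
        amp *= roughness
    return heights
```

The 2D version (diamond-square) works the same way on a grid; modern systems prefer noise functions, but the principle of cheap, deterministic detail from a seed is identical.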

Quote:
1) No BSPs or portals: all geometry is managed by octrees, with occlusion culling. No precalculated visibility.
2) No lightmapping: all lighting is done per-pixel with shadow mapping.
3) Instancing plays a big part in mesh rendering.
4) Aggressive LOD systems using billboard impostors.
5) Game-specific editors are limited to placing meshes and such; all geometry is created in external 3D modelling applications like Max or Maya.
6) General-purpose algorithms are employed where possible and special cases are reduced; surface shading is done via shader fragments, allowing complex variations of effects.

This all depends on the underlying hardware platform and the requirements of the game. Think of a game like DOOM 3: why not use a portal system? Regarding #2, think of a game that takes place at night... Midnight Club comes to mind :-). If you do not want to handle a huge number of light sources, each with a cached 256x256 shadow map per "spotlight", you might find lightmapping attractive :-). On #6: shader fragments are rather retro... you want to use shaders with conditionals for this; it is faster on decent hardware with all those caches. Additionally, fragment stitching is not only a debugging and maintenance nightmare but also does not really offer all the flexibility a shader / graphics programmer wants.

Since the beginning of game programming there has been no engine that is useful for every kind of AAA game. Engines usually take shortcuts to make a specific type of game run fast.
So it depends :-) ...

I believe it is better to ask the original question in the following way:
- my target platforms are the XBOX 360 / PS3
- my game is a racing game that mainly happens at night
- I want a high level of global illumination <-> dynamic lighting with a lower detail level
- my cars should be influenced by every shadow from any light source / my cars are so fast that I do not care about the shadow stuff, I want the game to run at a constant 60 fps
- etc. etc.
- etc. etc.

Building on the previous post, note that many engines are domain-specific. Adapting them to things they weren't designed for often requires significant work under the hood to get things up to snuff.

Quote:
Original post by JinJo
I was thinking it seems to me like the artists are just making much more complex models and textures.
They are, but there's a lot of smoke and mirrors going on, because while they model e.g. character models that have millions of polygons (in ZBrush, Mudbox, and other packages) these are used only to bake normal maps, ambient occlusion maps, etc. that are used on in-game characters that are on the order of a few thousand polygons (say, around 3-6K triangles). The same goes for the backgrounds: they're modelled in high-res, but the detail is captured into normal maps and mapped onto low-res geometry in the game.

In other words, the actual in-game polycount is higher than in previous generation games, but really not as much higher as you're led to believe.
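The normal-map side of this "smoke and mirrors" boils down to storing a tangent-space direction per texel. Here is a hedged sketch of the common n*0.5+0.5 packing into RGB8 (the function names are made up; the encoding itself is the widely used convention):

```python
# Hypothetical sketch: pack a unit tangent-space normal into an RGB8
# texel and unpack it again, as a normal-map bake/shader pair would.

def encode_normal(n):
    # n is a unit vector (x, y, z); map each component [-1, 1] -> [0, 255].
    return tuple(int(round((c * 0.5 + 0.5) * 255)) for c in n)

def decode_normal(rgb):
    # Map [0, 255] back to [-1, 1], then renormalize to undo quantization.
    v = [c / 255.0 * 2.0 - 1.0 for c in rgb]
    length = sum(c * c for c in v) ** 0.5
    return tuple(c / length for c in v)
```

The (128, 128, 255) "flat blue" that normal maps are famous for is simply the straight-up normal (0, 0, 1) under this encoding.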

Quote:
Original post by Matt Aufderheide
1) no BSP or portal etc.... all geometry is manged by octrees, with occlusion culling. No precalculated visibility.
2)...
....natural phenomena are too complex to be hand-modelled entirely...

Agreed! Completely right.
But what about procedural creation of the world entirely? (That is my addition.) Of course, the creation procedure would have to work from a set of input parameters, and all components of the world could be created independently.
In a modern game you have ~10-100 artists and model/level designers on the team and ~1-10 GB of content. I don't want to (and can't) create all of that by hand; that's why I'm thinking about this.
For example, some time ago I attempted to make a two-stage cloud / terrain generation procedure:
1st pass: random generation of parameters, to choose a satisfactory result
2nd pass: small random deviations, to pick the most appropriate one
What do you think/know about this (and similar approaches)?

[Edited by - Krokhin on November 25, 2006 5:22:41 AM]
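If I read the two-stage procedure above correctly, it is a coarse random search followed by local refinement. A minimal sketch under that interpretation (everything here is hypothetical; the `score` function stands in for "looks satisfactory", which in practice would be a human or heuristic judgement):

```python
# Hypothetical sketch of a two-stage parameter search: pass 1 samples
# random parameter sets and keeps the best-scoring one; pass 2 refines
# it with small random deviations.
import random

def two_stage_search(score, dim, n1=200, n2=200, jitter=0.05, seed=1):
    rng = random.Random(seed)
    # Pass 1: coarse random search over the unit cube of parameters.
    best = max((tuple(rng.random() for _ in range(dim)) for _ in range(n1)),
               key=score)
    # Pass 2: small deviations around the pass-1 winner, kept if better.
    for _ in range(n2):
        cand = tuple(min(1.0, max(0.0, p + rng.uniform(-jitter, jitter)))
                     for p in best)
        if score(cand) > score(best):
            best = cand
    return best
```

This is essentially random search plus hill climbing; with a human in the loop, pass 1 shows candidates and pass 2 offers variations of the chosen one.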

Lightmaps are still very important; many people are using them in next-gen titles, extended beyond the original simple lightmap to include normal information etc.
Per-pixel dynamic lights on everything are not totally practical for every situation, and you're not getting any radiosity.

Procedural textures are interesting, but many examples are not generated on the GPU, so there is no win for a developer in using them if they take the same amount of GPU memory as a bitmap (procedural or not).

Remember that with "Next Gen" we're talking about tech that's been around for ages; the PS3 and Xbox 360 GPUs are more or less DX9 hardware.

Quote:
Original post by wolf
Quote:
1) No BSPs or portals: all geometry is managed by octrees, with occlusion culling. No precalculated visibility.
2) No lightmapping: all lighting is done per-pixel with shadow mapping.
3) Instancing plays a big part in mesh rendering.
4) Aggressive LOD systems using billboard impostors.
5) Game-specific editors are limited to placing meshes and such; all geometry is created in external 3D modelling applications like Max or Maya.
6) General-purpose algorithms are employed where possible and special cases are reduced; surface shading is done via shader fragments, allowing complex variations of effects.

This all depends on the underlying hardware platform and the requirements of the game.


Good point. I really don't care if an engine uses some old-school BSP/PVS or portal scene-management techniques, as long as that makes the particular engine run fast in the environments it's designed for. Performance is my first priority and everything else (including fancy designs) falls into second place.

Well, fast performance for a level made of boxes is one thing, but when you are interested in creating an engine that mimics an offline renderer, you have to have a more flexible, dynamic approach. Lightmaps and PVS, for instance, are not dynamic, so they should be discarded.

I see no reason why dynamic lighting can't be used for all objects; many games do this already.

In my view, for modern and future 3D hardware, the best engine design for MOST purposes is based on the concepts I've mentioned: octree management, hardware occlusion culling, and dynamic lighting and shadowing. I see no need for things like BSPs or lightmaps.

As far as graphics are concerned, we're already at near photo-realistic quality. The next step is probably cutting back on impostors, adding more subtleties, and more rampant instancing. Imagine a World War II shooter with 30,000 soldiers on the screen that looks as good as Gears of War. Personally, I believe graphics engines have come about as far as they will get until procedurally guided techniques become more advanced (which is not to say there won't be artist intervention). The costs associated with these games are astronomical, and they will only get higher as people demand more.

Now, your thread title is "Next Gen game engine design", but the content is about graphics engine design. There's a huge difference! There's a ton left to be explored in actual gameplay elements. Examples such as Katamari Damacy and the Wii controller are only the beginning as developers find ways to make interesting games without millions of dollars. If you want to blur the edge a little bit, consider destructible environments. How many games let you blow up whatever building you feel like? Not many. And in the games that do, how often does it actually make a difference? Never.

Here are some of my predictions. We're probably at the peak of custom-built engines. Pre-built but very powerful and customizable engines such as Source and Unreal are going to be licensed out more and more. This will allow companies to focus on the game. Why spend millions on your own engine when you can pay $500,000-$750,000 for something just as good or better and devote all those extra people to the game itself? Branching story lines will be huge. Imagine a game where you play an assassin, and your job (uncreatively enough) is to assassinate someone. Let's say you fail. Instead of forcing you to tediously start the mission over, the game keeps going with a different story. Perhaps that person becomes a huge threat to world security; either way, your actions (or lack of action) make a difference.

This will require massive amounts of content, but it will be affordable because the actual engine will already be complete. Instead of 30 graphics programmers, you hire 15 level designers and 15 artists. Or, if procedural graphics take over, 25 level designers and 5 artists. This, ideally, would create a game with a hundred times more depth than we have today.

In summary: there's not a whole lot left for graphics to explore. (It's a little scary to see how many of the techniques used were thought up in the 70s and 80s.) Apart from little optimization techniques and better ways to do things, we can already do pretty much everything we want to. The future is in the game, not the graphics.

Quote:
Original post by Raloth
As far as graphics are concerned, we're already at near photo-realistic quality with graphics.
In summary: there's not a whole lot left for graphics to explore.


What? This is ridiculous... do you live on planet purple?

Frankly, real-time graphics are still very primitive, and have perhaps DECADES to go before they approach perfect photorealism.

In fact, graphics are the main area that should and will be improved. I can't believe someone would think something like this.

Quote:
Original post by Matt Aufderheide
Well, fast performance for a level made of boxes is one thing, but when you are interested in creating an engine that mimics an offline renderer, you have to have a more flexible, dynamic approach. Lightmaps and PVS, for instance, are not dynamic, so they should be discarded.


Quote:
I see no reason why dynamic lighting can't be used for all objects; many games do this already.


Wrong. If you want to mimic offline renderers, you DO need lightmaps. Offline renderers almost universally support global illumination, which is just plain impossible to do at run time (PRT does not count; it's PRECALCULATED radiance transfer). But without GI, levels look weird. Well, not as long as everything is dark as in Doom 3, since GI would be barely noticeable there anyway. But a corridor with big windows, a clear sky, and the sun shining? Plenty of GI there. Lightmaps are the simplest way to store GI, and extensions like Radiosity Normal Mapping allow some extra tricks, but the light information is still precalculated and stored.

Quote:
in my view, for modern and future 3d hardware, the best engine designs for MOST purposes is based on the concepts I've mentioned; Octree management, hardware occlusion culling, and dynamic lighting and shadowing. I see no need for thigns like BSp or lightmaps.


In my view, I see engines using the right tools for the right jobs.

- Octrees won't get you very far in a space sim, for example.
- Hardware occlusion culling is bound to some pipeline-bubble problems until predicated rendering takes off. Besides, a coarse CPU culling pass is still wise (some hierarchical or view-frustum culling) to avoid saturation.
- Dynamic lighting & shadowing: see above. Besides, there is no point in constantly recalculating shadows that never change. Shadow generation is SLOW, so only the important lights should get real-time shadows.
- BSP is alive and kicking, you know. It's just another spatial partitioning algorithm. Node-based BSP rendering is not very useful for UT2007-like scenes, but nothing stops you from having leaves with 10-20k triangles. Besides, BSPs are great for collision detection.


That said, I see some things actually getting simpler.
- Much functionality moves into the shaders, so the APIs get simpler. No texture-stage states, indexed vertex skinning, vertex blending, etc. It all gets reduced to sampler and render states, buffers (textures/index buffers/vertex buffers), and shaders.
- Many visibility algorithms are now obsolete for next-gen work, and brute force becomes the way to go. Terrain rendering is a perfect example. There is no point in worrying about vertex-based LOD anymore; just split the terrain into patches, cull per patch, and draw the visible ones. End of story (unless your game is already GPU-bound and the vertex shaders are saturated, in which case some LOD may be wise, but that's rarely the case). Carmack got it right by focusing on the actual terrain texturing with his "MegaTexture" approach (e.g. a fancy pixel shader guaranteeing non-repeating terrain surfaces) and pixel(!)-based LOD (no need for fancy pixel shaders when the hill is very far away).
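The brute-force terrain recipe (split into patches, precompute bounds, cull per patch, draw) can be sketched like this; a circular range check stands in for a real frustum test, and all names are illustrative:

```python
# Hypothetical sketch of brute-force terrain rendering: fixed-size
# patches with precomputed height bounds, culled per patch each frame.

def build_patches(heights, patch=16):
    # heights: 2D list of equal-length rows; returns per-patch bounds.
    patches = []
    for py in range(0, len(heights), patch):
        for px in range(0, len(heights[0]), patch):
            block = [row[px:px + patch] for row in heights[py:py + patch]]
            zs = [h for row in block for h in row]
            patches.append({'x': px, 'y': py,
                            'min_h': min(zs), 'max_h': max(zs)})
    return patches

def cull_patches(patches, cam_x, cam_y, view_radius, patch=16):
    # Crude circular "frustum": keep patches whose nearest corner is in range.
    keep = []
    for p in patches:
        nx = min(max(cam_x, p['x']), p['x'] + patch)
        ny = min(max(cam_y, p['y']), p['y'] + patch)
        if (nx - cam_x) ** 2 + (ny - cam_y) ** 2 <= view_radius ** 2:
            keep.append(p)
    return keep
```

Each surviving patch is then drawn as-is, no per-vertex LOD; the min/max heights also give the AABB a real frustum test would use.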


I see these as today's challenges:
- Scalability. This is getting more and more problematic.
- Shadow generation. This is one huge bottleneck.
- Animation. Please, people. Nowadays we have very pretty triangles, but animations are still sorely lacking. Valve actually made progress with their facial animations. But where are AI-based animations with IK and physics as input (like trying to lift something up, but since it's too heavy the AI does something different and makes the actor express this)?

Quote:

Wrong. If you want to mimic offline renderers, you DO need lightmaps. Offline renderers almost universally support global illumination, which is just plain impossible to do at run time. But without GI, levels look weird.


How do you define global illumination? Shadow maps, shadow volumes, etc. are all forms of global illumination. So are an ambient term, sunlight, etc.

If you mean that objects receive some bounced light, then I don't see why this can't be achieved using real-time methods, such as generating cubemaps dynamically or some kind of per-vertex ray tracing. I have seen demos that do this.

Too much is made of global illumination as a separate thing, when in fact it's all about how many pixels you can process, which will increase dramatically in the future.

In short, lightmaps are not dynamic, so if you move lights OR geometry around they are worthless, and as such they become less useful going forward.

Shadow mapping can be made to update only when needed, so this isn't so much of an issue. And of course not all lights need to cast shadows, such as those in the distance and so on.
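Updating shadow maps only when needed is essentially a cache with invalidation. A minimal sketch (hypothetical names; a real engine would also budget re-renders per frame and prioritize by light importance):

```python
# Hypothetical sketch of a shadow-map cache: a light's map is re-rendered
# only when that light (or a caster in its volume) has been invalidated.

class ShadowCache:
    def __init__(self, render_fn):
        self.render_fn = render_fn   # the expensive per-light render
        self.maps = {}               # light id -> cached shadow map
        self.dirty = set()
        self.renders = 0             # bookkeeping for this sketch

    def invalidate(self, light_id):
        # Call when the light or an object inside its volume moves.
        self.dirty.add(light_id)

    def get(self, light_id):
        if light_id not in self.maps or light_id in self.dirty:
            self.maps[light_id] = self.render_fn(light_id)
            self.dirty.discard(light_id)
            self.renders += 1
        return self.maps[light_id]
```

Static lights whose volumes never change are rendered exactly once, which is the whole point of the argument above.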

-------------------
As far as scene management goes, of course you still do frustum culling on the CPU, but you can use an octree for this, right?

For a space sim, an octree will certainly help you manage your scene; why wouldn't it? You certainly can't use BSP there, so I don't see how that point is relevant.

Essentially the goal is to allow an artist to drop a bunch of meshes into a world in an arbitrary manner, and have the engine just work.

Quote:
Original post by Matt Aufderheide
How do you define global illumination? Shadow maps, shadow volumes, etc. are all forms of global illumination. So are an ambient term, sunlight, etc.


Global Illumination = Direct Illumination + Indirect Illumination. The first part is already done by today's renderers; the second part isn't. That's where PRT etc. kick in.

Quote:
If you mean that objects receive some bounced light, then I don't see why this can't be achieved using real-time methods, such as generating cubemaps dynamically or some kind of per-vertex ray tracing. I have seen demos that do this.


"Some" bounced light. Haha. You do realize that the indirect part involves an ENORMOUS amount of computation, right? The number of calculations really goes through the roof with increasing scene complexity. Per-vertex ray tracing does not really help; it's just another version of direct illumination, unless you count secondary rays, which multiply quickly. Some sort of real-time photon mapping has been done for simple objects, but it is far from being useful yet.

Also, you might want to cache the indirect illumination data. Fortunately, this data usually has only low-frequency components, so it's OK to store it in (tada) low-resolution lightmaps.

Here is a good example: http://graphics.ucsd.edu/~henrik/images/imgs/mie3pm.jpg
The sun is the only light source here, yet indirect illumination lights the entire room. The sun's rays bounce off objects a trillion times, causing objects to become indirect light sources, etc. Good luck doing that in real time.

Quote:
Too much is made of global illumination as a separate thing, when in fact it's all about how many pixels you can process, which will increase dramatically in the future.


No, it's NOT about how many pixels one can process. It's all about gathering light from many indirect locations, for example via a photon map and final gathering (which requires the photon-map calculation). And this can involve many rays. Processing power may increase, but it still does not match the lighting complexity of your typical shooter level.

Quote:
In short, lightmaps are not dynamic, so if you move lights OR geometry around they are worthless, and as such they become less useful going forward.


Nonsense. In a game, you rarely move all the lights. It's just pointless to make everything dynamic when only 10% of it actually gets modified at run time.

Quote:
Shadow mapping can be made to update only when needed, so this isn't so much of an issue. And of course not all lights need to cast shadows, such as those in the distance and so on.


Actually, this IS an issue. You have to handle the shadow-map cache, the importance of lights vs. their distance, etc. This is not trivial and involves tweaking. Also, you do not want to pollute the cache with totally invariant shadows; lightmaps are perfectly OK for those static lights.

Quote:
As far as scene management goes, of course you still do frustum culling on the CPU, but you can use an octree for this, right?


I just don't understand your octree fixation. VFC alone is perfectly OK for small scenes. Besides, the two are not mutually exclusive.

Quote:
For a space sim, an octree will certainly help you manage your scene; why wouldn't it? You certainly can't use BSP there, so I don't see how that point is relevant.


Why would ANYONE use an octree in space? Space sims are not ideal cases for them; bounding volume hierarchies or a 3D R-tree (which builds on the former) are better suited there. Besides, octrees limit you to a specific spatial range (the extents of the root node), and at astronomical scales floating-point precision starts to become an issue, so spatial partitioning is problematic there anyway.

Quote:
Essentially the goal is to allow an artist to drop a bunch of meshes into a world in an arbitrary manner, and have the engine just work.


And guess what: this is scene-specific. Right tool for the right job. An octree is not the holy grail.

Quote:
Here are some of my predictions. We're probably at the peak of custom built engines. Pre-built, but very powerful and customizable engines such as Source and Unreal are going to be licensed out more and more. This will allow companies to focus on the game. Why spend millions on your own engine when you can pay $500,000-$750,000 for something just as good or better and devote all those extra people to the game itself?

This is only good as long as you want to make Unreal or Gears of War types of games... everyone else will have to come up with their own engine :-) or license a different one. Beyond that, just count the number of games that have been released on the Unreal 3 engine in the last four years. Do you think a team that licensed the engine three or four years ago wouldn't have been faster with its own tech?

...but I agree: if you want to make a Gears of War style game, you want to license the Unreal 3 engine.

Quote:
Original post by Matt Aufderheide
Frankly graphics in realtime are very primitive still, and have perhaps DECADES to go before they approach perfect photo realism.

IN fact, graphics are the main area that should and will be improved. I cant belive someone would think something like this.

Sorry about that, I should have qualified it a bit more. I meant that we can do it for very small areas, but it will be a while before we can do it for an entire world. That is why I also said "near" [smile]. Besides, do we really want perfectly photorealistic games? Don't we play games to get away from reality?

