Level Rendering

3 comments, last by JasonBlochowiak 17 years, 1 month ago
Games like God of War have extremely vast environments. What do they use to render them? I don't see it being octree-based or BSP-based. Any thoughts?
Code makes the man
You can render vast environments with both BSP and octree structures (as well as zone/portal structures). For example, in our platform at work, we render all of Earth and near space; we use an octree.
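As a rough illustration of the idea, here's a minimal hypothetical octree sketch. It stores bare points and answers a cube query (standing in for a frustum test); a real engine would store AABBs or meshes and test against the actual camera frustum, but the recursive reject-whole-subtrees structure is the same:

```cpp
#include <array>
#include <cmath>
#include <cstddef>
#include <memory>
#include <vector>

struct Vec3 { float x, y, z; };

struct Octree {
    Vec3 center; float halfSize;              // cubic node bounds
    std::vector<Vec3> points;                 // points stored at this node
    std::array<std::unique_ptr<Octree>, 8> kids;
    static constexpr std::size_t kCapacity = 4;

    Octree(Vec3 c, float h) : center(c), halfSize(h) {}

    // Which octant a point falls into, as a 3-bit index.
    int childIndex(const Vec3& p) const {
        return (p.x > center.x ? 1 : 0) |
               (p.y > center.y ? 2 : 0) |
               (p.z > center.z ? 4 : 0);
    }

    void insert(const Vec3& p) {
        if (!kids[0] && points.size() < kCapacity) { points.push_back(p); return; }
        if (!kids[0]) subdivide();
        kids[childIndex(p)]->insert(p);
    }

    // Split into 8 children and push existing points down.
    void subdivide() {
        float h = halfSize * 0.5f;
        for (int i = 0; i < 8; ++i) {
            Vec3 c{ center.x + ((i & 1) ? h : -h),
                    center.y + ((i & 2) ? h : -h),
                    center.z + ((i & 4) ? h : -h) };
            kids[i] = std::make_unique<Octree>(c, h);
        }
        for (const Vec3& q : points) kids[childIndex(q)]->insert(q);
        points.clear();
    }

    // Count points inside an axis-aligned query cube. Whole subtrees that
    // cannot overlap the query region are rejected without descending.
    int countInCube(Vec3 qc, float qh) const {
        if (std::abs(qc.x - center.x) > qh + halfSize ||
            std::abs(qc.y - center.y) > qh + halfSize ||
            std::abs(qc.z - center.z) > qh + halfSize) return 0;
        int n = 0;
        for (const Vec3& p : points)
            if (std::abs(p.x - qc.x) <= qh && std::abs(p.y - qc.y) <= qh &&
                std::abs(p.z - qc.z) <= qh) ++n;
        if (kids[0])
            for (const auto& k : kids) n += k->countInCube(qc, qh);
        return n;
    }
};
```

The early rejection is what makes this scale: a query only visits the handful of nodes whose bounds overlap it, so the cost is roughly proportional to what's visible, not to the size of the world.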
enum Bool { True, False, FileNotFound };
Yeah, I know that large environments are well suited to octrees, especially with detailed LOD implemented. I was just wondering if anyone knew. The other thing that's very nicely done is that the levels all appear to be seamless; I can't remember any loading sections or the like. I'm assuming they're loaded in the background (multithreaded) while cutscenes are playing. I'm currently running an octree with BSP for collision detection (fairly fast).

Anyways, thanks for the input. I look forward to hearing from others on the topic.
Code makes the man
It depends on the specifics of the game mechanics and graphical design, really; every project will be different. For instance, our projects have very large environments (in terms of raw units) but we use nothing but naive frustum culling and LOD reduction, because objects are very sparsely placed in empty space. That approach wouldn't work well for many projects, but works great for us, and there's not much point implementing anything more sophisticated.

The gameplay mechanics will affect things a lot as well. Limited or constrained camera angles can be exploited to eliminate geometry that will never be visible, and this can provide tremendous benefits for certain games. In general one of the biggest factors is the maximum viewable set - the largest chunk of space that might possibly be visible during the scope of gameplay. If the camera tends to be fixed looking down at the ground, for instance, a fairly simple frustum cull against the scene can be sufficient, because you're only looking at a fairly small area at any given time. A more dynamic or horizontally-oriented camera may require a different strategy.
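A simple frustum cull like the one described boils down to a handful of plane tests. This is a hypothetical minimal sketch using the common "positive vertex" trick for AABBs, assuming the frustum is given as planes whose normals point inward:

```cpp
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };   // point p is inside if dot(n, p) + d >= 0
struct AABB  { Vec3 min, max; };

// Conservative AABB-vs-plane test: pick the corner farthest along the plane
// normal; if even that corner is behind the plane, the whole box is outside.
bool boxOutsidePlane(const AABB& b, const Plane& pl) {
    Vec3 p{ pl.n.x >= 0 ? b.max.x : b.min.x,
            pl.n.y >= 0 ? b.max.y : b.min.y,
            pl.n.z >= 0 ? b.max.z : b.min.z };
    return pl.n.x * p.x + pl.n.y * p.y + pl.n.z * p.z + pl.d < 0;
}

// Visible means "not fully outside any plane" - boxes that straddle a plane
// are kept, which is the conservative (safe) answer for culling.
bool boxVisible(const AABB& b, const std::vector<Plane>& frustum) {
    for (const Plane& pl : frustum)
        if (boxOutsidePlane(b, pl)) return false;
    return true;
}
```

For sparsely placed objects, running this test per object each frame really is often cheap enough that no spatial hierarchy is needed on top of it.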


In short: it all depends [smile]

Wielder of the Sacred Wands
[Work - ArenaNet] [Epoch Language] [Scribblings]

I've used a hybrid portal/octree (well, Kd tree, really, but close enough) scheme to deal with indoor and outdoor areas on PC & console titles. No vast terrains in that, but decent sized outdoor areas.

Basically, each Zone had portals that could lead to other Zones. Zone visibility testing was done by flooding outward from the current Zone through visible, open portals. Each Zone was a loose fitting Kd tree - which nodes got rendered would depend on how the tree fit against the camera frustum.
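The flood-outward visibility step can be sketched roughly like this (a hypothetical minimal version; here a boolean `open` flag stands in for the real test of each portal against the camera frustum):

```cpp
#include <set>
#include <vector>

struct Portal { int toZone; bool open; };   // "open" stands in for a visibility test
struct Zone   { std::vector<Portal> portals; };

// Collect every zone reachable from `start` through open portals.
// The visited set doubles as the result and prevents infinite loops
// when portals form cycles (two zones linked both ways).
void floodVisible(const std::vector<Zone>& zones, int start, std::set<int>& visible) {
    if (!visible.insert(start).second) return;   // already visited
    for (const Portal& p : zones[start].portals)
        if (p.open) floodVisible(zones, p.toZone, visible);
}
```

Everything not in the resulting set is skipped entirely, which is what makes closed doors such cheap and effective occluders in this scheme.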

A later optimization was to trim the camera frustum as it passed through the portals. This let us treat walls and such as effective major occluders, without the need to use hardware occlusion queries or a similar mechanism, at least for world geometry. We also used the trimmed frusta to test object bounding volumes against. So, world geometry generally acted as major occluders for objects, but we never really had circumstances where objects were significant enough occluders to go the other way (and doors and the like already closed their portals).
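Trimming the frustum as it passes through a portal amounts to rebuilding the side planes from the eye point through each portal edge. A hypothetical minimal sketch, assuming the portal vertices are wound so the constructed plane normals face into the new frustum (a real implementation would also keep the near/far planes and clip the portal polygon against the incoming frustum first):

```cpp
#include <vector>

struct Vec3 { float x, y, z; };

Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
Vec3  cross(Vec3 a, Vec3 b) { return {a.y * b.z - a.z * b.y,
                                      a.z * b.x - a.x * b.z,
                                      a.x * b.y - a.y * b.x}; }
float dot(Vec3 a, Vec3 b)   { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Plane { Vec3 n; float d; };   // point p is inside if dot(n, p) + d >= 0

// One plane per portal edge, passing through the eye and the two edge vertices.
// Anything outside these planes is hidden by the wall around the portal.
std::vector<Plane> trimFrustumToPortal(Vec3 eye, const std::vector<Vec3>& portal) {
    std::vector<Plane> planes;
    for (std::size_t i = 0; i < portal.size(); ++i) {
        Vec3 a = portal[i], b = portal[(i + 1) % portal.size()];
        Vec3 n = cross(sub(a, eye), sub(b, eye));
        planes.push_back({n, -dot(n, eye)});       // plane through eye, a, b
    }
    return planes;
}

bool pointInside(Vec3 p, const std::vector<Plane>& planes) {
    for (const Plane& pl : planes)
        if (dot(pl.n, p) + pl.d < 0) return false;
    return true;
}
```

The trimmed plane set is then used in place of the camera frustum when flooding into the next zone, so the walls around the portal occlude for free.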

We also did background streaming to allow us to use much larger levels.
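A background streamer in that spirit might look roughly like this hypothetical sketch: a worker thread drains a request queue while the main thread keeps rendering and polls for finished chunks. (A real loader would do actual disk I/O and parsing where the `Chunk` construction stands in for it.)

```cpp
#include <atomic>
#include <mutex>
#include <queue>
#include <string>
#include <thread>
#include <vector>

struct Chunk { std::string name; /* geometry, textures, etc. would live here */ };

class Streamer {
public:
    // Main thread: queue a chunk for background loading.
    void request(std::string name) {
        std::lock_guard<std::mutex> lk(mu_);
        pending_.push(std::move(name));
    }

    // Worker thread: load each requested chunk, then publish it.
    void run() {
        for (;;) {
            std::string name;
            {
                std::lock_guard<std::mutex> lk(mu_);
                if (pending_.empty()) {
                    if (done_) return;            // drained and told to stop
                } else {
                    name = pending_.front();
                    pending_.pop();
                }
            }
            if (name.empty()) { std::this_thread::yield(); continue; }
            Chunk c{name};                        // stand-in for disk I/O + parse
            std::lock_guard<std::mutex> lk(mu_);
            loaded_.push_back(std::move(c));
        }
    }

    void finish() { done_ = true; }

    // Main thread: collect everything finished since the last poll.
    std::vector<Chunk> takeLoaded() {
        std::lock_guard<std::mutex> lk(mu_);
        return std::move(loaded_);
    }

private:
    std::mutex mu_;
    std::queue<std::string> pending_;
    std::vector<Chunk> loaded_;
    std::atomic<bool> done_{false};
};
```

The key property is that the main thread never blocks on I/O: it only takes a short lock to hand off requests and pick up results, which is what keeps the level transitions seamless.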

For testing or playing around it might not matter, but for an actual game I'd highly recommend splitting the collision geometry from the visual geometry. Collision geometry generally can be a lot coarser - partly because it doesn't need to be 100% accurate, and partly because a lot of visual details that require higher polycounts to represent don't need to be a part of the collision layer. Also, there are times where it's easier to fudge the collision geometry than it is to get the physics to handle things the way you want - a classic example is street curbs being modeled as slopes in the collision layer. Additionally, most physics packages have special pre-processing for cooking polygon soups into something that can be used quickly at runtime, so you usually end up with a different chunk of data in memory anyways.
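The curb example can be illustrated with a pair of hypothetical height profiles: the renderer draws a hard step, while the collision layer reports a ramp so characters and wheels glide up instead of snagging on the vertical face.

```cpp
// Visual profile: a hard 0.15-unit step at x = 0 (what the player sees).
float visualHeight(float x) { return x < 0.0f ? 0.0f : 0.15f; }

// Collision profile: the same step smoothed into a 0.3-unit-wide ramp
// (what the physics actually collides against).
float collisionHeight(float x) {
    if (x < 0.0f) return 0.0f;
    if (x > 0.3f) return 0.15f;
    return x * 0.5f;   // linear slope spanning the step
}
```

The two never need to agree exactly; the collision layer only has to be close enough that the mismatch isn't visible in gameplay.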
