Handling an open world

18 comments, last by Infinisearch 8 years, 5 months ago
Hi all,

Over the last two years I've learned a lot about 3D: OpenGL, GLSL, rendering techniques like instanced rendering, framebuffers and post-processing, but also things like ECS vs. OOP, state management, different frameworks (GLFW, SDL, SFML, ...), physics (Bullet Physics) and so on.
Previously I made little 2D games in Java and C++.

Now I'm feeling ready to dig deeper.
My current project is a 3D open-world game. Graphics aren't the focus, so it will simply consist of low-poly objects with low-res textures and a pixelating post-processing shader, in a sci-fi setting.
The first idea is a city that feels at least a little like a living world.

Therefore my goals boil down to these points:
- animated neon lights and general lighting (banners, advertisements, street lamps, ambient lighting, etc.)
- walking pedestrians
- flying cars on multiple height levels
- moving around in first person

I think I can definitely achieve this in a few years with my current knowledge. But there is one simple point I can't wrap my head around:

TL;DR:
- How are multiple (hundreds of) objects (buildings) handled? I know how to render them, but I hadn't considered that I shouldn't render everything at the same time. How can I easily determine whether something should be rendered or not? How can I calculate whether it is visible from the player's current position and view direction?
And all that without hurting performance a lot?

My ideas:
I know about texture streaming, but is something like that possible for whole game objects? Like "game object streaming"?

I've read about so-called scene graphs, but I don't really get how they can help me. If each object's position is stored relative to its parent's position, in what way would that make it simpler/faster to determine whether it's in the viewport or not?

I hope it is understandable what I mean. :)

PS:
Please don't answer if you just want to tell me something like "GTA V took a team of over 1000 people more than 3 years of full-time work". I'd appreciate it.


How are multiple (hundreds of) objects (buildings) handled? I know how to render them, but I hadn't considered that I shouldn't render everything at the same time. How can I easily determine whether something should be rendered or not? How can I calculate whether it is visible from the player's current position and view direction?
And all that without hurting performance a lot?

You will typically use frustum culling for this purpose: http://www.lighthouse3d.com/tutorials/view-frustum-culling/
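For example, on the CPU side you can pull the six frustum planes straight out of the combined view-projection matrix and test each object's bounding sphere against them. A minimal sketch in C++ (not code from the tutorial above; the matrix is assumed to be in OpenGL/glm column-major order, and the type names are made up):

    // Extract the six frustum planes from viewProj = projection * view
    // (the Gribb/Hartmann method) and test a bounding sphere against them.
    // Element (row r, column c) of the column-major matrix is m[c*4 + r].

    #include <cmath>

    struct Plane { float a, b, c, d; };   // a*x + b*y + c*z + d >= 0  ->  in front of the plane

    static void normalizePlane(Plane& p)
    {
        float len = std::sqrt(p.a * p.a + p.b * p.b + p.c * p.c);
        p.a /= len; p.b /= len; p.c /= len; p.d /= len;
    }

    // Each frustum plane is the matrix's last row plus or minus one of the other rows.
    void extractFrustumPlanes(const float m[16], Plane planes[6])
    {
        auto row = [&](int r, int c) { return m[c * 4 + r]; };   // row r, column c

        for (int i = 0; i < 3; ++i) {
            // i = 0: left/right, i = 1: bottom/top, i = 2: near/far
            planes[2 * i]     = { row(3, 0) + row(i, 0), row(3, 1) + row(i, 1),
                                  row(3, 2) + row(i, 2), row(3, 3) + row(i, 3) };
            planes[2 * i + 1] = { row(3, 0) - row(i, 0), row(3, 1) - row(i, 1),
                                  row(3, 2) - row(i, 2), row(3, 3) - row(i, 3) };
            normalizePlane(planes[2 * i]);
            normalizePlane(planes[2 * i + 1]);
        }
    }

    // An object can be culled if its bounding sphere lies completely behind any plane.
    bool sphereInFrustum(const Plane planes[6], float cx, float cy, float cz, float radius)
    {
        for (int i = 0; i < 6; ++i) {
            float dist = planes[i].a * cx + planes[i].b * cy + planes[i].c * cz + planes[i].d;
            if (dist < -radius)
                return false;
        }
        return true;   // inside or intersecting the frustum
    }

Extract the planes once per frame (with glm you'd pass glm::value_ptr(projection * view)), give every building a bounding sphere or box, and simply skip the draw call for anything that fails the test.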

Also, you can use simplified models for distant buildings (level of detail, LOD).
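To illustrate the LOD idea, a minimal sketch (distances, struct layout and names are made up): keep two or three versions of each mesh, pick one by distance to the camera, and optionally skip small props entirely past some cutoff.

    struct Vec3 { float x, y, z; };

    struct LodSet {
        static constexpr int kLevels = 3;
        int   meshIds[kLevels];          // [0] = full detail ... [2] = coarsest / impostor
        float switchDistance[kLevels];   // e.g. { 50, 150, 400 } metres
        float cullDistance;              // beyond this, skip entirely (0 = never skip)
    };

    float distSq(const Vec3& a, const Vec3& b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return dx * dx + dy * dy + dz * dz;
    }

    // Returns the mesh to draw, or -1 to skip the object this frame.
    int selectLod(const LodSet& lod, const Vec3& objPos, const Vec3& camPos)
    {
        float d2 = distSq(objPos, camPos);
        if (lod.cullDistance > 0.0f && d2 > lod.cullDistance * lod.cullDistance)
            return -1;                                    // e.g. a rubbish bin far down the street
        for (int i = 0; i < LodSet::kLevels; ++i)
            if (d2 < lod.switchDistance[i] * lod.switchDistance[i])
                return lod.meshIds[i];
        return lod.meshIds[LodSet::kLevels - 1];          // fall back to the coarsest version
    }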

.:vinterberg:.

You can also split your world into smaller grid-sized cells and combine that with the suggestion above to cut down on what you have to check: you only need to test the things in your own cell plus the cell(s) in front of you, something like the sketch below.
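Roughly like this, as an illustration (cell size, container choice and names are arbitrary): bucket every static object into a cell once, then each frame only frustum-test the objects in the cells around the camera instead of the whole city.

    #include <cmath>
    #include <cstddef>
    #include <unordered_map>
    #include <vector>

    const float kCellSize = 64.0f;   // world units per cell, tune to your scene

    struct Cell {
        int x, z;
        bool operator==(const Cell& o) const { return x == o.x && z == o.z; }
    };

    struct CellHash {
        std::size_t operator()(const Cell& c) const {
            unsigned long long key =
                (static_cast<unsigned long long>(static_cast<unsigned int>(c.x)) << 32)
                | static_cast<unsigned int>(c.z);
            return std::hash<unsigned long long>()(key);
        }
    };

    // cell -> indices of the objects whose bounding volume centre lies in it
    using Grid = std::unordered_map<Cell, std::vector<int>, CellHash>;

    Cell cellOf(float worldX, float worldZ)
    {
        return { static_cast<int>(std::floor(worldX / kCellSize)),
                 static_cast<int>(std::floor(worldZ / kCellSize)) };
    }

    // Done once at load time for static objects (again whenever an object moves).
    void insertObject(Grid& grid, int objectIndex, float x, float z)
    {
        grid[cellOf(x, z)].push_back(objectIndex);
    }

    // Collect candidates from the cells around the camera; only these then go
    // through the per-object frustum test.
    std::vector<int> gatherCandidates(const Grid& grid, float camX, float camZ, int cellRadius)
    {
        std::vector<int> result;
        Cell centre = cellOf(camX, camZ);
        for (int dx = -cellRadius; dx <= cellRadius; ++dx)
            for (int dz = -cellRadius; dz <= cellRadius; ++dz) {
                auto it = grid.find({ centre.x + dx, centre.z + dz });
                if (it != grid.end())
                    result.insert(result.end(), it->second.begin(), it->second.end());
            }
        return result;
    }

A fancier version would only walk the cells the frustum actually overlaps, or replace the flat grid with a quadtree, but even the flat grid is a big win for a city laid out on a plane.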

There are a few things you can implement to handle large scenes:

1) View frustum culling (as mentioned by vinterberg). With this you avoid rendering anything outside of the view frustum as a simple way to keep draw calls down. You typically do this by having objects bounded by a simple shape (e.g. Axis Aligned Bounding Box or Bounding Sphere), which are then intersected with the camera frustum. To improve this, a hierarchy or grid is often used e.g. you first check if the bounding shape of an entire block of buildings is visible, if so, then check the individual bounding volumes within. Look into grids, nested grids, quadtrees and octrees as structures often used to accelerate this process.

2) Level-of-Detail (LOD) systems: Instead of drawing a full-detail version of an object at all distances, you prepare simpler versions of the object which are used when it's far enough away from the camera that the extra detail wouldn't really be visible. You can also stop drawing small or thin items at a distance, e.g. if you look all the way down a street, the rubbish bins needn't be rendered after a few hundred metres. For very large scenes, people will sometimes use billboards or impostors (essentially quads that face the camera, very similar to particles) as the lowest level of detail. Generating the LOD models is often a manual process, but it can be automated in some cases.

3) Occlusion culling: Being inside a city, you can make use of the fact that buildings often obscure lots of the structures behind them. There are a number of techniques to do this:
i) You can break the level into parts and precompute which parts are visible from each area (search for Potentially Visible Sets / precomputed visibility). This technique is fairly old-fashioned as it requires quite a bit of precomputation and assumes a mostly static scene, but it can still be handy today in huge scenarios or on lower-spec platforms.

ii) GPU occlusion queries - this is where you render the scene (ideally a much simpler version of it) and then use the GPU to determine which parts you actually need to render (a rough sketch follows after this list). Googling should provide you with lots of info, including the gotchas with this approach.
iii) CPU-based occlusion culling - this can be done by rasterizing a small version of the scene on the CPU into a tiny zbuffer-like array, which is then used to do occlusion queries much like the GPU version. This avoids the latency of the GPU version at the expense of more CPU cost.
iv) A mix of both GPU and CPU approaches where you re-use the previous frame's zbuffer.

There are other methods but I think these are the most common.
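For point ii), a hardware occlusion query in OpenGL (3.3+) looks roughly like this. It's only a sketch: drawBoundingBox()/drawFullBuilding() stand in for your own draw calls, the query objects would normally be created once and reused per object, and you'd typically read the result a frame late.

    // Create once at startup.
    GLuint query;
    glGenQueries(1, &query);

    // Each frame: render the big occluders first (or reuse last frame's depth),
    // so the depth buffer has something to test against.

    // Test pass: draw only cheap proxy geometry, without writing colour or depth.
    glColorMask(GL_FALSE, GL_FALSE, GL_FALSE, GL_FALSE);
    glDepthMask(GL_FALSE);
    glBeginQuery(GL_ANY_SAMPLES_PASSED, query);
    drawBoundingBox(building);                     // placeholder for your own call
    glEndQuery(GL_ANY_SAMPLES_PASSED);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);

    // Later (ideally the next frame) fetch the result and draw the real object
    // only if any fragment of the proxy would have been visible.
    GLuint anySamplesPassed = 0;
    glGetQueryObjectuiv(query, GL_QUERY_RESULT, &anySamplesPassed);
    if (anySamplesPassed)
        drawFullBuilding(building);                // placeholder for your own call

The main gotcha is latency: asking for GL_QUERY_RESULT in the same frame forces the CPU to wait for the GPU, which is why results are usually consumed one frame later (and why the CPU rasterizer variant in iii) exists).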

HTH,

T

There are actually two issues to be dealt with:

1. Drawing it all. A frustum cull is the first step. Depending on the scene type, additional methods of VSD (visible surface determination) and of reducing rasterization, such as octrees, occlusion culling, LOD, terrain chunks, an optimized render queue, etc., may help. Well - an optimized render queue always helps <g>. And don't forget to make your assets speed-friendly in size.

2. Fitting it all into memory. Streaming, asset reuse, small (low-RAM) assets, real-time procedural generation - all of these can be employed.

Start with a prototype with a frustum cull and a whole bunch of cubes, enough to get the number of triangles on screen that you expect in the game. See how fast that runs. If it's not good enough, consider your scene type and determine what additional method(s) are called for. Implement and test. Repeat until it's fast enough.

Once you can draw it fast enough, all you need to do is figure out how to get the bits of data you need into RAM when you need them - somehow. Streaming is a popular solution; a toy sketch of the idea follows below. I use small assets, heavy asset reuse, and real-time procedural terrain chunk generation (both on demand and in the background).
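As an illustration of the streaming idea (chunk size, radius and names invented for the example): keep only the chunks around the player resident, load or generate the missing ones, and drop the rest. In a real game the load/generate step would run on a worker thread so the main loop never stalls.

    #include <cmath>
    #include <cstdlib>
    #include <map>
    #include <utility>

    struct Chunk { /* building meshes, props, spawn points, ... */ };

    const float kChunkSize  = 128.0f;   // world units per chunk
    const int   kLoadRadius = 3;        // chunks kept resident around the player

    std::map<std::pair<int, int>, Chunk> residentChunks;

    Chunk loadOrGenerateChunk(int /*cx*/, int /*cz*/)
    {
        // Read from disk or run procedural generation here.
        return Chunk{};
    }

    void updateStreaming(float playerX, float playerZ)
    {
        int pcx = static_cast<int>(std::floor(playerX / kChunkSize));
        int pcz = static_cast<int>(std::floor(playerZ / kChunkSize));

        // Bring in everything inside the radius that isn't resident yet.
        for (int dx = -kLoadRadius; dx <= kLoadRadius; ++dx)
            for (int dz = -kLoadRadius; dz <= kLoadRadius; ++dz) {
                std::pair<int, int> key(pcx + dx, pcz + dz);
                if (residentChunks.find(key) == residentChunks.end())
                    residentChunks[key] = loadOrGenerateChunk(key.first, key.second);
            }

        // Drop chunks that fell outside the radius.
        for (auto it = residentChunks.begin(); it != residentChunks.end(); ) {
            if (std::abs(it->first.first - pcx) > kLoadRadius ||
                std::abs(it->first.second - pcz) > kLoadRadius)
                it = residentChunks.erase(it);
            else
                ++it;
        }
    }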

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

Thanks, all! That made it a bit clearer, especially the frustum culling. :)

I'd love to do it that way, but for me there's one detail missing...

If I have a bunch of meshes, each bounded by a cube, and a player with a "camera", then I only have a bunch of points (the cubes' corners) and a 4x4 matrix (the camera). How am I able to compute the frustum culling on the CPU side with just that?

The link you gave me is awesome! Thanks! :)

But what about the logical update aspect instead of just the rendering?

If I render only the objects within the previously calculated frustum, should I maybe also only update those objects' logic?

Or is something like "update only objects within a specific radius, even outside the frustum" a good approach?

As Tessellator mentions, you really should look into spatial subdivision in combination with frustum culling. Things like BSP trees, quadtrees, octrees, kd-trees and bounding volume hierarchies (and, though not spatial subdivision, PVS as well) can be used to accelerate frustum culling ( https://en.wikipedia.org/wiki/Space_partitioning ). He also mentions occlusion culling, which is important as well; look into software occlusion culling and predicated rendering (IIRC).


But what about the logical update aspect instead of just the rendering?
If I render only the objects within the previously calculated frustum, should I maybe also only update those objects' logic?
Or is something like "update only objects within a specific radius, even outside the frustum" a good approach?

Yes, LOD with regard to AI and current game relevance is also a good idea. A radius works well, and a second dimension is relevance to an active "plot".
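A rough sketch of that idea (tiers and distances are invented for the example): bucket entities into update tiers by distance to the player, tick the near ones every frame and the far ones rarely or not at all, and force anything tied to the active plot into the top tier regardless of distance.

    #include <cstdint>

    enum class UpdateTier { EveryFrame, EveryFourthFrame, OncePerSecond, Frozen };

    UpdateTier tierFor(float distSqToPlayer, bool relevantToActivePlot)
    {
        if (relevantToActivePlot)              return UpdateTier::EveryFrame;
        if (distSqToPlayer <  50.0f *  50.0f)  return UpdateTier::EveryFrame;
        if (distSqToPlayer < 200.0f * 200.0f)  return UpdateTier::EveryFourthFrame;
        if (distSqToPlayer < 600.0f * 600.0f)  return UpdateTier::OncePerSecond;
        return UpdateTier::Frozen;             // or despawn it entirely
    }

    bool shouldTick(UpdateTier tier, std::uint64_t frameIndex, float secondsSinceLastTick)
    {
        switch (tier) {
            case UpdateTier::EveryFrame:       return true;
            case UpdateTier::EveryFourthFrame: return frameIndex % 4 == 0;
            case UpdateTier::OncePerSecond:    return secondsSinceLastTick >= 1.0f;
            case UpdateTier::Frozen:           return false;
        }
        return false;
    }

When an entity drops to a slower tier you usually scale its update by the time elapsed since its last tick, so a pedestrian still covers the right distance even though it is only simulated occasionally.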

-potential energy is easily made kinetic-


But what about the logical update aspect instead of just the rendering?

I only update everything at game speeds up to 128x accelerated time. Above that speed, I simply model the effects of running the AI but don't actually run it. And anything far enough from the player gets removed from the simulation entirely, so it never gets updated.

Norm Barrows

Rockland Software Productions

"Building PC games since 1989"

rocklandsoftware.net

PLAY CAVEMAN NOW!

http://rocklandsoftware.net/beta.php

I would recommend using a premade engine for something like an open world game.

Something like Unreal Engine 4 would do the job well, as it already has industry-proven LOD, frustum culling, level streaming and open-world systems built in (see their Kite demo).

You could spend years writing all this yourself; if you just want to get on with writing the game and know that the needed functionality works, going the premade route is a good idea.

If you do want to do it all yourself, good luck because creating it and making sure it scales well could take you a long time...

Good luck!


Something like Unreal Engine 4 would do the job well, as it already has industry-proven LOD

I have no real experience with Unreal Engine 4, so I was wondering what exactly you mean by this. Does UE4 use some sort of dynamic LOD, or is it still static but with a "robust" method for selecting the LOD level?

-potential energy is easily made kinetic-

This topic is closed to new replies.
