Indoor levels with today's hardware



Hi, I'm thinking about how to do indoor rendering with today's GPUs (GF3 and better). Quake3's low-poly & eye-candy approach is no longer top-notch; today's T&L hardware is capable of rendering a Quake3 level brute-force at >40 fps. Also, it's not that interesting to do a Q3 renderer anymore :>

So, let's have a look at today's fancy features: vertex & pixel shaders, high-poly environments. I thought about using them in an indoor level, but there are some problems:

- Visibility determination: of course, it's always a problem. Limiting the level to a static one (or a "semi-dynamic" one, e.g. some parts can be modified, like a breakable wall) eases this greatly, though. A relatively easy approach would be to use portal rendering with not-necessarily-convex sectors: all polys of a sector are stuffed into one vertex buffer, all their indices into one index buffer. VSD is then achieved by determining and drawing the visible sectors.

- Memory: unfortunately, high-poly means high memory consumption, so some kind of multithreaded loading in the background has to be done.

Right now I can't remember any more problems, but I'm pretty sure there are zillions of 'em =) What are your thoughts about this kind of stuff?
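The sector/portal idea above can be sketched as a simple graph walk. This is a minimal illustration with made-up types (`Sector`, `Portal`, `buildDemoLevel` are mine, not from the post); a real renderer would also clip the view frustum against each portal polygon before recursing, which is omitted here:

```cpp
// Each sector owns one VB + one IB; a portal links it to a neighbour.
// Starting from the camera's sector, walk through portals and collect
// the set of potentially visible sectors to draw.
#include <cassert>
#include <set>
#include <vector>

struct Portal { int toSector; };                   // link to the neighbouring sector
struct Sector { std::vector<Portal> portals; /* plus one VB + one IB */ };

void collectVisible(const std::vector<Sector>& sectors, int current,
                    std::set<int>& visible)
{
    if (!visible.insert(current).second)
        return;                                    // sector already visited
    for (const Portal& p : sectors[current].portals)
        // real code: only recurse if the portal survives frustum clipping
        collectVisible(sectors, p.toSector, visible);
}

// tiny demo level: three sectors in a row, 0 <-> 1 <-> 2
std::vector<Sector> buildDemoLevel()
{
    std::vector<Sector> level(3);
    level[0].portals.push_back({1});
    level[1].portals.push_back({0});
    level[1].portals.push_back({2});
    level[2].portals.push_back({1});
    return level;
}
```

With no portal clipping, every reachable sector is reported visible; the clipping step is what cuts the set down to what the camera can actually see.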

HSR and efficient memory management are definitely the biggest problems.

First point is to organize your data structures in the best possible way: minimum cache misses, fast availability, 3D-hardware-optimized formats (to avoid on-the-fly conversion) and swappability (if you have really big scenes). Everything should be organized using a hierarchic scenegraph, so that local geometry is grouped together in memory. This greatly reduces cache misses and allows fast streaming of the data to the 3D card.
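One way to read "local geometry grouped together in memory" is a depth-first flatten of the scenegraph into one contiguous array, so geometry that is close in the hierarchy ends up close in memory and the whole range can be streamed in one go. A hypothetical sketch (the `Node`/`flatten` names are mine):

```cpp
// Depth-first flatten: each node's vertices are appended to one linear
// buffer, children directly following their parent. The resulting array
// is cache-friendly and can be handed to the card as a single range.
#include <cassert>
#include <vector>

struct Vertex { float x, y, z; };          // assume already in the card's format

struct Node {
    std::vector<Vertex> geometry;
    std::vector<Node>   children;
};

void flatten(const Node& n, std::vector<Vertex>& out)
{
    out.insert(out.end(), n.geometry.begin(), n.geometry.end());
    for (const Node& c : n.children)
        flatten(c, out);                   // children follow their parent in memory
}
```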

HSR is another point. Modern portal-based systems (that do not require convex sectors, and allow arbitrary connections between sectors) fare pretty well in terms of efficiency. Next, you might want to use an occlusion culling scheme, even if that is not always a big win on indoor scenes. A good LOD scheme is vital, though.
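The LOD side of this can be as simple as picking a detail level from camera distance. A minimal sketch with illustrative thresholds (not from the post):

```cpp
// thresholds[i] is the max distance at which LOD i is still used;
// anything beyond the last threshold falls back to the coarsest level.
#include <cassert>
#include <cstddef>

std::size_t pickLod(float distance, const float* thresholds, std::size_t count)
{
    for (std::size_t i = 0; i < count; ++i)
        if (distance <= thresholds[i])
            return i;                      // finest LOD that still applies
    return count;                          // coarsest LOD
}
```

Real systems usually add hysteresis so a mesh doesn't flicker between two levels when the camera hovers near a threshold.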

About dynamic vs. static scenes, the rule of thumb with modern 3D cards is: you can make them as dynamic as you want, as long as you do it entirely on the GPU (using vertex and pixel shaders). Streaming dynamic geometry is expensive, especially if it needs to be preprocessed by the CPU. If the geometry is static in memory, then making it dynamic on the 3D card itself is very fast.
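To make "static in memory, dynamic on the card" concrete: the vertex buffer never changes, and the animated position is derived per-vertex from the static base position plus a time constant uploaded each frame. A hypothetical wave deformation, written as plain C++ for illustration; on real hardware this function body would live in the vertex shader, so the CPU streams nothing:

```cpp
// Per-vertex work of a deformation "shader": the input is the immutable
// base position from the static VB, plus one per-frame time constant.
#include <cassert>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 animate(const Vec3& base, float time)
{
    Vec3 out = base;
    out.y += 0.1f * std::sin(base.x * 4.0f + time);  // cheap wave motion
    return out;
}
```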

As an example: in our company, we are currently working on a realtime architectural walkthrough. The scene is a highly detailed airport consisting of approx. 120 million faces. The requirements are a constant minimum of 50 fps, and the whole thing has to run on a GeForce4 Ti4600. That isn't easy. The memory requirements are enormous (several gigabytes), so we use a cached streaming system: from HD -> RAM -> 3D card. Although the scene is essentially 90% indoor, we can't use portals, since we have tons of glass and transparent walls; that would render a portal system useless. So we exclusively use occlusion culling and a LOD system. We could cut the maximum number of faces per frame down to approx. 350,000, which is acceptable for our target hardware.
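The HD -> RAM layer of such a cached streaming system is typically some form of LRU cache over geometry chunks. This is my own minimal sketch, not their actual system: chunks are loaded on demand and the least recently used one is evicted once the RAM budget is exceeded.

```cpp
// Minimal LRU chunk cache: front of the list = most recently used.
#include <cassert>
#include <cstddef>
#include <list>
#include <unordered_map>

class ChunkCache {
public:
    explicit ChunkCache(std::size_t capacity) : capacity_(capacity) {}

    // returns true on a cache hit, false if the chunk had to be "loaded"
    bool request(int chunkId)
    {
        auto it = index_.find(chunkId);
        if (it != index_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // move to front
            return true;
        }
        if (lru_.size() == capacity_) {                   // budget full:
            index_.erase(lru_.back());                    // evict the oldest
            lru_.pop_back();
        }
        lru_.push_front(chunkId);                         // "load" from disk
        index_[chunkId] = lru_.begin();
        return false;
    }

private:
    std::size_t capacity_;
    std::list<int> lru_;
    std::unordered_map<int, std::list<int>::iterator> index_;
};
```

A production system would additionally prefetch chunks asynchronously based on predicted camera movement, rather than stalling on a miss.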

So basically, it's not so much a question of the number of faces used, but more a question of how the data structures are organized, and how fast they can be accessed.

/ Yann

Well, more or less standard occlusion culling. We have a highly simplified mesh of the large scene features (walls, floors, large objects, etc.). This mesh is rendered using normal culling methods. Then, while rendering the high-polycount scene, each scenegraph node is checked against the previously rendered occlusion map. If it is fully occluded (depth test), then it is not rendered. Newer 3D cards even let you do that check in HW (although we do it in software for various implementation reasons).
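The software depth check described above might look something like this (hypothetical layout, names are mine): the occluder pass has filled a low-resolution depth buffer, and a scenegraph node is skipped if every pixel of its screen-space rectangle is already covered by nearer geometry.

```cpp
// A node is fully occluded if its nearest depth is behind the occlusion
// map at every pixel its bounding rectangle covers.
#include <cassert>
#include <vector>

struct Rect { int x0, y0, x1, y1; };       // node's screen-space bounds, inclusive

bool fullyOccluded(const std::vector<float>& depth, int width,
                   const Rect& r, float nodeNearestDepth)
{
    for (int y = r.y0; y <= r.y1; ++y)
        for (int x = r.x0; x <= r.x1; ++x)
            if (nodeNearestDepth < depth[y * width + x])
                return false;              // visible through at least one pixel
    return true;                           // every pixel already covered
}
```

The hardware equivalent on newer cards is an occlusion query: render the node's bounding box with writes disabled and ask how many pixels passed the depth test.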
