
Archived

This topic is now archived and is closed to further replies.

Dirge

Occlusion Culling

36 posts in this topic

I should add that it's just an idea of mine. I haven't actually implemented it, so I don't know for sure if it's any good.
quote:

Still wondering what methods people use to choose occluders, since most OC schemes require a good method to get good results. What type of precomputed information do people use, and how do you use it for dynamic occluder determination?


More or less distance-based selection, along with estimated projection area (the larger, the better the occluder). Add occluders in order, until a (CPU dependent) maximum number of polygons is reached.
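The selection policy described above can be sketched as a simple scoring pass. Everything here is illustrative: the candidate fields (`center`, `radius`, `poly_count`), the score formula, and the budget handling are assumptions for this sketch, not the poster's actual code.

```python
import math

def select_occluders(candidates, cam_pos, max_polys):
    """Pick occluders by a distance/size score until a polygon budget is hit.

    `candidates` is assumed to be a list of dicts with 'center', 'radius'
    and 'poly_count' keys -- hypothetical fields for this sketch.
    """
    def score(c):
        dist = max(1e-6, math.dist(c['center'], cam_pos))
        # Rough estimated projection area: grows with occluder size,
        # shrinks with the square of the distance.
        return (c['radius'] ** 2) / (dist * dist)

    chosen, budget = [], max_polys
    for c in sorted(candidates, key=score, reverse=True):
        if c['poly_count'] <= budget:
            chosen.append(c)
            budget -= c['poly_count']
    return chosen
```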

quote:

Do people use time coherence in estimating the occluder set for the next frame?


Not really, at this point. Making good use of time coherency would require a way to consistently check the actual effectiveness of an occluder (ie. how much would the occlusion result degrade, if that occluder was removed). Resolving such dependencies can be very complex and slow, especially when taking the effects of occluder fusion into account. A certain other time coherency is used though, by considering the occluders that were visible on the last frame (ie. that weren't occluded themselves) with a higher priority on the current frame. This gives a slightly better occlusion set, as the number of redundant occluders is reduced.
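The last-frame priority boost mentioned above could look something like this; the `score` key, the `id` set, and the boost factor are hypothetical illustration, not the engine's real data layout.

```python
def prioritize(occluders, visible_last_frame, boost=2.0):
    """Temporal-coherency tweak: occluders that were themselves visible
    (i.e. not occluded) last frame get their selection score boosted, so
    redundant self-occluded occluders tend to drop out of the set.
    The 'score'/'id' keys and the boost factor are illustrative only.
    """
    for o in occluders:
        if o['id'] in visible_last_frame:
            o['score'] *= boost
    return sorted(occluders, key=lambda o: o['score'], reverse=True)
```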

quote:

Yann, can you estimate how many triangles you render with the software rasterizer per frame?


Hard to say, as I don't use triangles. I use planar shapes instead: arbitrary convex or concave polygons (that can have holes). That way, I only need to evaluate the z-gradients once for each shape. If I used triangles, I would have a lot of redundant gradient computations, since coplanar triangles are very common on occlusion skins.
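The per-shape gradient evaluation works because, for a planar shape, 1/z is a linear function of screen coordinates. A minimal derivation sketch, assuming a pinhole camera with focal length f and a plane n·p = d:

```python
def inv_z_gradients(normal, plane_d, focal):
    """For a plane n.p = d seen through a pinhole camera with focal
    length f, substituting x = sx*z/f and y = sy*z/f into the plane
    equation gives 1/z = (a*sx + b*sy + c*f) / (d*f): linear in screen
    space. Returns (d(1/z)/dsx, d(1/z)/dsy, constant term), evaluated
    once per planar shape rather than once per coplanar triangle.
    """
    a, b, c = normal
    return a / (plane_d * focal), b / (plane_d * focal), c / plane_d
```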

quote:

Yann: So what's the difference between your method and the hierarchical z-buffer?


None. It essentially is a software-rendered hierarchical z-buffer, with the difference that only dedicated low-poly occlusion geometry is rendered to it, instead of the entire scene.

quote:

Also, just out of curiosity, what methods (known algorithms?) are used for the line/fill tasks?


I simply use the standard "create the outer polygon lines, and fill horizontal spans in between" algorithm. That's the same one that was commonly used in all those old non-accelerated 3D renderers. It's a little bit modified to support concave polygons.
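The span-fill algorithm described can be sketched as follows, using the even-odd rule so concave outlines also work. This is a generic textbook version, not the poster's implementation (which interpolates 1/z per span and is written for speed):

```python
def scanline_fill(poly, width, height):
    """Standard scanline fill: for each row, intersect the polygon edges,
    sort the crossings, and fill horizontal spans between pairs (even-odd
    rule, which also handles concave outlines). Returns a coverage buffer;
    a real occlusion rasterizer would write interpolated 1/z instead.
    """
    buf = [[0] * width for _ in range(height)]
    n = len(poly)
    for y in range(height):
        yc = y + 0.5                      # sample at pixel centers
        xs = []
        for i in range(n):
            (x0, y0), (x1, y1) = poly[i], poly[(i + 1) % n]
            # Half-open test avoids double-counting shared vertices.
            if (y0 <= yc < y1) or (y1 <= yc < y0):
                t = (yc - y0) / (y1 - y0)
                xs.append(x0 + t * (x1 - x0))
        xs.sort()
        for j in range(0, len(xs) - 1, 2):
            for x in range(max(0, round(xs[j])), min(width, round(xs[j + 1]))):
                buf[y][x] = 1
    return buf
```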

quote:

And one last question... I assume the map is rendered to a 2D array in main memory... true or false?


True.

quote:

One thing you can do to speed up the occlusion buffer rendering is to use a single depth value for an entire triangle. When rendering an occluder triangle, using the furthest z-value of the 3 vertices for the entire triangle will make the occlusion tests more conservative, but reduces the rendering to filling a span of memory with a single value.


Hmm, that would definitely speed up the rendering. But I'm not sure the occlusion wouldn't become a little too conservative. Note that I didn't try it either, so I'm merely speculating. You would have to take the differential gradients of a polygon (1/z, in screenspace) into account, and if the difference isn't too big, then you could approximate it by a constant value. Hmm, sounds good. But you shouldn't do that on all of your occlusion geometry. Imagine the player standing beside a long wall, orthogonal to the camera plane (extending from behind the camera up to the horizon, filling an entire half of the screen).

The gradient differential would be extremely large, and if you approximated that wall with a constant depth, the occlusion effect would approach zero. Now, if you have an entire town behind that wall, well, you get the rest.

But the idea is pretty good, and would probably speed up the rendering process, especially on occluders that are almost parallel to the camera plane. I will try it out in my engine and report back.


[edited by - Yann L on May 6, 2003 6:06:01 PM]
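The constant-depth idea discussed above, with the gradient threshold guard, might be sketched like this; the buffer layout (storing 1/z, larger = closer) and the threshold value are assumptions for illustration:

```python
def fill_span(zbuf, y, x0, x1, invz0, dinvz_dx, flat_threshold=1e-4):
    """Conservative constant-depth optimization from the thread: if the
    1/z gradient across the span is small enough, write the farthest
    (smallest) 1/z once for the whole span; otherwise interpolate per
    pixel. `zbuf` stores 1/z with larger meaning closer; the threshold
    value is illustrative.
    """
    row = zbuf[y]
    if abs(dinvz_dx) < flat_threshold:
        # Farthest value over the span keeps the test conservative.
        invz_far = min(invz0, invz0 + dinvz_dx * (x1 - x0))
        for x in range(x0, x1):          # memset-style constant fill
            row[x] = max(row[x], invz_far)
    else:
        invz = invz0
        for x in range(x0, x1):          # classic per-pixel interpolation
            row[x] = max(row[x], invz)
            invz += dinvz_dx
```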
If you have large polygons, it can be really easy to occlude by making planes between the edges of the polygon and the viewer, almost like a frustum but with more/irregular sides.
quote:

None. It essentially is a software rendered hierarchical z-buffer, with the difference that only dedicated low-poly occlusion geometry is rendered to it, instead of the entire scene.


Heh, I thought you said you use HOM

BTW, have you measured how much time it statistically takes to test the occlusion? I know it's parallel with the GPU, but CPU time is also useful for other things, so I'm curious...
quote:

Heh, I thought you said you use HOM


Well, HOM just means "hierarchical occlusion mapping". How you achieve that, either by coverage maps and depth estimation buffers (as in the first Zhang approach) or by cutting the coverage and keeping an exact depth buffer hierarchy, doesn't really matter. The result is the same.
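A minimal sketch of the hierarchical depth buffer idea: each coarser level stores the farthest depth of a 2×2 block, so one coarse texel conservatively bounds a whole screen region. Only the top-level early-out is shown here; a real implementation recurses into finer levels on ambiguous results, and this sketch assumes a square power-of-two buffer storing z (larger = farther).

```python
def build_hi_z(zbuf):
    """Build a hierarchical z-buffer: each coarser level keeps the
    FARTHEST depth of a 2x2 block, so a single coarse texel gives a
    conservative bound for the whole region below it.
    Assumes a square, power-of-two sized buffer of z values.
    """
    levels = [zbuf]
    cur = zbuf
    while len(cur) > 1:
        h, w = len(cur) // 2, len(cur[0]) // 2
        nxt = [[max(cur[2*y][2*x],     cur[2*y][2*x+1],
                    cur[2*y+1][2*x],   cur[2*y+1][2*x+1])
                for x in range(w)] for y in range(h)]
        levels.append(nxt)
        cur = nxt
    return levels

def occluded(levels, near_z):
    """Coarsest test only: if the object's nearest depth lies behind the
    farthest depth stored anywhere, it is definitely occluded. A full
    implementation recurses into finer levels on 'maybe' results.
    """
    return near_z > levels[-1][0][0]
```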

quote:

BTW, have you measured how much time it statistically takes to test the occlusion? I know it's parallel with the GPU, but CPU time is also useful for other things, so I'm curious...


The test itself is typically very fast. Most positive or negative results can be obtained on a very coarse level, thanks to the hierarchy. Only partially visible objects will recurse into the map, but that's not really critical either (the whole testing code is in ASM). The bigger issue is the occlusion map rendering and hierarchy creation. Whether it's worth it primarily depends on your level design and geometry. For me, it cuts over 90% of the faces; I couldn't live without it. But in other cases it might not be so effective. The best thing is to include a little runtime profiler in your occlusion code; rdtsc is your friend.
Methinks this const-depth idea might be more useful if the space partitioning relied more upon the scene geometry. For example, in this long hallway scene you might know that rooms are behind it and only check for the occlusion, not considering the stored depth. This again can be accomplished via the semi-automatic portalization that I wrote about a couple of posts ago.
quote:
Original post by treething
Hey Yann, did you get a chance to try out the constant-depth idea?

Yes, I did. It works pretty nicely, actually, although I had to set the threshold for the 1/z gradient pretty low; otherwise the occlusion became highly conservative very fast. Still, I get a speedup of around 60% on the rendering when directly in front of a parallel wall (obviously, since the constant filling reduces the impact of CPU fillrate). Generally, you can achieve around 20% speedup on a normal camera view. Not bad for such a simple modification.
Yeah, looks like it comes down to applying a "billboard", so triangles that have a high gradient would not work. It's a shame you have to do all that occlusion culling in software. Maybe in the next generation of hardware we'll get some fast query mechanism to get feedback from the renderer, if that's ever possible.
We have that in current generation hardware. The problem is that you cannot take advantage of CPU/GPU parallelization, because with the current state of hardware occlusion culling the CPU and GPU need to constantly talk to each other.

Perhaps in next generation hardware we'll be able to pass "if..then" instructions to hardware queries; that's when things are going to get really quick.
What about visibility determination for shadow-map lighting? For example, object A is invisible, but it can cast a shadow onto visible object B, so it must be rendered to the shadow map.

This is a more complicated type of visibility determination. Is it possible to use HOM (rendering from the light source) for this purpose?
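One common answer to this question (not from the thread itself) is to cull shadow casters by sweeping their bounds along the light direction and testing the swept volume against the camera frustum: an invisible caster whose swept volume still reaches the frustum can shadow something visible, so it must go into the shadow map. A sketch, assuming a directional light, a fixed extrusion distance, and inward-facing frustum planes (n·p + d ≥ 0 means inside):

```python
def shadow_caster_visible(aabb_min, aabb_max, light_dir, extrude, frustum_planes):
    """Shadow caster culling sketch: extrude the caster's AABB along the
    (directional) light direction and test the swept box against the
    camera frustum. The plane convention (n.p + d >= 0 inside) and the
    fixed `extrude` distance are assumptions for this sketch.
    """
    # Swept bounds: union of the box and the box pushed along light_dir.
    pushed_min = tuple(a + d * extrude for a, d in zip(aabb_min, light_dir))
    pushed_max = tuple(a + d * extrude for a, d in zip(aabb_max, light_dir))
    lo = tuple(min(a, b) for a, b in zip(aabb_min, pushed_min))
    hi = tuple(max(a, b) for a, b in zip(aabb_max, pushed_max))
    # Standard AABB vs frustum: reject if fully behind any plane.
    for n, d in frustum_planes:
        # Pick the corner farthest along the plane normal (the "p-vertex").
        p = tuple(hi[i] if n[i] >= 0 else lo[i] for i in range(3))
        if sum(n[i] * p[i] for i in range(3)) + d < 0:
            return False
    return True
```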