state of the art in geometry scalability?

stevesan

id Software's Tech 5 promises to solve the problem of scalable texturing: supposedly, every texture can be totally unique, and the engine won't hiccup at all. We'll see if they deliver. But what about the problem of geometry scalability? It seems that games are still far away from large, highly detailed geometry. For example, are there any games that even come close to letting you fly around New York City and then actually enter buildings through windows and doors, all with no load times or major hiccups? What is the state of the art with respect to these kinds of issues?

Tech 5 achieves its scalability with texture compression, so I imagine geometry compression (which is getting more attention in academia these days) would play a role in solving the problem, along with LOD rendering techniques such as progressive meshes and ChunkLOD. So, what is the state of the art with these kinds of algorithms? I think solving this problem even partially could enable some amazing games involving flight and exploration, all with zero load times and no "superman fog."

wolf
People write books about which algorithm you can use in which case. Like id's megatexture idea, some work for certain game genres and not for others.
It also depends on your target hardware: if you target the main game platforms, the 360 and PS3, you might find "natural" solutions to LOD that are different.

stevesan
Quote:
Original post by wolf
People write books about which algorithm you can use in which case. Like id's megatexture idea, some work for certain game genres and not for others.
It also depends on your target hardware: if you target the main game platforms, the 360 and PS3, you might find "natural" solutions to LOD that are different.


Is it fair to say, however, that there's plenty of room for improvement? I.e., there are still some things we can't do but may want to do (like the New York City example, with building interiors)?

sebastiansylvan
Nitpick: id Tech 5 solves the texture problem primarily through fine-grained (sparse) streaming, not through compression (though that is used too, both DXT to get the physical memory cost down and more aggressive compression on disc to get the total size down).

As for geometry, one idea is to work towards getting much better displacement mapping going. We need better control over where the triangles go. Ideally we'd write a small shader that just returns true or false for each passed-in edge, signifying whether it should be split -- though some hybrid approach, where you can specify a varying degree of subdivision for each edge, would save you from re-checking essentially the same edge too many times.
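To make that concrete, here's a rough sketch in Python of the kind of true/false split predicate described above. The screen-space edge-length heuristic, the pinhole-camera projection, and the `max_pixels` threshold are all my own illustrative assumptions, not anything id-specific:

```python
import math

def should_split(edge_len_world, dist_to_camera, fov_y, screen_h,
                 max_pixels=8.0):
    """Return True if an edge would span more than max_pixels on screen.

    Approximates the projected length of an edge of length
    edge_len_world at distance dist_to_camera, for a vertical FOV
    fov_y (radians) and a screen height of screen_h pixels.
    """
    # pixels per world unit at this depth (pinhole camera approximation)
    pixels_per_unit = screen_h / (2.0 * dist_to_camera * math.tan(fov_y / 2.0))
    return edge_len_world * pixels_per_unit > max_pixels

# a long nearby edge should split; the same edge far away should not
print(should_split(1.0, 5.0, math.radians(60), 1080))    # True
print(should_split(1.0, 500.0, math.radians(60), 1080))  # False
```

The hybrid version would return a subdivision count instead of a bool, so each edge is evaluated once rather than re-tested at every split level.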

Once we have that, we just compress our geometry into a low-frequency control mesh and store displacement maps (possibly as "distance from N-patch" to minimize high-frequency information in the displacement maps). These displacement maps can be streamed in and compressed in exactly the same way as textures (though they probably need a different address space, as "important" areas for geometry are not the same as important areas for textures -- e.g. silhouettes are important for geometry but unimportant for textures).
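As a toy illustration of the displacement step (not any engine's actual API), here's what applying a streamed scalar displacement to a control mesh might look like; `disp` is just a parallel list standing in for a sampled displacement texture:

```python
def displace(vertices, normals, disp, scale=1.0):
    """Offset each control-mesh vertex along its normal by a sampled
    scalar displacement. A real engine would sample a streamed
    displacement texture per vertex instead of a parallel list."""
    out = []
    for (vx, vy, vz), (nx, ny, nz), d in zip(vertices, normals, disp):
        out.append((vx + nx * d * scale,
                    vy + ny * d * scale,
                    vz + nz * d * scale))
    return out

# flat quad pushed upward by per-vertex heights
verts = [(0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 0)]
norms = [(0, 0, 1)] * 4
print(displace(verts, norms, [0.0, 0.5, 0.5, 1.0]))
```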

The low-frequency control meshes could be streamed in through a fairly naive streaming system, since they don't take up too much memory (e.g. pre-computed cell-to-cell visibility, where you always guarantee that you have enough geometry for the current cell and any of its neighbours, or something like that).
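A minimal sketch of that residency rule, assuming the precomputed visibility table is just a dict from cell to the set of cells it can potentially see (a hypothetical data layout, purely for illustration):

```python
def resident_cells(current, visibility):
    """Return the set of cells whose control meshes must be resident:
    the current cell plus every cell it can potentially see, per a
    precomputed cell-to-cell visibility table."""
    return {current} | visibility.get(current, set())

# toy 1-D corridor of cells where each cell sees its neighbours
vis = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(sorted(resident_cells(1, vis)))  # [0, 1, 2]
```

The streamer would diff this set against what's loaded whenever the player crosses a cell boundary, evicting cells that drop out and prefetching new ones.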

stevesan
Quote:
Original post by sebastiansylvan
Nitpick: id Tech 5 solves the texture problem primarily through fine-grained (sparse) streaming, not through compression (though that is used too, both DXT to get the physical memory cost down and more aggressive compression on disc to get the total size down).


Oh yeah, I didn't mean to imply it was JUST compression that did it :)

Quote:
Original post by sebastiansylvan
As for geometry, one idea is to work towards getting much better displacement mapping going. We need better control over where the triangles go. Ideally we'd write a small shader that just returns true or false for each passed-in edge, signifying whether it should be split -- though some hybrid approach, where you can specify a varying degree of subdivision for each edge, would save you from re-checking essentially the same edge too many times.

Once we have that, we just compress our geometry into a low-frequency control mesh and store displacement maps (possibly as "distance from N-patch" to minimize high-frequency information in the displacement maps). These displacement maps can be streamed in and compressed in exactly the same way as textures (though they probably need a different address space, as "important" areas for geometry are not the same as important areas for textures -- e.g. silhouettes are important for geometry but unimportant for textures).

The low-frequency control meshes could be streamed in through a fairly naive streaming system, since they don't take up too much memory (e.g. pre-computed cell-to-cell visibility, where you always guarantee that you have enough geometry for the current cell and any of its neighbours, or something like that).


Very interesting. There is some recent work on progressive geometry compression (just search those three words on the ACM database), which to my understanding focuses on exactly that: compressing in a way that allows different frequency levels to be decompressed separately, in a progressive, refining way (probably with some blending/geomorphing ability too). Their main application seems to be things like Google Earth, where streaming data over the internet is the bottleneck. But it seems very possible you could apply the same principles to interactive rendering, where the bottleneck may be reading from disk or uploading to the GPU.
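To illustrate the coarse-to-fine idea with a toy example: midpoint insertion on a polyline stands in here for real progressive-mesh vertex-split records, but the shape of the scheme is the same -- a small base plus streamed refinement levels:

```python
def refine(points, levels):
    """Progressively refine a polyline by inserting midpoints -- a toy
    stand-in for progressive-mesh vertex splits: each extra 'level'
    streamed in roughly doubles the detail of the base shape."""
    pts = list(points)
    for _ in range(levels):
        out = []
        for a, b in zip(pts, pts[1:]):
            out.append(a)
            out.append(((a[0] + b[0]) / 2, (a[1] + b[1]) / 2))
        out.append(pts[-1])
        pts = out
    return pts

base = [(0.0, 0.0), (1.0, 0.0)]
print(len(refine(base, 0)), len(refine(base, 1)), len(refine(base, 3)))
# coarse-to-fine: 2, 3, 9 vertices from the same base data
```

In a real codec the refinement records would also carry the high-frequency offsets, which is where the compression wins come from.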

There is also work on visibility-sensitive progressive compression, which seems perfect for the "sparse streaming" you'd want here. Hmm, I'll have to do more reading on this...
