100 billion polygons realtime?

This afternoon a friend of mine came to tell me about this video:

http://www.youtube.c...h?v=M04SMNkTx9E

After seeing it, and especially the title: 100 billion polys?? Really?? A huge megatexture of 128,000 x 128,000 is one thing; that can take 32 GB or more on disk, maybe less with some sort of compression. 100 billion polygons is another thing entirely. In the best-case scenario, say a classic 44-byte vertex composed of position, normal, texture coordinates, and tangent, that geometry would take up to 12 TB!!

100,000,000,000 polygons (100 billion)
x 3 vertices per polygon = 300,000,000,000 vertices
x 44 bytes per vertex = 13,200,000,000,000 bytes
= 12,890,625,000 KB
= 12,588,501 MB
= 12,293.5 GB
= ~12 TB
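As a quick sanity check, the same arithmetic in code (the 44-byte layout and the lack of any vertex sharing are just the assumptions above):

[code]
#include <cstdio>

int main() {
    // Assumptions from the post above: 3 unshared vertices per polygon,
    // 44 bytes per vertex (position + normal + UV + tangent).
    const double polys        = 100e9;   // 100 billion triangles
    const double vertsPerPoly = 3.0;
    const double bytesPerVert = 44.0;

    const double bytes = polys * vertsPerPoly * bytesPerVert;  // 13.2e12 bytes
    const double TiB   = 1024.0 * 1024.0 * 1024.0 * 1024.0;
    std::printf("%.2f TiB\n", bytes / TiB);                    // prints ~12.01 TiB
    return 0;
}
[/code]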

How could this be possible? I'm missing something, that's for sure, because this was a talk at GDC. I don't believe I'm the only one who noticed that 100 billion is a really big number...

Another thing: at 0:11 the red couch has hard edges, the boat does too, and the leaves are simple planes with alpha. I can't believe that a demo with as many polygons as there are stars in the galaxy can't have modeled leaves instead of just planes! From 1:31 to 1:33, look at the bottom-left corner: those are billboards, not full geometry; the planes clearly rotate to face the camera the whole time...
And yet that video doesn't look any better than games I already play on my PS3 and XBox360.

You only ever need enough polygons to properly define the silhouette of whatever object is being modeled.

LOD is best used on distant objects, and when done right it blends seamlessly.

Impostor rendering is highly effective, and is also undetectable when done right. I play a game that has ~1000 zombies on screen all the time, and I can't even tell where the real models end and the impostors begin.
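As a rough illustration of that (the thresholds and function name here are invented, not from any particular engine), choosing between full geometry, a lower LOD, and an impostor usually comes down to a projected screen-size metric:

[code]
#include <cmath>

// Hypothetical detail selection based on projected screen size.
// The pixel thresholds are made up for illustration.
enum class DetailLevel { FullMesh, LowPolyMesh, Impostor };

DetailLevel SelectDetail(float objectRadius, float distanceToCamera,
                         float screenHeightPx, float verticalFovRadians) {
    // Approximate on-screen height, in pixels, of the object's bounding sphere.
    float projectedPx = (objectRadius /
                         (distanceToCamera * std::tan(verticalFovRadians * 0.5f))) *
                        (screenHeightPx * 0.5f);

    if (projectedPx > 200.0f) return DetailLevel::FullMesh;    // close: real geometry
    if (projectedPx > 40.0f)  return DetailLevel::LowPolyMesh; // mid-range: reduced LOD
    return DetailLevel::Impostor;                              // far: camera-facing quad
}
[/code]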

As game rendering gets to be more and more like real photography, the backgrounds get washed out and out of focus anyways. It's called depth of field.

Polygon count hasn't been an issue in ages. It's all about optimizing shader performance now. And when the final image is up, the problems are accurate material representation and natural motion. This is even a problem in high-end CGI films: very bad shoulder movements, bad hair, and everything moves very puppet-like, even with motion capture.

As for your 'HOW': the scene is segmented. 100 billion is the scene total, not the rendered total. MOST of that data won't get touched over the course of many frames. And when stuff does need to be updated, it's quick enough to page it in and out over the bus. Carmack has said you can do megatexture paging in as little as 16 MB.
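Roughly, that paging works like the minimal sketch below -- a generic LRU page-residency cache, not id's or Lionhead's actual implementation; only the pages touched recently stay in memory, and everything else lives on disk until requested:

[code]
#include <cstddef>
#include <cstdint>
#include <list>
#include <unordered_map>

// Generic page-residency cache (illustrative only).
class PageCache {
public:
    explicit PageCache(size_t maxResidentPages) : capacity_(maxResidentPages) {}

    // Returns true if the page was already resident, false if it had to be streamed in.
    bool Touch(uint64_t pageId) {
        auto it = lookup_.find(pageId);
        if (it != lookup_.end()) {
            lru_.splice(lru_.begin(), lru_, it->second);  // move to the front (most recent)
            return true;
        }
        if (lru_.size() == capacity_) {                   // evict the least-recently-used page
            lookup_.erase(lru_.back());
            lru_.pop_back();
        }
        lru_.push_front(pageId);                          // "stream in" the new page
        lookup_[pageId] = lru_.begin();
        return false;
    }

private:
    size_t capacity_;
    std::list<uint64_t> lru_;
    std::unordered_map<uint64_t, std::list<uint64_t>::iterator> lookup_;
};
[/code]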
I understand the concept of paging and memory virtualization; the problem I see is where the data is stored. 100 billion? There is no way to store that amount of data...

Let's say it is possible to store that amount of data (nine 1.5 TB hard drives?). Just imagine the time spent searching which polygons of which page or segment need to be cached for rendering; it can't be 100 billion... 100 million sounds realistic and possible using memory virtualization, because only a few thousand will be cached and rendered, and there is space on a hard drive, even in RAM, to store 100 million polygons. But 100 billion? No way.
If you read the PDF linked in the video description: the source data has billions of polygons; the runtime data is optimised down to much less than that. Also, they specifically mention that they bypass much of this pipeline for certain meshes the artists wanted to model traditionally, namely the alpha-textured trees.

Regarding the space required for their texture data, they use their own (up to) 60:1 compression on colour textures (compared to DXT's 6:1 ratio). They transcode to DXT when uploading to the VRAM's mega-texture pages.
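Putting rough numbers on that, using the 128,000 x 128,000 figure from the opening post and the compression ratios quoted above (the uncompressed size assumes 24-bit RGB):

[code]
#include <cstdio>

int main() {
    // Rough texture-storage comparison using the numbers from this thread.
    const double texels     = 128000.0 * 128000.0;  // 128k x 128k megatexture
    const double rawBytes   = texels * 3.0;         // uncompressed 24-bit RGB
    const double dxtBytes   = rawBytes / 6.0;       // DXT's ~6:1 ratio
    const double sixtyToOne = rawBytes / 60.0;      // the quoted (up to) 60:1 ratio

    const double GiB = 1024.0 * 1024.0 * 1024.0;
    std::printf("raw:  %6.1f GiB\n", rawBytes   / GiB);  // ~45.8 GiB
    std::printf("DXT:  %6.1f GiB\n", dxtBytes   / GiB);  // ~7.6 GiB
    std::printf("60:1: %6.1f GiB\n", sixtyToOne / GiB);  // ~0.8 GiB
    return 0;
}
[/code]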

Read the PDF. The presentation is impressive because of the work-flow, not the runtime poly-count. There's no UV unwrap or 2D texturing steps -- that's a huge part of the traditional art pipeline that they just nuked! There's also no polygon budget or texture budget for the artists to consider, and decisions like what resolution to use are done by the pipeline, not by the artists. The whole presentation is about giving freedom to their art department, not "look how many polygons we can draw".
[font="arial, verdana, tahoma, sans-serif"]And yet that video doesn't look any better than games I already play on my PS3 and XBox360.
Again, the only part about poly-counts that's interesting is the pipeline they've created that lets the artists use as many polygons as they like while modelling -- the number they draw at runtime isn't the point.[/font]
[font="arial, verdana, tahoma, sans-serif"][/font] [font="arial, verdana, tahoma, sans-serif"]Also, half[/font][font="arial, verdana, tahoma, sans-serif"] of the presentation is on their real-time global-illumination solution, which is quite novel and largely runs on the CPU. You can see this in the demo with things like the red armchairs or the green walls being bounced off the white furniture. Those bounces aren't baked or faked.[/font]
From what I gathered from having a look over the paper, the "mega mesh" stuff is part of the content creation process, and is a tool which gets around limitations of sculpting tools (ZBrush, Mudbox, etc.). These tools aren't designed for sculpting entire worlds at once, as they work with subdivided versions of geometry (your model might be 10k polys, but you need hundreds of thousands or even millions of polygons to sculpt in the details), and they fall over at high polygon counts (10 million in the paper). Obviously 10 million isn't enough to represent an entire world for sculpting, so the megamesh tool lets them select certain parts of the geometry to use in the sculpting tool at a time.

Once that's done, it looks like a pretty standard pipeline of producing normal maps and then rendering as normal in-game (maybe 100,000 polygons per scene, but with normal maps baked from billions of polygons). They aren't actually rendering billions of polys in real time.
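For anyone unfamiliar with that last step, the idea of the bake is that the normal map stores, per texel, the orientation of the high-poly sculpt expressed in the low-poly mesh's tangent space, and at shade time the low-poly normal gets replaced by it. A minimal sketch (the struct and function names are just for illustration):

[code]
#include <cmath>

struct Vec3 { float x, y, z; };

// 'sampled' is the normal-map texel in [0,1]; t/b/n are the low-poly mesh's
// tangent, bitangent, and normal at the shaded point.
Vec3 ShadingNormal(const Vec3& t, const Vec3& b, const Vec3& n, const Vec3& sampled) {
    // Decode from the [0,1] texture range back to a [-1,1] direction.
    float nx = sampled.x * 2.0f - 1.0f;
    float ny = sampled.y * 2.0f - 1.0f;
    float nz = sampled.z * 2.0f - 1.0f;

    // Rotate from tangent space into the mesh's space using the TBN basis.
    Vec3 out = { t.x * nx + b.x * ny + n.x * nz,
                 t.y * nx + b.y * ny + n.y * nz,
                 t.z * nx + b.z * ny + n.z * nz };

    float len = std::sqrt(out.x * out.x + out.y * out.y + out.z * out.z);
    return { out.x / len, out.y / len, out.z / len };
}
[/code]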
[size="1"]
And yet that video doesn't look any better than games I already play on my PS3 and XBox360.

Hah! I'll consider that a subjective opinion, of course; personally I liked the cartoony look of the video. Not everything has to look like Crysis, Modern Warfare, or Gears of War, or yet another Unreal tech derivative.

From a tech perspective, it is quite impressive, as Hodgman pointed out.
OK, thanks for pointing that out, I understand now. Interesting without a doubt, and I agree, it looks different and I kind of like it. It's new technology, a new way of doing the traditional process, and it's totally worth it. I am tired of Unreal; I think it's overrated. Just because its tools are good and easy to use doesn't make it the best engine. It's the most common and traditional: it uses light maps, PRT, and thousands of filters to make it pretty, glow everywhere. I don't know, maybe it's impressive for some gamers, not for me.

id Software, on the other hand, is impressive: again, effort put into new technology. That is progress, not the same thing with more glow all over...

BTW, the title of the video should say "100 billion polygons in the content creation process"; it tends to make us think it's in real time, which it is not.
A lot of those polygons are also achieved by the tessellation chip in the Xbox, so the actual stored scene has fewer polygons than the rendered scene does.
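As a rough idea of that amplification (the numbers here are hypothetical, not from the talk): a uniform tessellation factor of N turns each stored triangle patch into roughly N x N rendered triangles.

[code]
#include <cstdio>

int main() {
    // Illustrative only: hardware tessellation amplifying a stored mesh.
    const long long storedTriangles    = 100000;  // what the scene actually stores
    const long long tessellationFactor = 16;      // hypothetical uniform factor

    // A factor of N splits each patch edge into N segments,
    // giving roughly N*N small triangles per stored triangle.
    const long long renderedTriangles =
        storedTriangles * tessellationFactor * tessellationFactor;

    std::printf("stored: %lld, rendered after tessellation: ~%lld\n",
                storedTriangles, renderedTriangles);
    return 0;
}
[/code]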
Yep, it's not "infinite geometry" or anything. Still, it's a neat way to set up the pipeline to take advantage of megatextures. The talk about GI was also nice.

[quote name='Daaark' timestamp='1300529612' post='4787871']And yet that video doesn't look any better than games I already play on my PS3 and XBox360.

Hah! I'll consider that a subjective opinion, of course; personally I liked the cartoony look of the video.[/quote]
I don't even like Crysis. I wasn't talking about the cartoony effect. That was style.

I meant the overall technicalities. Regardless of the technology they used, the final image doesn't look any better than existing games already on the 360. It doesn't solve any of the big problems we have with 3D graphics.


