
Unlimited Detail Real-Time Rendering Technology?


26 replies to this topic

#1 SteveDeFacto   Banned   -  Reputation: 109

Posted 03 August 2011 - 02:16 AM

http://www.youtube.c...h?v=00gAbgBu8R4

What do you guys make of this? Personally, I'm not sure what to think; something about it seems amiss...


#2 Hodgman   Moderators   -  Reputation: 28422

Posted 03 August 2011 - 02:40 AM

There was already a topic posted yesterday:
http://www.gamedev.n...ing-technology/
and in april:
http://www.gamedev.n...ail-technology/
and a year before that:
http://www.gamedev.n...limited-detail/
and probably other threads too...

This guy pops up with his exaggerations about once a year (he's been working on this since 1995), and is somewhat of a joke in the game industry.


He's got a data structure for efficiently doing ray queries against point-cloud data. That's about it.
It's not even a unique achievement. There's plenty of public research on the same topic if you want to make your own.
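To illustrate the kind of structure being described (this is a toy uniform grid as a sketch, not Euclideon's actual, unpublished algorithm; all names here are hypothetical), a ray query against point-cloud data amounts to bucketing points into cells and examining only the cells the ray passes through:

```python
import math

def build_grid(points, cell_size):
    """Bucket each point into the uniform-grid cell that contains it."""
    grid = {}
    for p in points:
        key = tuple(int(math.floor(c / cell_size)) for c in p)
        grid.setdefault(key, []).append(p)
    return grid

def ray_query(grid, cell_size, origin, direction, max_steps=64):
    """March a ray cell by cell (crude fixed step for clarity) and return
    the points in the first non-empty cell it touches, or None."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        key = (int(math.floor(x / cell_size)),
               int(math.floor(y / cell_size)),
               int(math.floor(z / cell_size)))
        if key in grid:
            return grid[key]
        x, y, z = x + dx * cell_size, y + dy * cell_size, z + dz * cell_size
    return None

points = [(5.2, 0.1, 0.0), (9.7, 0.3, 0.2)]
grid = build_grid(points, cell_size=1.0)
hit = ray_query(grid, 1.0, (0.0, 0.1, 0.0), (1.0, 0.0, 0.0))
# hit is [(5.2, 0.1, 0.0)]
```

Real systems replace the flat grid with a hierarchical octree and exact boundary stepping, but the principle is the same: cost scales with cells visited, not points stored.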

If you want to know what's amiss -- he doesn't have:
* proven destruction/deformation (kinda important for a lot of modern games).
* proven large-scale animated scenes (they've only shown a single small model with animation).
* proven storage of large scale data-sets at all (the demos are heavily instanced).
* proven work-flow for content creators (this is what they're working on now, after getting investment).
* proven shading pipelines (no description of the "pixel shader" equivalent in his system).
* decent art to show it off.
* decent frame-rate (20hz is 'interactive' but not 'real-time').
* GPU acceleration (games need the CPU for other tasks too).

#3 Gaiiden   Senior Staff   -  Reputation: 4811

Posted 03 August 2011 - 01:43 PM

Yea, there's absolutely nothing about this tech, taken from what the guy has said about it in his videos, that interests me from a game development standpoint. Right, making games look trillions of times better will make games better. Uh huh. Scuse me while I go play through Deus Ex for like the 9th time....

Drew Sikora
Executive Producer
GameDev.net


#4 way2lazy2care   Members   -  Reputation: 782

Posted 03 August 2011 - 01:57 PM

Yea, there's absolutely nothing about this tech, taken from what the guy has said about it in his videos, that interests me from a game development standpoint. Right, making games look trillions of times better will make games better. Uh huh. Scuse me while I go play through Deus Ex for like the 9th time....


I think it would open up a ton of possibilities for artists to express themselves without having to deal with polygon budgets. That would help a lot. Not to say that gameplay isn't more important, but technical art budgets being expanded to such a degree would have a huge impact.

#5 phantom   Moderators   -  Reputation: 6858

Posted 03 August 2011 - 02:21 PM

I think it would open up a ton of possibilities for artists to express themselves without having to deal with polygon budgets. That would help a lot. Not to say that gameplay isn't more important, but technical art budgets being expanded to such a degree would have a huge impact.


Not really; you'd just replace the 'poly budget' with a 'voxel budget' instead, and right now it has all the problems Hodgman listed.

#6 Antheus   Members   -  Reputation: 2397

Posted 03 August 2011 - 02:22 PM

Yea, there's absolutely nothing about this tech, taken from what the guy has said about it in his videos, that interests me from a game development standpoint. Right, making games look trillions of times better will make games better. Uh huh. Scuse me while I go play through Deus Ex for like the 9th time....


"Look better" is barely about poly counts anymore. The current challenge would be the uncanny valley.

But too much graphical detail annoys me in a similar manner. The pebbles-on-the-road example is nice. But then the very first thing my brain will want to do is kick them or draw a line in the sand. Or if I shoot the ground, I expect individual pebbles to fly around, and then interact with a nearby cardboard box, punching hundreds of holes in it. And here is where the system breaks down. But for plastic don't-touch scenes, sure, why not. It does look nice.

And just like invisible walls are annoying, this static nature of such high detail will be a detractor. Level designers managed to work around it mostly, but for open spaces we're still confined to islands surrounded by endless sea.

----

They seem to be using heavy instancing. I wonder if fractal algorithms could be put to use. Or if a generalized tree structure with arbitrary non-orthogonal subspace divisions would work.

#7 Luckless   Crossbones+   -  Reputation: 1730

Posted 03 August 2011 - 02:40 PM

Yea, there's absolutely nothing about this tech, taken from what the guy has said about it in his videos, that interests me from a game development standpoint. Right, making games look trillions of times better will make games better. Uh huh. Scuse me while I go play through Deus Ex for like the 9th time....


9th time this year, or month?


But I'm still hopeful that technology like what these guys are doing is actually real, and not smoke and mirrors to leech off investors while they produce vapourware. Because honestly, this isn't just about making games look good; it is also about making it easy to make a game look good.

Personally I think if we develop the graphics systems to allow anyone to easily produce beautiful games, then that just means there are that many more resources that can be devoted to making the games themselves good. Once large studios can no longer focus on just making pretty games to try and appear head and shoulders above their peers' products, it will mean they have to actually make great games to appear head and shoulders above their peers.


If it works as they are promising it will, then great. If not, then fine. I'm not an investor, so I have nothing to lose.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

#8 dpadam450   Members   -  Reputation: 857

Posted 03 August 2011 - 03:38 PM

Welcome to tessellation. Who cares about this tech when tessellation owns.

#9 way2lazy2care   Members   -  Reputation: 782

Posted 03 August 2011 - 03:40 PM


I think it would open up a ton of possibilities for artists to express themselves without having to deal with polygon budgets. That would help a lot. Not to say that gameplay isn't more important, but technical art budgets being expanded to such a degree would have a huge impact.


Not really; you'd just replace the 'poly budget' with a 'voxel budget' instead, and right now it has all the problems Hodgman listed.


The point is the promise of how much they are increasing the budget. It's not a small incremental change; it's a 100-fold change. I know for certain that the palm tree example is something that bothers me. Even little things like using bump maps and normal maps on things like the molding in doors/rooms bug me.

Obviously if somebody found a way to increase polygon budgets to the same degree I would be just as impressed. Like Luckless said, it's about making it easy to make games look good. Haven't you ever wondered what your artists could come up with if they weren't so limited?

#10 Sirisian   Crossbones+   -  Reputation: 1672

Posted 03 August 2011 - 04:33 PM

Welcome to tessellation. Who cares about this tech when tessellation owns.

You do realize that's because of dedicated hardware support, right? Tessellation takes inputs to generate more triangles; so many, in fact, that they're about the size of a pixel. On top of this you have multiple maps like specular, normal, displacement, etc. With all this data there are people realizing there are other ways to encode and render the objects, which is probably faster. Showing these techniques running on a GPU of all things puts the GPU triangle renderers to shame, honestly.

It's a good thing GPUs are becoming so general-purpose nowadays that techniques like this can run on them. A dedicated hardware-implemented raycaster (meaning custom hardware) would simply crush a triangle renderer. With SVO or nested-grid ray traversal you simply step through the grid, encountering voxels one at a time, which makes single-object transparency calculations trivial. (Multiple intersected objects require techniques akin to DX11 linked-list OIT, though.) Triangles are rasterized in a rather slow way that pretty much requires dedicated GPU hardware. Also, if you know how real-time radiosity can be done using light propagation volumes, you immediately begin to realize that dedicated hardware-based raycasters are much better technology.
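The grid stepping described above is essentially the classic Amanatides & Woo voxel traversal. A minimal sketch (an illustration, not any particular engine's code) of visiting voxels one at a time along a ray:

```python
import math

def traverse(origin, direction, is_solid, max_steps=64):
    """Step a ray through a unit voxel grid (Amanatides & Woo style DDA),
    crossing exactly one cell boundary per iteration until is_solid()
    reports a hit or the step budget runs out."""
    voxel = [int(math.floor(c)) for c in origin]
    step, t_max, t_delta = [0, 0, 0], [0.0, 0.0, 0.0], [0.0, 0.0, 0.0]
    for i in range(3):
        if direction[i] > 0:
            step[i] = 1
            t_max[i] = (voxel[i] + 1 - origin[i]) / direction[i]
            t_delta[i] = 1 / direction[i]
        elif direction[i] < 0:
            step[i] = -1
            t_max[i] = (voxel[i] - origin[i]) / direction[i]
            t_delta[i] = -1 / direction[i]
        else:
            t_max[i] = t_delta[i] = float('inf')
    for _ in range(max_steps):
        if is_solid(tuple(voxel)):
            return tuple(voxel)              # first solid voxel the ray enters
        axis = t_max.index(min(t_max))       # nearest upcoming cell boundary
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None

solid = {(3, 1, 0)}
hit = traverse((0.5, 0.5, 0.5), (1.0, 0.25, 0.0), lambda v: v in solid)
# hit is (3, 1, 0)
```

An SVO traversal follows the same idea but steps through octree cells of varying size, skipping empty space in large jumps.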

I wrote a small JavaScript-based voxel renderer for fun a while back. Doing the same with triangles would have killed my CPU. (Before anyone asks, I've tried implementing a 3D system in JS; after around 1K flat-shaded triangles it starts to slow down fast.) The point is: stop comparing something you see running through dedicated hardware to something that performs almost the same on a general-purpose CPU.

Anyway I support them. I've read most of the papers on SVO rendering, so whatever they're doing with point-cloud data seems to be working (and working fast).

#11 phantom   Moderators   -  Reputation: 6858

Posted 03 August 2011 - 05:22 PM

Anyway I support them. I've read most of the papers on SVO rendering, so whatever they're doing with point-cloud data seems to be working (and working fast).


The problem is the video shows all these awesome looking (and heavily instanced, so much so I can see the patterns) rendered scenes and no animation at all.

You can have as much detail in the world as you like but frankly without animation support it's pretty useless right now; I'll pay attention when it can do that.

#12 way2lazy2care   Members   -  Reputation: 782

Posted 03 August 2011 - 05:49 PM

The problem is the video shows all these awesome looking (and heavily instanced, so much so I can see the patterns) rendered scenes and no animation at all.

You can have as much detail in the world as you like but frankly without animation support it's pretty useless right now; I'll pay attention when it can do that.


Even using it just for environments would be pretty awesome. Little things like trees with no roots, rubble piles with no actual rubble, and small details like that go a LONG way. Much like what Carmack is supposedly working on with voxels, taking away the limits in static parts of the game would have a huge effect on quality even though interactive elements may not change at all.

#13 Hodgman   Moderators   -  Reputation: 28422

Posted 03 August 2011 - 06:02 PM

But I'm still hopeful that technology like these guys are doing is actually real, and not smoke and mirrors to leech off investors while they produce vapour ware. Because honestly, this isn't just about making games look good, but it is also about making it easy to make a game look good.

Is it? No part of their presentations even mention the art pipeline, except that they're working on a polygon-to-point-cloud converter, which is already a solved problem.

Making a game look good is an art problem, and they've not done anything to improve artists' workflows, except relax polygon budgets, maybe -- there's no budget for a single model, but they've not shown a uniquely modelled *world* -- what's the data requirement for a non-instanced scene under their system?

I think it would open up a ton of possibilities for artists to express themselves without having to deal with polygon budgets.

This has already been done with traditional polygon pipelines. "Megatextures" get rid of your texture resolution budget, and applying the same concept to geometry, along with automatic baking of texture maps, gets rid of the polygon budget.

"Look better" is barely about poly counts anymore. The current challenge would be the uncanny valley.

Exactly, the shading/lighting in their scenes is terrible. They've got more geometry density (though it's the same instances repeated over and over), but they've got absolutely no material detail.


#14 Sirisian   Crossbones+   -  Reputation: 1672

Posted 03 August 2011 - 06:07 PM

The problem is the video shows all these awesome looking (and heavily instanced, so much so I can see the patterns) rendered scenes and no animation at all.

You can have as much detail in the world as you like but frankly without animation support it's pretty useless right now; I'll pay attention when it can do that.

They have animation. They showed a video a while back with a small animated bird. Also in the description of the video they said they have animation working. I noticed a lot of people missed that part.

About the instancing, I imagine one part might be art and the other might be memory constraints, depending on their algorithms. I know the Atomontage engine has a nice voxel compression algorithm. I assume they've created something similar for nicely compressing their formats.
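One published approach to this kind of compression (a sketch of the "sparse voxel DAG" idea; not Atomontage's or Euclideon's actual format, which neither has disclosed) is to merge identical octree subtrees so heavily repeated structure is stored exactly once:

```python
def dedup(node, pool):
    """Canonicalize an octree: leaves are values, internal nodes are
    8-tuples of children. Identical subtrees share one id in `pool`,
    turning the tree into a directed acyclic graph (DAG)."""
    if isinstance(node, tuple):
        key = tuple(dedup(child, pool) for child in node)
    else:
        key = node
    if key not in pool:
        pool[key] = len(pool)   # first time this subtree has been seen
    return pool[key]

# four identical "brick" subtrees collapse to a single stored copy
brick = (1, 1, 0, 0, 1, 1, 0, 0)
scene = (brick, brick, brick, brick, 0, 0, 0, 0)
pool = {}
root = dedup(scene, pool)
# pool holds just 4 entries: leaf 0, leaf 1, the brick (once), and the root
```

Instanced scenes like the ones in the demo compress extremely well under schemes like this, which is exactly why instancing makes the "unlimited" data claims hard to evaluate.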

Exactly, the shading/lighting in their scenes is terrible.

They showed that their video is using a single color, then showed they've recently created a smooth lighting model with more than one color of shadows. The picture looks impressive. Regarding shading, yeah, they lack material shading, which is kind of odd.

Also, object redundancy is everywhere in games already. If you play Crysis 2 and explore, you will notice all the windows and the artwork on the buildings are duplicated. Almost every door on one level is the same. All of which is static geometry too.

#15 Hodgman   Moderators   -  Reputation: 28422

Posted 03 August 2011 - 08:53 PM

They have animation. They showed a video a while back with a small animated bird.

Which doesn't prove anything -- that demo was likely frame-by-frame swapping of different data-sets.
They need to show an animated humanoid blending different animations.
N.B. animated SVOs are possible, but none of the unlimited detail official videos show it.

Also object redundancy is everywhere in games already

The fact their world is instanced is an important thing to note, because it has a huge impact on the claim of the amount of data that's actually in their "unlimitedly detailed" scene. Notch's point about how the demo would be impressive if they were only honest is basically what I'm getting at by pointing this out.

Crysis 2 doesn't claim to allow unlimited amounts of geometry in their worlds. An interesting tangent to note is that large parts of Crysis 2's environments were sculpted from voxel data, and then converted to polygons for storage/rendering.

#16 Cornstalks   Crossbones+   -  Reputation: 6974

Posted 03 August 2011 - 09:26 PM

They showed a video a while back with a small animated bird.

Yeah... emphasis on the small. I dunno about you, but in most of the games I play, there's more than a small bird that's being animated.

Also in the description of the video they said they have animation working. I noticed a lot of people missed that part.

The banners on various websites I've visited have told me I've won a prize for being the 1,000,000th visitor...




...Ok, so I started writing a longer response, but I'm too lazy. Basically you should just look into the research that's already been done on this so you can see for yourself the drawbacks and the advantages, but most importantly the drawbacks that the guy in the video is failing to mention (deliberately, mind you, because he wants to build up hype and increase his funding, which is pretty much a scam in the long run).
[ I was ninja'd 71 times before I stopped counting a long time ago ] [ f.k.a. MikeTacular ] [ My Blog ] [ SWFer: Gaplessly looped MP3s in your Flash games ]

#17 Krohm   Crossbones+   -  Reputation: 3014

Posted 04 August 2011 - 12:23 AM

What's the point of this?

I mean, if you post it on YouTube, 50% of the comments will be like "this is crap", "<game of the month> looks way better", etc. Those people are very lucky. Or perhaps they're spending $$ on PR. Look at the mess going on. How can people even think this is real? Don't they see the painful patterns? That's the only thing I don't understand.

Next time I have a work meeting, I guess I'll promise infinite possibilities. If talking shit is going to get me the money, I'll sure have a try!

#18 way2lazy2care   Members   -  Reputation: 782

Posted 04 August 2011 - 07:20 AM

Which doesn't prove anything -- that demo was likely frame-by-frame swapping of different data-sets.
They need to show an animated humanoid blending different animations.


Why does it have to be a humanoid? The bird they showed had more detail than most humanoids in any game I've seen. Animation blending they definitely need to show, but I don't see why you wouldn't be able to do that with a point cloud. Voxels are more difficult, as their storage structure as well as their data needs to change drastically to allow for animation, but animating a point cloud should be no more complicated than animating a mesh.
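The claim that points animate like mesh vertices is easy to illustrate: linear blend skinning is just a weighted sum of bone transforms applied per position, and nothing in it cares whether the positions are mesh vertices or point-cloud samples. A toy sketch (transforms are plain functions here for brevity, not real bone matrices):

```python
def skin_point(p, weighted_bones):
    """Linear blend skinning for one position: each bone transforms the
    point, and the results are blended by weights that sum to 1. The same
    code applies to a mesh vertex or a point-cloud sample."""
    out = [0.0, 0.0, 0.0]
    for weight, transform in weighted_bones:
        q = transform(p)
        for i in range(3):
            out[i] += weight * q[i]
    return tuple(out)

identity = lambda p: p                        # a bone that stays put
lift = lambda p: (p[0], p[1] + 2.0, p[2])     # a bone that raises the point

# weighted halfway between the two bones, the point rises by exactly 1.0
blended = skin_point((1.0, 0.0, 0.0), [(0.5, identity), (0.5, lift)])
# blended is (1.0, 1.0, 0.0)
```

The hard part for an SVO/point-cloud renderer isn't this math; it's that the spatial acceleration structure has to be rebuilt or warped every frame as the points move, which is why demos of static scenes prove little about animation.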

"unlimitedly detailed" scene. Notch's point about how the demo would be impressive if they were only honest is basically what I'm getting at by pointing this out.

Notch both ignored any sort of compression or optimization on voxels in his data-size analysis and didn't realize that the demoed tech IS NOT VOXELS; it's point-cloud based. There's plenty of criticism of them for overselling, but even their individual instanced spots have more detail than most entire levels in games. It's at least worth keeping an eye on.

#19 Luckless   Crossbones+   -  Reputation: 1730

Posted 04 August 2011 - 09:31 AM

But I'm still hopeful that technology like what these guys are doing is actually real, and not smoke and mirrors to leech off investors while they produce vapourware. Because honestly, this isn't just about making games look good; it is also about making it easy to make a game look good.

Is it? No part of their presentations even mention the art pipeline, except that they're working on a polygon-to-point-cloud converter, which is already a solved problem.

Making a game look good is an art problem, and they've not done anything to improve artists' workflows, except relax polygon budgets, maybe -- there's no budget for a single model, but they've not shown a uniquely modelled *world* -- what's the data requirement for a non-instanced scene under their system?


If it works as they appear to be claiming (and that is a huge if that I expect to prove unlikely), then it really does have a major impact on the art pipeline of a game. First, they appear to be not just relaxing the polygon budget, but kicking it to the curb altogether. That means artists sculpt out their models in whatever tools they desire, achieve the look they want, and never think twice about the render performance of what they're making. One of the projects I'm attached to at work apparently lost nearly two weeks' worth of artist budget because they had to go back and rework everything, first due to exceeding the polygon count, and then due to the art no longer meeting the customer's desired visual standards. That is money that could have been put toward either testing and bug fixing, or another project. (Basically they reworked 90% of the game's art assets, twice.)


And then there is the far bigger part that makes creating art for a game so much easier. Simply, it boils down to not creating art for the game. You rework old art, giving it small nudges to get older assets looking the way you want, and then add them to your game. Using a method like they're suggesting means you create art now, build an art portfolio within your company, and then keep reusing the same base assets. They still get to look better as the tech advances and you build better tools to work with the core data, but "Asskicking Shooter X" makes use of a lot of the same graphical assets you used years ago when you made "Asskicking Shooter I".

If it works, it means your art budgets go toward modification and positioning of art, instead of creation, modification, and positioning of art. You'll still create art that is unique to the game itself, but your stock art assets (the random trash, the trees, and the other little things found in every game to make the world look complete) are no longer a huge chunk of the art budget. Sure, that stuff often gets shared already, but the stuff that was created 3-4 years ago is already useless. The stock stuff should now have stronger legs, last longer, and still look great years after it was initially created. More assets reused for a longer period of time means more of the budget goes toward other things besides recreating stuff because it became outdated.


Do I expect it to work? No, not really. Personally I think it is pure vapourware as far as being a usable product for game development goes. However, that doesn't mean the dream they're trying to push isn't cool.
Old Username: Talroth
If your signature on a web forum takes up more space than your average post, then you are doing things wrong.

#20 Hodgman   Moderators   -  Reputation: 28422

Posted 04 August 2011 - 05:25 PM

Why does it have to be a humanoid?

A humanoid would be nice, because that's likely the most common use-case for animation in games.

Animation blending they definitely need to show, but I don't see why you wouldn't be able to do that with a point cloud. Voxels are more difficult, as their storage structure as well as their data needs to change drastically to allow for animation, but animating a point cloud should be no more complicated than animating a mesh.

As I said earlier, animated voxels have already been solved by other people (with enough integrity to publish their research) -- it's no longer a difficult/impossible problem.
I'm not saying it's not possible for UD to animate their scenes, just that they need to be honest about their current capabilities when it comes to animation.

The bird they showed had more detail than most humanoids in any game I've seen.

You're really saying that this has more detail than current game characters? What? Really?

Notch both ignored any sort of compression or optimization on voxels in his data-size analysis and didn't realize that the demoed tech IS NOT VOXELS; it's point-cloud based.

This has absolutely nothing to do with their lack of honesty...


Anyway, uncompressed sizes of the data are still important during production (and drive home the point that their island is not uniquely modelled, but is repeated instances of the same objects -- something the CEO fails to mention), and seeing as their tech is top-secret, with no real descriptions beyond marketing hype, there's no way to know if it's point clouds, or voxels, or even just *bruce dell voice* tiny little atomic polygons.


If it works as they appear to be claiming (and that is a huge if that I expect to prove unlikely), then it really does have a major impact on the art pipeline of a game. First, they appear to be not just relaxing the polygon budget, but kicking it to the curb altogether. That means artists sculpt out their models in whatever tools they desire, achieve the look they want, and never think twice about the render performance of what they're making.

This has already been achieved in polygon-based engines, so it's not a new innovation.
You can already do this to your art-pipeline without requiring some proprietary "unlimited detail" renderer.

And then there is the far bigger part that makes creating art for a game so much easier. Simply, it boils down to not creating art for the game. You rework old art, giving it small nudges to get older assets looking the way you want, and then add them to your game. Using a method like they're suggesting means you create art now, build an art portfolio within your company, and then keep reusing the same base assets. They still get to look better as the tech advances and you build better tools to work with the core data, but "Asskicking Shooter X" makes use of a lot of the same graphical assets you used years ago when you made "Asskicking Shooter I".

Doesn't this already happen? Don't we already do stupidly high-poly sculpts, which can be re-used on different levels of graphics hardware? High poly sculpts from 5 years ago still exceed current real-time poly limits, which means they're still reusable.

Do I expect it to work? No, not really.

Why not? There's been plenty of real-time ray-casting research done in recent years that does actually work on *unique* data-sets much larger than the UD demo. It's totally possible, which is why UD's sensationalist claims are so obnoxious -- it's already been done and the research published publicly.




