[Theory] Unraveling the Unlimited Detail plausibility


What do you guys make of his claim that "we're not using any rays"? That part struck me. I don't understand whether he just meant "we are not raytracing", or whether he's really saying they aren't tracing a ray from the camera point into the scene to sample their geometry structure.
As I posted earlier, I think they're using frustum intersections, not rays.

For each pixel on the screen, project it out into a 3D volume (capped by the near and far planes). Select the sub-set of all points that are in that volume, then select the point in that set that's closest to the near plane. Except do it in a way that's fairly independent of the amount of points in the data-set, enough so that you'd be bold enough to say it has constant time complexity...
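For what it's worth, here's a minimal brute-force sketch of that per-pixel frustum query, just to make the idea concrete (the types and the O(N) scan are mine, not anything UD has described); the interesting question is what data structure lets you skip the scan entirely:

[code]
// Naive reference for "one point per pixel": test every point against the
// pixel's tiny frustum and keep the hit closest to the near plane.
// O(N) per pixel as written; UD's claim is essentially that this scan can be
// replaced by a search whose cost barely depends on the point count.
#include <cfloat>
#include <cstddef>
#include <cstdint>
#include <vector>

struct Vec3  { float x, y, z; };
struct Plane { Vec3 n; float d; };          // dot(n, p) + d >= 0 means "inside"
struct PixelFrustum { Plane planes[6]; };   // 4 side planes + near + far, for one pixel

static float dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

static bool inside(const PixelFrustum& f, const Vec3& p)
{
    for (const Plane& pl : f.planes)
        if (dot(pl.n, p) + pl.d < 0.0f) return false;
    return true;
}

// Returns the index of the contained point nearest the near plane, or SIZE_MAX if none.
std::size_t closestPointInPixel(const PixelFrustum& frustum, const Vec3& viewDir,
                                const std::vector<Vec3>& points)
{
    std::size_t best = SIZE_MAX;
    float bestDepth = FLT_MAX;
    for (std::size_t i = 0; i < points.size(); ++i) {
        if (!inside(frustum, points[i])) continue;
        const float depth = dot(viewDir, points[i]);   // distance along the view direction
        if (depth < bestDepth) { bestDepth = depth; best = i; }
    }
    return best;
}
[/code]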


I just finished watching the 40 minute interview he did. I really don't see the issue with it. He backs up most of his claims and doesn't really attack anybody.
Most importantly:
* He claims a search on pre-sorted data using random input predicates with O(1) complexity - something Google would kill for.
* He claims ∞:1 compression ratios - worthy of a Nobel Prize.
Also:
* He completely misrepresents the statements of Notch and Carmack and fails to address the issues raised.
* He disregards atomontage by saying it's constrained to small scenes, when they've demo'd large unique landscapes -- and does this as part of his refutation of the point that what they've done is not new/innovative.
* He implies that the 3D scanning techniques demonstrated are somehow connected to his technology.


[font="arial, verdana, tahoma, sans-serif"]
[quote name='A Brain in a Vat' timestamp='1313095472' post='4847912']Your inability to argue points like a big boy is disgusting and is to the detriment of this whole forum.
[/quote]You're both being a pain in the cock. If you think someone's trolling you, stop feeding them.
[/font]

[quote]Most importantly:
* He claims a search on pre-sorted data using random input predicates with O(1) complexity - something Google would kill for.[/quote]

I think you might have misunderstood what he said. He never said the complexity of the search algorithm. Just that it only had to run once for each pixel. It would be awesome if it were O(1), but if it were O(1) for each pixel on the screen you could do a lot better than 20fps in software. Really it sounds like it's O(R*X) where R is the resolution and X is a big question mark that we can assume is better than terribad. I think someone brought this up on page 4.

[quote]* He claims ∞:1 compression ratios - worthy of a Nobel Prize.[/quote]
He never claimed that. He actually avoided talking about the specifics of the memory footprint, other than saying that he is pleased with it and giving the specs of the laptop on which it's running. He clarifies the "unlimited detail" point quite a few times in the 40-minute interview. I'm fairly certain, after watching it through, that he means the limit is no longer on the amount of geometry you can process in a given scene; you can double-check his clarification, but that's what I took away from it.

[quote]* He disregards atomontage by saying it's constrained to small scenes, when they've demo'd large landscapes -- and does this as part of his refutation of the point that what they've done is not new/unique.[/quote]
This surprised me too. The more obvious thing to have pointed out is that atomontage appears to have its focus much more on representing the inner volumes of objects, not just the exterior, as appears to be the case with UD.

[quote]* He completely misrepresents the statements of Notch and Carmack and fails to address the issues raised.[/quote]
[font="Arial"]To be fair, Notch misrepresented UD quite a bit in his original assessment in the first place.[/font]
[font="Arial"] [/font]
[font="Arial"]He didn't really touch on Carmack's too much other than as a counter to something notch said, but Carmack didn't really raise any super specific issues other than to say it was more than a few years out, which I don't think he's really contesting as it's at least a year out from the tech being done and then even further to integrate it into a game engine and eventually into a game.[/font]
[font="Arial"]
* He implies that the 3D scanning techniques are somehow connected to his technology.[/quote][/font]
[font="Arial"]Well if he can use straight point cloud data in engine he has a pretty solid argument for that. Of course it could just be a result of him not having to optimize the data he converts, which would make it a less favorable argument.[/font]

[quote name='Hodgman']* He claims a search on pre-sorted data using random input predicates with O(1) complexity - something Google would kill for.[/quote]
[quote]I think you might have misunderstood what he said. He never said the complexity of the search algorithm. Just that it only had to run once for each pixel. It would be awesome if it were O(1), but if it were O(1) for each pixel on the screen you could do a lot better than 20fps in software. Really it sounds like it's O(R*X) where R is the resolution and X is a big question mark that we can assume is better than terribad. I think someone brought this up on page 4.[/quote]He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).

Now, if we're just rendering a single pixel, that means the complexity is O(1) (or O(constant) if you like), meaning that the search for the closest 'atom' for a single pixel runs in constant time, regardless of the amount of geometry in the scene.

This is clearly nonsense. You can only achieve constant-time search if you've got enough storage, and the storage requirements for a pre-computed search over all possible position/direction inputs in an unlimited-size scene are... infinite.

What's really going on is that it's O(P*G) where G is "geometry times a small fraction", or "geometry raised to a small power", or "log_large-base(geometry)", etc... so small that it seems like O(P) in sensible conditions. If it really was O(P), then it's so amazing that he shouldn't be selling it to games companies; he should be selling it to Google. This invention would be a paradigm shift for all of computer science.
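To put a toy number on that "log-of-geometry looks constant" point: suppose (my assumption for illustration, not UD's published approach) the per-pixel search is a descent through an octree-like hierarchy over G atoms. The levels visited per pixel then grow roughly like log8(G), which climbs so slowly that it's easy to present as geometry-independent:

[code]
// Illustrative only: per-pixel cost of a hierarchical descent over G points,
// assuming roughly one level per power of 8 (an octree-like split).
#include <cmath>
#include <cstdio>

int main()
{
    const double geometryCounts[] = { 1e6, 1e9, 1e12, 1e15 };      // "atoms" in the scene
    for (double g : geometryCounts) {
        const double levelsPerPixel = std::log(g) / std::log(8.0); // ~log8(G)
        std::printf("G = %.0e atoms -> ~%4.1f levels per pixel\n", g, levelsPerPixel);
    }
    // Depth only grows from ~6.6 to ~16.6 while G grows a billion-fold:
    // not O(1), but flat enough to be marketed as "unlimited".
    return 0;
}
[/code]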
[quote]He never claimed [∞:1 compression ratios].[/quote]He obviously didn't use those words, no.

He was asked "you must have some sort of memory limitations?", and replies "Umm. No. The simple answer is: no."
Along with all the "unlimited detail" hyperbole, this implies that there is no limit on the amount of data they can pack into a scene.
Obviously, there has to be a limit, otherwise he'd have achieved infinite compression. ...but instead of ever addressing any real-world downsides or limitations, he sticks to his 'unlimited' line.

Even if he said, "well obviously there's a limit to the amount of data you can fit on a disk, but we're compressing it so well that it may as well be unlimited", I'd be ok with it. I'd actually be impressed if he used real metrics, like saying they're currently averaging 0.1 bits per atom, etc... However, he's always stretching things that little bit further, into bullshit territory.

[quote]Well if he can use straight point cloud data in engine he has a pretty solid argument for [3D scanning being connected to his tech].[/quote]However, he mentions that they scanned the elephant into a dense polygon representation, and then converted that into point data.

[quote]He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).[/quote]


Here's what he said about the search algorithm:
-But come on, you say the technology has unlimited power?
-Um yes, yes we do, and I know that's a very strange claim, but give us a chance to explain. At present the graphics card companies and the console companies all try to build bigger graphics cards or bigger consoles so you'll have more power because everybody knows that if you give a computer something to do like put a polygon on the screen or put an atom on the screen it's going to take a bit of maths and a bit of processor time to do it, so if you want to have a lot of stuff or unlimited stuff it's just not possible. Ok in our particular case we don't go about solving that problem in the same way as everybody else. Let's say your screen is 1280X720 or 768, what we do is we have a search algorithm that goes out and it grabs exactly one atom for every pixel on the screen. So if you do it that way, you end up being able to have unlimited geometry, as we show, but we're not being wasteful in how we present it on the screen.

I think what he's implying is more that his sort algorithm over G is just better than what's currently used. I think you're reading between the lines to see him claiming an O(1) search algorithm on point cloud data.


[font="Arial"]He was asked "you must have some sort of memory limitations?", and replies "Umm. No. The simple answer is: no."[/font]
[font="Arial"]Along with all the "unlimited detail" hyperbole, this implies that there is no limit on the amount of data they can pack into a scene.[/font]
[font="Arial"]Obviously, there has[/font][font="Arial"] to be a limit, otherwise he'd have achieved infinite compression.[/quote][/font]
[font="Arial"]Here's the quote just so we don't argue over what was said or not said:[/font]

[font="Arial"]-Do you have memory problems? People are claiming that you must have some sort of memory limitations.[/font]
[font="Arial"]-Umm. No. The simple answer is no. Our memory compaction is going remarkably well. I think we've used up our quota of unbelievable claims this month, so I'm not going to talk about memory compaction. We're not finished on that as well...[/font]


[font="Arial"]I think you're reading between the lines a bit on both accounts, but if that's what you get out of it then fine.[/font]

[quote]He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).[/quote]

No, he mentions that in the pixel algorithm they are forced to sort. I assume he means sorting geometry or data points that are returned. So the geometry probably does play a part; it's just that the part it plays isn't as big as with triangles.
[font="Arial"]-Do you have memory problems? People are claiming that you must have some sort of memory limitations.[/font]
[font="Arial"]-Umm. No. The simple answer is no. Our memory compaction is going remarkably well. I think we've used up our quota of unbelievable claims this month, so I'm not going to talk about memory compaction. We're not finished on that as well...[/font]


[font="Arial"]I think you're reading between the lines a bit on both accounts, but if that's what you get out of it then fine.[/font][/quote]For what it matters, I also think this is nonsense. But considering the amount of heavy instancing, I'm surprised they need to compress something in the first place.

[quote]No, he mentions that in the pixel algorithm they are forced to sort. I assume he means sorting geometry or data points that are returned. So the geometry probably does play a part; it's just that the part it plays isn't as big as with triangles.[/quote]Point taken, but O(P) is the same as O(P * K) with K being a constant. So, having a smaller K does not change what Hodgman is saying.
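One way to write the distinction down (my notation, not anything stated in the interview):

[code]
T(P, G) = P \cdot K    = O(P)                        \quad \text{if } K \text{ is a true constant}
T(P, G) = P \cdot K(G) = O(P \log G) \neq O(P)       \quad \text{if, e.g., } K(G) = c \log G
[/code]

So a smaller per-pixel cost keeps it O(P) only if that cost is genuinely constant; the disagreement is over whether it secretly grows with the scene.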

BTW, I was thinking... suppose we 3D scan something. If this detail is unlimited... isn't it going to take a while to go through the wire from scanner to host? Because I don't think they built their own scanners.

BTW, perhaps I'm limited in my understanding but I acquire info with my senses and process it using my brain and what I see...
[attachment=4863:UD.jpg]
...is that there's nothing besides this island, which is admittedly bigger than atomontage but limited nonetheless. So, when he's talking about being unlimited - aka infinite - surely he's not talking about "infinitely large" but rather "infinitely small". As a start.

Previously "Krohm"

Here is my take on this thing, having also now watched the interview...

1. Forget the "unlimited" bit... nothing in the universe is, so just read it as "AWESOME AMOUNTS OF" instead, which is what he means, methinks. Don't waste your energy on that; we all know it's not actually unlimited, at least if you take the word unlimited to mean infinite. But the two are different: "unlimited" could be the same as when another engine says it supports an unlimited number of lights, which is true in that the engine supports it; your machine might just not be able to handle it (a limit imposed not by the engine but by the user's computer).
Either way, I wouldn't get hung up on it.


2. He is the guy who came up with the technology, and he was a hobby programmer; this could explain how he gets some terms wrong ("level of distance"?!) and why he may seem quite condescending. If he has no background in traditional graphics, that would make sense. His lack of knowledge of current methodologies is, I think, what led him to go about it the way he has.

3. I am more and more thinking that this will lead somewhere and may indeed be the future of graphics (the guy who interviewed him was blown away), and from the sounds of it it's only going to get better and faster.

4. It still "boggles my mind"!!!

5. - 10. not included as I should really be working

:)

[quote]BTW, perhaps I'm limited in my understanding but I acquire info with my senses and process it using my brain and what I see...
[attachment=4863:UD.jpg]
...is that there's nothing besides this island, which is admittedly bigger than atomontage but limited nonetheless. So, when he's talking about being unlimited - aka infinite - surely he's not talking about "infinitely large" but rather "infinitely small". As a start.[/quote]


They've shown a bunch of other, older demos which were slightly more varied in the blocks used, but those instead lacked much of the quality... so they just traded one thing for another. And so far, everything we've seen that would indicate memory usage has looked terribly bad (a few overly reused blocks, non-shaded materials, etc.). Worse than that, it even seems as if they are constrained to a grid, because every single demo they've ever shown has been built from prefab tiles, as far as I've been able to tell.

However, it is important to note that the size of the island they show is in all likelihood meaningless; they could probably, with ease, make it... A MILLION TIMES... larger without any issues, as that is meant to be the strength of the algorithm. They could not, however, add more unique models to make any use of it.

And what really strikes me as strange is why they are still running it on only one core after all these years; it should be pretty much trivial to utilize all the cores (and remove any chance of gameplay!). I'm curious how memory performance and bandwidth work out for this. I'm far from an expert on this, but it really seems as if that could be a potentially huge issue to overcome, if it indeed is an issue (much like it is with raytracing).
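On the multi-core point: assuming each pixel's query really is independent (a big assumption if all cores chase pointers through the same shared data), splitting the framebuffer across cores is indeed close to trivial. A rough sketch, with findClosestAtomColor as a made-up placeholder for their per-pixel search:

[code]
// Sketch only: divide rows among hardware threads; each worker writes disjoint pixels.
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <thread>
#include <vector>

// Placeholder standing in for the real per-pixel search; here it just hashes the coordinates.
static std::uint32_t findClosestAtomColor(int x, int y)
{
    return (static_cast<std::uint32_t>(x) * 73856093u) ^ (static_cast<std::uint32_t>(y) * 19349663u);
}

void renderFrame(std::vector<std::uint32_t>& framebuffer, int width, int height)
{
    const unsigned cores = std::max(1u, std::thread::hardware_concurrency());
    std::vector<std::thread> workers;
    for (unsigned c = 0; c < cores; ++c) {
        workers.emplace_back([&, c] {
            // Interleave rows so each thread gets a similar amount of work.
            for (int y = static_cast<int>(c); y < height; y += static_cast<int>(cores))
                for (int x = 0; x < width; ++x)
                    framebuffer[static_cast<std::size_t>(y) * width + x] = findClosestAtomColor(x, y);
        });
    }
    for (auto& w : workers) w.join();
}
[/code]

Whether this scales in practice is exactly the bandwidth question above: eight cores walking the same huge point hierarchy may simply saturate the memory bus, much as happens with raytracing.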


But really, it all falls flat in theory for me. Textures and geometry today consume enough storage and memory as it is; we couldn't simply double that and expect everything to run well. Now consider that reusing textures over and over like we do today is very efficient... even storing color data as textures is efficient: it allows for compression, and compositing multiple textures seemingly makes up quality from thin air. Triangle geometry is efficient: you can store enormous landscapes as dirt-cheap (even compressed) heightmaps.

Now, consider what UD is doing:
* They apply the texture individually to each voxel, so there is no texture reuse at all and the color data becomes harder to compress.
* They break the geometry up into individual voxels, so a single triangle becomes a lot of voxels.

So, let's say for the sake of argument that they have somehow managed to come up with a compression algorithm that takes all these voxels and compresses them down to the size of the original polygonal model. Great... right? Well, I would argue that no, it doesn't really matter all that much, because it all comes back to the texture issue. With polygons, we can make a statue that uses 2 textures, then make 100 more statues using the same textures. In UD, every single object has its own unique "texture"... and note that the same is true for terrain. You can no longer reuse that grass texture over and over, or use a dirt-cheap heightmap to represent hundreds of kilometers of terrain; instead, you now have to represent each triangle and texture by hundreds and hundreds of small voxels.

There is simply no way they could achieve the storage efficiency we enjoy today, even if they use every imaginable cheat and use 3D texture materials and all kinds of tricks... it will never be nearly as storage efficient as polygonal geometry and textures, it simply can't. Or am I missing something?
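To put rough numbers on the texture-reuse point, here's a back-of-envelope comparison; every figure below is an assumption picked for illustration (terrain size, average surface depth, bytes per voxel), not anything UD has disclosed:

[code]
// Back-of-envelope only: heightmap + tiled texture vs. unique color-per-voxel.
#include <cstdio>

int main()
{
    // Polygon-style terrain: 4096x4096 16-bit heightmap + one tiled 1024^2 DXT1 grass texture.
    const double heightmapMiB = 4096.0 * 4096.0 * 2.0 / (1024.0 * 1024.0);   // ~32 MiB
    const double tiledTexMiB  = 1024.0 * 1024.0 * 0.5 / (1024.0 * 1024.0);   // ~0.5 MiB (DXT1 ~0.5 byte/texel)

    // Same area as voxels: one column per sample, ~16 surface voxels deep on average,
    // each voxel carrying its own color plus index overhead, ~4 bytes after naive packing.
    const double voxelMiB = 4096.0 * 4096.0 * 16.0 * 4.0 / (1024.0 * 1024.0); // ~1024 MiB

    std::printf("heightmap + tiled texture: ~%6.1f MiB\n", heightmapMiB + tiledTexMiB);
    std::printf("unique color per voxel:    ~%6.1f MiB\n", voxelMiB);
    return 0;
}
[/code]

Even with generous compression on the voxel side, the gap is what the reuse argument above is pointing at: the per-voxel data has to be unique, so it cannot lean on tiling the way textures and heightmaps do.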

And like all good things, they're not good unless they also work in practical circumstances. It's "easy" enough for nVidia and 3DMark to whip up impressive, carefully tweaked demos; applying it in games is a very, very different thing.


I think this thread is starting to look more like theology than theory. Maybe it would be a good idea to start a new one where no "he's full of bs, no you are full of bs" is allowed, and where we make a list of claims that may point to what this guy is doing. I remember that somewhere in his old technology preview he states that there are 64 atoms per cubic mm, and he says no raytracing... If I have some spare time tomorrow I might just look at all the videos he's made and compose such a list, as I think that was the original intention of this thread.
All I know is that Atomontage, and any engine besides Dell's, is a worthwhile technology; his is the biggest shit I've ever seen. If it can make anything past what he's shown already, I'd be surprised.

instancing... crap...

