
## [Theory] Unraveling the Unlimited Detail plausibility


168 replies to this topic

### #81 JohnnyQ (Members)

Posted 11 August 2011 - 12:42 PM

I actually had the geometry in mind when talking about lossless compression. They almost surely perform lossy compression on color and normals (assuming they do not calculate them on the fly). I automatically assumed, however, that they do not throw away any geometry information when converting from polygonal to octree, because geometric detail is supposed to be a strong point of voxel technology.

This is true. But for storing geometry information you only need about 1.5 bits per voxel, which is really small (and nice). Material information (color/normal/specular/emissive) can be stored in separate textures that are strongly compressed both on disk and in graphics memory (DXT1 format, for example).
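A quick back-of-envelope check of where a figure like 1.5 bits per voxel can come from (this is my sketch of the usual sparse-octree bookkeeping, not the poster's actual implementation): each internal node stores an 8-bit mask saying which of its 8 children exist, and leaf voxels need no storage of their own because the parent's mask already encodes them.

```python
# Structural storage per leaf voxel in a fully populated octree:
# one 8-bit child mask per internal node, leaves are implicit.

def bits_per_voxel(depth: int) -> float:
    """Bits of octree-structure storage per leaf voxel."""
    leaves = 8 ** depth
    # Internal nodes: one level per power of 8 above the leaves, down to the root.
    internal = sum(8 ** d for d in range(depth))
    return internal * 8 / leaves  # 8 mask bits per internal node

for d in (4, 8, 12):
    print(d, round(bits_per_voxel(d), 3))
```

The series converges to 8/7 ≈ 1.14 bits per voxel regardless of depth, so quoting "1.5 bits" leaves headroom for pointers or sparse-layout overhead.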

In this thread I've tried to explain how you can get such good compression:
http://www.gamedev.n...36-disk-octree/

It is actually quite simple.

EDIT: In previous posts you had a fight about color inheritance in child nodes. This kind of inheritance is completely unnecessary, because if color is stored in a DXT-format texture it will always be very well compressed - there is no need to bother with inheritance (it will just produce messy code). Monochromatic surfaces will be compressed very well because of the DXT format's inner workings.

### #82 way2lazy2care (Members)

Posted 11 August 2011 - 02:28 PM

How is what John Carmack is doing in this video different from what Bruce Dell is doing in the last video posted? It's the exact same thing. You just say anything you can to win an argument, without caring about truth or validity, don't you? What's the stark difference? You should be ashamed of yourself. Do you work for Euclideon or something?

And you're lying about Zenimax marketing MegaTexturing in a similar way. Show us.

What do you not get about Bruce Dell being a marketing guy and John Carmack being an implementation guy? They talk to the press differently. When you read MSDN blogs they read differently than when you listen to an interview with Steve Ballmer, because Ballmer is marketing and MSDN blogs are explaining. If you want to see it for yourself go to GDC and walk around the show floor. You'll hear it from EVERY representative on the show floor; they sound just like Bruce Dell. You can even go back to the conference away from the show floor and hear people that work at the exact same companies on the exact same products talk to you the way John Carmack explains stuff.

In fact I'm fairly sure I've heard id Tech 5 pitched with the line, "unlimited texture detail."

Thanks for keeping it civil though.

### #83 A Brain in a Vat (Members)

Posted 11 August 2011 - 02:44 PM

> What do you not get about Bruce Dell being a marketing guy and John Carmack being an implementation guy? They talk to the press differently. When you read MSDN blogs they read differently than when you listen to an interview with Steve Ballmer, because Ballmer is marketing and MSDN blogs are explaining. If you want to see it for yourself go to GDC and walk around the show floor. You'll hear it from EVERY representative on the show floor; they sound just like Bruce Dell. You can even go back to the conference away from the show floor and hear people that work at the exact same companies on the exact same products talk to you the way John Carmack explains stuff.
>
> In fact I'm fairly sure I've heard id Tech 5 pitched with the line, "unlimited texture detail."
>
> Thanks for keeping it civil though.

It's hard to keep things "civil" with you because you continually make things up to back your "arguments". I've gone through it with you in multiple threads regarding multiple topics. You're the type of arguer who doesn't care about reaching the truth, you care about coming up with some argument that can counter what someone just said, ignoring how connected to reality it may be.

Bruce Dell is a marketing guy? What makes you say Bruce Dell is a marketing guy??? Bruce Dell is a programmer who founded his own company. John Carmack is a programmer who founded his own company. Both give demos about their tech to the press. BRUCE DELL IS A PROGRAMMER! I suspect you know this; you're just so incapable of admitting you're wrong that you cannot admit it.

Here's how it went. You were arguing that Bruce Dell's description of his tech is feasible. Most people disagreed, and gave plenty of reasons and evidence to that point. Rather than say "Okay, you guys are right, he's a liar" your brain immediately starts searching for a way to shift the argument to avoid being wrong, so you start talking about how we shouldn't be so nitpicky because this is "marketing talk" and everyone does this. When I point out that not everyone does it, with the example of John Carmack who could lie about the "unlimited detail" of his product but chooses to instead give us the truth, your brain starts looking again for a way out, and you land on the lie that Bruce Dell is excused from it because he's a "marketing guy", when in reality you know that he's a programmer just like John Carmack is.

Your inability to argue points like a big boy is disgusting and is to the detriment of this whole forum.

### #84 way2lazy2care (Members)

Posted 11 August 2011 - 03:10 PM

> It's hard to keep things "civil" with you because you continually make things up to back your "arguments". I've gone through it with you in multiple threads regarding multiple topics. You're the type of arguer who doesn't care about reaching the truth, you care about coming up with some argument that can counter what someone just said, ignoring how connected to reality it may be.
>
> Bruce Dell is a marketing guy? What makes you say Bruce Dell is a marketing guy??? Bruce Dell is a programmer who founded his own company. John Carmack is a programmer who founded his own company. Both give demos about their tech to the press. BRUCE DELL IS A PROGRAMMER! I suspect you know this; you're just so incapable of admitting you're wrong that you cannot admit it.

Bruce Dell is a CEO. John Carmack is a Technical Director. Bruce Dell released a marketing video to hype his product and company to everyone, and John Carmack was recorded talking about his product at a developer's conference to show it to developers.

> Here's how it went. You were arguing that Bruce Dell's description of his tech is feasible. Most people disagreed, and gave plenty of reasons and evidence to that point.

Clearly it's feasible. John Carmack even agrees it's feasible. Most of this thread I wasn't even arguing about his technology, and just because you argue the loudest doesn't mean nobody else disagreed with you, either.

> Rather than say "Okay, you guys are right, he's a liar" your brain immediately starts searching for a way to shift the argument to avoid being wrong, so you start talking about how we shouldn't be so nitpicky because this is "marketing talk" and everyone does this. When I point out that not everyone does it, with the example of John Carmack who could lie about the "unlimited detail" of his product but chooses to instead give us the truth, your brain starts looking again for a way out, and you land on the lie that Bruce Dell is excused from it because he's a "marketing guy", when in reality you know that he's a programmer just like John Carmack is.

Obviously he was marketing from the start. When he used the word "infinite" everyone's brain should have said, "oh. infinite. That's impossible, so it must be marketing," but instead they jumped to, "infinite?! Clearly his product is totally fake and brings nothing to the table."

By all means keep turning to insults though. That's a much better argument strategy.

### #85 A Brain in a Vat (Members)

Posted 11 August 2011 - 03:18 PM

Okay, okay, I get it. The joke's on me. You're a troll.

### #86 way2lazy2care (Members)

Posted 11 August 2011 - 03:31 PM

> Your inability to argue points like a big boy is disgusting and is to the detriment of this whole forum.

> Okay, okay, I get it. The joke's on me. You're a troll.

Clearly I'm the troll...

edit: I just finished watching the 40 minute interview he did. I really don't see the issue with it. He backs up most of his claims and doesn't really attack anybody.

### #87 Hodgman (Moderators)

Posted 11 August 2011 - 07:22 PM

> What do you guys make of his claim that "we're not using any rays"? That part struck me. I don't understand whether he just meant "we are not raytracing", or whether he's really saying they aren't tracing a ray from the camera point into the scene to sample their geometry structure.

As I posted earlier, I think they're using frustum intersections, not rays.

For each pixel on the screen, project it out into a 3D volume (capped by the near and far planes). Select the subset of all points that are in that volume, then select the point in that set that's closest to the near plane. Except do it in a way that's fairly independent of the number of points in the data set, enough so that you'd be bold enough to say it has constant time complexity...

> I just finished watching the 40 minute interview he did. I really don't see the issue with it. He backs up most of his claims and doesn't really attack anybody.

Most importantly:
* He claims a search on pre-sorted data using random input predicates with O(1) complexity - something Google would kill for.
* He claims ∞:1 compression ratios - worthy of a Nobel Prize.
Also:
* He completely misrepresents the statements of Notch and Carmack and fails to address the issues raised.
* He disregards Atomontage by saying it's constrained to small scenes, when they've demo'd large unique landscapes -- and does this as part of his refutation of the point that what they've done is not new/innovative.
* He implies that the 3D scanning techniques demonstrated are somehow connected to his technology.

> Your inability to argue points like a big boy is disgusting and is to the detriment of this whole forum.

You're both being a pain in the cock. If you think someone's trolling you, stop feeding them.

### #88 way2lazy2care (Members)

Posted 11 August 2011 - 08:31 PM

> Most importantly:
> * He claims a search on pre-sorted data using random input predicates with O(1) complexity - something Google would kill for.

I think you might have misunderstood what he said. He never stated the complexity of the search algorithm, just that it only had to run once for each pixel. It would be awesome if it were O(1), but if it were O(1) for each pixel on the screen you could do a lot better than 20fps in software. Really it sounds like it's O(R*X) where R is the resolution and X is a big question mark that we can assume is better than terribad. I think someone brought this up on page 4.

> * He claims ∞:1 compression ratios - worthy of a Nobel Prize.

He never claimed that. He actually avoided talking about the specifics of the memory footprint other than saying that he is pleased with it and giving the specs of the laptop on which it's running. He clarifies the "unlimited detail" point quite a few times in the 40 minute interview. I'm fairly certain, after watching it through, that he means the limit is no longer on the amount of geometry you can process in a given scene; you can double check his clarification, but that's what I took away from it.

> * He disregards atomontage by saying it's constrained to small scenes, when they've demo'd large landscapes -- and does this as part of his refutation of the point that what they've done is not new/unique.

This surprised me too. The more obvious thing to have pointed out is that Atomontage appears to have its focus much more on representing the inner volumes of objects, not just the exterior, as appears to be the case with UD.

> * He completely misrepresents the statements of Notch and Carmack and fails to address the issues raised.

To be fair, Notch misrepresented UD quite a bit in his original assessment.

He didn't really touch on Carmack's comments much, other than as a counter to something Notch said. And Carmack didn't really raise any super specific issues, other than to say it was more than a few years out, which I don't think Dell is really contesting: it's at least a year out from the tech being done, and even further from being integrated into a game engine and eventually into a game.

> * He implies that the 3D scanning techniques are somehow connected to his technology.

Well if he can use straight point cloud data in engine he has a pretty solid argument for that. Of course it could just be a result of him not having to optimize the data he converts, which would make it a less favorable argument.

### #89 Hodgman (Moderators)

Posted 11 August 2011 - 08:52 PM

> > He claims a search on pre-sorted data using random input predicates with O(1) complexity - something google would kill for.
>
> I think you might have misunderstood what he said. He never said the complexity of the search algorithm. Just that it only had to run once for each pixel. It would be awesome if it were O(1), but if it were O(1) for each pixel on the screen you could do a lot better than 20fps in software. Really it sounds like it's O(R*X) where R is the resolution and X is a big question mark that we can assume is better than terribad. I think someone brought this up on page 4.

He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).

Now, if we're just rendering a single pixel, that means the complexity is O(1) (or O(constant) if you like) meaning that the search for the closest 'atom' for a single pixel runs in constant time, regardless of the amount of geometry in the scene.

This is clearly nonsense. You can only achieve constant-time search if you've got enough storage, and the storage requirement for a pre-computed search over all possible position/direction inputs in an unlimited-size scene is... infinite.

What's really going on is that it's O(P*G) where G is "geometry times a small fraction", or "geometry raised to a small power", or "log of geometry to some large base", etc... So small that it seems like O(P) under sensible conditions. If it really was O(P), then it's so amazing that he shouldn't be selling it to games companies; he should be selling it to Google. This invention would be a paradigm shift for all of computer science.
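The "log to a large base" point is easy to see with numbers. Under the (assumed) model that the per-pixel search is an octree descent, the work per pixel grows with the tree depth, i.e. log base 8 of the atom count, so multiplying the scene by a factor of a million adds only a handful of steps - which in a demo would be indistinguishable from constant time:

```python
import math

def traversal_depth(num_atoms: int) -> int:
    """Octree levels needed to give each atom its own leaf (idealised)."""
    return math.ceil(math.log(num_atoms, 8))

# Scene grows a million-fold; per-pixel work barely moves.
for atoms in (10**6, 10**9, 10**12):
    print(f"{atoms:>15,} atoms -> depth {traversal_depth(atoms)}")
```

That is "O(P * log G) that looks like O(P)", not genuine constant-time search.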

> He never claimed [∞:1 compression ratios].

He obviously didn't use those words, no.

He was asked "you must have some sort of memory limitations?", and replies "Umm. No. The simple answer is: no."
Along with all the "unlimited detail" hyperbole, this implies that there is no limit on the amount of data they can pack into a scene.
Obviously, there has to be a limit, otherwise he'd have achieved infinite compression. ...but instead of ever addressing any real-world downsides or limitations, he sticks to his 'unlimited' line.

Even if he said, "well obviously there's a limit to the amount of data you can fit on a disk, but we're compressing it so well that it may as well be unlimited", I'd be ok with it. I'd actually be impressed if he used real metrics, like saying they're currently averaging 0.1 bits per atom, etc... However, he's always stretching things that little bit further, into bullshit territory.

> Well if he can use straight point cloud data in engine he has a pretty solid argument for [3d scanning being connected to his tech].

However, he mentions that they scanned the elephant into a dense polygon representation, and then converted that into point data.

### #90 way2lazy2care (Members)

Posted 11 August 2011 - 09:37 PM

> He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
> Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).

Here's what he said about the search algorithm:
-But come on, you say the technology has unlimited power?
-Um yes, yes we do, and I know that's a very strange claim, but give us a chance to explain. At present the graphics card companies and the console companies all try to build bigger graphics cards or bigger consoles so you'll have more power because everybody knows that if you give a computer something to do like put a polygon on the screen or put an atom on the screen it's going to take a bit of maths and a bit of processor time to do it, so if you want to have a lot of stuff or unlimited stuff it's just not possible. Ok in our particular case we don't go about solving that problem in the same way as everybody else. Let's say your screen is 1280X720 or 768, what we do is we have a search algorithm that goes out and it grabs exactly one atom for every pixel on the screen. So if you do it that way, you end up being able to have unlimited geometry, as we show, but we're not being wasteful in how we present it on the screen.

I think he much more implies that his search algorithm over G is just better than what is currently used. I think you're reading between the lines to see him claiming an O(1) search algorithm on point cloud data.

> He was asked "you must have some sort of memory limitations?", and replies "Umm. No. The simple answer is: no."
> Along with all the "unlimited detail" hyperbole, this implies that there is no limit on the amount of data they can pack into a scene.
> Obviously, there has to be a limit, otherwise he'd have achieved infinite compression.

Here's the quote just so we don't argue over what was said or not said:

-Do you have memory problems? People are claiming that you must have some sort of memory limitations.
-Umm. No. The simple answer is no. Our memory compaction is going remarkably well. I think we've used up our quota of unbelievable claims this month, so I'm not going to talk about memory compaction. We're not finished on that as well...

I think you're reading between the lines a bit on both accounts, but if that's what you get out of it then fine.

### #91 Sirisian (Members)

Posted 11 August 2011 - 11:07 PM

> He says that they run it once for each pixel, so that the complexity of the algorithm is related to the number of pixels processed instead of the amount of geometry processed.
> Normally, it would be based on both, so it would be O(P*G) (where P is pixels and G is geometry), however he implies that the complexity is independent of the amount of geometry, which is why it's "unlimited", meaning the complexity is just O(P).

No, he mentions that in the pixel algorithm they are forced to sort. I assume he means sorting the geometry or data points that are returned. So the geometry probably does play a part; it's just that the part it plays isn't as big as with triangles.

### #92 MaxDZ8 (Members)

Posted 12 August 2011 - 12:35 AM

> -Do you have memory problems? People are claiming that you must have some sort of memory limitations.
> -Umm. No. The simple answer is no. Our memory compaction is going remarkably well. I think we've used up our quota of unbelievable claims this month, so I'm not going to talk about memory compaction. We're not finished on that as well...
>
> I think you're reading between the lines a bit on both accounts, but if that's what you get out of it then fine.

For what it's worth, I also think this is nonsense. But considering the amount of heavy instancing, I'm surprised they need to compress anything in the first place.

> No he mentions that in the pixel algorithm they are forced to sort. I assume he means sorting geometry or data points that are returned. So the geometry probably does play a part. It's just the part it plays isn't as big as with triangles.

Point taken but O(P) is the same as O(P * K) with K being a constant. So, having a smaller K does not change what Hodgman is saying.

BTW, I was thinking... suppose we 3D scan something. If this detail is unlimited... isn't it going to take a while to go through the wire from scanner to host? Because I don't think they built their own scanners.

BTW, perhaps I'm limited in my understanding but I acquire info with my senses and process it using my brain and what I see...

...is that there's nothing besides this island, which is admittedly bigger than atomontage but limited nonetheless. So, when he's talking about being unlimited - aka infinite - surely he's not talking about "infinitely large" but rather "infinitely small". As a start.


### #93 bwhiting (Members)

Posted 12 August 2011 - 02:35 AM

Here is my take on this thing, having also now watched the interview...

1. Forget the "unlimited" bit... nothing in the universe is, so just read it as "AWESOME AMOUNTS OF" instead, which is what he means, methinks. Don't waste your energy on that; we all know it's not actually unlimited. That is, if you are taking the word unlimited to mean infinite... but the two are different. Unlimited could be the same as when another engine says it supports an unlimited number of lights... which is true... the engine supports it... your machine might just not be able to handle it (not a limit imposed by the engine but by the user's computer).
Either way I wouldn't get hung up on it.

2. He is the guy who came up with the technology, and he was a hobby programmer. This could explain how he gets some terms wrong ("level of distance"??!) and why he may seem quite condescending... if he has no background in traditional graphics then that would make sense. His lack of knowledge of current methodologies is, I think, what led him to go about it the way he has.

3. I am more and more thinking that this will lead somewhere and may indeed be the future of graphics (the guy who interviewed him was blown away), and from the sounds of it it's only going to get better and faster.

4. It still "boggles my mind"!!!

5. - 10. not included as I should really be working

### #94 Syranide (Members)

Posted 12 August 2011 - 03:19 AM

> BTW, perhaps I'm limited in my understanding but I acquire info with my senses and process it using my brain and what I see...
>
> ...is that there's nothing besides this island, which is admittedly bigger than atomontage but limited nonetheless. So, when he's talking about being unlimited - aka infinite - surely he's not talking about "infinitely large" but rather "infinitely small". As a start.

They've shown a bunch of other, older demos which were slightly more varied in the blocks used, but those instead lacked much of the quality... so they just traded one thing for another. And so far, everything we've seen that would be an indicator of memory usage has been terribly bad (a few heavily reused blocks, non-shaded materials, etc). Worse than that, it even seems as if they are constrained to a grid, because every single demo they've ever shown has been built from prefab tiles, as far as I've been able to tell.

However, it should be noted that the size of the island they show is in all likelihood meaningless; they could probably with ease make it... A MILLION TIMES... larger without any issues - that is meant to be the strength of the algorithm. However, they could not add more unique models to make any use of it.

And what really strikes me as strange is why they are still running it on only 1 core after all these years; it should be pretty much trivial to utilize all the cores (and remove any chance of gameplay!). I'm curious how memory performance and bandwidth work out for this. Now, I'm far from an expert on this, but it really seems as if that could be a potentially huge issue to overcome, if it indeed is an issue (much like it is with raytracing).

But really, it all falls flat in theory for me. Textures and geometries today consume enough storage and memory as it is, we couldn't simply double that today and expect everything to run well. So, now consider that reusing textures over and over like we do today is very efficient... even storing color data as textures is efficient, it allows for compression and compositing multiple textures to seemingly make up quality from thin air. Triangle geometry is efficient, you can store enormous landscapes as dirt cheap (even compressed) heightmaps.

Now, consider what UD is doing:
* They apply the texture individually to each voxel... so there is no texture reuse at all, and it becomes harder to compress the color data.
* They break up the geometry into individual voxels... so a single triangle becomes a lot of voxels.

So, let's say for the sake of argument that they have somehow managed to come up with a compression algorithm that takes all these voxels and compresses them down to the size of the original polygonal model. Great... right? Well, I would argue that no, it doesn't really matter all that much... because it all comes back to the texture issue. With polygons, we can make a statue that uses 2 textures, then make 100 more statues using the same textures. In UD, every single object has its own unique "texture"... and note that the same is true for terrain. You can no longer reuse that grass texture over and over, or use a dirt cheap heightmap to represent hundreds of kilometers of terrain... instead you now have to represent each triangle and texture by hundreds and hundreds of small voxels.
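Some rough numbers behind that reuse argument. Every figure below is an illustrative assumption (texture size, scan density, compression rate), not a measured one; the point is only the ratio between sharing two DXT1 textures across 100 statues and giving each statue its own per-voxel colour:

```python
# Illustrative storage comparison: shared textures vs unique per-voxel colour.
DXT1_BYTES_PER_TEXEL = 0.5           # DXT1: 8 bytes per 4x4 block of texels
TEXTURE_TEXELS = 1024 * 1024         # assumed: one 1K x 1K texture
VOXELS_PER_STATUE = 2_000_000        # assumed scan density per statue
BYTES_PER_VOXEL_COLOUR = 0.5         # generously assume DXT1-like compression
NUM_STATUES = 100

# Polygon route: the two textures are stored once, shared by all statues.
shared = 2 * TEXTURE_TEXELS * DXT1_BYTES_PER_TEXEL
# Voxel route: every statue carries its own colour data.
unique = NUM_STATUES * VOXELS_PER_STATUE * BYTES_PER_VOXEL_COLOUR

print(f"shared textures:  {shared / 2**20:.1f} MiB")
print(f"per-voxel colour: {unique / 2**20:.1f} MiB")
```

Even with equally good compression per sample, losing reuse costs roughly two orders of magnitude here, which is the crux of the argument.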

There is simply no way they could achieve the storage efficiency we enjoy today, even if they use every imaginable cheat and use 3D texture materials and all kinds of tricks... it will never be nearly as storage efficient as polygonal geometry and textures, it simply can't. Or am I missing something?

And like all good things, they are not good things unless they also work in practical circumstances; it's "easy" enough for nVidia and 3DMark to whip up impressive, carefully tweaked demos, but applying it in games is a very, very different thing.

### #95 Chargh (Members)

Posted 12 August 2011 - 10:53 AM

I think this thread is starting to look more like theology than theory. Maybe it would be a good idea to start a new one where no 'he's full of bs, no you are full of bs' is allowed, and where we make a list of claims that may hint at what this guy is doing. I remember that somewhere in his old technology preview he states that there are 64 atoms per cubic mm, and he says no raytracing... If I have some spare time tomorrow I might just look at all the videos he's made and compose such a list. As I think that was the original intention of this thread.

### #96 rouncer (Members)

Posted 12 August 2011 - 12:31 PM

All I know is Atomontage, and any engine besides Dell's, is a worthwhile technology; his is the biggest shit I've ever seen. If it can make anything past what he's shown already I'd be surprised.

instancing... crap...

### #97 schupf (Members)

Posted 12 August 2011 - 04:25 PM

Maybe it is just a minor detail, but I found it very funny that this guy (Bruce Dell) - who is a 3D graphics programmer - does not even know what LOD stands for. He always said "level of distance". Sorry, but that is just pathetic;)

And the fact that he talks as if the "technology" had ONLY advantages is just unprofessional. Every real engineer knows that there is ALWAYS a tradeoff. But this guy always just said "yes, it's possible", "no, we have no problems with XY". Plain marketing bullshit.

### #98 rouncer (Members)

Posted 12 August 2011 - 06:18 PM

> Maybe it is just a minor detail, but I found it very funny that this guy (Bruce Dell) - who is a 3D graphics programmer - does not even know what LOD stands for. He always said "level of distance". Sorry, but that is just pathetic;)
>
> And the fact that he talks as if the "technology" had ONLY advantages is just unprofessional. Every real engineer knows that there is ALWAYS a tradeoff. But this guy always just said "yes, it's possible", "no, we have no problems with XY". Plain marketing bullshit.

I'm not saying anything about his personality, but this guy just points out negatives in all the other engines, and reckons he's got something no one else has come up with, just like magic. Complete shit, methinks.

### #99 bwhiting (Members)

Posted 13 August 2011 - 04:15 AM

> ...and reckons hes got something noone else has come up with just like magic ...

maybe he has

Maybe, by not having a solid foundation in traditional methods of 3D graphics, he has indeed come up with an entirely new approach to rendering 3D.
If he has, it might explain why the Aussie government was willing to give him 2 million dollars of funding... something I don't reckon they would have done lightly.

For all we know he is onto something here. And the real-time demos, despite their drawbacks, are much, MUCH better than other ones I have seen.

### #100 zoborg (Members)

Posted 13 August 2011 - 04:26 AM

Disregarding the obvious hyperbole of "unlimited detail" (read: lies), the biggest problem is that their demo looks like shit. He passes this off as being due to programmer art, but there's a more fundamental reason for it: they cannot support lighting with this method. Not even simple one-directional lighting.

To explain why, here's a basic outline of how you could brute-force render a voxel scene:

For each pixel:
• Cast a ray (or frustum, if you like) from the camera through the pixel into the scene to find the set of voxels visible to that pixel.
• Get the lit color of each voxel, and blend the results to get the final pixel color.

Now for a large scene where there are hundreds, thousands, or millions of voxels per pixel (like in the demo), that's way too slow. To get around that they use a hierarchical spatial representation, likely an octree. The algorithm becomes:

For each pixel:
• Same ray, but this time find the cell in the spatial partition that is roughly the size of a single pixel. You only need to find 1 cell instead of however many millions of voxels it may contain.
• Get the lit color of that cell, using umm, magic?

The latter algorithm works for unlit geometry simply because each cell in the hierarchy can store the average color of all of the (potentially millions of) voxels it contains. But add in lighting, and there's no simple way to precompute the lighting function for all of those contained voxels. They can all have normals in different directions - there's no guarantee they're even close to one another (imagine if the cell contained a sphere - it would have a normal in every direction). You also wouldn't be able to blend surface properties such as specularity.
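The hierarchical lookup the outline above describes can be sketched in a few lines. This is a toy illustration, not Euclideon's actual algorithm: descend the tree until the current cell projects to roughly one pixel, then return that cell's precomputed average colour. The nested-dict tree and the angular-size test are my own simplifications:

```python
# Toy octree descent: stop at the first cell that is ~1 pixel across
# and return its precomputed average colour (the unlit-rendering case).

def sample(node, cell_size, distance, pixel_angle):
    """Return the average colour of the first cell about one pixel in size."""
    projected = cell_size / max(distance, 1e-6)   # rough angular size of cell
    if projected <= pixel_angle or not node["children"]:
        return node["avg_color"]                  # pixel-sized (or leaf): stop
    # Toy descent: take the first occupied child (a real renderer would pick
    # the child the ray/frustum enters first).
    return sample(node["children"][0], cell_size / 2, distance, pixel_angle)

leaf = {"avg_color": (90, 140, 60), "children": []}        # grass-green cell
root = {"avg_color": (100, 120, 80), "children": [leaf]}

print(sample(root, cell_size=1.0, distance=10.0, pixel_angle=0.001))
```

Note that the early-out is exactly where lighting breaks: the returned `avg_color` averages away the individual normals the cell contains, which is the point made above.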

So, dynamic lighting is out. But I guess we can use baked-in lighting? The rest of their technique depends on baked-in static environments, so why not do the same for lighting? Well, you'll notice they (mostly) don't have baked in lighting either. There's a reason for that too: recursive instancing.

Recursive instancing is how they manage to have huge worlds (i.e. unlimited detail) in a reasonable amount of memory. Let's take as an example the clumps of grass in the demo. There are millions (conceivably billions) of them in the world. Even just storing a simple compressed transform for each would cost a huge amount of memory. So instead of storing individual instances, they have the following: several clumps of grass and trees (and elephants) and ground instanced in a group. That group is then instanced multiple times to form larger plots of land. And then that group is instanced several more times to form the entire world (along with a few variations for other things, such as rivers).

This allows for very efficient storage, but the world is necessarily repetitive (as they demonstrate). It's no coincidence that their earlier demos were showing off Sierpinski's pyramids, as that is fundamentally the same method except for procedural placement.
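A toy model makes the storage efficiency (and the repetitiveness) of recursive instancing concrete. All constants here are assumptions for illustration: each level of the hierarchy stores only a handful of references to the group below it, so the apparent world grows geometrically while storage grows linearly in the number of levels.

```python
# Recursive instancing: apparent content grows as 8^levels,
# storage grows only linearly with the number of levels.

UNIQUE_MODEL_BYTES = 50 * 2**20   # assumed: 50 MiB of genuinely unique voxel data
REF_BYTES = 64                    # assumed: one instance transform + pointer
INSTANCES_PER_GROUP = 8           # assumed instances per hierarchy level

def world_stats(levels: int):
    """(apparent model count, total bytes) for a given instancing depth."""
    apparent_models = INSTANCES_PER_GROUP ** levels
    storage = UNIQUE_MODEL_BYTES + levels * INSTANCES_PER_GROUP * REF_BYTES
    return apparent_models, storage

for lv in (1, 4, 8):
    models, nbytes = world_stats(lv)
    print(f"{lv} levels: {models:,} apparent models, {nbytes / 2**20:.2f} MiB")
```

Eight levels give ~16.7 million apparent models for a few extra kilobytes of references - but every one of them is an identical copy, which is exactly the repetition (and the baked-lighting problem) described above.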

But the biggest drawback of instancing is that all instances of a model are identical. This means, for instance, that the world cannot be dynamically destructible (barring some form of copy-on-write which would cost memory and cpu time proportional to the amount of world modification). More importantly, they cannot bake-in lighting unless all instances happen to have the same exact light (i.e. all at the same absolute world rotation and with identical shadows). The only fix for this is to just duplicate models for each unique lighting condition.

Notice in the Sierpinski's pyramid demo they have lighting, but all instances have the exact same orientation and lighting and the light never moves. And in the latest demo, most of the world is completely unlit. There are a few exceptions to this, such as the elephant with some approximation of AO, and the trees with a ground shadow directly beneath them. Both cases do not vary because in both cases the light is just baked into the color.

So, despite the guy's protestations, these are not trivial issues that can be addressed in the future (or are apparently already fixed but for some reason can't be shown). They are fundamental limitations of the technique. Now, I'm not saying it's impossible to fix these, but it will be very difficult and there will be inevitable compromises to make it work. Such compromises as: limit the world size (see Atomontage), or stream in data from vast archives on disk (see id Tech). And that's just if you want decent baked-in lighting. Implementing dynamic lighting or destructible terrain is an even bigger problem.

I'll change my tune the moment they have something that looks even comparable to a modern game, but definitely not while they keep showing videos that look worse than many PS1 games (which at least had lighting).
