[Theory] Unraveling the Unlimited Detail plausibility


[quote name='Chargh' timestamp='1313168004' post='4848304']
I think this thread is starting to look more like theology than theory. Maybe it would be a good idea to start a new one where no "he's full of BS, no you are full of BS" is allowed, and where we make a list of claims that may hint at what this guy is doing. I remember that somewhere in his old technology preview he states that there are 64 atoms per cubic mm, and he says no ray-tracing... If I have some spare time tomorrow I might just look at all the videos he's made and compose such a list. As I think that was the original intention of this thread.
[/quote]
QFT.

From what I've seen, it really just looks like a less sophisticated version of Nvidia's SVO work, but with instancing. All the rest is hyperbole, stomach-churning marketing spiel, and redefined terminology (it's not voxels, it's atoms; it's not ray-tracing, it's the non-union Mexican equivalent). So anyone seriously interested in this should just start from the paper or any of the other copious research that pops up from a quick Google search.
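To make the comparison concrete, here is a minimal sketch of what an SVO node with instancing could look like. This illustrates the general technique only; the field layout is my assumption, not Euclideon's or Nvidia's actual format.

```cpp
#include <cstdint>
#include <vector>

// One node of a sparse voxel octree. Children live in a flat pool and are
// referenced by index, so two parents can point at the same child subtree.
// That sharing is all "instancing" needs, and it is how a scene can claim
// trillions of "atoms" while storing only a handful of unique models.
struct SvoNode {
    uint8_t  childMask;   // bit i set => child i exists
    uint8_t  r, g, b;     // prefiltered average colour of the whole subtree
    uint32_t firstChild;  // pool index of the first existing child
};

std::vector<SvoNode> nodePool;  // the entire "world" is indices into this
```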

[quote name='D_Tr' timestamp='1313236455' post='4848601']
+1 zoborg

I believe that at some point nVidia, AMD and other GPU makers will add some bit twiddling functionality in their cards (probably as instructions initially) in order to accelerate voxel rendering. Nvidia's demo already casts tens of millions of rays per second. Furthermore, when cheap solid state storage becomes common, fast streaming of many gigabytes worth of data will be possible. But all this is going to take some time, and that time surely has not arrived when someone comes in your face and insults your intelligence by saying "I got 512 trillion atoms here" when really meaning "I have a spaghetti octree with a ton of nodes pointing to the same 10 models".


Agreed. There's definitely some good research being done in this area. One of the main things preventing it from becoming mainstream is that modern GPU hardware is designed to render triangles, very fast. Large voxel worlds (and ray-tracing for that matter) require non-linear memory access patterns that GPUs just weren't designed for. Any significant sea-change in how rendering is performed is going to require collaboration with the GPU vendors.

CUDA is a step in the right direction, but what we really need is some custom hardware that's good at handling intersections against large spatial databases (think texture unit, but for ray-casting). It's a shame Larrabee didn't work out, but it'll happen eventually. And it'll be a hardware vendor to do it, not some upstart with a magical new algorithm they can't describe or even show working well.
[/quote]

Features and quality are just a matter of more performance and optimization.
What we need more than anything else is "unlimited memory/storage", or this technology has very limited usefulness.


+1 zoborg

I believe that at some point nVidia, AMD and other GPU makers will add some bit twiddling functionality in their cards (probably as instructions initially) in order to accelerate voxel rendering. Nvidia's demo already casts tens of millions of rays per second (and the geometry is totally unique). Furthermore, when cheap solid state storage becomes common, fast streaming of many gigabytes worth of data will be possible. But all this is going to take some time, and that time surely has not arrived when someone comes in your face and insults your intelligence by saying "I got 512 trillion atoms here" when really meaning "I have a spaghetti octree with a ton of nodes pointing to the same 10 models". Unlimited detail octree = octgraph = scam...


SSDs are also limited in their ability to randomly access data, which could be a huge issue since you're streaming nodes of an octree. Meaning, unless you find very smart ways of packing data, every node might be a random access. And you'll also likely run into problems with predicting which nodes must be streamed in the future. Regardless, being able to stream 1TB of data is only useful if you actually find a way to distribute 1TB of data (or whatever amount you fancy).
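For illustration, one common mitigation is to pack a few levels of a subtree into fixed-size pages so that each SSD access fetches a whole neighbourhood of nodes in one read. This is a sketch of the general idea, an assumption on my part rather than anything UD has described:

```cpp
#include <cstdint>
#include <cstdio>

constexpr size_t PAGE_BYTES = 256 * 1024;  // several hundred KB per read

// A breadth-first packed subtree: one page holds a root node id plus all
// of its descendants down to some depth, stored contiguously.
struct NodePage {
    uint32_t subtreeRoot;                // node id of this page's root
    uint8_t  nodes[PAGE_BYTES - 4];      // packed child nodes
};

// One seek plus one sequential read per subtree, instead of one random
// access per individual octree node.
bool loadPage(std::FILE* f, long pageIndex, NodePage& out) {
    if (std::fseek(f, pageIndex * long(sizeof(NodePage)), SEEK_SET) != 0)
        return false;
    return std::fread(&out, sizeof(NodePage), 1, f) == 1;
}
```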

In the end, I really doubt the "general usefulness" of SVOs; they surely have a purpose, and there might actually be genuine uses for them, just not general ones.


A lot of people seem to be drawing conclusions from what is being demoed now, which sure is impressive, but it lacks all of the visual quality we see in modern games, as well as lacking performance... and then it gets compared to year-old games that are meant to run on year-old computers (hell, just compare it to what the 3DMark developers are demoing). If you were to go 3 years into the future ("when SVOs are practical"), and be allowed to create a demo that likewise only targets modern hardware and showcases the potential of triangles, then I'm pretty sure SVOs wouldn't be all they're cracked up to be, and without the ridiculous memory issues.

Also, the only thing I've seen UD really impress with is up-close detail (which, funnily enough, suffers from poor quality in the distance). That surely is nice, but it's something that rarely bothers you in games unless you specifically look for it. I would be more concerned with walking around in a world that is completely static and solid; that would break immersion for me before I even started playing. What many people also forget is that a modern GPU can crank out ridiculous amounts of unshaded triangles and pixels, we are talking billions of them, every second. The main reason we don't do that is that somewhere along the way smart people realized that shading and effects were more important than extreme detail alone (also, storage and memory limitations!)... so we spend most of the GPU performance on shading and effects. So why are people hyping a technology that doesn't perform well yet, and doesn't even do shading?

EDIT: To be clear, the benefit of performance being "independent" of geometry complexity is great, like the step from Forward Shading to Deferred Shading. But unless people find a way to get rid of the ridiculous memory constraints, I don't see how it could ever really work out. Perhaps SVOs could be used to find which triangles may be visible for a given pixel and use that to reduce geometry, and similar "ideas" that wouldn't trade "geometry complexity" for "ridiculous amounts of data"; see the sketch below.
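As a sketch of that visibility-only idea (purely hypothetical, not anything UD has shown): leaf cells store triangle indices instead of voxel colour data, so the octree stays thin while the triangles carry the detail.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical "SVO for visibility only": each leaf cell stores the ids of
// the triangles overlapping it, not voxel colours. A coarse traversal per
// screen tile then yields a small candidate set of triangles to rasterise,
// trading "geometry complexity" for a thin index rather than for huge
// amounts of voxel data.
struct VisibilityLeaf {
    std::vector<uint32_t> triangleIds;  // triangles overlapping this cell
};

// The union of the lists from all leaves visible to a tile becomes the
// reduced geometry actually submitted for that tile.
```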



And since some seem to have forgotten what games look like today:

[image: screenshot of a modern game]

[image: Unlimited Detail's elephant scene]


@Syranide: But still, aren't SSDs much faster than HDDs at random reads? Especially if you read chunks that have a size of several hundred KB? As for the distribution and storage concerns, you are right that 1TB is a lot of data to download or distribute in retail stores, but storage media are getting denser and connection speeds are getting faster. 1TB would take about 1 day to download on a 100 Mbps connection. This does not seem too long considering that you don't even need 1TB to have impressively detailed voxel graphics in a game (alongside polygonal ones), and that the voxel data can be distributed in a format quite a bit more compact than the one used during the execution of the program. I totally agree with your last comment about the "general usefulness" of the technology. There is room for advancement in polygon technology too. Moore's law still holds, and GPUs are getting more general purpose, which is great, because programmers will be able to use whichever technique is better for each situation.
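(Checking that figure: 1 TB is about 8 × 10^12 bits, and 100 Mbps is 10^8 bits per second, so the transfer takes 8 × 10^12 / 10^8 = 80,000 s, roughly 22 hours. About a day, as stated.)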

-EDIT: Just saw your last post where you make very good points in the first 2 paragraphs.
[quote]So anyone seriously interested in this should just start from the [Efficient SVO] paper or any of the other copious research that pops up from a quick Google search.[/quote]
That's not quite the same thing as what Chargh was pointing out, or what the title of this thread asks for, though... The very first reply to the OP contains these kinds of existing research, but it would be nice to actually analyze the clues that UD have inadvertently revealed (seeing as they're so intent on being secretive...).

All UD is is a data structure, which may well be something akin to an SVO (which is where the "it's nothing special" point is true), but it's likely somewhat different conceptually, having been developed by someone who has no idea what they're on about, and who started as far back as 15 years ago.

There have been a few attempts in this thread to collect Dell's claims and actually try to analyze them and come up with possibilities. Some kind of SVO is a good guess, but if we actually investigate what he's said and shown, there are a lot of interesting clues. Chargh was pointing out that this interesting analysis has been drowned out by the 'religious' discussion about Dell being a 'scammer' vs 'marketer', UD being simple vs revolutionary, etc, etc...

For example, in bwhiting's link, you can clearly see aliasing and bad filtering in the shadows, which is likely caused by the use of shadow-mapping and a poor-quality PCF filter. This leads me to believe that the shadows aren't baked in, and are actually done via a regular real-time shadow-mapping implementation, albeit in software.
[image: shadow aliasing visible in bwhiting's video]
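For reference, a poor-quality PCF is just a tiny depth-comparison kernel, something like this sketch of the standard technique (nothing UD-specific):

```cpp
// 2x2 percentage-closer filtering over a shadow map. A kernel this small
// is exactly what produces blocky, aliased shadow edges like the ones in
// the screenshot.
float pcfShadow(const float* shadowMap, int w, int h,
                float u, float v, float receiverDepth, float bias) {
    int x0 = int(u * w), y0 = int(v * h);
    float lit = 0.0f;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx) {
            int x = x0 + dx, y = y0 + dy;
            if (x < 0 || y < 0 || x >= w || y >= h) { lit += 1.0f; continue; }
            // Lit if the receiver is not behind the stored occluder depth.
            lit += (receiverDepth - bias <= shadowMap[y * w + x]) ? 1.0f : 0.0f;
        }
    return lit * 0.25f;  // fraction of samples lit; 0 = fully shadowed
}
```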

Also, around this same part of the video, he accidentally flies through a leaf, and a near clipping-plane is revealed. If he were using regular ray-tracing/ray-casting, there'd be no need for him to implement this clipping-plane, and combined with other statements, this implies the traversal/projection is based on a frustum, not individual rays. Also, unlike with rasterized polygons, the plane doesn't make a clean cut through the geometry, telling us something about the voxel structure and the way the clipping tests are implemented.
[image: near clipping-plane cutting raggedly through leaf geometry]
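If the traversal really is frustum-based, the near plane has to be an explicit plane-vs-node test. This speculative sketch also shows why the cut looks ragged: a voxel node straddling the plane gets kept or culled whole, rather than being sliced like a polygon.

```cpp
// Classify an octree node's bounding box against one frustum plane
// (speculative reconstruction, not UD's code).
struct Plane { float nx, ny, nz, d; };          // n.p + d = 0, n points inward
struct Aabb  { float cx, cy, cz, ex, ey, ez; }; // centre + half-extents

enum class Cull { Outside, Inside, Straddles };

Cull classify(const Aabb& b, const Plane& p) {
    // Projected radius of the box onto the plane normal.
    float r = b.ex * (p.nx < 0 ? -p.nx : p.nx)
            + b.ey * (p.ny < 0 ? -p.ny : p.ny)
            + b.ez * (p.nz < 0 ? -p.nz : p.nz);
    float s = p.nx * b.cx + p.ny * b.cy + p.nz * b.cz + p.d;
    if (s < -r) return Cull::Outside;   // entire node behind the plane
    if (s >  r) return Cull::Inside;
    return Cull::Straddles;             // node is cut: no clean slice exists
}
```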

It's this kind of analysis / reverse-engineering that's been largely drowned out.

[font="arial, verdana, tahoma, sans-serif"]
The latter algorithm works for unlit geometry simply because each cell in the hierarchy can store the average color of all of the (potentially millions of) voxels it contains. But add in lighting, and there's no simple way to precompute the lighting function for all of those contained voxels. They can all have normals in different directions - there's no guarantee they're even close to one another (imagine if the cell contained a sphere - it would have a normal in every direction). You also wouldn't be able to blend surface properties such as specularity.
This doesn't mean it doesn't work, or isn't what they're doing, it just implies a big down-side (something Dell doesn't like talking about).
[/font][font="arial, verdana, tahoma, sans-serif"]For example, in current games, we might bake a 1million polygon model down to a 1000 polygon model. In doing so we bake all the missing details into texture maps. On every 1 low-poly triangle, it's textured with the data of 1000 high-poly triangles. Thanks to mip-mapping, if the model is far enough away that the low-poly triangle covers a single pixel, then the data from all 1000 of those high-poly triangles is averaged together.[/font]Yes, often this makes no sense, like you point out with normals and specularity, yet we do it anyway in current games. It causes artifacts for sure, but we still do it and so can Dell.
[font="arial, verdana, tahoma, sans-serif"]
I believe that at some point nVidia, AMD and other GPU makers will add some bit twiddling functionality in their cards (probably as instructions initially) in order to accelerate voxel rendering.
They already have[/font][font="arial, verdana, tahoma, sans-serif"] in modern cards. Bit-twiddling is common in DX11. It's also possible to implement your own software 'caches' nowadays to accelerate this kind of stuff.[/font]
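For instance, the bread-and-butter operation of sparse traversal is a mask-and-popcount to find a child's storage slot. In C++ it's one builtin (C++20 shown below), and DX11 HLSL exposes the same operations as countbits/firstbithigh:

```cpp
#include <bit>
#include <cstdint>

// childMask: bit i is set if child i of an octree node exists. Existing
// children are stored contiguously, so child i's slot is the number of
// set bits below bit i.
inline int childSlot(uint8_t childMask, int i) {
    return std::popcount(static_cast<unsigned>(childMask & ((1u << i) - 1u)));
}
```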
Too bad UD haven't even started on their GPU implementation yet though! :lol:
[quote]Tessellation often uses a displacement map input. It takes a patch and generates more triangles as the camera gets closer. His explanation was right about the current usage. (Unigine uses tessellation in this way.)[/quote]
No, height-displacement is not the *only* current usage of tessellation. :rolleyes:
He also confuses the issue deliberately by comparing a height-displaced plane with a scene containing a variety of different models. It would've been fairer to compare a scene of tessellated meshes with a scene of voxel meshes...

[quote name='Hodgman']Too bad UD haven't even started on their GPU implementation yet though![/quote]

In the first video they say they have started one, though. I guess since we didn't see a video of it, it's hard to take their word for it.

[quote]we're also running at 20 frames a second in software, but we have versions that are running much faster than that aren't quite complete yet[/quote]
Not sure if that means GPU or still software.

Also, drop the "it's not unlimited". They clearly said 64 atoms per cubic mm (i.e. 4 per linear mm); that is a very specific level of detail. :lol:

Also, regarding the lighting: this clip explains a lot about why it looks so bad.
Wooo, go Hodgman! That's what we wanna see: someone who will look at what evidence we have and make an educated guess as to what is going on behind the scenes.

Much more interesting than cries of "fake" or "bullshit". The fact is, it's impressive; it might not be as impressive as his bold claims make it out to be, but it is more detail than I have ever seen in a demo... and that warrants trying to figure out what he is doing.

[quote name='Hodgman' timestamp='1313253079' post='4848674']
[quote name='Hodgman']Too bad UD haven't even started on their GPU implementation yet though![/quote]

In the first video they say they have started one, though. I guess since we didn't see a video of it, it's hard to take their word for it.

[quote]we're also running at 20 frames a second in software, but we have versions that are running much faster than that aren't quite complete yet[/quote]
Not sure if that means GPU or still software.

Also, drop the "it's not unlimited". They clearly said 64 atoms per cubic mm (i.e. 4 per linear mm); that is a very specific level of detail. :lol:

Also, regarding the lighting: this clip explains a lot about why it looks so bad.
[/quote]

If all the time goes to this search algorithm, what could he do on the GPU that would increase its speed? He could add more post-processing, but that wouldn't make it any faster.

[quote]Agreed. There's definitely some good research being done in this area. One of the main things preventing it from becoming mainstream is that modern GPU hardware is designed to render triangles, very fast. Large voxel worlds (and ray-tracing for that matter) require non-linear memory access patterns that GPUs just weren't designed for. Any significant sea-change in how rendering is performed is going to require collaboration with the GPU vendors.

CUDA is a step in the right direction, but what we really need is some custom hardware that's good at handling intersections against large spatial databases (think texture unit, but for ray-casting). It's a shame Larrabee didn't work out, but it'll happen eventually. And it'll be a hardware vendor to do it, not some upstart with a magical new algorithm they can't describe or even show working well.[/quote]


This reminds me of a question I have on the subject of hardware and ray casting. Isn't the new AMD Fusion chip what you describe? The GPU and CPU have shared memory, with the GPU being programmable in a C++-like way, if I'm not mistaken.

[quote]If all the time goes to this search algorithm, what could he do on the GPU that would increase its speed? He could add more post-processing, but that wouldn't make it any faster.[/quote]

I have to imagine his search algorithm is a per-pixel algorithm. The GPU is really good at those kinds of operations. He'll also probably be grabbing the results back into a g-buffer of sorts to perform post-processing the deferred way, so you're right on that part. This should hopefully look much nicer with actual shading.
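Something like this is all that hand-off would take (a sketch; the field choices are my assumptions):

```cpp
#include <cstdint>
#include <vector>

// Per-pixel output a software voxel renderer could hand to a deferred
// shading pass (speculative layout, for illustration).
struct GBufferTexel {
    float    depth;    // hit distance along the view ray
    uint32_t albedo;   // packed RGBA8 colour of the voxel/atom that was hit
    uint32_t normal;   // packed normal, if the data set stores one
};

// The per-pixel "search" fills this once per frame; lighting, AO and other
// effects then run as ordinary screen-space passes over the buffer.
std::vector<GBufferTexel> gbuffer(1280 * 720);
```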

This topic is closed to new replies.
