So anyone seriously interested in this should just start from the [Efficient SVO] paper, or any of the other copious research that pops up from a quick Google search.
That's not quite the same thing as what Chargh was pointing out, or what the title of this thread asks for, though... The very first reply to the OP contains these kinds of existing research links, but it would be nice to actually analyze the clues that UD have inadvertently revealed (seeing as they're so intent on being secretive...)
All UD is, is a data structure, which may well be something akin to an SVO (which is where the 'it's nothing special' point is true), but it's likely somewhat different conceptually -- having been developed by someone who has no idea what they're on about, and who started as long ago as 15 years.
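For anyone who hasn't seen one, here's roughly the kind of node a sparse voxel octree stores -- a hypothetical C++ sketch for illustration, not a claim about UD's actual layout:
[code]
#include <cstdint>

// Minimal sketch of an SVO node. The key ideas: empty space stores
// nothing (the bitmask marks which children exist), and every node
// holds a pre-filtered average of everything below it, so distant
// geometry can be drawn from a coarse level of the tree.
struct SvoNode {
    uint8_t  childMask;   // bit i set => child i exists
    uint32_t firstChild;  // index of this node's first child in a flat array
    uint8_t  rgb[3];      // averaged color of all contained voxels (LOD)
};
[/code]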
There have been a few attempts in this thread to collect Dell's claims and actually try to analyze them and come up with possibilities. Some kind of SVO is a good guess, but if we actually investigate what he's said/shown, there are a lot of interesting clues. Chargh was pointing out that this interesting analysis has been drowned out by the 'religious' discussion about Dell being a 'scammer' vs a 'marketer', UD being simple vs revolutionary, etc, etc...
For example, in bwhiting's link, you can clearly see aliasing and bad filtering in the shadows, which is likely caused by the use of shadow-mapping with a poor-quality PCF filter. This leads me to believe that the shadows aren't baked in, and are actually done via a regular real-time shadow-mapping implementation, albeit in software.
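For reference, here's a minimal C++ sketch of the kind of low-tap PCF lookup that produces exactly those blocky shadow edges. This assumes a software shadow-map, which is my guess, not anything Dell has confirmed, and all the names are made up:
[code]
#include <algorithm>

// 2x2 percentage-closer filtering over a software shadow map.
// Each tap is a binary depth comparison; averaging only 4 of them
// is what leaves shadow edges looking jaggy/aliased.
float pcfShadow(const float* shadowMap, int w, int h,
                float u, float v, float fragDepth, float bias)
{
    // Map UVs to texel space and take the 2x2 neighbourhood.
    int x0 = std::clamp((int)(u * w - 0.5f), 0, w - 2);
    int y0 = std::clamp((int)(v * h - 0.5f), 0, h - 2);
    float lit = 0.0f;
    for (int dy = 0; dy < 2; ++dy)
        for (int dx = 0; dx < 2; ++dx) {
            float stored = shadowMap[(y0 + dy) * w + (x0 + dx)];
            lit += (fragDepth - bias <= stored) ? 1.0f : 0.0f;
        }
    return lit * 0.25f; // 0 = fully shadowed .. 1 = fully lit
}
[/code]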
Also, around this same part of the video, he accidentally flies through a leaf, and a near clipping-plane is revealed. If he were using regular ray-tracing/ray-casting, there'd be no need for him to implement this clipping-plane, and when combined with his other statements, this implies the traversal/projection is based on a frustum, not individual rays. Also, unlike rasterized polygons, the plane doesn't make a clean cut through the geometry, which tells us something about the voxel structure and the way the clipping tests are implemented.
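To illustrate why that matters: with per-pixel rays you'd just start each ray at the eye, but a frustum-based traversal has to test node bounds against an explicit near plane. A hedged C++ sketch of that test (the plane/AABB names are mine, not theirs):
[code]
struct Plane { float nx, ny, nz, d; }; // nx*x + ny*y + nz*z + d = 0

// Test an octree node's AABB against the near plane during traversal.
bool nodeBehindNearPlane(const Plane& nearP, const float mn[3], const float mx[3])
{
    // Pick the corner furthest along the plane normal (the "p-vertex").
    float px = nearP.nx > 0 ? mx[0] : mn[0];
    float py = nearP.ny > 0 ? mx[1] : mn[1];
    float pz = nearP.nz > 0 ? mx[2] : mn[2];
    // If even that corner is behind the plane, cull the whole node.
    // If the box merely straddles the plane, a voxel gets clipped as a
    // whole box -- which would explain why the cut through the geometry
    // isn't the clean edge you'd get from clipped rasterized polygons.
    return nearP.nx * px + nearP.ny * py + nearP.nz * pz + nearP.d < 0;
}
[/code]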
It's this kind of analysis / reverse-engineering that's been largely drowned out.
[font="arial, verdana, tahoma, sans-serif"]
The latter algorithm works for unlit geometry simply because each cell in the hierarchy can store the average color of all of the (potentially millions of) voxels it contains. But add in lighting, and there's no simple way to precompute the lighting function for all of those contained voxels. They can all have normals in different directions - there's no guarantee they're even close to one another (imagine if the cell contained a sphere - it would have a normal in every direction). You also wouldn't be able to blend surface properties such as specularity.
This doesn't mean it doesn't work, or that it isn't what they're doing; it just implies a big down-side (something Dell doesn't like talking about).
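To make that down-side concrete, here's a hypothetical C++ sketch of pre-filtering child attributes into a parent cell -- colors average fine, but normals can cancel out entirely:
[code]
#include <cmath>

struct Attr { float r, g, b; float nx, ny, nz; };

// Average child attributes up into the parent cell.
Attr prefilter(const Attr* children, int count)
{
    Attr avg = {};
    for (int i = 0; i < count; ++i) {
        avg.r  += children[i].r;  avg.g  += children[i].g;  avg.b  += children[i].b;
        avg.nx += children[i].nx; avg.ny += children[i].ny; avg.nz += children[i].nz;
    }
    float inv = 1.0f / count;
    avg.r  *= inv; avg.g  *= inv; avg.b  *= inv; // color averages sensibly
    avg.nx *= inv; avg.ny *= inv; avg.nz *= inv; // normals may cancel out!
    // E.g. a cell containing a sphere holds normals in every direction:
    // their average is ~(0,0,0) and can't be renormalized meaningfully.
    float len = std::sqrt(avg.nx*avg.nx + avg.ny*avg.ny + avg.nz*avg.nz);
    if (len > 1e-6f) { avg.nx /= len; avg.ny /= len; avg.nz /= len; }
    return avg;
}
[/code]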
[/font][font="arial, verdana, tahoma, sans-serif"]For example, in current games, we might bake a 1million polygon model down to a 1000 polygon model. In doing so we bake all the missing details into texture maps. On every 1 low-poly triangle, it's textured with the data of 1000 high-poly triangles. Thanks to mip-mapping, if the model is far enough away that the low-poly triangle covers a single pixel, then the data from all 1000 of those high-poly triangles is averaged together.[/font]Yes, often this makes no sense, like you point out with normals and specularity, yet we do it anyway in current games. It causes artifacts for sure, but we still do it and so can Dell.
[font="arial, verdana, tahoma, sans-serif"]
I believe that at some point nVidia, AMD and other GPU makers will add some bit twiddling functionality in their cards (probably as instructions initially) in order to accelerate voxel rendering.
They already have, in modern cards. Bit-twiddling is common in DX11. It's also possible to implement your own software 'caches' nowadays to accelerate this kind of stuff.
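e.g. HLSL Shader Model 5 has countbits()/firstbithigh()/reversebits(), which map straight onto the classic SVO trick of popcount-indexing sparse children. A C++20 equivalent sketch (node layout as in the earlier sketch):
[code]
#include <bit>
#include <cstdint>

// Given a node's 8-bit child mask and a child slot 0..7, find that
// child's position in the packed child array (only existing children
// are actually stored, so we count the set bits below 'slot').
inline int sparseChildIndex(uint8_t childMask, int slot)
{
    uint8_t below = childMask & ((1u << slot) - 1); // children before 'slot'
    return std::popcount((unsigned)below);          // how many of them exist
}
[/code]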
Too bad UD haven't even started on their GPU implementation yet though!
Tessellation often uses a displacement map as input. It takes a patch and generates more triangles as the camera gets closer. His explanation was right about the current usage. (Unigine uses tessellation in this way.)
No, height-displacement is not the *only* current usage of tessellation.
He also confuses the issue deliberately by comparing a height-displaced plane with a scene containing a variety of different models. It would've been fairer to compare a scene of tessellated meshes with a scene of voxel meshes...
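For completeness, here's a conceptual C++ sketch of what displacement-map tessellation actually does. The real thing lives in the GPU's hull/domain shader stages; these names and the flat-patch setup are purely illustrative:
[code]
#include <algorithm>

// Point-sample a height texture at (u, v).
float sampleHeight(const float* heightMap, int w, int h, float u, float v)
{
    int x = std::min((int)(u * w), w - 1);
    int y = std::min((int)(v * h), h - 1);
    return heightMap[y * w + x];
}

// Tessellation factor: more triangles as the camera gets closer.
int tessFactor(float cameraDist)
{
    int f = (int)(64.0f / std::max(cameraDist, 1.0f));
    return std::clamp(f, 1, 64);
}

// Displace a generated vertex of a flat XZ patch along its normal (+Y here).
float displacedHeight(float u, float v, const float* heightMap,
                      int w, int h, float scale)
{
    return scale * sampleHeight(heightMap, w, h, u, v);
}
[/code]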