Well, if you started 15 years ago from scratch, you'd have 15 years of experience in the topic. And it's not like you'd do that research in a complete vacuum. It's quite possible that he's invented something different, but I have no particular reason to believe that while everything he shows could definitely be done using well-documented techniques.

That's not quite the same thing as what Chargh was pointing out, or what the title of this thread asks for though... The very first reply to the OP points at this kind of existing research, but it would be nice to actually analyze the clues that UD have inadvertently revealed (seeing as they're so intent on being secretive...)
So anyone seriously interested in this should just start from the [Efficient SVO] paper or any of the other copious research that pops up from a quick google search.
All UD is, is a data structure, which may well be something akin to an SVO (which is where the 'it's nothing special' point is true), but it's likely somewhat conceptually different -- having been developed by someone who has no idea what they're on about, and who started as much as 15 years ago.
Where do you think baked-in shadows come from? They have to be rendered sometime, and any offline shadow baking performed can be subject to similar quality issues. I'm just saying there's no way to infer from a shot that the lighting is dynamic, because any preprocess could generate the lighting in the same exact way with the same exact artifacts.
There have been a few attempts in this thread to collect Dell's claims and actually try to analyze them and come up with possibilities. Some kind of SVO is a good guess, but if we actually investigate what he's said/shown, there are a lot of interesting clues. Chargh was pointing out that this interesting analysis has been drowned out by the 'religious' discussion about Dell being a 'scammer' vs 'marketer', UD being simple vs revolutionary, etc, etc...
For example, in bwhiting's link, you can clearly see aliasing and bad filtering in the shadows, which is likely caused by the use of shadow-mapping and a poor quality PCF filter. This leads me to believe that the shadows aren't baked in, and are actually done via a regular real-time shadow-mapping implementation, albeit in software.
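For reference, this is roughly what a small software shadow-map lookup with a PCF kernel could look like -- purely an illustrative sketch of the named technique, with made-up names, and not anything confirmed about UD's renderer:

```cpp
// Hypothetical sketch of a software shadow-map lookup with a small PCF kernel.
// With a low-resolution map and only a few taps, the result shows exactly the
// kind of stair-stepped, poorly filtered shadow edges visible in the footage.
#include <algorithm>
#include <vector>

struct ShadowMap {
    int width, height;
    std::vector<float> depth;            // light-space depth per texel
    float at(int x, int y) const {
        x = std::clamp(x, 0, width  - 1);
        y = std::clamp(y, 0, height - 1);
        return depth[y * width + x];
    }
};

// Visibility in [0,1] for a point already projected into light space
// (u,v in [0,1], z = depth as seen from the light).
float pcfVisibility(const ShadowMap& sm, float u, float v, float z, float bias = 0.002f)
{
    const int cx = static_cast<int>(u * sm.width);
    const int cy = static_cast<int>(v * sm.height);
    float lit = 0.0f;
    for (int dy = -1; dy <= 1; ++dy)          // 3x3 tap kernel
        for (int dx = -1; dx <= 1; ++dx)
            lit += (z - bias <= sm.at(cx + dx, cy + dy)) ? 1.0f : 0.0f;
    return lit / 9.0f;
}
```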
So I obviously don't know whether it's baked or not, right? Well, there are several reasons to suspect that it is, and I prefer to take the tack that, until given evidence otherwise, the simplest answer is correct.
Why do I think the shadows are baked?
1) First and foremost, the light never moves. This guy goes on and on about how magical everything else is, so why doesn't he ever mention lighting? Why doesn't he just move the light?
2) The light is top-down - the most convenient position for baked-in light and shadows because it allows for arbitrary orientation about the up axis. Why else would you choose this orientation since it makes the world so flat looking?
3) No specular. That's another reason the lighting looks terrible.
4) It fits in perfectly with the most obvious theory of the implementation.
Well, when you're ray-casting you don't need to explicitly implement a clipping plane to get that effect. You'd get that effect if you projected each ray from the near plane instead of the eye. But an irregular cut like that just suggests to me that yes, they're using voxels and raycasting and not triangle rasterization, so any discontinuities would be at voxel instead of pixel granularity.
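To make that distinction concrete, here's a tiny sketch (entirely my own illustration, with made-up names) of the difference between casting a primary ray from the eye and casting it from the near plane:

```cpp
// Illustrative only: building a primary ray either from the eye or from the
// near plane. With near-plane origins, anything closer than the near plane is
// never tested at all, producing the clipping effect described above without
// an explicit clip test.
struct Vec3 { float x, y, z; };
static Vec3  add(Vec3 a, Vec3 b)    { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3  scale(Vec3 v, float s) { return {v.x * s, v.y * s, v.z * s}; }
static float dot(Vec3 a, Vec3 b)    { return a.x * b.x + a.y * b.y + a.z * b.z; }

struct Ray { Vec3 origin, dir; };    // dir is assumed normalized

Ray rayFromEye(Vec3 eye, Vec3 dir)
{
    return { eye, dir };             // can hit geometry arbitrarily close to the eye
}

Ray rayFromNearPlane(Vec3 eye, Vec3 dir, Vec3 viewForward, float nearDist)
{
    // Advance along the ray until it crosses the near plane, then start there.
    float t = nearDist / dot(dir, viewForward);
    return { add(eye, scale(dir, t)), dir };
}
```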
Also, around this same part of the video, he accidentally flies through a leaf, and a near clipping-plane is revealed. If he were using regular ray-tracing/ray-casting, there'd be no need for him to implement this clipping-plane, and when combined with his other statements, this implies the traversal/projection is based on a frustum, not individual rays. Also, unlike rasterized polygons, the plane doesn't make a clean cut through the geometry, telling us something about the voxel structure and the way the clipping tests are implemented.
I think you're understating the potential artifacts. In their demo, a single pixel could contain ground, thousands of clumps of grass, dozens of trees, and even a few spare elephants. How do you approximate a light value for all of that in a way that's good enough? We do approximations all the time in games, but we do that by throwing away perceptually unimportant details. The direction of a surface with respect to the light is something that can be approximated (e.g. normal-maps), but not if the surface is a chaotic mess. At best, your choice of normal would be arbitrary (say, up). But if they did that, you'd see noticeable lighting changes as the LoD reduces, whereas in the demo it's a continuous blend.
It's this kind of analysis / reverse-engineering that's been largely drowned out.

This doesn't mean it doesn't work, or isn't what they're doing; it just implies a big down-side (something Dell doesn't like talking about).
The latter algorithm works for unlit geometry simply because each cell in the hierarchy can store the average color of all of the (potentially millions of) voxels it contains. But add in lighting, and there's no simple way to precompute the lighting function for all of those contained voxels. They can all have normals in different directions - there's no guarantee they're even close to one another (imagine if the cell contained a sphere - it would have a normal in every direction). You also wouldn't be able to blend surface properties such as specularity.
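As a concrete illustration of why color averages up the hierarchy but lighting inputs don't, here's a rough sketch (made-up types, not UD's actual structure):

```cpp
// Sketch of the "average the children" LoD scheme described above. Color
// averages reasonably; a normal for the same cell does not: a cell containing
// a sphere has child normals pointing in every direction, so their sum nearly
// cancels and the "average normal" carries no useful lighting information.
#include <array>
#include <cmath>

struct Color  { float r, g, b; };
struct Normal { float x, y, z; };

struct Cell {
    Color  avgColor;                 // meaningful at any level of the hierarchy
    Normal avgNormal;                // NOT meaningful once the contents are chaotic
    std::array<Cell*, 8> children{}; // null for leaves
};

void computeAverages(Cell& cell)
{
    Color  c{0, 0, 0};
    Normal n{0, 0, 0};
    int count = 0;
    for (Cell* child : cell.children) {
        if (!child) continue;
        computeAverages(*child);
        c.r += child->avgColor.r;  c.g += child->avgColor.g;  c.b += child->avgColor.b;
        n.x += child->avgNormal.x; n.y += child->avgNormal.y; n.z += child->avgNormal.z;
        ++count;
    }
    if (count == 0) return;          // leaf: keep authored values
    cell.avgColor = { c.r / count, c.g / count, c.b / count };
    // For a sphere-like cell the summed normals nearly cancel out; whatever
    // direction survives normalization below is essentially arbitrary.
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    cell.avgNormal = (len > 1e-6f) ? Normal{ n.x / len, n.y / len, n.z / len }
                                   : Normal{ 0, 0, 0 };
}
```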
For example, in current games, we might bake a 1-million polygon model down to a 1000 polygon model. In doing so we bake all the missing details into texture maps. Every single low-poly triangle is textured with the data of 1000 high-poly triangles. Thanks to mip-mapping, if the model is far enough away that a low-poly triangle covers a single pixel, then the data from all 1000 of those high-poly triangles is averaged together. Yes, often this makes no sense, like you point out with normals and specularity, yet we do it anyway in current games. It causes artifacts for sure, but we still do it and so can Dell.
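For what it's worth, the averaging step itself is trivial -- each mip level is just a box-filtered copy of the level above it. A minimal sketch (illustrative names only, assuming even dimensions for brevity):

```cpp
// One step of mip-map generation: each texel in the smaller level is the
// average of a 2x2 block in the level above. Repeat enough times and all the
// high-poly detail baked into the texture collapses into a single averaged
// texel, as described above.
#include <vector>

struct Texel { float r, g, b; };

std::vector<Texel> downsample(const std::vector<Texel>& src, int w, int h)
{
    std::vector<Texel> dst((w / 2) * (h / 2));
    for (int y = 0; y < h / 2; ++y) {
        for (int x = 0; x < w / 2; ++x) {
            const Texel& a = src[(2 * y)     * w + 2 * x];
            const Texel& b = src[(2 * y)     * w + 2 * x + 1];
            const Texel& c = src[(2 * y + 1) * w + 2 * x];
            const Texel& d = src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * (w / 2) + x] = { (a.r + b.r + c.r + d.r) * 0.25f,
                                     (a.g + b.g + c.g + d.g) * 0.25f,
                                     (a.b + b.b + c.b + d.b) * 0.25f };
        }
    }
    return dst;
}
```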
That's not to say dynamic lighting can't be implemented, just that they haven't demonstrated it. Off-hand, if I were to attempt dynamic lighting for instanced voxels, I would probably approach it as a screen-space problem (a rough sketch follows the list). I.e.
- Render the scene, but output a depth value along with each color pixel.
- Generate surface normals using depth gradients from adjacent pixels (with some fudge-factor to eliminate silhouette discontinuities).
- Perform lighting in view-space, as with typical gbuffer techniques.
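In code, my reading of those three steps would look roughly like the following -- an illustrative sketch only, with made-up names, and certainly not anything UD has shown:

```cpp
// Rough sketch of the screen-space idea above, written as plain C++ over a
// depth buffer rather than as a shader. Assumes the buffer stores linear
// view-space depth; all names and parameters are illustrative.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };
static Vec3  sub(Vec3 a, Vec3 b)   { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static Vec3  cross(Vec3 a, Vec3 b) { return {a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x}; }
static float dot(Vec3 a, Vec3 b)   { return a.x*b.x + a.y*b.y + a.z*b.z; }
static Vec3  normalize(Vec3 v)     { float l = std::sqrt(dot(v, v)); return {v.x/l, v.y/l, v.z/l}; }

struct DepthBuffer {
    int w, h;
    std::vector<float> z;                 // 1) linear view-space depth per pixel
    float tanHalfFovX, tanHalfFovY;

    Vec3 viewPos(int x, int y) const {    // unproject a pixel back into view space
        float d  = z[y * w + x];
        float nx = (2.0f * (x + 0.5f) / w - 1.0f) * tanHalfFovX;
        float ny = (1.0f - 2.0f * (y + 0.5f) / h) * tanHalfFovY;
        return { nx * d, ny * d, -d };    // camera looks down -z
    }
};

// lightDirVS: direction from the surface toward the light, in view space.
float lambert(const DepthBuffer& db, int x, int y, Vec3 lightDirVS)
{
    // 2) Normal from depth gradients: cross the vertical and horizontal position
    //    deltas of adjacent pixels. A real version would also reject large depth
    //    jumps (the silhouette "fudge factor" above) and handle edge pixels.
    Vec3 p  = db.viewPos(x, y);
    Vec3 px = db.viewPos(std::min(x + 1, db.w - 1), y);
    Vec3 py = db.viewPos(x, std::min(y + 1, db.h - 1));
    Vec3 n  = normalize(cross(sub(py, p), sub(px, p)));  // ordering gives a camera-facing normal

    // 3) Light in view space, as with a typical g-buffer pass.
    return std::max(0.0f, dot(n, lightDirVS));
}
```

The obvious downside of this kind of reconstruction is that the normals are only as good as the depth gradients, so sub-pixel chaos like the grass/tree/elephant mess described earlier would still light noisily.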
But there's nothing in any of the demos to suggest they're doing this or any other form of dynamic lighting. I prefer to just take the simplest explanation: that his avoidance is intentional because he knows full well what the limitations of his technique are. They haven't shown anything that couldn't be baked-in, so I have no reason to believe they've done anything more complicated than that.