[Theory] Unraveling the Unlimited Detail plausibility

166 comments, last by Ben Bowen 11 years, 10 months ago
There is another interesting reveal on the following link at 2:18 that I haven't seen quoted before: -

http://www.youtube.c...feature=related

A short demo scene that contains a simple implementation of shadowing, hybrid rendering and arbitrary rotations on point cloud objects. This, I believe, hints at many of the features that people were concerned were missing from the technology demo.

  • The shadowing is very simple, with the appearance of a low-resolution shadow map, but the basics are there.
  • There is a mix of polygon objects and voxel objects in the scene. This hybrid rendering always seemed like the best solution for animation to me, just as Doom mixed sprites and polygons (1993, what a year that was!). Characters could be high-resolution, skeleton-animated poly models rendered on the graphics card and composited with the voxel output via the Z-buffer (see the sketch just after this list).
  • The tyre is apparently a point cloud object that is being rotated; assuming the objects are rendered in the same pass from the same camera angle, that would represent an arbitrary rotation.
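
To make the Z-buffer idea above concrete, here is a minimal sketch of depth compositing between a software point-cloud pass and a rasterized polygon pass. This is purely illustrative; nothing here comes from Euclideon's implementation, and all names are mine.

[code]
#include <cstdint>
#include <vector>

// Shared render target: the polygon pass fills colour and depth first,
// then point-cloud samples only overwrite pixels they are closer on.
struct Framebuffer {
    int width, height;
    std::vector<uint32_t> color;  // packed RGBA per pixel
    std::vector<float>    depth;  // view-space depth per pixel
};

// Write one point-cloud sample, respecting depth already written by the
// polygon rasterizer (or by earlier point samples).
inline void compositePointSample(Framebuffer& fb, int x, int y,
                                 float depth, uint32_t rgba)
{
    const size_t i = static_cast<size_t>(y) * fb.width + x;
    if (depth < fb.depth[i]) {    // closer than whatever is already there
        fb.depth[i] = depth;
        fb.color[i] = rgba;
    }
}
[/code]

The point is only that both passes agree on one depth value per pixel; whether the polygon depth is rendered in software or read back from the GPU is a separate question.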

There is apparently a second podcast with a further interview covering memory use and animation, so I have subscribed; the only way to assess the feasibility of this is to carefully dissect every crumb of cookie that we get.

I didn't want to interrupt the more interesting discussion on the integer traversal of the octree data structure, but it seems to have petered out just when it was getting interesting. I'll do a spot more study before I post on that, though.


1. Shadowing is not hard to do with SVOs, you can even have "perfect" shadows if you like... the problem is that it is very expensive.
2. Hybrid is also an obvious thing to do... but I'm not so sure that it is a good idea at all:
- The main draw of SVOs is that performance is primarily determined by the number of pixels, not by geometry complexity, while polygon performance is primarily determined by geometry complexity... mixing both means you suffer the drawbacks of both to some extent, which isn't ideal. And you may end up with hugely unpredictable performance as their individual coverage of the screen varies.
- SVOs and polygons are likely to have their own unique look, mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.

Please note, SVOs can in theory do pretty much everything triangles can and more; nobody is really disputing that as far as I know. A primary problem is performance: the demo they showed last time ran at 20FPS @ 1024x768 on a modern computer, without shading or any modern techniques at all. Now scale that to the common resolution of 1920x1080 and you are down to roughly 8FPS at best, and we are still not seeing any shadows, shading, rotation, lighting, heavy instancing, animation, etc. And let's not forget the ever-present, enormous memory issue.
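
As a rough sanity check on that scaling (assuming per-pixel traversal cost dominates, which is the premise above), the arithmetic is just a pixel-count ratio; the figures below are the ones quoted in this post:

[code]
// Back-of-envelope: if per-pixel cost dominates, frame rate scales
// inversely with pixel count. Figures are the ones quoted above.
constexpr double pixels768p  = 1024.0 * 768.0;    // ~0.79 Mpx
constexpr double pixels1080p = 1920.0 * 1080.0;   // ~2.07 Mpx
constexpr double fpsAt768p   = 20.0;
constexpr double fpsAt1080p  = fpsAt768p * (pixels768p / pixels1080p);  // ~7.6 FPS
[/code]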

Overall I'm inclined to think that UD/SVO is highly overrated... I'm not going to diss the Atomontage engine, it seems nice... but I find both their "visions" all too familiar from my own developer fantasies: the urge to find the perfect solution to every problem, and the idea that the best solution must somehow be the most generic solution you could ever think of. It's really hard to explain in practice... but to give you a picture, the answer to "how much does it hurt to get punched in the face?" is not to look up theories of subatomic particles, how they interact, their weight, how energy is transferred, what material is involved, etc... no, it's simply "pretty damn much, but it depends on how hard he hits you". That is, don't break a problem into the smallest possible components; keep it high-level and approximate.

And I feel confident that the same is true here: breaking the problem down into the smallest possible pieces (voxels) means you lose the ability to make optimizations, assumptions and clever tricks... you even, to some degree, lose the ability to have smooth surfaces. There are no longer triangles, nor surfaces, nor shapes, nor materials... it's all just individual voxels.



- SVOs and polygons are likely to have their own unique look, mixing the two seamlessly can be a truly daunting issue.
3. Arbitrary rotation is not hard to do with SVOs, but instancing of arbitrarily rotated, scaled, morphed and positioned objects is likely to add significant cost... something which UD doesn't currently show.

In my tests I just have a rotation matrix for the object's OBB, to get things back into an AABB frame. Assuming naive 3D DDA, you rasterize the box to the screen using the optimal rasterization algorithm, storing the screen-to-OBB-surface ray (that ray's magnitude is the depth from the screen to the surface of the OBB). Then transform the ray, along with the surface point, by the inverse rotation matrix. From there it's just a normal traversal of the SVO data, since everything is now expressed in the un-rotated space.
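
A minimal sketch of that inverse-rotation step, under my own assumptions about the data layout (the types and names below are mine, not taken from anyone's actual code):

[code]
// Rotate a world-space ray into an object's local (axis-aligned) space so a
// plain, un-rotated SVO traversal can be used. Illustrative sketch only.
struct Vec3 { float x, y, z; };
struct Mat3 { float m[3][3]; };            // rotation part of the OBB transform
struct Ray  { Vec3 origin, dir; };

inline Vec3 mul(const Mat3& r, const Vec3& v) {
    return { r.m[0][0]*v.x + r.m[0][1]*v.y + r.m[0][2]*v.z,
             r.m[1][0]*v.x + r.m[1][1]*v.y + r.m[1][2]*v.z,
             r.m[2][0]*v.x + r.m[2][1]*v.y + r.m[2][2]*v.z };
}

// For a pure rotation matrix, the transpose is the inverse.
inline Mat3 transpose(const Mat3& r) {
    Mat3 t;
    for (int i = 0; i < 3; ++i)
        for (int j = 0; j < 3; ++j)
            t.m[i][j] = r.m[j][i];
    return t;
}

// Express the screen-to-OBB-surface ray in the object's local space.
inline Ray toLocal(const Ray& world, const Mat3& objRotation, const Vec3& objCenter) {
    const Mat3 inv = transpose(objRotation);
    return { mul(inv, Vec3{ world.origin.x - objCenter.x,
                            world.origin.y - objCenter.y,
                            world.origin.z - objCenter.z }),
             mul(inv, world.dir) };        // directions are rotated only, never translated
}
[/code]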

Using the frustum rendering method it's even easier. For each OBB you just apply the inverse rotation matrix to the frustum planes around the object, and you're then looking at the object in its AABB state and can perform the culling and rendering as normal. Still costly.
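
And a matching sketch for the frustum-plane variant, reusing the Vec3/Mat3 helpers from the snippet above (again, illustrative names only):

[code]
// Transform a world-space frustum plane into the object's local space.
// A point p is "inside" when dot(normal, p) + d >= 0.
struct Plane { Vec3 normal; float d; };

inline Plane toLocalPlane(const Plane& world, const Mat3& objRotation, const Vec3& objCenter) {
    const Mat3 inv = transpose(objRotation);
    Plane local;
    local.normal = mul(inv, world.normal);
    // Re-express the plane offset relative to the object's centre:
    // dot(n, R*p_local + c) + d  ==  dot(R^T n, p_local) + (d + dot(n, c))
    local.d = world.d + world.normal.x * objCenter.x
                      + world.normal.y * objCenter.y
                      + world.normal.z * objCenter.z;
    return local;
}
[/code]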



Indeed, to clarify for others, this may sound simple and fast... and it is, relatively. But I'm certain that UD is as fast as it is today because there are no overlapping structures or special features, only a single large octree being traversed. That is incredibly cheap: you simply traverse the octree and test against rays. Add even the simplest of "features" to that and you'll likely find the cost per pixel skyrockets... even though some of the extra computation may be hidden behind memory latency.

And this is the neat thing with triangles: you can do some pretty costly stuff, because it's done per object, or per triangle, which are still comparatively large. SVOs are per-pixel, and thus there are likely few computations that can be shared between adjacent pixels... and when you are tracing millions of pixels per frame, even the simplest computations become massively costly.


Although a matrix of pixels (screen space) has Euclidean regularity, I don't like the idea of making everything else have Euclidean regularity as well (which has dead obvious benefits, but also dead obvious catches). That's probably why Euclideon is going for a hybrid rendering system now, right? I wonder if there's a good way to layer Bresenham-like algorithms (for instancing / tessellation composition) and access these plotting mechanisms fluidly from everywhere else within the engine.

For example: A NURBS surface to specify the curvature of a large terrain, and several layers (of what?) specifying the composition (grasses, dirt, rock etc. / uniqueness modifiers). The problem is integrating this irregular complexity into the whole pipeline, for processes such as hierarchical Bresenham-based beam-tracing.

I'm curious whether there's a good way to contain the essential geometry using regular definitions, combine and extend this description to form a variety of complexity and uniqueness, and then somehow efficiently traverse these structures *magically* and pick out exactly the information that is needed... uncertain magic.

By the way, I think the way Euclideon exhibited dirt particles in their demo is the worst part of all. Dirt is extremely small! Besides the moist globs and little bits of debris lying around in it, dirt is just dust! What they've got looks more like cobblestones. It would be cool if a game actually had extremely high-quality, realistic dirt, but Euclideon's dirt is much less realistic than even that of the games they criticized.

I really like self-shadowed terrain textures (look at the ground when it's close).
I apologize in advance, I'm not a programmer, but I have a question about something I don't really understand and that nobody really discusses.

I see everyone saying that file sizes would be enormous, that HDD space would never accommodate such high-volume datasets. My question is, why would I have to save the same atom a million times?

What I mean is this, in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately, you save it on the disc once, and reuse it when needed.

So, if I was to make a blade of grass out of a million atoms, all using the same colored atom, why do I need to save each atom separately? If each atom was colored the exact same green, using the color code 0001 for that specific shade of green, why do I need to save it for each point? Why wouldn't the object file just hold the color information for each point and while rendering, search for the color code and fill in the corresponding atom?

I don't understand why EACH COLORED ATOM must be stored with its own file size, especially when the color doesn't have to be assigned until rendering. It seems like a huge waste to load color information if it's not going to be used in the frame.
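
For what it's worth, the "palette" idea being described could be stored roughly like the sketch below. This is only an illustration of the memory saving in question, under my own assumptions; it is not how Euclideon's format actually works.

[code]
#include <cstdint>
#include <vector>

// Store each full colour once, and give every atom only a small index into
// that palette, instead of repeating the colour for millions of points.
struct Palette {
    std::vector<uint32_t> colors;   // e.g. colors[1] holds that exact shade of green
};

struct Atom {
    uint16_t x, y, z;               // quantised position inside the object's bounds
    uint16_t colorId;               // index into Palette::colors, e.g. 0x0001
};

// The actual colour is only looked up at render time.
inline uint32_t atomColor(const Atom& a, const Palette& p) {
    return p.colors[a.colorId];
}
[/code]

Even with a palette, though, each atom still needs its position stored, which is where the bulk of the data comes from.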

Also, in this interview, http://www.3d-test.com/interviews/unlimited_detail_technology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of their polygon format.

So if you made a 1,000,000 polygon asset and converted it, it would be roughly 80,000 polygons. That's an infinitely smaller file size. 80k polygon objects are like 500kb. Not Megs or Gigs or Terabytes, just Kilobytes. Even if you doubled the file size to accommodate all the color information instead of just X,Y,Z coordinates, it's still an extremely small file.

Again, if this is wrong, I do apologize for typing so much. Thing is, I've always viewed the Unlimited Engine as Pointillism Painting, with the Search Engine or Look-Up Table as the Palette. And to me, an artist would never wipe his brush or clean his palette every single time he painted a separate dot. He would just fill in as many dots as possible with that color.
My question is, why would I have to save the same atom a million times?
What I mean is this, in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately, you save it on the disc once, and reuse it when needed.
This is what they're doing, and it's why their demo is made up of the same dozen rocks/props repeated a billion times.


Also, in this interview, http://www.3d-test.c...echnology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of their polygon format.
So if you made a 1,000,000 polygon asset and converted it, it would be roughly 80,000 polygons. That's an infinitely smaller file size. 80k polygon objects are like 500kb. Not Megs or Gigs or Terabytes, just Kilobytes. Even if you doubled the file size to accommodate all the color information instead of just X,Y,Z coordinates, it's still an extremely small file.
If you want to perform a real comparison there, you'd have to include the standard option, which is to convert the 1,000,000 polygon asset into an 80,000 polygon asset + a normal map.


I don't really get the Normal Map example. I was referring to the conversion process as plain assets. He explained that if you used the "Unlimited Point Cloud Format", file sizes would be roughly 8% of their polygonal size. It didn't specify the inclusion of maps, and I'm actually curious whether developers would continue down that route.

Maybe this isn't possible but couldn't a Paint Program allow you to paint directly onto Point Cloud geometry? If you could bypass the entire process of having textures altogether, simply layer paint your art asset, you'd replace textures with individually colored atoms. The approach could be something like PolyPainting in ZBrush, just without the initial UV part in the beginning.

Again, I'm not a programmer, so I don't know how all the behind the scenes things work on GPU's and stuff. But since the empty space between polygon points would now be replaced with atoms, wouldn't the need for textures be replaced as well?


Yes, the surface textures could be replaced by atoms - many, many atoms. :-) He was setting up a straw-man: it may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to use standard techniques such as coloring, displacement mapping or normal mapping on top of this. You can of course represent the surface detail achieved by these techniques with these "atoms" directly, but then the atom count explodes, and the result is likely a far less efficient representation, not a more efficient one.
I don't really get the Normal Map example. I was referring to the conversion process as plain assets.
Ok, the problem with that comparison is that you're ignoring the asset conversion processes that are used in games.
The two asset processes that you compared were:

  Author highly detailed, film-quality model ------------------------------> Rendered by game
  Author highly detailed, film-quality model ------> Generate Atoms --------> Rendered by game

...but to be fair, you should actually use the asset conversion processes that are currently used by "polygonal games", which would look more like:

  Author highly detailed, film-quality model --> Generate LOD and bake maps --> Rendered by game
  Author highly detailed, film-quality model --> Generate Atoms -------------> Rendered by game

In this comparison, both data sets (the atoms or the LOD/maps) will be a small percentage of the original size.

It's also important to note that there's not that much difference between the above two asset pipelines, and in reality, the "generate atoms" part is pretty much the same as the "bake maps" part, but it's baking a 3D map of some sort. N.B. that if textures can be replaced by creating one atom per pixel in the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.
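
To put a rough (and entirely made-up, but plausible) number on "storing the same data in the end": suppose a single 2048x2048 colour texture is replaced by one atom per texel.

[code]
// Rough, illustrative comparison only -- the figures are assumptions, not
// measurements from any engine.
constexpr long long texels       = 2048LL * 2048LL;                              // ~4.2 million
constexpr long long textureBytes = texels * 4;                                   // RGBA8 texture: ~16 MB
constexpr long long atomBytes    = texels * (4 /*colour*/ + 6 /*quantised xyz*/); // ~40 MB
// Same order of magnitude either way: the detail still has to live somewhere.
[/code]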

Games already do use 3D/volumetric rendering (e.g. point clouds) for certain objects when appropriate, and existing asset pipelines do already use "atom->polygon" and "polygon->atom" conversion where appropriate.
Maybe this isn't possible but couldn't a Paint Program allow you to paint directly onto Point Cloud geometry? If you could bypass the entire process of having textures altogether, simply layer paint your art asset
Yes, this already exists for both polygonal and voxel art creation programs.

Take note though, that the processes used when authoring game art and the processes used when rendering the game, don't have to be the same.
These kinds of "unlimited detail", no-hassle (e.g. no textures) processes are already in use in a lot of art creation tools.
If a current engine then requires certain technical details, such as UV coordinates, or texture-maps (e.g. because it happens that the most efficient GPU implementation just works that way), then the game engine's asset import pipeline can automatically perform the required conversions.


If/when we switch over to mainly using point rendering in the game, it's not going to be a huge overhaul of the art pipeline, nor will it wildly enable more freedom for artists. That switchover will be just another technical detail, such as whether your vertices are in FP16 or FP32 format...
Artists already have this freedom in their tools, assuming their game engine supports their chosen art tools. So the game engine and/or the artists can independently choose whether to use polygons or atoms, whether to use texture-maps or an automatic solution, etc, etc... The engine's asset import pipeline in the middle takes care of the details. This is already the way things are.
