[Theory] Unraveling the Unlimited Detail plausibility
Started by Bombshell93

Syranide    375
[quote name='Sirisian' timestamp='1314558381' post='4854793']
In my tests I just have a rotation matrix for the OBB for the object to get things into an AABB perspective. Assuming naive 3DDDA you just rasterize the box to the screen using the optimal rasterization algorithm storing the screen to OBB surface ray. (That ray's magnitude is the depth from the screen to the surface on the OBB). Then transform the ray by the inverse rotation matrix along with the surface point. Then it's just a normal traversal on SVO data since everything is not rotated.

Using the frustum rendering method it's even easier. For each OBB you just apply the inverse rotation matrix for the frustum planes around the object and you're now looking at the object in AABB state and can perform the culling and rendering. Still costly.
[/quote]
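
In code, the inverse-rotation step described there might look roughly like this (a minimal sketch using GLM; the Ray struct and function names are illustrative, not Sirisian's actual implementation):

[code]
#include <glm/glm.hpp>

struct Ray {
    glm::vec3 origin;
    glm::vec3 dir; // scaled so its magnitude carries the screen-to-surface depth
};

// Transform a world-space ray into the OBB's local space, where the box
// (and the SVO stored inside it) is axis-aligned and can be traversed normally.
Ray worldToLocal(const Ray& r, const glm::mat3& obbRotation, const glm::vec3& obbCenter)
{
    glm::mat3 inv = glm::transpose(obbRotation); // inverse of a pure rotation
    Ray local;
    local.origin = inv * (r.origin - obbCenter); // points are rotated and translated
    local.dir    = inv * r.dir;                  // directions are only rotated
    return local;
}
[/code]

The same trick works for the frustum variant: rotate the frustum planes by the inverse matrix instead of the ray.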

Indeed, to clarify for others, this may sound simple and fast... and it is, relatively. But I'm certain that UD is as fast as it is today because there are no overlapping structures or special features, but rather only a single large octree being traversed. That is incredibly cheap: you simply traverse the octree and test against rays. Add even the simplest of "features" to that and you'll likely find the cost per pixel skyrockets... even though some of that cost may be hidden by the memory latency the traversal already pays.

And this is the neat thing with triangles: you can afford some pretty costly work, because it's done per object, or per triangle, which is still coarse. SVOs are per-pixel, and thus few computations can be shared between adjacent pixels... and when you are tracing millions of pixels per frame, even the simplest per-pixel computation becomes massively costly.
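
To make that per-pixel cost concrete, here is a deliberately naive sketch of what every pixel's ray has to do against an SVO (the node layout and names are my assumptions; real traversals visit children front-to-back and use far more tricks):

[code]
#include <glm/glm.hpp>
#include <algorithm>
#include <cstdint>

struct Node {
    uint8_t  childMask = 0;     // bit i set => child i exists
    Node*    children[8] = {};  // null where space is empty
    uint32_t color = 0;         // leaf color (or a filtered average for LOD)
};

// Standard ray/AABB slab test (assumes no zero components in the direction).
bool hitBox(const glm::vec3& o, const glm::vec3& invDir,
            const glm::vec3& lo, const glm::vec3& hi)
{
    glm::vec3 t0 = (lo - o) * invDir;
    glm::vec3 t1 = (hi - o) * invDir;
    glm::vec3 tmin = glm::min(t0, t1), tmax = glm::max(t0, t1);
    float tNear = std::max({tmin.x, tmin.y, tmin.z});
    float tFar  = std::min({tmax.x, tmax.y, tmax.z});
    return tNear <= tFar && tFar >= 0.0f;
}

// Naive traversal: recurse into every child box the ray touches.
// Every pixel on screen runs something like this, which is the cost above.
bool trace(const Node* n, const glm::vec3& lo, const glm::vec3& hi,
           const glm::vec3& o, const glm::vec3& invDir, uint32_t& outColor)
{
    if (!n || !hitBox(o, invDir, lo, hi)) return false;
    if (n->childMask == 0) { outColor = n->color; return true; } // solid leaf
    glm::vec3 mid = 0.5f * (lo + hi);
    for (int i = 0; i < 8; ++i) {          // real code sorts children front-to-back
        if (!(n->childMask & (1 << i))) continue;
        glm::vec3 clo(i & 1 ? mid.x : lo.x, i & 2 ? mid.y : lo.y, i & 4 ? mid.z : lo.z);
        glm::vec3 chi(i & 1 ? hi.x : mid.x, i & 2 ? hi.y : mid.y, i & 4 ? hi.z : mid.z);
        if (trace(n->children[i], clo, chi, o, invDir, outColor)) return true;
    }
    return false;
}
[/code]

Even this bare version does a dozen-plus arithmetic operations per visited node, per pixel; anything you bolt on multiplies across millions of rays.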

Ben Bowen    115
Although a matrix of pixels (screen-space) has Euclidean regularity, I don't like the idea of making everything else have Euclidean regularity as well (which has dead-obvious benefits, but also dead-obvious catches). That's probably why Euclideon is going for a hybrid rendering system now, right? I wonder if there's a good way to layer Bresenham-like algorithms (for instancing / tessellation composition) and access these plotting mechanisms fluidly from everywhere else within the engine.

For example: A NURBS surface to specify the curvature of a large terrain, and several layers (of what?) specifying the composition (grasses, dirt, rock etc. / uniqueness modifiers). The problem is integrating this irregular complexity into the whole pipeline, for processes such as hierarchical Bresenham-based beam-tracing.

I'm curious if there's a good way to contain essential geometry using regular definitions, combine & extend this description to form a variety of complexity and uniqueness, and then somehow efficiently traverse these structures *magically* and pick out just exactly the information that is needed... uncertain magic.

By the way, I think the way Euclideon exhibited dirt particles in their demo is the worst part of all. Dirt is extremely small! Besides the moist globs and little debris lying around in it, dirt is just dust! They've got what looks to be cobblestones. It would be cool if a game actually had extremely high-quality, realistic dirt, but Euclideon's dirt is much less realistic than even the games they criticized.

I really like self-shadowed terrain textures (look at the ground when it's close):
[media]http://www.youtube.com/watch?v=OSyW6zbw6eU[/media]
[media]http://www.youtube.com/watch?v=aKvsd7I4VQA[/media]

Outthink The Room
I apologize in advance: I'm not a programmer, but I have a question about something I don't really understand, and nobody really discusses it.

I see everyone saying that file sizes would be enormous, that HDD space would never accommodate such high-volume datasets. My question is: why would I have to save the same atom a million times?

What I mean is this: in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately; you save it on the disc once and reuse it when needed.

So, if I were to make a blade of grass out of a million atoms, all using the same colored atom, why do I need to save each atom separately? If each atom was colored the exact same green, using the color code 0001 for that specific shade of green, why do I need to save it for each point? Why wouldn't the object file just hold the color information once per color and, while rendering, look up the color code and fill in the corresponding atoms?

I don't understand why EACH COLORED ATOM must be saved individually, each contributing its own bit of file size. Especially when the color doesn't have to be assigned until rendering. It seems like a huge waste to load color information if it's not going to be used in the frame.

Also, in this interview, http://www.3d-test.com/interviews/unlimited_detail_technology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of the size of their polygon versions.

So if you made a 1,000,000-polygon asset and converted it, it would be roughly the size of an 80,000-polygon asset. That's a drastically smaller file. 80k-polygon objects are around 500 KB. Not megs or gigs or terabytes, just kilobytes. Even if you doubled the file size to accommodate all the color information on top of the X,Y,Z coordinates, it's still an extremely small file.

Again, if this is wrong, I do apologize for typing so much. Thing is, I've always viewed the Unlimited Engine as Pointillism Painting, with the Search Engine or Look-Up Table as the Palette. And to me, an artist would never wipe his brush or clean his palette every single time he painted a separate dot. He would just fill in as many dots as possible with that color.
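
For what it's worth, the palette idea described above is a real compression technique. A minimal sketch of the layout (the names are illustrative):

[code]
#include <cstdint>
#include <vector>

// Each distinct color is stored once; every atom carries only a small index.
struct PalettedPointCloud {
    std::vector<uint32_t> palette;     // e.g. palette[1] = that exact shade of green
    std::vector<uint16_t> colorIndex;  // one 2-byte index per atom
    std::vector<float>    xyz;         // 3 floats (12 bytes) per atom
};
[/code]

Note where the bytes actually go, though: a million atoms at 12 bytes of position each is ~12 MB even when the color index is only 2 bytes, so sharing colors alone doesn't make the dataset small; the positions (or the octree that encodes them) dominate.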

Hodgman    51234
[quote name='Outthink The Room' timestamp='1336319745' post='4937797']My question is, why would I have to save the same atom a million times?
What I mean is this, in a game, a duplicated rock utilizes the same texture(s). You don't have to save the same texture a thousand times separately, you save it on the disc once, and reuse it when needed.[/quote]This is what they're doing, and it's why their demo is made up of the same dozen rocks/props repeated a billion times.

[quote name='Outthink The Room' timestamp='1336319745' post='4937797']
Also, in this interview, http://www.3d-test.c...echnology_1.htm which was done in May of 2010, he clearly states that objects converted to their format are roughly 8% of their polygon format.
So if you made a 1,000,000 polygon asset and converted it, it would be roughly 80,000 polygons. That's an infinitely smaller file size. 80k polygon objects are like 500kb. Not Megs or Gigs or Terabytes, just Kilobytes. Even if you doubled the file size to accommodate all the color information instead of just X,Y,Z coordinates, it's still an extremely small file.[/quote]If you want to perform a real comparison there, you'd have to include the standard option, which is to convert the 1,000,000 polygon asset into an 80,000 polygon asset + a normal map.

Outthink The Room
[quote name='Hodgman' timestamp='1336324963' post='4937815']
If you want to perform a real comparison there, you'd have to include the standard option, which is to convert the 1,000,000 polygon asset into an 80,000 polygon asset + a normal map.
[/quote]

I don't really get the normal-map example. I was referring to the conversion process as plain assets. He explained that if you used the "Unlimited Point Cloud Format", file sizes would be 8% of the polygonal size. It didn't specify the inclusion of maps, and I'm actually curious whether developers would continue down that route.

Maybe this isn't possible, but couldn't a paint program allow you to paint directly onto point-cloud geometry? If you could bypass the entire process of having textures altogether and simply layer-paint your art asset, you'd replace textures with individually colored atoms. The approach could be something like PolyPainting in ZBrush, just without the initial UV step.

Again, I'm not a programmer, so I don't know how all the behind-the-scenes things work on GPUs and such. But since the empty space between polygon points would now be filled with atoms, wouldn't the need for textures be replaced as well?

Crowley99    194
[quote name='Outthink The Room' timestamp='1336334674' post='4937858']
But since the empty space between polygon points would now be replaced with atoms, wouldn't the need for textures be replaced as well?
[/quote]

Yes, the surface textures could be replaced by atoms - many, many atoms. :-) He was setting up a straw man: it may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to apply standard techniques such as coloring, displacement mapping, or normal mapping on top of this. You can of course represent the surface detail achieved by these techniques with "atoms" directly, but then the atom count explodes, and the representation likely becomes far less efficient, not more.

Hodgman    51234
[quote name='Outthink The Room' timestamp='1336334674' post='4937858']I don't really get the Normal Map example. I was referring to the conversion process as plain assets.[/quote]Ok, the problem with that comparison is that you're ignoring the asset conversion processes that are used in games.
The two asset processes that you compared were:
[code]
Author highly detailed, film-quality model ---------------------------- Rendered by game
                             \_________ Generate Atoms _________/
[/code]
...but to be fair, you should actually use the asset conversion processes that are currently used by "polygonal games", which would look more like:
[code]
                              _________ Generate LOD and bake maps _________
                             /                                              \
Author highly detailed, film-quality model  - - - - - - - - - - - -  Rendered by game
                             \_____________ Generate Atoms ________________/
[/code]
In this comparison, both data sets ([i]the atoms or the LOD/maps[/i]) will be a small percentage of the original size.

It's also important to note that there's not that much difference between the above two asset pipelines, and in reality, the "generate atoms" part is pretty much the same as the "bake maps" part, but it's baking a 3D map of some sort. N.B. that if textures can be replaced by creating one atom per pixel in the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.
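
A rough back-of-envelope comparison (my illustrative numbers, not measured figures):

[code]
// Texture path: 2048 x 2048 RGBA8 texture  = 4M texels * 4 B   = 16 MB
// Atom path:    one atom per texel         = 4M atoms
//               color alone (RGBA8)        = 4M * 4 B          = 16 MB
//               plus explicit positions    = 4M * 12 B         = 48 MB more
// Same color data either way; the atoms only break even if their positions
// are implicit in the structure (e.g. encoded by the octree itself).
[/code]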

Games already [i]do[/i] use 3D/volumetric rendering ([i]e.g. point clouds[/i]) for certain objects when appropriate, and existing asset pipelines do already use "[i]atom->polygon[/i]" and "[i]polygon->atom[/i]" conversion where appropriate.
[quote name='Outthink The Room' timestamp='1336334674' post='4937858']Maybe this isn't possible but couldn't a Paint Program allow you to paint directly onto Point Cloud geometry? If you could bypass the entire process of having textures altogether, simply layer paint your art asset[/quote]Yes, this already exists for both polygonal and voxel art creation programs.

Take note though, that the processes used when [i]authoring game art[/i] and the processes used when [i]rendering the game[/i], don't have to be the same.
These kinds of "unlimited detail", no-hassle (e.g. texture-free) processes [b]are[/b] already in use in a lot of art creation tools.
If a current engine then requires certain technical details, such as UV coordinates, or texture-maps ([i]e.g. because it happens that the most efficient GPU implementation just works that way[/i]), then the game engine's asset import pipeline can automatically perform the required conversions.


If/when we switch over to mainly using [i]point-rendering in the game[/i], it's not going to be a huge overhaul of the art pipeline, nor will it wildly enable more freedom for artists. That switchover will be just another technical detail, such as whether your vertices are FP16 or FP32 format...
Artists already have this freedom in their tools, assuming their game engine supports their chosen art tools. So the game engine and/or the artists can independently choose whether to use polygons or atoms, whether to use texture-maps or an automatic solution, etc, etc... The engine's asset import pipeline in the middle takes care of the details. This is already the way things are. Edited by Hodgman

Kyall    287
It's perfectly plausible. That's the topic answered. It just has drawbacks in some areas and bonuses in others.

I've been thinking every now and again about how I would engineer some tech to match what Euclideon has, and my algorithm so far is:

1. Store the scene as a sparse octree with duplicates in zones removed.
1.a Any place that is empty is empty of data, it has no 'transparent' voxel representation
1.b The scene is a 'box' subdivided into 8 boxes and then those boxes are sub-divided till we get down to the voxel level.
1.c Each branch along the tree has a color entry that represents the averaged color of the leaves in that section of the tree.
1.d If the color of a branch is the same as the color of all leaves under it, then that branch is set to that color and all leaves are removed. This also means that a cube which is not completely solid and uniformly colored is not collapsed into the branch, because the empty slots in the tree that make up the shape of an object still count. Maybe just ignore this last bit since it might cause problems.

2. Move along the tree and render out boxes that describe the branches in the tree. The size of each box (so, the depth it goes into the tree) is related to the distance of that part of the tree from the camera. So objects closer up are described by more boxes, and objects further away by fewer boxes.

3. Now that we have a general idea of the depth of the scene, we use that depth to limit the queries we use to find the actual voxels that make up the scene. The further away a node is from the camera the less we care about actually getting the right value for that part of the screen.

No. 3 is what I'm having problems thinking up. Right now it'll basically be a ray trace whose only saving grace is that the number of items it needs to test against has been greatly reduced by the pre-render depth step. Maybe that'll be fast enough to work. Maybe it won't. Haven't had the time to try this and see if it'd work. Edited by Kyall
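
Step 1 of that scheme might be sketched like this (illustrative names; one possible way to implement the averaging/collapsing Kyall describes, certainly not Euclideon's code):

[code]
#include <array>
#include <cstdint>
#include <memory>

struct OctNode {
    uint32_t avgColor = 0;                        // 1.c: filtered color of this branch
    std::array<std::unique_ptr<OctNode>, 8> kids; // 1.a: null slot = empty space

    bool isLeaf() const {
        for (const auto& k : kids) if (k) return false;
        return true;
    }
};

// 1.d: if a branch is fully solid and every child is a leaf with the branch's
// own color, drop the children and keep just the branch color. A branch with
// empty slots is never collapsed, since those holes define the object's shape.
void collapse(OctNode& n)
{
    bool collapsible = true;
    for (auto& k : n.kids) {
        if (!k) { collapsible = false; continue; }
        collapse(*k);
        if (!k->isLeaf() || k->avgColor != n.avgColor) collapsible = false;
    }
    if (collapsible)
        for (auto& k : n.kids) k.reset();
}
[/code]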

Outthink The Room
[quote name='Crowley99' timestamp='1336370065' post='4937979']
it may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to use standard techniques such as coloring, displacement mapping or normal mapping, on top of this - you can of course represent the surface detail achieved by these techniques with these "atoms" directly, but then the atom count explodes, and is likely far less efficient of a representation, not more.
[/quote]

[quote name='Hodgman' timestamp='1336378968' post='4938012']
N.B. that if textures can be replaced by creating one atom per pixel in the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.
[/quote]

In regards to both of these statements, wouldn't the conversion rate of 64 atoms per cubic millimeter take care of that? The atom count would be capped by the conversion, as would the rate at which color data is distributed.

Until you guys said these things, I never really fully understood what he was talking about in reference to those numbers. It never dawned on me that he *could* have meant textured surfaces and might have figured out a way to represent color data from "converted textured assets".

Also, while doing more research about point clouds once you guys responded, I came across PhotoSynth. Microsoft made a product where you actually do the opposite: it converts high-res pictures into point-cloud models. The results are actually quite staggering. I don't know the conversion rate for PhotoSynth, but it seems like a technique similar to that would be ideal for the Unlimited Engine.

Hodgman    51234
[quote name='Outthink The Room' timestamp='1336474524' post='4938336']Also, while doing more research about Point Cloud once you guys responded, I came across PhotoSynth. Microsoft made a product where you actually do the opposite. It converts high res pictures, into Point Cloud Models. The results are actually quite staggering. I don't know the conversion rate for PhotoSynth, but it seems like a technique similar to that would be ideal for the Unlimited Engine[/quote]Yep, again though, note that this is already in practice. In one project that I worked on, we wanted to base a level off of a real location that we had a video of -- so we extracted all the frames from the video as separate "photographs", fed them through an app like PhotoSynth, and got a point-cloud of that location. We then cleaned up the data-set, built LODs from it and used it in the game.

Punika    242
A little bit off-topic, but every now and then someone says it has been done before... and I don't think so.

I mean, "some but not everyone"* thinks it is a sparse octree with raycasting... Then I'd like to see a demo that produces 30 FPS at 1024x768 with such a deep octree level. I have seen none running on a CPU, which is what they claim to use...


* Fixed that :) Edited by Punika

Ben Bowen    115
[quote]I mean, everybody thinks it is a Sparse Octree with Raycasting[/quote]
Wait. Have you even read the topic? Only some (but very popular, you know who ;) ) people have spread such an assumption.

Pottuvoi    268
One thing I would love to see in their next demo is a perfect quad, around 1 km in size, tilted on every axis while keeping the spatial resolution.
If the memory/disk consumption of this object is still 8% of its polygonal counterpart, I'm impressed.

I'm pretty sure that he also said their engine didn't have LoD.
No LoD on geometry or on surface color/normal information is a sure way to get aliasing hell.
We prefilter our textures for a reason, and we would prefilter geometry if the representation allowed it (like voxels do). Edited by Pottuvoi
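
To put rough numbers on that quad (my arithmetic, using the "64 atoms per cubic millimeter" figure quoted earlier in the thread, i.e. 4 atoms per millimeter per axis):

[code]
// 1 km              = 1,000,000 mm  =>  4,000,000 atoms along each edge
// flat 1 km quad    = 4e6 * 4e6     =   1.6e13 surface atoms
// at 4 bytes/atom   ~  64 terabytes before any hierarchical compression
// The polygonal version: 4 vertices plus a tiling texture, a few kilobytes.
[/code]

Deduplication and octree compression claw a lot of that back, but an "8% of the polygon asset" figure looks hard to sustain for a tilted, unique surface like this.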

pinknation    126
Try thinking more outside the box, guys. Unlimited Detail isn't something that Bruce Dell found in some PDF file on rendering unlimited detail. He invented it, basically going against all other current techniques for rendering point data.

My gist of all this is that the entire world (not just the individual models and objects in it) is sliced into 2D layers and converted into Cartesian coordinates; at run-time these coordinates are then almost magically reverse-transformed to their proper screen-space coordinates, after already being sorted by some unknown function.

Don't like that approach? Well, octrees have not been able to yield those results in real-time either. So let's cut down the number of cycles and the complicated maths and keep it simpler; that's the only way he could process that much information in such a small amount of time. Accept it and work towards that; computational theory.

I'd also say that ray-tracing is really not the answer here, as he himself states. Thinking outside the box might look like this, building on the Cartesian idea: the screen-space may be nothing more than some normals, and just as when writing a shader for reflection and refraction you're only moving a few pixels in the direction of a normal, we could transform our Cartesian coordinates with some dot product against the screen-space normal. What might happen then? A magical reverse transformation of the exact "atom" we need for that point on the screen, without a lot of cycles or math.

This guy's been developing this thing for a long time, and deserves more respect for not having stuck to standards and not simply accepting the PDFs or tutorials he found on GameDev as the final word.

He went above and beyond. I say stop fighting it and embrace it; just because he hasn't decided to populate it yet does not mean that it isn't there. Give him time to perfect it and to make affiliations with physics companies, who can then compute on the GPU while all graphics processing is done on the CPU. This kind of co-processing is what is going to make next-gen games next-gen. Be patient.

alh420    5995
As long as he doesn't tell anyone how it's done, it's utterly useless to anyone.

Skeptical people don't believe something just because someone comes along claiming to have done it. They want to see it demonstrated, and will immediately ask "how?".

If you claim something extraordinary, you should expect a barrage of tough questions.

And if you answer those questions with snake-oil claims that it will solve all your problems, and put silly words like "unlimited" into it, then you can expect not to be taken very seriously.

Krypt0n    4721
There are some papers about perfect hashing of UV sets, mapping voxel positions to texture coordinates etc. (as you'd otherwise have quite a lot of data).

In theory, you could render a box and, based on the UV of a particular face plus the view direction, address a voxel using this kind of perfect hashing. It would of course consume an extreme amount of memory, but it really would allow constant-time voxel lookup.
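
A minimal sketch of what such a constant-time lookup could look like, in the style of perfect spatial hashing (Lefebvre & Hoppe; the paper ProfL links below). Table construction, which is the hard part, is omitted, and all names here are illustrative:

[code]
#include <glm/glm.hpp>
#include <cstdint>
#include <vector>

struct PerfectHash {
    glm::ivec3 hashDim, offsetDim;       // dimensions of the two 3D tables
    std::vector<glm::ivec3> offsets;     // small offset table, built offline
    std::vector<uint32_t>   data;        // packed voxel attributes (e.g. colors)

    static int index(glm::ivec3 p, glm::ivec3 dim) {
        glm::ivec3 m = ((p % dim) + dim) % dim;  // positive wrap-around
        return (m.z * dim.y + m.y) * dim.x + m.x;
    }

    // h(p) = (p mod m) + offset[p mod r]: two table reads, constant time.
    uint32_t lookup(glm::ivec3 p) const {
        glm::ivec3 q = p + offsets[index(p, offsetDim)];
        return data[index(q, hashDim)];
    }
};
[/code]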

ProfL    701
There are some interesting papers about this, indeed.
The best I've found so far: http://research.microsoft.com/en-us/um/people/hoppe/perfecthash.pdf

FreneticPonE    3294
I've been thinking about this, and despite all the talk of how parallel this stuff could get, you'd first need to solve the problem of storing hi-res point clouds. Modern games take more than enough room already, without the terabytes and zettabytes you could get into with point-cloud data.

But of course there's been talk of procedurally building/rendering stuff (whatever you want to call it: not directly artist-authored). And to my thinking, the best-looking procedurally rendered stuff today is also the stuff that is hardest, or impossible, to render in games: small repetitive details.

So, assuming that some sort of "sparse voxel octree/point cloud dark magic" could be rendered well enough in parallel, what would people think of using it to render individual hairs/leaves/blades of grass/etc.? All of these would take way too many polys to render as plain geometry, would take almost no storage, and wouldn't really suffer from the usual procedural problem of looking highly repetitive. I mean, one strand of hair on someone's head looks like any other strand of hair on that same person's head, after all.
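
As a sketch of how cheap the storage side of that idea is (illustrative names; one authored blade or strand, many lightweight instances):

[code]
#include <glm/glm.hpp>
#include <random>
#include <vector>

// One authored strand/blade, instanced many times with cheap per-instance
// variation: the storage is one model plus a small transform per instance.
struct Instance { glm::vec3 pos; float yaw, scale; };

std::vector<Instance> scatterGrass(int count, float fieldSize, unsigned seed)
{
    std::mt19937 rng(seed);
    std::uniform_real_distribution<float> u(0.0f, 1.0f);
    std::vector<Instance> out;
    out.reserve(count);
    for (int i = 0; i < count; ++i) {
        out.push_back({
            { u(rng) * fieldSize, 0.0f, u(rng) * fieldSize }, // position on field
            u(rng) * 6.2831853f,                              // random heading
            0.8f + 0.4f * u(rng)                              // slight size variation
        });
    }
    return out;
}
// A million blades cost ~20 bytes each here (~20 MB of instances),
// while the detailed blade geometry/point cloud is stored only once.
[/code]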

Ben Bowen    115
@Frenetic Pony
[quote](whatever you want to call it, not directly artist authored)[/quote]

The essence of procedural generation has no fundamental relation to artists' involvement.

[quote]So, assuming then that some sort of "sparse voxel octree/point cloud dark magic" could be rendered well enough[/quote]

By rudimentary function, Unlimited Detail seems to render quite well. The problem is that these systems lack explicit mechanisms for any procedural abstraction. In this sense, Unlimited Detail is surely limited. Polygons win because they are so malleable and their limits sit much further out, which spares them both of the most commonly cited issues of voxels: non-atomic dynamics (animation) and definition (impractical memory consumption for game maps etc.). For example, in animation, rather than applying a transformation to a set of geometric vertices (which mostly just define the spatial character of a model, and little else), you must apply it to the entirety of a Euclidean-regular volume, which requires manipulating a comparably greater mass of "points" than the polygonal geometry.

[quote]what would people think of using such to render individual hairs/leaves/blades of grass/etc. All of which would take way too many polys to actually render just using geometry, wouldn't take almost any storage[/quote]

Okay. A rule of thumb (which I hope Euclideon follows): with large masses of data, it's best to take advantage of loops, recursion, hierarchy and all such forms of procedure ( [b]procedural![/b] :D ), and to avoid algorithmic approaches that would otherwise target atomic manipulation (SIMD's envy).

[quote]I mean, one strand of hair on someone's head looks like any other strand of hair on that same persons head after all.[/quote]

Problem: each individual hair still has an elusive curvature. Edited by Reflexus
