[Theory] Unraveling the plausibility of Unlimited Detail

166 comments, last by Ben Bowen 11 years, 10 months ago

How this relates to point-cloud rendering, I have no idea. I assume they found an interesting algorithm that might be different from the normal raycasting/3DDDA stuff.


There's an interview with the dude on Kotaku that goes more in depth, though still not that in depth. Really, what it came down to was that they just found a better way to organize their point-cloud data that made it more easily traversed, as best I can tell.
You're very confused.


[quote]The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface exists, there's no point in generating a normal that's any more complicated than just using the surface.[/quote]


You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need the surface normal (and binormal); how else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point, but that is not practical at runtime. They may derive the surface normals, but then they store them.

I suggest you do a survey of voxel renderers in the field instead of talking out of your ass. They all either store the normal or they do away with shading completely (which we cannot do in games).
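(For reference, the gradient computation I mean is just a central difference over the volume. A minimal sketch, with a toy sphere density field standing in for a real volume; the six extra field samples per shaded point, every frame, are exactly the cost that makes it impractical, which is why it gets done offline and stored.)

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Stand-in density field: a solid sphere of radius 10 at the origin.
// Any scalar field sampled at voxel coordinates works the same way.
static float density(float x, float y, float z)
{
    return 10.0f - std::sqrt(x * x + y * y + z * z);
}

// Central-difference gradient of the density field, normalized and oriented
// so it points out of the surface; this is the "computed" surface normal.
Vec3 gradientNormal(float x, float y, float z, float h = 0.5f)
{
    Vec3 g = { density(x - h, y, z) - density(x + h, y, z),
               density(x, y - h, z) - density(x, y + h, z),
               density(x, y, z - h) - density(x, y, z + h) };
    float len = std::sqrt(g.x * g.x + g.y * g.y + g.z * g.z);
    if (len > 0.0f) { g.x /= len; g.y /= len; g.z /= len; }
    return g;
}
```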


[quote]Most surfaces are monochromatic. At the level of detail we are talking about, almost everything is entirely monochromatic. You don't need to change the color of ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to a voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are three-quarters metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.[/quote]

LOL. Okay, I tell you what: you go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you; you just spew nonsense.
[quote]Really what it came down to was that they just found a better way to organize their point cloud data that made it more easily traversed as best I can tell.[/quote]
This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO :wink:

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes :lol:

[quote]You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need the surface normal (and binormal); how else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point, but that is not practical at runtime. They may derive the surface normals, but then they store them.[/quote]

That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation-specific. You don't need a very precise normal with voxels, because they shouldn't be large enough to need that accuracy. If you're using them for static geometry, you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly, depending on how your tree is set up in memory. In fact, the smaller and more detailed your voxels get, the less precise your normals have to be. Ideally your voxels should only be the size of a pixel on screen, where the difference between a normal pointing at 0,0,1 and one pointing at 0,1,1 is practically invisible, especially after the anti-aliasing/blurring that every voxel engine I've seen does already.

You also talk like these aren't also problems with textures and polys. We already store tons of color data in games; we already store tons of geometry data. All of that is redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we would with highly detailed voxels.
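Something like this is what I mean by 6 bits: a rough sketch with a crude 64-entry direction table. The table construction and encoding here are my own illustration, not any shipping engine's format; a real engine would pick a better-distributed set of directions.

```cpp
#include <array>
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// 64 unit directions from a crude 8x8 longitude/latitude grid. Any 64-entry
// table is addressable with 6 bits; the distribution is what you'd tune.
static std::array<Vec3, 64> buildTable()
{
    std::array<Vec3, 64> t{};
    for (int i = 0; i < 8; ++i)
        for (int j = 0; j < 8; ++j) {
            float theta = 3.14159265f * (i + 0.5f) / 8.0f;  // polar angle
            float phi   = 6.28318531f * float(j) / 8.0f;    // azimuth
            t[i * 8 + j] = { std::sin(theta) * std::cos(phi),
                             std::sin(theta) * std::sin(phi),
                             std::cos(theta) };
        }
    return t;
}
static const std::array<Vec3, 64> kDirs = buildTable();

// Encode: nearest table direction (expects a normalized input).
// Decode: a single table lookup at runtime.
uint8_t encodeNormal(const Vec3& n)
{
    int best = 0;
    float bestDot = -2.0f;
    for (int i = 0; i < 64; ++i) {
        float d = n.x * kDirs[i].x + n.y * kDirs[i].y + n.z * kDirs[i].z;
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return static_cast<uint8_t>(best);
}
Vec3 decodeNormal(uint8_t code) { return kDirs[code]; }
```

The encode step is the expensive part, but it only happens at build time; at runtime the normal really is one array lookup.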


[quote]LOL. Okay, I tell you what: you go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you; you just spew nonsense.[/quote]
It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.
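In storage terms that's just a material palette. A minimal sketch (the node layout and the color values are made up for illustration):

```cpp
#include <cstdint>
#include <vector>

// Each voxel stores a 1-byte material index instead of a full color, so a
// million brick voxels all share one 4-byte palette entry.
struct Palette {
    std::vector<uint32_t> colors;                 // packed 0xAARRGGBB
    uint32_t lookup(uint8_t id) const { return colors[id]; }
};

int main()
{
    Palette wall;
    wall.colors = { 0xFFB55A2Au /* brick */, 0xFFC8C8C0u /* grout */ };

    uint8_t voxelMaterial = 0;                    // this voxel is "brick"
    uint32_t color = wall.lookup(voxelMaterial);  // shared color data
    (void)color;
    return 0;
}
```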

[quote]Really what it came down to was that they just found a better way to organize their point cloud data that made it more easily traversed as best I can tell.

This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO :wink:

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes :lol:[/quote]

That's actually not silly at all!
That's actually what I'm planning on doing: use voxels to make the world, but then pick, say, the 5th or so LOD and displacement-map the rest onto that LOD level. It's got to have much better compression than voxels, especially with JPEG compression on the textures.
It'd probably have better performance too.

[quote name='A Brain in a Vat' timestamp='1312899849' post='4846692']
You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need the surface normal (and binormal); how else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point, but that is not practical at runtime. They may derive the surface normals, but then they store them.

That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation-specific. You don't need a very precise normal with voxels, because they shouldn't be large enough to need that accuracy. If you're using them for static geometry, you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly, depending on how your tree is set up in memory. In fact, the smaller and more detailed your voxels get, the less precise your normals have to be. Ideally your voxels should only be the size of a pixel on screen, where the difference between a normal pointing at 0,0,1 and one pointing at 0,1,1 is practically invisible, especially after the anti-aliasing/blurring that every voxel engine I've seen does already.

You also talk like these aren't also problems with textures and polys. We already store tons of color data in games; we already store tons of geometry data. All of that is redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we would with highly detailed voxels.
[/quote]

That is a ridiculous statement: textures would need to store more color data than voxels? Voxels need to store way more color data. Textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning surfaces with relatively little storage, and they can be stretched over huge distances of terrain. No such thing for voxels: every single voxel of every single square meter of terrain needs a unique color.



[quote]LOL. Okay, I tell you what: you go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you; you just spew nonsense.

It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.[/quote]

I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting into voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best give different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).

I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.

Go out into the wild, hell, even the city, bring the brightest light source you could ever find and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights would most certainly never be able to consider radiance transfer, etc., in realtime.



[quote]That is a ridiculous statement: textures would need to store more color data than voxels? Voxels need to store way more color data. Textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning surfaces with relatively little storage, and they can be stretched over huge distances of terrain. No such thing for voxels: every single voxel of every single square meter of terrain needs a unique color.[/quote]

Go play with 3D noise functions. Terrain is a good example of something that can be procedurally generated with fractal noise. As the ray enters the terrain box (or starts inside), it performs tests with the higher octaves, which allows it to skip large areas of open terrain. Each of these octave points can be generated independently of one another. Not to mention you stop traversing when you have enough detail based on distance. That is, if designed correctly, you would have infinite procedural detail for even the closest objects.
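The octave loop itself is tiny. A self-contained sketch, with a toy hash-based value noise standing in for a proper Perlin/simplex implementation:

```cpp
#include <cmath>
#include <cstdint>

// Toy hash-based lattice value noise in [-1, 1]; just enough to make the
// sketch self-contained, not a production noise function.
static float lattice(int x, int y, int z)
{
    uint32_t h = 374761393u * uint32_t(x) + 668265263u * uint32_t(y)
               + 2246822519u * uint32_t(z);
    h = (h ^ (h >> 13)) * 1274126177u;
    h ^= h >> 16;
    return float(h & 0xFFFFFFu) / 8388607.5f - 1.0f;
}

static float lerp(float a, float b, float t) { return a + (b - a) * t; }

// Trilinearly interpolated noise with a smoothstep fade between lattice points.
static float noise3(float x, float y, float z)
{
    int   xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float tx = x - xi, ty = y - yi, tz = z - zi;
    tx = tx * tx * (3 - 2 * tx);
    ty = ty * ty * (3 - 2 * ty);
    tz = tz * tz * (3 - 2 * tz);
    float c00 = lerp(lattice(xi, yi,     zi    ), lattice(xi + 1, yi,     zi    ), tx);
    float c10 = lerp(lattice(xi, yi + 1, zi    ), lattice(xi + 1, yi + 1, zi    ), tx);
    float c01 = lerp(lattice(xi, yi,     zi + 1), lattice(xi + 1, yi,     zi + 1), tx);
    float c11 = lerp(lattice(xi, yi + 1, zi + 1), lattice(xi + 1, yi + 1, zi + 1), tx);
    return lerp(lerp(c00, c10, ty), lerp(c01, c11, ty), tz);
}

// Fractional Brownian motion: each octave doubles the frequency and halves
// the amplitude. Distant terrain can stop after a few octaves, since the
// remaining ones only add sub-pixel detail.
float fbm(float x, float y, float z, int octaves)
{
    float sum = 0.0f, amp = 0.5f, freq = 1.0f;
    for (int i = 0; i < octaves; ++i) {
        sum  += amp * noise3(x * freq, y * freq, z * freq);
        amp  *= 0.5f;
        freq *= 2.0f;
    }
    return sum;
}
```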

You can speed this up by caching the results in a tree so the data around the camera can be traversed quickly. Sadly, procedurally generating data as you traverse each octave is rather intensive. That doesn't mean you can't define, say, a basic meter-resolution mountain, then procedurally generate more detail as you get close to it and discard subtrees when you move away.

There's a trick often used in voxel formats to store metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm along with, say, texture data to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:

[quote]There's a trick often used in voxel formats to store metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm along with, say, texture data to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:[/quote]


How could you possibly "store normals at higher levels in the tree"? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals pointing 180 degrees away from each other. How could that information be stored higher up in the tree?

You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.

Imagine a traditional mesh. Imagine how shitty it would look if we procedurally generated the normals at each vertex. We don't do that; we either store the lighting information at each vertex, or we map a texture to it that is of finer scale than our vertices. We don't do it with meshes, and no one would do it with voxels.

What you're trying to get at is that it's certainly conceivable to procedurally generate lighting perturbations at a finer scale than our voxels, but that's not really relevant to what we're talking about. We're talking about whether you'd need to store lighting information at each voxel.

[quote]How could you possibly "store normals at higher levels in the tree"? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals pointing 180 degrees away from each other. How could that information be stored higher up in the tree?[/quote]

At what detail level are you talking about? I have a desk in front of me that has a bumpy grain texture. The normals at its surface don't differ by more than 90 degrees. In fact, the surfaces of most objects at 1 cm detail don't differ by that much. Normal maps exploit this at the triangle level. The same idea can be applied to voxels, with overrides at lower voxel levels for interesting features. Just looking around, my phone and grainy desk both have smooth normals. At their "mip level" in a voxel tree they have very uniform normals. It's only when you look closer that you see the surface normals are "jittered smoothly". Procedurally generating these jitters isn't out of the question.


[quote]You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.[/quote]

No, I was referring to procedurally generating the detail after a certain level. Generating a cement texture (the feel of it, not the 2D color one), for instance, with normals isn't as difficult as it first sounds.

You don't need to analyze the neighboring voxel information if you input a normal. The normals of the generated sub-tree would use their parent's normal to create a surface of voxels with a smooth change of normal over the surface.

It's hard to explain if you've never messed with noise functions and how they work, but extracting normal information from them is very easy. Caching it in a sub-tree is also something that would be interesting.
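As a sketch of that parent-normal idea: the hash jitter below is a crude stand-in for sampling a smooth noise function like the fbm above, and the 0.15 strength is an arbitrary illustrative pick, not a tuned value.

```cpp
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Deterministic per-voxel jitter in [-1, 1]; in practice you would sample a
// smooth noise here so neighboring voxels change gradually.
static float jitter(int x, int y, int z, int seed)
{
    uint32_t h = 374761393u * uint32_t(x) + 668265263u * uint32_t(y)
               + 2246822519u * uint32_t(z) + 3266489917u * uint32_t(seed);
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xFFFFu) / 32767.5f - 1.0f;
}

// Nudge the parent's stored normal to produce a child voxel's normal;
// 0.15f controls how bumpy the generated surface looks.
Vec3 childNormal(const Vec3& parent, int x, int y, int z)
{
    Vec3 n = { parent.x + 0.15f * jitter(x, y, z, 1),
               parent.y + 0.15f * jitter(x, y, z, 2),
               parent.z + 0.15f * jitter(x, y, z, 3) };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return { n.x / len, n.y / len, n.z / len };
}
```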

I'm not sure why I'm defending voxels. :lol: Personally, without hardware support it's very hard to get the same performance as triangles. It seems more interesting academically.

[quote name='way2lazy2care' timestamp='1312904170' post='4846735']

LOL. Okay, I tell you what: you go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you; you just spew nonsense.

It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.
[/quote]

[quote]I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting into voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best give different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).[/quote]
Firstly, I was talking at the sub-meter level, since we were talking about not being able to use a parent node's colors. There's no reason the majority of dirt nodes have to have a unique color; the majority of them are just brownish orange. You can still have children that are their own unique color, but most of them can just be the same orangish brown, with the majority of the interest coming from shadow and light differences over the surface.

A Brain in a Vat said that the majority of voxels would need their own color data, and I just don't see that being the case. I went so far as to say that the majority of voxels in an SVO wouldn't need their own color but could just inherit from their parents. I stand by that.
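The inheritance itself costs almost nothing at traversal time, since you already walk the path from the root to the leaf. A sketch with a hypothetical node layout; I'm not claiming this is UD's or anyone's actual format:

```cpp
#include <cstdint>
#include <optional>

// Hypothetical SVO node: a color is stored only where a subtree actually
// differs from its ancestors; everything else inherits.
struct SvoNode {
    std::optional<uint32_t> color;   // packed RGBA8, set only on overrides
    SvoNode* children[8] = {};       // null where space is empty
};

// During traversal you already have the root-to-leaf path in hand;
// the nearest ancestor with a stored color wins.
uint32_t resolveColor(const SvoNode* const path[], int depth, uint32_t fallback)
{
    for (int i = depth - 1; i >= 0; --i)
        if (path[i]->color) return *path[i]->color;
    return fallback;
}
```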

[quote]I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.[/quote]
He never mentions voxels in the video or any interviews. I'm not sure why so many people jumped to voxels, when the only technology he confirms he's using is point clouds.

[quote]Go out into the wild, hell, even the city, bring the brightest light source you could ever find and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights would most certainly never be able to consider radiance transfer, etc., in realtime.[/quote]
[Image: photo of a sandy cliff face above a rocky beach]

I'll use this picture as an example. Imagine that cliff is part of an SVO. Its root node might have the light sandy color near the top of the cliff. How many voxels in a model of this cliff would have exactly the same color? All of the voxels under that root with the same color could use the exact same color data stored in the root. Next, the more orangey parts of the cliff: of those, how many voxels do you think might be the same color? It only needs to be stored in, what, 20 places and inherited by children? The rocks on the beach hardly need any more color than light sandiness at the detail they have.

The cliffs in the background wouldn't get traversed all the way to the leaf nodes; they don't really need anything other than the root color.

Here's another example:
[Image: photo of a sandstone Scotiabank building]

How many of the voxels in a model of this bank would just use the same salmon color? Sure, there are places like what I am guessing is bird poo over the sign, but those are easily stored in voxels containing their own color data, while all their salmon neighbors just sit there and inherit.

