[Theory] Unraveling the Unlimited Detail plausibility


168 replies to this topic

#21 way2lazy2care   Members   -  Reputation: 782

Posted 08 August 2011 - 02:01 PM

To "just add another child type" implies virtual inheritance, which adds 4 bytes (the typical size of a color) to every object. So where exactly have you saved versus just replicating the color in every single child?

It only adds data if there's a virtual function to be called. If you're traversing the tree from the top down, you don't need to call anything in the children, you just need to skip them. As far as I know, SVOs generally use compression techniques based on just storing whether or not a child exists, which is why they are so efficient. It doesn't seem to be a huge stretch to extend that with a second flag saying whether a child uses its parent's color data or not; ~6 bits total per voxel for static geometry if we double the ~3 bits/voxel I've heard an SVO needs.

It would get more complex for voxels that do have color data, but I'm not going to come up with a voxel rendering scheme off the top of my head without putting more thought into it. There's still no reason to store color data for every voxel.
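
A minimal sketch of that two-bits-per-child idea (the node layout and names are assumptions here, not any published SVO format): one bit per child for "exists", one bit per child for "stores its own color", with colors resolved by inheritance while descending.

    #include <cstdint>
    #include <vector>

    struct SvoNode {
        uint8_t  childExists;   // bit i set => child i is present
        uint8_t  childHasColor; // bit i set => child i stores its own color
        uint32_t firstChild;    // index of child 0 in a flat node array
        uint32_t color;         // RGBA8; meaningful only where an override exists
    };

    // Walk a root-to-leaf path (one child index 0..7 per level) and return the
    // color of the deepest node reached, inheriting from the last ancestor
    // that stored an override.
    uint32_t resolveColor(const std::vector<SvoNode>& nodes,
                          const std::vector<int>& path)
    {
        uint32_t node = 0;                   // start at the root
        uint32_t inherited = nodes[0].color; // root always defines a color
        for (int child : path) {
            const SvoNode& n = nodes[node];
            if (!(n.childExists & (1u << child)))
                break;                       // empty space: stop descending
            // Children are packed contiguously; counting the set bits below
            // 'child' gives this child's offset from firstChild.
            uint32_t offset = __builtin_popcount(n.childExists & ((1u << child) - 1));
            node = n.firstChild + offset;
            if (n.childHasColor & (1u << child))
                inherited = nodes[node].color; // child overrides the color
        }
        return inherited;
    }

A voxel that matches its parent costs two bits of bookkeeping here instead of four bytes of RGBA, which is the saving being argued for.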

#22 A Brain in a Vat   Members   -  Reputation: 313

Posted 08 August 2011 - 02:18 PM

Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail, such that when you zoom in all of a sudden everything is the exact same color?

#23 way2lazy2care   Members   -  Reputation: 782

Posted 08 August 2011 - 02:28 PM

Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail, such that when you zoom in all of a sudden everything is the exact same color?


Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

Here's a simple example just pulled from Google. If you look at the cliffs, most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.

#24 A Brain in a Vat   Members   -  Reputation: 313

Posted 08 August 2011 - 02:42 PM

Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

Here's a simple example just pulled from Google. If you look at the cliffs, most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.


No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it would have to have its own normal. Every little pixel that's a slightly different grass or rock color from the one next to it would have to have its own color. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?

#25 A Brain in a Vat   Members   -  Reputation: 313

Posted 08 August 2011 - 02:54 PM

Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.


And what does this even mean? This is absolutely false.

The vast majority of voxel applications don't do any shading, and therefore they don't need to store things like normals and binormals and specularity coefficients, etc. In games we do have to, unless you're suggesting voxelizing to the level of detail of actual atoms on a surface, and simulating physics-based light transport and scattering models.

Is that what you're suggesting?

#26 way2lazy2care   Members   -  Reputation: 782

Posted 08 August 2011 - 03:10 PM

No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it would have to have its own normal. Every little pixel that's a slightly different grass or rock color from the one next to it would have to have its own color. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?



Why do you need normals that are any different than what can be generated from the voxels themselves when you have sufficient detail to the pixel level? You do not need different diffuse colors to get different colors any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is because we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.

#27 A Brain in a Vat   Members   -  Reputation: 313

Posted 08 August 2011 - 03:37 PM

Why do you need normals that are any different than what can be generated from the voxels themselves when you have sufficient detail to the pixel level? You do not need different diffuse colors to get different colors any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is because we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.



Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.


Regarding color... so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.

#28 szecs   Members   -  Reputation: 2092

Posted 08 August 2011 - 04:04 PM

Maybe a stupid question: wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of times mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written right before going to sleep?

#29 way2lazy2care   Members   -  Reputation: 782

Posted 08 August 2011 - 04:19 PM

Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.

You don't need user-specified normals, period. The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface exists, there's no point in generating a normal that's any more complicated than just using the surface.

Regarding color... so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.

Most surfaces are monochromatic. At the level of detail we are talking about, almost everything is entirely monochromatic. You don't need to change the color of ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to a voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are three-quarters metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.

You severely underestimate how much of the color differences you see are just value changes caused by shadow or different shades of light if you think you can't inherit colors from parents as far as voxels are concerned. Look at how a JPEG is stored. I mean, really, it's just RLE applied to volumes, the same way you might apply it in a JPEG or other image file. Why should we think volumes need to be any different? It's not like I'm talking about reinventing the wheel, just sticking the same old wheels on a new engine.



#30 Sirisian   Crossbones+   -  Reputation: 1635

Posted 08 August 2011 - 11:03 PM

Maybe a stupid question: wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of times mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written right before going to sleep?

One of the most difficult topics I've seen, actually. Cone tracing and other sampling methods work. Also, simply relying on voxels to collapse their subtrees into their parents to store information is key. That is, from very far away, an object that is less than a pixel can merge the colors of its main subtrees into a single color. As you move closer, the ray only traverses into the first level, grabbing the merged color. So in actuality the dataset is only 8 color values (assuming a subtree for the highest levels). This leads a lot of people to realize you don't need to load that much data to get visually amazing detail. It's the same theory behind not loading the highest mip level of a texture the user can never get close enough to see. Carmack actually discussed this technique in his recent 2011 QuakeCon speech, when he talked about how they performed a visibility test so they could lower the quality of a lot of textures the player couldn't get close to. In the same way, a space station that might be 10 GB of realistic voxel data would stream just the top nodes, a la Google Images, and it would look perfectly fine. This is where the idea of automatic level of detail comes from.
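
A minimal sketch of that subtree collapsing (the node layout here is an assumption, not Sirisian's code): build the merged colors bottom-up by averaging each node's children, exactly like building mip levels for a texture.

    #include <cstdint>
    #include <vector>

    struct Node {
        std::vector<int> children; // indices into the node array; empty = leaf
        uint8_t r, g, b;           // merged (or authored, for leaves) color
    };

    // Post-order pass: average each node's children into the node itself.
    void buildMergedColors(std::vector<Node>& nodes, int index)
    {
        Node& n = nodes[index];
        if (n.children.empty())
            return;                       // leaf keeps its authored color
        unsigned r = 0, g = 0, b = 0;
        for (int c : n.children) {
            buildMergedColors(nodes, c);  // children first
            r += nodes[c].r; g += nodes[c].g; b += nodes[c].b;
        }
        unsigned k = (unsigned)n.children.size();
        n.r = (uint8_t)(r / k); n.g = (uint8_t)(g / k); n.b = (uint8_t)(b / k);
    }

A ray then stops descending as soon as a node's projected size drops below a pixel and shades with the merged color, which is the automatic level of detail described above.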

Anyway, the mipmapping problem with voxels is an interesting one with a lot of approximate solutions. If you want an exact solution, though, imagine your screen: for each pixel there is a frustum emanating out, with its faces adjacent to the faces of the adjacent pixel's frustum. Your goal is to find a way to pull back all the voxel data inside of the frustum while also discarding voxels that are behind other voxels. In a way it's similar to the optimal octree frustum culling algorithm. (That is, the one that uses pretty much only addition and works with unlimited frustum planes. If you don't know what I mean, implement this with a quadtree and your brain will explode with ideas.) The caveat is that you start scanning front to back and subtract the frustum generated by voxels that you include. You clip and track the color of the shapes used to create the volume. It is an extraordinarily complicated algorithm that I myself have only sketched out on paper. You end up getting back a square region that looks like a bunch of colored regions. You merge all the colors based on their area to get the final pixel value.

As an example, if you looked and saw only two voxels in your pixel's frustum, then it might look like this:
[Image: two voxels as seen within a single pixel's frustum]
I colored the sides of one voxel differently so the perspective can be seen.
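
The last step of that exact scheme, merging the clipped regions by area, might look like this minimal sketch (the Fragment struct is an assumption standing in for whatever the clipping stage produces; areas are fractions of the pixel, and fragments are non-overlapping because occluded regions were already subtracted):

    #include <vector>

    struct Fragment { float area; float r, g, b; }; // area as a fraction [0,1] of the pixel
    struct Color { float r, g, b; };

    // Blend the clipped, mutually non-overlapping fragments by coverage area.
    // Whatever area is left uncovered falls through to the background color.
    Color mergeFragments(const std::vector<Fragment>& frags, Color background)
    {
        float r = 0, g = 0, b = 0, covered = 0;
        for (const Fragment& f : frags) {
            r += f.r * f.area;
            g += f.g * f.area;
            b += f.b * f.area;
            covered += f.area;
        }
        float rest = 1.0f - covered;
        return { r + background.r * rest,
                 g + background.g * rest,
                 b + background.b * rest };
    }

This area-weighted average is also why the approach anti-aliases for free.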

The nice thing about this is that you get amazing anti-aliasing, especially if your voxel format defines infinite-detail contour data. (That is, you have subtrees that loop back around a few times to generate extra detail, or nodes that define contours from a map in order to procedurally generate detail.)

It's a fun topic with not very much research. A lot of the research papers you find, though, cover raytracing concepts. I wish someone would invest in making raycasting hardware, if only to run via Gaikai or OnLive. :P

I recommend reading Laine's papers on SVO stuff.

How this relates to point-cloud rendering I have no idea. I assume they found an interesting algorithm that might be different than the normal raycasting/3DDDA stuff.

#31 way2lazy2care   Members   -  Reputation: 782

Posted 09 August 2011 - 06:45 AM

How this relates to point-cloud rendering I have no idea. I assume they found an interesting algorithm that might be different than the normal raycasting/3DDDA stuff.


There's an interview with the dude on Kotaku that goes more in depth, though still not that in depth. Really, what it came down to, as best I can tell, is that they just found a better way to organize their point cloud data that made it more easily traversed.

#32 A Brain in a Vat   Members   -  Reputation: 313

Posted 09 August 2011 - 08:24 AM

You're very confused.

The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface exists, there's no point in generating a normal that's any more complicated than just using the surface.


You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need to get the surface normal (and binormal). How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. But doing that at runtime is not practical. They may derive the surface normals that way, but then they store them.
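
For reference, the gradient approach mentioned above looks roughly like this (a minimal sketch; rho stands in for any density or occupancy sampler, and the step size h is a free parameter):

    #include <cmath>

    struct Vec3 { float x, y, z; };

    // Central-difference gradient of a density field; the normal points from
    // dense material toward empty space, hence the minus-plus ordering.
    Vec3 gradientNormal(float (*rho)(float, float, float),
                        float x, float y, float z, float h)
    {
        Vec3 n = { rho(x - h, y, z) - rho(x + h, y, z),
                   rho(x, y - h, z) - rho(x, y + h, z),
                   rho(x, y, z - h) - rho(x, y, z + h) };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }

Six extra samples per shaded point is the cost being argued about: cheap to write down, expensive to do per pixel per frame.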

I suggest you do a survey of voxel renderers in the field instead of talking out of your ass. They all either store the normal or they do away with shading completely (which we cannot do in games).

Most surfaces are monochromatic. At the level of detail we are talking about, almost everything is entirely monochromatic. You don't need to change the color of ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to a voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are three-quarters metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.


LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you, you just spew nonsense.

#33 Hodgman   Moderators   -  Reputation: 27490

Posted 09 August 2011 - 09:04 AM

Really, what it came down to, as best I can tell, is that they just found a better way to organize their point cloud data that made it more easily traversed.

This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO.

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes.

#34 way2lazy2care   Members   -  Reputation: 782

Posted 09 August 2011 - 09:36 AM

You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need to get the surface normal (and binormal). How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. But doing that at runtime is not practical. They may derive the surface normals that way, but then they store them.

That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation specific. You don't need a very complex normal with voxels because they shouldn't be large enough to need them that accurate. If you're using them for static geometry you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly depending on how your tree is set up in memory. In fact, the more detailed and small your voxels get, the less complicated your normals have to get. Ideally your voxels should only be the size of a pixel on the screen, where a normal pointing at (0,0,1) would be practically the same as one pointing at (0,1,1), especially after the anti-aliasing/blurring that every voxel engine I've seen does already.
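
One possible reading of that "6 bits" (the packing below is a guess; the post doesn't specify a layout): two bits per axis encoding -1, 0, or +1, normalized after decoding, which gives 26 coarse directions.

    #include <cmath>
    #include <cstdint>

    struct Vec3 { float x, y, z; };

    // Decode a 6-bit normal: bits [1:0], [3:2], [5:4] are the x, y, z codes.
    // Code 0 -> 0, 1 -> +1, 2 -> -1 (code 3 unused).
    Vec3 decodeNormal6(uint8_t packed)
    {
        static const float lut[4] = { 0.0f, 1.0f, -1.0f, 0.0f };
        Vec3 n = { lut[(packed >> 0) & 3],
                   lut[(packed >> 2) & 3],
                   lut[(packed >> 4) & 3] };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }

As the post argues, at pixel-sized voxels plus post-blur, quantization this coarse is hard to see.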

You also talk like these aren't also problems with textures and polys. We store tons of color data in games already; we store tons of geometry data. All of that is redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we'd need to with highly detailed voxels.

LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you, you just spew nonsense.

It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.
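
In storage terms, that brick/grout argument amounts to a material palette (a minimal sketch; the layout is an assumption, not a claim about any particular engine):

    #include <cstdint>
    #include <vector>

    // A tiny per-model palette: two entries cover the whole brick wall.
    struct Palette {
        std::vector<uint32_t> colors; // e.g. { brickRGBA, groutRGBA }
    };

    // Each colored voxel stores a one-byte index instead of four bytes of
    // RGBA, and voxels inside a uniform region can inherit the index from a
    // parent node and store nothing at all.
    uint32_t voxelColor(const Palette& palette, uint8_t materialIndex)
    {
        return palette.colors[materialIndex];
    }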

#35 rouncer   Members   -  Reputation: 355

Posted 09 August 2011 - 10:32 AM

Really, what it came down to, as best I can tell, is that they just found a better way to organize their point cloud data that made it more easily traversed.

This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO.

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes.


That's actually not silly at all!
That's actually what I'm planning on doing: use voxels to make the world, but then I'll pick, say, the 5th or so LOD and displacement-map the rest onto that LOD level. It's got to have much better compression than voxels, like with JPEG compression on the textures.
Probably have better performance too.

#36 Syranide   Members   -  Reputation: 375

Posted 09 August 2011 - 01:55 PM


You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. You still need to get the surface normal (and binormal). How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. But doing that at runtime is not practical. They may derive the surface normals that way, but then they store them.

That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation specific. You don't need a very complex normal with voxels because they shouldn't be large enough to need them that accurate. If you're using them for static geometry you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly depending on how your tree is set up in memory. In fact, the more detailed and small your voxels get, the less complicated your normals have to get. Ideally your voxels should only be the size of a pixel on the screen, where a normal pointing at (0,0,1) would be practically the same as one pointing at (0,1,1), especially after the anti-aliasing/blurring that every voxel engine I've seen does already.

You also talk like these aren't also problems with textures and polys. We store tons of color data in games already; we store tons of geometry data. All of that is redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we'd need to with highly detailed voxels.


That is a ridiculous statement, that textures would need to store more color data than voxels? Voxels need to store way more color data: textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning textures with relatively little storage, and also stretched over huge distances of terrain. No such thing for voxels; every single voxel of every single square meter of terrain needs a unique color.


LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you, you just spew nonsense.

It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.


I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting with voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best be different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).

I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.

Go out into the wild, hell, even the city, bring the brightest light source you could ever find, and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights most certainly would never be able to consider radiance transfer, etc., in realtime.



#37 Sirisian   Crossbones+   -  Reputation: 1635

Posted 09 August 2011 - 03:18 PM

That is a ridiculous statement, that textures would need to store more color data than voxels? Voxels need to store way more color data: textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning textures with relatively little storage, and also stretched over huge distances of terrain. No such thing for voxels; every single voxel of every single square meter of terrain needs a unique color.

Go play with 3D noise functions. Terrain is a good example of something that can be procedurally generated with fractal noise. As the ray enters the terrain box (or starts inside), it performs tests against the higher octaves, which allows it to skip large areas of open terrain. Each of these octave points can be generated independently of one another. Not to mention you stop traversing when you have enough detail based on the distance. That is, if designed correctly, you would have infinite procedural detail for even the closest objects.

You can speed this up by caching the results in a tree so the data around the camera can be traversed quickly. Sadly, procedurally generating data as you traverse each octave is rather intensive. It doesn't mean you can't define, say, a basic meter-resolution mountain, then procedurally generate more detail as you get close to it and discard subtrees when you move away.
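
A minimal sketch of that kind of density function (the constants and octave falloff are arbitrary choices here, and noise3 stands in for any gradient-noise primitive returning values in [-1, 1]):

    #include <algorithm>
    #include <cmath>

    float noise3(float x, float y, float z); // assumed external noise primitive

    // Fractal (fBm) terrain density: fewer octaves are evaluated for distant
    // samples, so far-away terrain is both cheaper and automatically coarser.
    float terrainDensity(float x, float y, float z, float distance)
    {
        int octaves = std::max(1, 8 - int(std::log2(1.0f + distance)));
        float sum = 0.0f, amp = 1.0f, freq = 1.0f;
        for (int i = 0; i < octaves; ++i) {
            sum  += amp * noise3(x * freq, y * freq, z * freq);
            amp  *= 0.5f; // each octave adds half the amplitude
            freq *= 2.0f; // at twice the frequency
        }
        return sum - z * 0.1f; // density falls off with height; > 0 means solid
    }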

There's a trick often used in voxel formats of storing metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm, along with say texture data, to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:

#38 A Brain in a Vat   Members   -  Reputation: 313

Posted 09 August 2011 - 04:09 PM

There's a trick often used in voxel formats of storing metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm, along with say texture data, to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:


How could you possibly "store normals at higher levels in the tree"?? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals that are pointing 180 degrees from each other. How could that information be stored higher up in the tree?

You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.

Imagine a traditional mesh. Imagine how shitty it would look if we procedurally generated the normals at each vertex. We don't do that -- we either store the lighting information at each vertex, or we map a texture to it that is of finer scale than our vertices. We don't do it with meshes, and no one would do it with voxels.

What you're trying to get at is that it's certainly conceivable to procedurally generate lighting perturbations at a finer scale than our voxels, but that's not really relevant to what we're talking about. We're talking about whether you'd need to store lighting information at each voxel.

#39 Sirisian   Crossbones+   -  Reputation: 1635

Posted 09 August 2011 - 08:04 PM

How could you possibly "store normals at higher levels in the tree"?? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals that are pointing 180 degrees from each other. How could that information be stored higher up in the tree?

At what detail level are you talking about? I have a desk in front of me that has a bumpy grain texture. The normals at the surface of it don't differ by more than 90 degrees. In fact, the surfaces of most objects at 1 cm detail don't differ by that much. Normal maps exploit this at the triangle level. The same idea can be applied to voxels, with overrides at lower voxel levels for interesting features. Just looking around, my phone and grainy desk all have smooth normals. At their "mip-level" in a voxel tree they have very uniform normals. It's only when you look closer that you see the surface normals are "jittered smoothly". Procedurally generating these jitters isn't out of the question.

You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.

No, I was referring to procedurally generating the detail after a certain level. Generating a cement texture (the surface feel, not the 2D color one), for instance, with normals isn't as difficult as it first sounds.

You don't need to analyze the neighboring voxel information if you input a normal. The normals of the generated sub-tree would use their parent normal to create a surface of voxels with a smooth change of the normal over the surface. Old articles that help paint a picture.

It's hard to explain if you've never messed with noise functions and how they work, but extracting normal information and data is very easy. Caching it in a sub-tree is also something that would be interesting.
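
A minimal sketch of that parent-normal idea (the perturbation scheme and constants are assumptions here): jitter the inherited coarse normal with the local gradient of a noise field, scaled so the result never swings far from the parent.

    #include <cmath>

    float noise3(float x, float y, float z); // assumed external noise primitive

    struct Vec3 { float x, y, z; };

    // Perturb an inherited parent normal with a central-difference noise
    // gradient. A small 'strength' keeps the result close to the parent,
    // matching the observation that fine normals vary smoothly around it.
    Vec3 jitteredNormal(Vec3 parent, float x, float y, float z, float strength)
    {
        const float h = 0.01f; // finite-difference step
        Vec3 g = { noise3(x + h, y, z) - noise3(x - h, y, z),
                   noise3(x, y + h, z) - noise3(x, y - h, z),
                   noise3(x, y, z + h) - noise3(x, y, z - h) };
        Vec3 n = { parent.x + strength * g.x,
                   parent.y + strength * g.y,
                   parent.z + strength * g.z };
        float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
        if (len > 0.0f) { n.x /= len; n.y /= len; n.z /= len; }
        return n;
    }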

I'm not sure why I'm defending voxels. :lol: Personally, without hardware support it's very hard to get the same performance as triangles. It's more interesting academically, it seems.

#40 way2lazy2care   Members   -  Reputation: 782

Posted 09 August 2011 - 09:46 PM



LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks... I am so tired of arguing with you, you just spew nonsense.

It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.


I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting with voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best be different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).

Firstly, I was talking at the sub-meter level, as we were talking about not being able to use a parent node's colors. There's no reason the majority of dirt nodes have to have a unique color. The majority of them are just brownish orange. You can still have children that are their own unique color, but most of them can just be the same orangish brown, with the majority of the interest coming from shadow and light differences over the surface.

A Brain in a Vat said that the majority of voxels would need their own color data, and I just don't see that being the case. I went so far as to say that the majority of voxels in an SVO wouldn't need their own color, but could just inherit from their parents. I stand by that.

I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.

He never mentions voxels in the video or any interviews. I'm not sure why so many people jumped to voxels, when the only technology he confirms he's using is point clouds.

Go out into the wild, hell, even the city, bring the brightest light source you could ever find, and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights most certainly would never be able to consider radiance transfer, etc., in realtime.

[Image: photo of a coastal cliff, light sandy rock near the top with more orange bands below and rocks on the beach]

I'll use this picture as an example. Imagine that that cliff is part of an SVO. Its root node might have the light sandy color near the top of the cliff. How many voxels in a model of this cliff would have exactly the same color? All of the voxels under that root with the same color could use the exact same color data stored in the root. Take the more orangey parts of the cliff next. Of those, how many voxels do you think might be the same color? It only needs to be stored in, what, 20 places, and inherited by children? The rocks on the beach hardly need any more color than light sandiness with the detail they have.

The cliffs in the background wouldn't get traversed all the way to the leaf nodes; they don't even need anything other than the root color, really.

Here's another example:
[Image: photo of a salmon-colored bank building with a sign]

How many of the voxels in a model of this bank would just use the same salmon color? Sure, there are places like what I am guessing is bird poo over the sign, but those are easily stored in voxels containing color data, while all their salmon neighbors just have to sit there and exist.



