
[Theory] Unraveling the Unlimited Detail plausibility


[quote name='A Brain in a Vat' timestamp='1312836146' post='4846360']
No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it [b]would have to have its own normal[/b]. Every little pixel that's a slightly different grass or rock color from the one next to it [b]would have to have its own color[/b]. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?
[/quote]


Why do you need normals that are any different from what can be generated from the voxels themselves when you have sufficient detail at the pixel level? You don't need different diffuse colors to get color variation any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is because we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.

[quote name='way2lazy2care' timestamp='1312837832' post='4846377']
Why do you need normals that are any different from what can be generated from the voxels themselves when you have sufficient detail at the pixel level? You don't need different diffuse colors to get color variation any more with voxels than you do with polygons. You just let the light and the detail do the footwork.

The only reason we have half the texture detail we do now is because we don't have the geometry detail we want. If we had the geometry detail we wanted, it stands to reason that we wouldn't need quite so much texture/color detail.
[/quote]


Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.


Regarding color... so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.

Maybe a stupid question, wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of times mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written before going to sleep?

[quote name='A Brain in a Vat' timestamp='1312839441' post='4846386']
Why do you need normals that are different from what can be generated from the voxels themselves? You say "let the light and the detail do the footwork". That implies you need surface normals. Where are you going to get the surface normals if they're not stored? Are you going to generate them by analyzing neighbors? That hasn't been shown to be practical.[/quote]
You don't need user-specified normals, period. The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface exists, there's no point in generating a normal that's any more complicated than just using the surface.

[quote]Regarding color... so in your world, surfaces are all completely monochromatic? They aren't like that in my world. Most objects aren't made up of a single compound. You're greatly underestimating the sheer number of voxels you'd need to represent a surface and have it not look like molten plastic.
[/quote]
Most surfaces are monochromatic. At the level of detail we are talking about, almost everything is entirely monochromatic. You don't need to change the color of the ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to the voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are 3/4 metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.

You severely underestimate how much of the color difference you see is just value change caused by shadow or different shades of light if you think you can't inherit colors from parents as far as voxels are concerned. Look at how a JPEG is stored. Really, it's just RLE applied to volumes, the same way you might apply it in a JPEG or other image file. Why should we think volumes need to be any different? It's not like I'm talking about reinventing the wheel, just sticking the same old wheels on a new engine.
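Just to make the analogy concrete, here's a minimal 1D sketch in C++ (purely illustrative, not anyone's actual format; rleEncode is a made-up helper): identical neighboring colors collapse into (color, count) pairs, which is exactly what letting many child voxels share one parent color buys you in 3D.

[code]
#include <cstdint>
#include <utility>
#include <vector>

// Run-length encode a row of voxel colors: identical neighbors collapse
// into (color, count) pairs -- the 1D analogue of many children
// inheriting one parent color in an SVO. (Toy sketch.)
std::vector<std::pair<uint32_t, uint32_t>> rleEncode(const std::vector<uint32_t>& colors)
{
    std::vector<std::pair<uint32_t, uint32_t>> runs;
    for (uint32_t c : colors) {
        if (!runs.empty() && runs.back().first == c)
            ++runs.back().second;   // extend the current run
        else
            runs.push_back({c, 1}); // start a new run
    }
    return runs;
}
[/code]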

[quote name='szecs' timestamp='1312841094' post='4846396']
Maybe a stupid question, wouldn't nice textures (not in the CS meaning) require interpolating between the voxels? I mean, we all know how crap textures (in the CS meaning) look even with bilinear interpolation. A lot of times mip-mapping isn't good enough either.

I can imagine that monochromatic stuff would look right without interpolation, but textures (not in the CS meaning) seem to be different.

Or is it not an issue? Are voxels interpolated anyway? Did my post make any sense, written before going to sleep?
[/quote]
One of the most difficult topics I've seen, actually. [url="http://en.wikipedia.org/wiki/Cone_tracing"]Cone tracing[/url] and other sampling methods work. Simply relying on voxels to collapse their subtrees into their parents is also key: from very far away, an object that covers less than a pixel can merge the colors of its main subtrees into a single color. As you move closer, the ray traverses only into the first level, grabbing the merged color, so in actuality the working dataset is only 8 color values (assuming a subtree for the highest levels). This leads a lot of people to realize you don't need to load that much data to get visually amazing detail. It's the same theory behind not loading the highest mip level of a texture the user can never get close enough to see. Carmack actually discussed this technique in his QuakeCon 2011 talk, where he described how they performed a visibility test so they could lower the quality of a lot of textures the player couldn't get to. In the same way, a space station that might be 10 GB of realistic voxel data would stream just the top nodes, a la Google Images, and it would look perfectly fine. This is where the idea of automatic level of detail comes from.
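If it helps picture it, here's a toy C++ sketch of the merging step (illustrative only; SvoNode and mergeChildren are hypothetical names, not any engine's actual format): each parent caches the average of its occupied children's colors, which is what makes the automatic LOD fall out for free.

[code]
#include <array>
#include <cstdint>
#include <memory>

// Toy SVO node: each parent caches the average of its occupied
// children's colors, so a distant object can be shaded from the top
// few levels of the tree alone.
struct SvoNode
{
    std::array<std::unique_ptr<SvoNode>, 8> child; // null = empty octant
    uint8_t r = 0, g = 0, b = 0;                   // merged color at this level

    void mergeChildren()
    {
        unsigned sr = 0, sg = 0, sb = 0, n = 0;
        for (auto& c : child) {
            if (!c) continue;
            c->mergeChildren();                    // merge bottom-up
            sr += c->r; sg += c->g; sb += c->b; ++n;
        }
        if (n) { r = sr / n; g = sg / n; b = sb / n; }
    }
};
[/code]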

Anyway, the mipmapping problem with voxels is an interesting one with a lot of approximate solutions. If you want an exact solution, though, imagine your screen, and for each pixel a [url="http://en.wikipedia.org/wiki/Frustum"]frustum[/url] emanating out, with its faces adjacent to the faces of the neighboring pixels' frusta. Your goal is to find a way to pull back all the voxel data inside the frustum while also discarding voxels that are behind other voxels. In a way it's similar to the optimal octree frustum-culling algorithm (the one that uses pretty much only addition and works with unlimited frustum planes; [url="http://software.intel.com/en-us/articles/rasterization-on-larrabee/"]if you don't know what I mean, implement this with a quadtree and your brain will explode with ideas[/url]). The caveat is that you scan front to back and subtract the frusta generated by the voxels you include, clipping and tracking the color of the shapes used to create the volume. It's an extraordinarily complicated algorithm that I myself have only sketched out on paper. You end up getting back a square region that looks like a bunch of colored patches, and you merge all the colors weighted by their area to get the final pixel value.
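The final merge at the end is the easy part; roughly (again, just an illustrative sketch, with made-up types):

[code]
#include <vector>

struct Region { float area, r, g, b; }; // one clipped, colored patch

// Final pixel value: average the visible patches weighted by the area
// they cover within the pixel's frustum cross-section. (Sketch only.)
Region mergeRegions(const std::vector<Region>& patches)
{
    Region out = {0, 0, 0, 0};
    for (const Region& p : patches) {
        out.area += p.area;
        out.r += p.r * p.area;
        out.g += p.g * p.area;
        out.b += p.b * p.area;
    }
    if (out.area > 0) { out.r /= out.area; out.g /= out.area; out.b /= out.area; }
    return out;
}
[/code]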

As an example, if you looked and saw only two voxels in your pixel's frustum, it might look like this:
[img]http://assaultwars.com/pictures/frustumpixel.png[/img]
I colored the sides of one voxel differently so the perspective can be seen.

The nice thing about this is that you get amazing anti-aliasing, especially if your voxel format defines infinite-detail contour data. (That is, you have subtrees that loop back around a few times to generate extra detail, or nodes that define contours from a map in order to procedurally generate detail.)

It's a fun topic with not very much research behind it. A lot of the research papers you do find cover raytracing concepts. I wish someone would invest in making raycasting hardware, if only to run via Gaikai or OnLive. :P

I recommend reading [url="http://research.nvidia.com/users/samuli-laine"]Laine's papers[/url] on SVO stuff.

How this relates to point-cloud rendering I have no idea. I assume they found an interesting algorithm that might be different from the normal raycasting/3DDDA stuff.

[quote name='Sirisian' timestamp='1312866190' post='4846540']
How this relates to point-cloud rendering I have no idea. I assume they found an interesting algorithm that might be different from the normal raycasting/3DDDA stuff.
[/quote]

There's an interview with the dude on Kotaku that goes more in depth, though still not that in depth. Really what it came down to was that they just found a better way to organize their point cloud data that made it more easily traversed as best I can tell.

You're very confused.

[quote name='way2lazy2care' timestamp='1312841948' post='4846406']
The purpose of a normal is just to simulate light bouncing off a surface that doesn't exist. If the surface exists, there's no point in generating a normal that's any more complicated than just using the surface.
[/quote]

You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. [b]You still need to get the surface normal (and binormal)[/b]. How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. [b]But that is not practical[/b]. Existing renderers may derive the surface normals that way, but then [b]they store them[/b].
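For concreteness, the gradient approach looks roughly like this (a sketch, assuming you have some scalar density field you can sample; gradientNormal is a made-up helper):

[code]
#include <cmath>

// Estimate the surface normal at voxel (x,y,z) as the normalized
// gradient of a scalar density field, via central differences.
// `density` is any callable that samples the voxel store. (Sketch.)
template <class Field>
void gradientNormal(const Field& density, int x, int y, int z, float n[3])
{
    n[0] = density(x - 1, y, z) - density(x + 1, y, z);
    n[1] = density(x, y - 1, z) - density(x, y + 1, z);
    n[2] = density(x, y, z - 1) - density(x, y, z + 1);
    float len = std::sqrt(n[0]*n[0] + n[1]*n[1] + n[2]*n[2]);
    if (len > 0) { n[0] /= len; n[1] /= len; n[2] /= len; }
}
[/code]

That's six extra field samples per shaded point, every frame, which is exactly why the renderers that derive normals this way do it offline and store the result.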

I suggest you do a survey of voxel renderers in the field instead of talking out of your ass. They all either store the normal or they do away with shading completely (which we cannot do in games).

[quote]
Most surfaces are monochromatic. At the level of detail we are talking about almost everything is entirely monochromatic. You don't need to change the color of ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to the voxel. The coat hangers are all the same matte metal. The knives and kitchen utencils are 3/4s metal and the other quarter is all black. The lamp behind me is monochrome matte metal even though it has details in the metal with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single color matte white.[/quote]

LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks...... I am so tired of arguing with you, you just spew nonsense.

[quote]Really what it came down to was that they just found a better way to organize their point cloud data that made it more easily traversed as best I can tell.[/quote]This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO [img]http://public.gamedev.net/public/style_emoticons/default/wink.gif[/img]

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes [img]http://public.gamedev.net/public/style_emoticons/default/laugh.gif[/img]

[quote name='A Brain in a Vat' timestamp='1312899849' post='4846692']
You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. [b]You still need to get the surface normal (and binormal)[/b]. How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. [b]But that is not practical[/b]. Existing renderers may derive the surface normals that way, but then [b]they store them[/b].[/quote]
That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation-specific. You don't need a very precise normal with voxels because they shouldn't be large enough to need that accuracy. If you're using them for static geometry you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly depending on how your tree is set up in memory. In fact, the more detailed and small your voxels get, the less complicated your normals have to be. Ideally your voxels should only be the size of a pixel on the screen, where the difference between a normal pointing at 0,0,1 and one pointing at 0,1,1 would be practically invisible, especially after the anti-aliasing/blurring that every voxel engine I've seen does already.
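To be concrete about the 6-bit idea, here's one way it could work (a hypothetical sketch, not any shipping engine's format; all the names are made up): the 6 bits just index a 64-entry table of precomputed unit directions, so decode is a single lookup.

[code]
#include <array>
#include <cmath>
#include <cstdint>

struct Dir { float x, y, z; };

// 64 unit directions on an 8x8 azimuth/elevation grid. (Toy sketch.)
std::array<Dir, 64> makeTable()
{
    std::array<Dir, 64> t{};
    const float pi = 3.14159265f;
    for (int i = 0; i < 64; ++i) {
        float az = 2 * pi * (i % 8) / 8;
        float el = pi * ((i / 8) + 0.5f) / 8;
        t[i] = { std::sin(el) * std::cos(az),
                 std::sin(el) * std::sin(az),
                 std::cos(el) };
    }
    return t;
}

// Encode: nearest table entry (done once, offline). Decode: one lookup.
uint8_t encode(const std::array<Dir, 64>& t, Dir n)
{
    int best = 0; float bestDot = -2;
    for (int i = 0; i < 64; ++i) {
        float d = t[i].x * n.x + t[i].y * n.y + t[i].z * n.z;
        if (d > bestDot) { bestDot = d; best = i; }
    }
    return uint8_t(best);
}

Dir decode(const std::array<Dir, 64>& t, uint8_t bits) { return t[bits & 63]; }
[/code]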

You also talk like these aren't problems with textures and polys too. We already store tons of color data in games, and we store tons of geometry data. All of that becomes redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we would with highly detailed voxels.

[quote]
LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks...... I am so tired of arguing with you, you just spew nonsense.
[/quote]
It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.

[quote name='Hodgman' timestamp='1312902289' post='4846721']
[quote]Really what it came down to was that they just found a better way to organize their point cloud data that made it more easily traversed as best I can tell.[/quote]This acceleration structure is really the cornerstone of the whole UD tech. If you knew what this structure was, you could replicate it... Plenty of other people have designed similar data structures before and have published their research.

For now, for all we know, he's just voxelized his point-clouds and put them in an SVO [img]http://public.gamedev.net/public/style_emoticons/default/wink.gif[/img]

BTW, Crytek actually used voxels to model large parts of the Crysis 2 environments, and then they compressed them into a really great acceleration data structure: triangulated meshes [img]http://public.gamedev.net/public/style_emoticons/default/laugh.gif[/img]
[/quote]

That's actually not silly at all!
That's actually what I'm planning on doing: use voxels to make the world, but then pick, say, the 5th or so LOD and displacement-map the rest onto that LOD level. It's got to have much better compression than voxels, especially with JPEG compression on the textures.
It'd probably have better performance too.

[quote name='way2lazy2care' timestamp='1312904170' post='4846735']
[quote name='A Brain in a Vat' timestamp='1312899849' post='4846692']
You seem to want to call voxels a "surface", which is highly inaccurate, but let's accept it for the sake of argument. [b]You still need to get the surface normal (and binormal)[/b]. How else will you apply the lighting equation? Could you compute the surface normal? Sure, with voxels that's typically done by computing the gradient at a given point. [b]But that is not practical[/b]. Existing renderers may derive the surface normals that way, but then [b]they store them[/b].[/quote]
That depends entirely on how you are drawing your voxels. There are plenty of solutions, but they are all implementation-specific. You don't need a very precise normal with voxels because they shouldn't be large enough to need that accuracy. If you're using them for static geometry you can easily store 6 bits and calculate the normal cheaply at runtime. Even with a non-static SVO you can get around it fairly quickly depending on how your tree is set up in memory. In fact, the more detailed and small your voxels get, the less complicated your normals have to be. Ideally your voxels should only be the size of a pixel on the screen, where the difference between a normal pointing at 0,0,1 and one pointing at 0,1,1 would be practically invisible, especially after the anti-aliasing/blurring that every voxel engine I've seen does already.

You also talk like these aren't problems with textures and polys too. We already store tons of color data in games, and we store tons of geometry data. All of that becomes redundant when you use voxels. Because of the lack of geometry detail, we need to store a lot more color data than we would with highly detailed voxels.
[/quote]

That is a ridiculous statement; textures would need to store more color data than voxels? Voxels need to store way more color data. Textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning results with relatively little storage, and can be stretched over huge distances of terrain. There's no such thing for voxels; every single voxel of every single square meter of terrain needs a unique color.

[quote name='way2lazy2care' timestamp='1312904170' post='4846735']
[quote]
LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks...... I am so tired of arguing with you, you just spew nonsense.
[/quote]
It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.
[/quote]

I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting with voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best give different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).

I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.

Go out into the wild, hell, even the city, bring the brightest light source you could ever find and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights most certainly would never be able to consider radiance transfer, etc., in realtime.

[quote name='Syranide' timestamp='1312919708' post='4846849']
That is a ridiculous statement; textures would need to store more color data than voxels? Voxels need to store way more color data. Textures can be overlapped, tiled, and procedurally composited at runtime to create visually stunning results with relatively little storage, and can be stretched over huge distances of terrain. There's no such thing for voxels; every single voxel of every single square meter of terrain needs a unique color.
[/quote]
Go play with 3D noise functions. Terrain is a good example of something that can be procedurally generated with fractal noise. As the ray enters the terrain box (or starts inside), it performs tests against the higher octaves, which allows it to skip large areas of open terrain. Each of these octave points can be generated independently of one another. Not to mention you stop traversing when you have enough detail for the distance. That is, designed correctly, you would have infinite procedural detail even for the closest objects.

You can speed this up by caching the results in a tree so the data around the camera can be traversed quickly. Sadly, procedurally generating data as you traverse each octave is rather intensive. That doesn't mean you can't define, say, a basic meter-resolution mountain, then procedurally generate more detail as you get close to it and discard subtrees when you move away.

There's a trick often used in voxel formats of storing metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm, along with say texture data, to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:
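If you've never played with this, here's a bare-bones fractal value noise sketch in C++ (illustrative only; hash3, valueNoise, and fbm are made-up names): note how the octave count, and therefore the work, can scale with distance, which is the whole trick.

[code]
#include <cmath>
#include <cstdint>

// Deterministic hash -> [0,1], standing in for a proper gradient noise.
float hash3(int x, int y, int z)
{
    uint32_t h = uint32_t(x) * 374761393u + uint32_t(y) * 668265263u
               + uint32_t(z) * 2147483647u;
    h = (h ^ (h >> 13)) * 1274126177u;
    return float(h & 0xffffff) / float(0xffffff);
}

// Trilinearly interpolated 3D value noise.
float valueNoise(float x, float y, float z)
{
    int xi = int(std::floor(x)), yi = int(std::floor(y)), zi = int(std::floor(z));
    float fx = x - xi, fy = y - yi, fz = z - zi;
    float v = 0;
    for (int dz = 0; dz <= 1; ++dz)
        for (int dy = 0; dy <= 1; ++dy)
            for (int dx = 0; dx <= 1; ++dx)
                v += hash3(xi + dx, yi + dy, zi + dz)
                   * (dx ? fx : 1 - fx) * (dy ? fy : 1 - fy) * (dz ? fz : 1 - fz);
    return v;
}

// Fewer octaves for far-away samples: detail (and cost) scales with distance.
float fbm(float x, float y, float z, float distance)
{
    int octaves = distance < 10 ? 6 : distance < 100 ? 3 : 1;
    float sum = 0, amp = 0.5f, freq = 1;
    for (int i = 0; i < octaves; ++i) {
        sum += amp * valueNoise(x * freq, y * freq, z * freq);
        amp *= 0.5f; freq *= 2;
    }
    return sum;
}
[/code]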

[quote name='Sirisian' timestamp='1312924734' post='4846900']
There's a trick often used in voxel formats of storing metadata in higher nodes. Someone mentioned normals earlier. If you store normals/contour data at higher levels in the tree, you can feed that into a procedural algorithm, along with say texture data, to procedurally generate a surface with a certain texture. The lack of research into those areas doesn't mean it's not possible. :wink:
[/quote]

How could you possibly "store normals at higher levels in the tree"?? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals that are pointing 180 degrees from each other. How could that information be stored higher up in the tree?

You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.

Imagine a traditional mesh. Imagine how shitty it would look if we procedurally generated the normals at each vertex. We don't do that -- we either store the lighting information at each vertex, or we map a texture to it that is of finer scale than our vertices. We don't do it with meshes, and no one would do it with voxels.

What you're trying to get at is that it's certainly conceivable to procedurally generate lighting perturbations [b]at a finer scale than our voxels[/b], but that's not really relevant to what we're talking about. We're talking about whether you'd need to store lighting information at each voxel.

[quote name='A Brain in a Vat' timestamp='1312927774' post='4846922']
How could you possibly "store normals at higher levels in the tree"?? The real surface normal at any given surface voxel depends enormously on the positions of the surface voxels around it. Two voxels at the same SVO level might have normals that are pointing 180 degrees from each other. How could that information be stored higher up in the tree?
[/quote]
At what detail level are you talking about? I have a desk in front of me that has a bumpy grain texture. The normals at its surface don't differ by more than 90 degrees. In fact, the surfaces of most objects at 1 cm detail don't differ by that much. Normal maps exploit this at the triangle level. The same idea can be applied to voxels, with overrides at lower voxel levels for interesting features. Just looking around, my phone and my grainy desk all have smooth normals. At their "mip level" in a voxel tree they have very uniform normals. It's only when you look closer that you see the surface normals are "jittered smoothly". Procedurally generating those jitters isn't out of the question.

[quote name='A Brain in a Vat' timestamp='1312927774' post='4846922']
You're suggesting that we procedurally generate normals and map them to voxels? That will look like shit, and that's why no one has done research on it. The only two options that make sense are to 1) store the lighting information or 2) generate it by analyzing the neighboring voxel information.
[/quote]
No, I was referring to procedurally generating the detail past a certain level. Generating a cement texture (the feel, not the 2D color one), for instance, with normals isn't as difficult as it first sounds.

You don't need to analyze the neighboring voxel information if you input a normal. The normals of the generated sub-tree would use their parent normal to create a surface of voxels with a smooth change of the normal over the surface. [url="http://www.iquilezles.org/www/articles/terrainmarching/terrainmarching.htm"]Old articles that help paint a picture[/url].

It's hard to explain if you've never messed with noise functions and how they work, but extracting normal information and data is very easy. Caching it in a sub-tree is also something that would be interesting.
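Roughly what I mean by using the parent normal (a sketch with a made-up hash, not anything from a real engine): the generated child starts from the parent's stored normal and gets a small deterministic perturbation, so grain appears without ever analyzing neighbors.

[code]
#include <cmath>
#include <cstdint>

struct Vec3 { float x, y, z; };

// Deterministic jitter in [-1,1) from a child's id. (Toy hash.)
float jitter(uint32_t seed)
{
    seed = (seed ^ 61u) ^ (seed >> 16);
    seed = (seed * 9u) ^ (seed << 4);
    seed *= 0x27d4eb2du;
    return float(seed & 0xffff) / 32768.0f - 1.0f;
}

// Child normal = parent normal plus a small procedural perturbation,
// renormalized. `roughness` controls how grainy the surface looks.
Vec3 childNormal(Vec3 parent, uint32_t childId, float roughness)
{
    Vec3 n = { parent.x + roughness * jitter(childId * 3u + 0u),
               parent.y + roughness * jitter(childId * 3u + 1u),
               parent.z + roughness * jitter(childId * 3u + 2u) };
    float len = std::sqrt(n.x*n.x + n.y*n.y + n.z*n.z);
    if (len == 0) return parent;                   // degenerate case
    return { n.x / len, n.y / len, n.z / len };
}
[/code]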

I'm not sure why I'm defending voxels. :lol: Personally, without hardware support it's very hard to get the same performance as with triangles. It seems more interesting academically.

[quote name='Syranide' timestamp='1312919708' post='4846849']
[quote name='way2lazy2care' timestamp='1312904170' post='4846735']
[quote]
LOL. Okay, I tell you what. You go make a game in which all surfaces are monochromatic, and we'll see how good it looks...... I am so tired of arguing with you, you just spew nonsense.
[/quote]
It's totally true, dude. The biggest reason we have such fine detail in our textures in games is to simulate geometry that does not exist. If the geometry exists, we don't need complicated textures. You don't need a brick wall texture; you just need a brick color, a grout color, and to model the bricks. All of the voxels in a brick can use the same brick color. All of the voxels in the grout can use the same grout color. There's no reason to store the grout and brick color in every single voxel.
[/quote]

I would have to agree with the other dude.

I really don't see how you could possibly use monochromatic colors or somehow benefit from not baking lighting with voxels. You cannot represent textures as monochromatic colors and expect lighting to fill in the blanks; that is absurd in my opinion. A texture consists of different COLORS; your suggestion would at best give different SHADES of a single color. Meaning, it will always look like a single color with different shades. Also, you assume that we don't want to bake lighting into the voxels, which is probably a necessity right now and will be for a very long time; forget baking ambient occlusion too (which, if not for memory issues, could be really nice).[/quote]
Firstly, I was talking at the sub-meter level as we were talking about not being able to use a parent node's colors. There's no reason the majority of dirt nodes have to have a unique color. The majority of them are just brownish orange. You can still have children that are their own unique color, but most of them can just be the same orangish brown with the majority of the interest coming from shadow and light differences over the surface.

A Brain in a Vat said that the majority of voxels would need their own color data, and I just don't see that being the case. I went so far as to say that the majority of voxels in an SVO wouldn't need their own color, but could just inherit from their parents. I stand by that.

[quote]I'm pretty sure that the only reason UD even looks half-decent right now is because he's baking shadows and lighting into the voxels.[/quote]
He never mentions voxels in the video or in any interviews. I'm not sure why so many people jumped to voxels when the only technology he confirms he's using is point clouds.

[quote]Go out into the wild, hell, even the city, bring the brightest light source you could ever find and photograph a bunch of things. I'm pretty sure you could not find a single thing that would end up looking like a single flat color and not be plastic or painted... and even those will probably have a slight variation to them. Even more so, you'll find that all materials reflect differently and give off different colors depending on their surroundings (also, subsurface scattering)... you try and compress that efficiently into a voxel for rendering with dynamic lights.

It's ridiculous to suggest that we could recreate objects in nature with a single color and then let light do the work... especially when the lights most certainly would never be able to consider radiance transfer, etc., in realtime.
[/quote]
[img]http://s0.geograph.org.uk/photos/23/40/234056_08aab1f7.jpg[/img]

I'll use this picture as an example. Imagine that cliff is part of an SVO. Its root node might have the light sandy color near the top of the cliff. How many voxels in a model of this cliff would have exactly the same color? All of the voxels under that root with the same color could use the exact same color data stored in the root. Next, the more orangey parts of the cliff: of those, how many voxels do you think might be the same color? That color only needs to be stored in, what, 20 places and inherited by the children? The rocks on the beach hardly need any more color than a light sandiness, given the detail they have.

The cliffs in the background wouldn't get traversed all the way to the leaf nodes; they don't really need anything other than the root color.

Here's another example:
[img]http://3.bp.blogspot.com/_iCvhEGWAIFg/TL9vqLIj8RI/AAAAAAAAARc/bjXGtmU9sl4/s1600/800px-ScotiaBankSandstone.jpg[/img]

How many of the voxels in a model of this bank would just use the same salmon color? Sure there are places like what I am guessing is bird poo over the sign, but those are easily stored in voxels containing color data while all their salmon neighbors just have to sit there and exist.

Way2lazy2care, you are simply trolling now.

I went to [b]my bathroom[/b] this morning. Wooden furniture with a marble counter. Smooth as ice. Tiles: flowers and a noise-like pattern. Smooth as ice. The countertop: also marble. Not as smooth as ice, but the pattern has nothing to do with the surface topography.

[i]Maybe you are color-blind[/i], but the cliff you've shown has many more colors than one or two. I see shades of yellowish-brown, in places more yellow, in places more red, even some green too. With transitions. How would you deal with those transitions? Lots of color data?

Even the building example has shades of the brick color, which has nothing to do with lighting.

Or are you suggesting that we should really go back to the atomic scale with accurate physical simulation of the atoms? I remember a thread about that...

[quote name='way2lazy2care' timestamp='1312948019' post='4847015']
Firstly, I was talking at the sub-meter level as we were talking about not being able to use a parent node's colors. There's no reason the majority of dirt nodes have to have a unique color. The majority of them are just brownish orange. You can still have children that are their own unique color, but most of them can just be the same orangish brown with the majority of the interest coming from shadow and light differences over the surface.

...

How many of the voxels in a model of this bank would just use the same salmon color? Sure there are places like what I am guessing is bird poo over the sign, but those are easily stored in voxels containing color data while all their salmon neighbors just have to sit there and exist.
[/quote]

It seems like you don't really appreciate the difference between shades of a single color and small variations of a single color. To demonstrate, I took your mountain and approximated it with a single color.

[url="http://imageshack.us/photo/my-images/52/mountxr.jpg/"]http://imageshack.us/photo/my-images/52/mountxr.jpg/[/url]

The first image is the reference; the second is the same but with a single color applied... however, I would be seriously impressed if you managed to get shadows that look anywhere near as good as that, in realtime, in UD.
Does it look like a mountain? Sure it does. Does it look like a good mountain? No, it does not; the lack of nuance and variation makes it look dull. And you are forgetting that while things may look rather even in color at a distance, up close there is a lot more, and more important, color variation going on. Additionally, if you do not bake lighting into the voxels, then you need to store MORE DATA, unless you want everything shaded using the same method and the same parameters, which rarely looks very neat when trying to render realistic scenes.

Quite simply, I don't buy your argument unless you show me that it actually works.

[quote name='Syranide' timestamp='1312969740' post='4847097']
cut
[/quote]

Did you read what I said? I didn't say it was all one color. I said there was one color that could easily be reused throughout the entire SVO. That does not mean that every voxel is the same color, just that the majority of them could be. I even went into detail specifically calling out areas that are different colors and how you'd handle those as well. It doesn't even mean a voxel is the same color as its neighbors. All it means is that many of the voxels can inherit their color from their parent.

I mean, it's pretty obvious I saw the colors, as I address them specifically in my post. Is it that hard to actually read what I said before going, "OH PICTURES! I KNOW EVERYTHING HE SAID NOW!"?

edit: This is really just like applying a sort of volumetric RLE using the existing SVO.
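To spell that edit out, a toy sketch (hypothetical node layout, C++17; Node and effectiveColor are made-up names) of what inheriting color from a parent looks like: a node stores a color only when it differs from its parent, and everything else resolves upward.

[code]
#include <array>
#include <cstdint>
#include <memory>
#include <optional>

struct Node
{
    std::optional<uint32_t> color;              // set only on overrides
    std::array<std::unique_ptr<Node>, 8> child; // null = empty octant
};

// Descend from the root along `path` (one octant index per level);
// the last override seen on the way down is the effective color.
uint32_t effectiveColor(const Node& root, const int* path, int depth)
{
    uint32_t c = root.color.value_or(0);        // root should define a color
    const Node* n = &root;
    for (int i = 0; i < depth && n; ++i) {
        n = n->child[path[i]].get();
        if (n && n->color) c = *n->color;       // override shadows the parent
    }
    return c;
}
[/code]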

*facepalm*

Anyway, moving on. Guys, I found some tech that's related to what we're talking about. Don't know if anyone's seen it, [url="http://www.youtube.com/watch?v=00gAbgBu8R4"]check it out[/url].

[quote name='A Brain in a Vat' timestamp='1312985822' post='4847166']
*facepalm*

Anyway, moving on. Guys, I found some tech that's related to what we're talking about. Don't know if anyone's seen it, [url="http://www.youtube.com/watch?v=00gAbgBu8R4"]check it out[/url].
[/quote]

That's the same link as in the OP.

[quote name='way2lazy2care' timestamp='1312841948' post='4846406']
Most surfaces are monochromatic. At the level of detail we are talking about, almost everything is entirely monochromatic. You don't need to change the color of the ground that severely when you can add visual interest by just adding a pothole or tire tracks to the actual geometry. All you have to do is walk around. All the walls in my apartment are exactly the same color but get all their different values from the light they take in. Same with most of the chairs and other fabric. Even the wood doors have relatively huge bands of monochrome when you're comparing them to the voxel. The coat hangers are all the same matte metal. The knives and kitchen utensils are 3/4 metal and the other quarter is all black. The lamp behind me is monochrome matte metal, even though it has details in the metal, with a monochrome lamp shade. The TV is entirely the same glossy black. The speakers around the room are all solid matte black. The vents are all single-color matte white.[/quote]
I disagree. There is a reason why Gouraud and Phong shading models look horribly fake. If you do spectral analysis of materials, you will find that various parts of materials exhibit different spectral reflectivity. Those differences might be subtle in some cases, but they can be quite striking as far as realism is concerned. Not only that, but there is anisotropy as well. Try rendering velvet with your proposal. Brushed metal. Diffraction grating. Subsurface scattering. Even the grain of the wood on your door will look different at various viewing angles. To achieve those effects, you will need a hell of a lot more information than just a solid colour.

[quote name='Tachikoma' timestamp='1312995418' post='4847224']
I disagree. There is a reason why Gouraud and Phong shading models look horribly fake. If you do spectral analysis of materials, you will find that various parts of materials exhibit different spectral reflectivity. Those differences might be subtle in some cases, but they can be quite striking as far as realism is concerned. Not only that, but there is anisotropy as well. Try rendering velvet with your proposal. Brushed metal. Diffraction grating. Subsurface scattering. Even the grain of the wood on your door will look different at various viewing angles. To achieve those effects, you will need a hell of a lot more information than just a solid colour.
[/quote]
You realize rendering what you described through a volumetric object is much easier than with triangles, right? The concept is pretty simple. way2lazy2care already explained the parent-inheritance system, and it applies to all metadata in the data structure. If you define translucency on a skin voxel, for instance, the parent can define that and then be overridden at lower levels to turn it on or tweak its parameters. Brushed metal and other such surfaces inherit their specular and diffuse attributes in the same way. The detail and subtle rendering effects can be achieved by recursively linking children nodes back to higher parents (this allows a grainy surface to appear grainy even as you zoom in), not to mention noise-perturbation techniques and the thousands of possible procedural routes to generate detail where none is defined. (I wish more people studied noise functions.)

I agree that you do need a lot of data. It's surprisingly similar to the amount of data needed for a regular triangle-based object, in fact. Most people overlook that, though, and assume that a different data encoding would need to store more. :unsure: In any case, there's still a lot of research to be done on the subject, and a lot of bias against non-triangle-based graphics, since we've been using triangles for years.

The chances that Euclideon is not a scam are very, very, very slim (I just don't like being 100% absolute). I find the tone of the narration in the videos and the claims totally ridiculous. Other projects, such as the [url="http://www.atomontage.com/"]Atomontage[/url] engine, are actually serious and realistic.

[quote][i]You say the technology has unlimited power?[/i]
Umm, yes, yes we do.
We have a search algorithm [which] grabs one atom for every pixel on the screen. So if you do it that way, you end up being able to have unlimited geometry.[/quote]When he says "search algorithm", he's obviously referring to their spatial acceleration structure, such as an SVO.
What he's implying here is that their data structure has a computational complexity of [font="Courier New"]O(P)[/font], where [font="Courier New"]P[/font] is the number of pixels rendered.
So, if you're rendering a single pixel, that's a structure with complexity of [font="Courier New"]O(1)[/font]. Not [font="Courier New"]O(N)[/font] ([i]where [font="Courier New"]N[/font] is the amount of geometry[/i]), or [font="Courier New"]O(K.N)[/font], or [font="Courier New"]O(log(N))[/font], or [font="Courier New"]O(sqrt(N))[/font], no... [font="Courier New"]O(1)[/font].

They've got an acceleration data-structure where the search time is unrelated to the amount of data being searched, [i]whatsoever[/i]. A search within 10KB of data is the exact same complexity as a search within 10PB of data.
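For a sense of scale, assuming a plain octree (the charitable interpretation of his "search algorithm"), the number of levels a single query touches grows with the data; a back-of-the-envelope sketch:

[code]
#include <cmath>
#include <cstdio>

// Back-of-the-envelope: one point query into an octree descends roughly
// log8(N) levels, so search cost grows with the amount of data.
int main()
{
    for (double n : {1e4, 1e8, 1e12, 1e16}) {   // leaf voxel counts
        double depth = std::log(n) / std::log(8.0);
        std::printf("N = %.0e voxels -> ~%.0f levels per query\n", n, depth);
    }
    return 0;
}
[/code]

Shallow, sure, but not constant -- and that's before memory even enters the picture.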

They've apparently proved that there is such a thing as a free lunch, which isn't just a revolution for computer graphics, but for computer science in general. [b]Google should be buying his company and patenting this discovery. Seriously, it's that much of a big deal.[/b]

[quote][i]People were claiming that you must have some sort of memory limitations?[/i]
Umm. No. The simple answer is: no. Our memory compaction is going remarkably well.[/quote]So not only do they have [font="Courier New"]O(1)[/font] search on unlimited data, they've also got infinite compression ratios on unlimited data.
There are no memory limitations at all; they'll just compress infinity into finite space.

And he wonders why people are having a bad reaction to his presentation? He wonders why he's being called a liar when he's saying things that can't be true?

[quote]The video JPEG'ed the poor thing.
We know what LOD'ing is; level of distance.
Most people don't really know what tesselation is ... it means the polygons .... have information about how high they are, [which] is used to break them up into little polygons and create bumps. Tesselation bumpy map.
...if you want to put it on a Game Boy DS or something like that, you've got to rebuild all the graphics.
Hello my name is John, I do *raises eyebrows* data compaction. I take *raises eyebrows* atoms, and I smash them *raises eyebrows* with a sledgehammer, until they submit to me.[/quote]W.T.F.


He goes on a lot about how great it is to import graphics from the real world -- but this is a red herring. He mentions their elephant a lot: how it was scanned into a 500k-poly model, which was then converted to 'atoms' for rendering... which means the scanning stuff is in no way related to their rendering tech; it's just as applicable to polygon renderers.

He also completely misinterprets Carmack and Notch's objections, and creates a false dichotomy between their statements. That was just painful to watch.
