# [Theory] Unraveling the Unlimited Detail plausibility

## Recommended Posts

Bombshell93    245
Now, I'm not too experienced with graphics programming, so I probably won't be as innovative in thinking of how this could be done, but I'll give it my best.
I'm assuming most of you have seen the Unlimited Detail video? If not, here is a link.
For obvious reasons, many people remain skeptical. I, for one, won't write it off as impossible like most claim until I'm sure there isn't a way to do it.
Hopefully a lot of people here are open thinkers, so give it a shot: think about how efficiently you could get point clouds to work in theory.

At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
And to throw a hole in Notch's belief: only surfaces need to be stored, so using the distance between 2/3/4 points and a value representing a shape (or a series of values representing the form of the surface), a large number of shapes could be interpreted from a small number of bits (think of compressing an image by using a colour index). Mixing this with an octree/quadtree, shapes being held within shapes without nesting them too deep, could save massive amounts of space.
Here's a crudely drawn example.
[img]http://img200.imageshack.us/img200/5595/diagz.png[/img]

Here, 2 bits identify corners (possibly 3, with one being a bool for "this is a styled corner"), and then obviously the identity of the shape in the index.
Not the best of examples, but it's basically what I'm getting at.

If anyone more experienced is willing to throw in an argument on this, please go ahead.
I'll be researching more into this so that next time I post I don't seem as dumb.
Thanks for reading and/or posting,
Bombshell

##### Share on other sites
Hodgman    51338
Here are some starting points for your research:
[url="http://www.nvidia.com/object/nvidia_research_pub_018.html"]Efficient SVOs[/url]

[url="http://artis.imag.fr/Publications/2009/CNLE09/"]GigaVoxels[/url]

[url="http://bautembach.de/wordpress/?page_id=7"]Animated SVOs[/url]

[url="http://graphics.stanford.edu/software/qsplat/"]Point rendering[/url]

N.B. these types of renderers actually used to be quite popular in the [url="http://upload.wikimedia.org/wikipedia/en/2/22/Df2_terrain.jpg"]'90s[/url].

##### Share on other sites
Bombshell93    245
Skimming each of these papers (and correct me if I'm wrong), I keep seeing a problem with how data is stored:
storing the surface data, good; storing every voxel of the surface, bad.
Think of it like a triangle: only key points are needed, circling back to computational estimation to fill in the gaps.
However, if true voxels are key, then why store the full surface? If the data were treated as "surface point", and the surface interpreted as present until it is defined as "surface not here", that knocks out everything in between flat surfaces, leaving room for an int or pointer to a computational estimation of the surface's shape, or even texture coordinates, allowing voxels to use textures rather than per-voxel colour/normal/other surface data.
Thinking this over, it's pushing more towards triangles, but as with other aspects of graphics programming (deferred vs. forward), there has to be a best-of-both-worlds middle ground.
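The "surface present until told otherwise" idea above is essentially run-length encoding. A minimal 1D sketch of that (the function names and list-of-runs layout are my own illustration, not anything from UD):

```python
def rle_encode(flags):
    """Run-length encode a row of surface flags: store each value once
    with a count, instead of one entry per cell."""
    runs = []
    for flag in flags:
        if runs and runs[-1][0] == flag:
            runs[-1][1] += 1
        else:
            runs.append([flag, 1])
    return runs

def rle_decode(runs):
    """Expand the runs back into the original row."""
    out = []
    for flag, count in runs:
        out.extend([flag] * count)
    return out
```

Flat stretches of surface collapse to a single run, which is exactly the "knocking off everything in between flat surfaces" savings described above.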

I'm sure that while reading through those links someone's going to post bashing my idea in with a sledgehammer, but if I'm going to think of anything good I can't rule out the crazy or the stupid.
Thanks for the links, by the way; they're a great help. I'd expand on a way I'm thinking animation could become simple, but I've got to think it through while reading the animated SVO paper.

##### Share on other sites
Syranide    375
[quote name='bombshell93' timestamp='1312510786' post='4844831']
Now, I'm not too experienced with graphics programming, so I probably won't be as innovative in thinking of how this could be done, but I'll give it my best.
I'm assuming most of you have seen the Unlimited Detail video? If not, here is a link.
For obvious reasons, many people remain skeptical. I, for one, won't write it off as impossible like most claim until I'm sure there isn't a way to do it.
Hopefully a lot of people here are open thinkers, so give it a shot: think about how efficiently you could get point clouds to work in theory.

At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
And to throw a hole in Notch's belief: only surfaces need to be stored, so using the distance between 2/3/4 points and a value representing a shape (or a series of values representing the form of the surface), a large number of shapes could be interpreted from a small number of bits (think of compressing an image by using a colour index). Mixing this with an octree/quadtree, shapes being held within shapes without nesting them too deep, could save massive amounts of space.
Here's a crudely drawn example.
[img]http://img200.imageshack.us/img200/5595/diagz.png[/img]

Here, 2 bits identify corners (possibly 3, with one being a bool for "this is a styled corner"), and then obviously the identity of the shape in the index.
Not the best of examples, but it's basically what I'm getting at.

If anyone more experienced is willing to throw in an argument on this, please go ahead.
I'll be researching more into this so that next time I post I don't seem as dumb.
Thanks for reading and/or posting,
Bombshell
[/quote]

Unless I misunderstand something, I really don't see the point of this. As far as I know, UD works on the idea of voxels, with each voxel having a colour (and that's where the texture comes from)... so by necessity the objects need to consist of very, very small voxels, or the quality of both the "models and textures" would degrade significantly. You should never be able to see the individual voxels, so their shape should be irrelevant, and if you make them larger the texture quality will suffer... and although your idea seems sound at first, I'm not sure how you would actually make the shapes blend well into their neighbours without even more bits.

And at that level it doesn't really matter if they are square or round, or interpolated somehow... the idea is for the voxels to never be significantly larger than a single pixel, or the texture quality goes out the window immediately. So unless I missed something, I really don't see how this would actually change anything. I would even guess that texturing and materials will always be a weakness of UD; perhaps it can be overcome by the overall quality of the models, but it seems to me that unless you come up with a really good and fast system for sampling, the "texture" will always have a washed-out or aliased look to it.

##### Share on other sites
Bombshell93    245
My idea is not at the voxel level. The crudely drawn example is meant to depict a model, so the computational estimation is meant to calculate the positions of voxels in between key points. Also, on the subject of textures: looking at UD, the colour detail isn't very... well, detailed. It does seem as if most of the power goes into form, and the colour is rather low resolution in comparison. This is a shame, but if I can think of a way to further reduce memory consumption without creating too much overhead, using textures as with triangles could become a viable replacement. As for sampling, I've got ideas already; I'll need to try them in practice.
Thinking on the subject of saving memory: with hierarchical point clouds, inside each level of detail the full vector position is not needed, simply an identifier of position relative to the parent level of detail. Doing this within an octree takes 3 bits per chunk per level of detail (12 bits for a single chunk including its lower level), with the model itself holding the position in world space.
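The 3-bits-per-level relative addressing described above can be sketched like this (a toy illustration of the encoding, not UD's actual format):

```python
def pack_path(path):
    """Pack a chain of octree child indices (each 0-7, i.e. 3 bits:
    one bit per axis) into a single integer, most significant level first."""
    packed = 0
    for child in path:
        assert 0 <= child <= 7
        packed = (packed << 3) | child
    return packed

def unpack_path(packed, depth):
    """Recover the child indices, given the tree depth."""
    return [(packed >> (3 * level)) & 7 for level in reversed(range(depth))]
```

A full world-space vector per point would be 96 bits as three floats; a path of 3-bit child indices relative to the model's one stored origin is what makes the "identifier of position relative to the parent" idea pay off.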

I'm sure if I keep at this and with the help of these forums we can think of a system that's powerful enough.
Bombshell

##### Share on other sites
Syranide    375
To my knowledge, the key point of SVOs is the simplicity of the voxels, and that's what makes them relatively fast, just like triangles.

However, the more complexity you add, the closer you get to ray tracing; things get hairy when you have to start evaluating shapes. Without much to back it up, I feel confident that much of the speed in UD comes from the overly apparent grid pattern; that is, there is no overlap and no angles, which should make it significantly faster to query (but even more useless in practice).

It would surprise me if you couldn't use an SVO as some kind of early "hit test" to speed up realtime ray tracing, given a reasonably small and static scene... the key being static. My point being: the further you go from the simplicity of the voxel, the further you go into ray-tracing land.

##### Share on other sites
Bombshell93    245
I disagree. Straying from the simplicity without going into ray-tracing complexity, rasterization can occur by transforming a certain detail-level chunk by the world, view, and projection matrices, then drawing its contents along its transformed axes (which was actually my idea for animation of a voxel octree); set up properly, culling can occur appropriate to the viewed angle. I mention a certain detail-level chunk because the calculation of a voxel's offset/size for the camera won't affect each individual voxel differently, so computing time can be saved by calculating, at the lower detail levels, to what degree the octree needs to divide before voxels reach pixel size. This could be done at lower detail levels than necessary for lower-quality but faster rendering, and vice versa with higher quality levels.
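The "divide until voxels reach pixel size" test can be estimated per chunk with a pinhole-camera model (the function name, the projection model, and all parameters here are my own simplification, not anyone's actual renderer):

```python
import math

def lod_depth(root_voxel_size, distance, fov_y, screen_height, max_depth=32):
    """Return how many octree subdivisions are needed before a voxel at
    `distance` projects to one pixel or less on screen."""
    # Pixels covered by one world unit at this distance (pinhole model).
    pixels_per_unit = screen_height / (2.0 * distance * math.tan(fov_y / 2.0))
    depth, size = 0, root_voxel_size
    while size * pixels_per_unit > 1.0 and depth < max_depth:
        size /= 2.0  # each octree level halves the voxel size
        depth += 1
    return depth
```

Because the answer depends only on the chunk's distance, it is computed once per chunk rather than per voxel, which is the saving described above.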

##### Share on other sites
SuperVGA    1132
[quote name='bombshell93']
At the moment the best I can think of is computational estimation as the memory consumption seems far too great.
and to throw a hole into notches belief only surfaces need to be stored so using the distance between 2/3/4 points and a value representing a shape or a series of values representing the form of the surface,
[/quote]
That sounds like you're thinking of marching cubes. It takes 8 bits (one for the filled/empty value of each cell corner), and that maps directly into a table of polygons; it's very fast.
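The 8-bit corner index described above can be sketched as follows (only the two trivial entries of the lookup table are shown; the real marching cubes table has 256 precomputed entries):

```python
def corner_mask(corners):
    """Pack the 8 filled/empty corner flags of one cell into a byte;
    bit i corresponds to corner i."""
    mask = 0
    for i, filled in enumerate(corners):
        if filled:
            mask |= 1 << i
    return mask

# The mask indexes straight into a precomputed table of triangle lists.
# Only the two trivial cases are filled in here as an illustration.
TRIANGLE_TABLE = {
    0b00000000: [],  # all corners empty: no surface in this cell
    0b11111111: [],  # all corners solid: the surface doesn't cross this cell
}
```

The speed comes from the lookup: no per-cell shape evaluation, just a mask computation and a table fetch.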

[quote name='bombshell93' timestamp='1312539158' post='4844936']
my idea is not at voxel level. the crudely drawn example is meant to depict that of a model, so the computational expectation is meant to calculate the position of voxels in between key points.
[/quote]
Why would you calculate the position of any voxel? Voxels are regular-size cells with only a value stored, and solid leaves of a voxel tree typically don't have their position stored, since it is already implicit in their place in the tree or grid.
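The point that positions are implicit can be made concrete: given only the chain of child indices from the root, a leaf's origin falls out for free. A sketch, assuming the common convention that bits 0/1/2 of each child index select the upper x/y/z half:

```python
def voxel_origin(path, root_size=1.0):
    """Derive a voxel's origin and size purely from its octree path;
    no coordinates are stored anywhere."""
    x = y = z = 0.0
    size = root_size
    for child in path:
        size /= 2.0
        if child & 1: x += size  # bit 0: upper half in x
        if child & 2: y += size  # bit 1: upper half in y
        if child & 4: z += size  # bit 2: upper half in z
    return (x, y, z), size
```

So the only per-voxel storage is the tree structure itself; position is a by-product of traversal.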

[quote name='bombshell93' timestamp='1312539158' post='4844936']
Also on the subject of textures looking at UD the colour detail isn't very... well. detailed. it does seem as if most of the power goes into form and the colour is rather low resolution in comparison.[/quote]
Heh, we can agree that the colours aren't very varied. Maybe they're looked up in a palette of some kind.

[quote name='bombshell93' timestamp='1312539158' post='4844936']
This is a shame but if I can think of a way to further reduce memory consumption without creating too much overhead using textures as with triangles could become a viable replacement. As for sampling I've got ideas already, I'll need to try them in practice.
[/quote]
I don't believe that (textured) polygons are "ultimately" the solution. My GPU raycaster generally performs better than my voxel-polygon hybrid, but I'd love to see your results. I think Manic Digger etc. live on acceptable framerates because they generally render at a fairly low resolution.

[quote name='bombshell93' timestamp='1312539158' post='4844936']
thinking on the subject of saving memory if we think of hierarchical point clouds inside each level of detail the vector position is not needed, simply an identifier of position relative to the parenting level of detail, doing this within an octree 3-bits per chunk of level of detail / 12-bits in a single chunk including its lower level and the model itself holding the position in world space.
[/quote]
How about using a kd-tree and sparing the positions/offsets? Or you could use wave surfing like Ken Silverman does. That is, however, very difficult to implement in a regular pixel shader; it was for me at least, when I gave it a try. But perhaps it's doable in CUDA? I never got to mess around with that...

I'm sure Ken already found a system powerful enough, and aside from the fact that I have yet to match those standards, UD doesn't look that impressive. And they're lacking trustworthiness somehow; it's like they're selling something but have no documented product.
They're not publishing their "findings" for the academic value, that's for sure.

##### Share on other sites
Ezbez    1164
I think you're missing some of the point of this argument. I don't think many people disbelieve that the videos they show are real. The problem is with the description of the video: vague claims of "unlimited detail" are made without any explanation of what they really mean by that. The technique they use is well known and existing, yet they act as if it's never-before-seen. The advantages and downsides of this technique are also well known, yet they only ever bring up the advantages. They fail to address any of the concerns that people have (sorry, but "don't worry, angry forum posters, we have animation in the works" doesn't count for much). These are the real concerns, not whether they have been totally faking everything.

Notch's point about memory is to show that, sure, they can have this level of detail, but if they do then they need to repeat things. A lot. This has obvious downsides for games; just look how repetitive their world looks.

##### Share on other sites
Bombshell93    245
@Ezbez: yeah, I realized this not long after posting the topic, but there's no way to change the topic name. Now I'm considering it more a way to try and get past these problems; if we manage to get something better than UD in terms of practical use, it could be a huge jump in game graphics. I know such an assumption doesn't come easy, which is why I'm trying to do my research, to understand the concepts and why some of my ideas may not work.
I'm also trying to build this so I can see how ideas work out in practice, and I'm currently looking up some techniques I may have overlooked so I can get the best performance out of it. I'm definitely not the best programmer to give it a try, but there's no sense in giving up without trying.

@SuperVGA: well, I'm currently looking through some comparisons of CUDA, DirectCompute, and OpenCL to see which would give me the freedom/power I'd need to render voxels / point clouds (whichever I settle on; I'm still partial to both) hardware-accelerated without changing to tris. My current idea is fairly straightforward: a deferred-rendering kind of pipeline, rendering the geometry as efficiently as possible (hopefully hardware-accelerated) into a G-buffer, then leaving the rest of the effects etc. in the hands of shaders on a full-screen quad. This could probably make it mixable with triangle-based geometry by using the depth to decide which of the renders (triangle and voxel / point cloud) is in front. Though "mixable" in this case means "if you want double the memory consumption", so I may try to get around that later.
Marching cubes I've heard of frequently; call me a fool, but I've never looked into it. I'll start reading up soon though.
Calculating the position of voxels: in that case I was taking into account using point clouds (my concepts seem to mix both), but in the voxel case it would be calculating what's there and what's not, which could help eliminate the need to label higher detail levels with "is this solid space" bools, at least to a degree.
Texture-wise, I'm thinking I could optimize by assigning blocks of UV to the lower-detail chunks; hard to work with raw, but a simple system for moving the image around and back would make it easily artist-friendly.

##### Share on other sites
Syranide    375
[quote name='bombshell93' timestamp='1312557680' post='4845047']
@Ezbez: yeah, I realized this not long after posting the topic, but there's no way to change the topic name. Now I'm considering it more a way to try and get past these problems; if we manage to get something better than UD in terms of practical use, it could be a huge jump in game graphics. I know such an assumption doesn't come easy, which is why I'm trying to do my research, to understand the concepts and why some of my ideas may not work.
I'm also trying to build this so I can see how ideas work out in practice, and I'm currently looking up some techniques I may have overlooked so I can get the best performance out of it. I'm definitely not the best programmer to give it a try, but there's no sense in giving up without trying.
[/quote]

Not to be a downer, but I really have my doubts about anyone ever finding a good point/voxel [u]replacement[/u] for shapes/polygons/triangles. There very likely could be good alternatives for special circumstances (say, rendering terrain, clouds, hair, etc.), but then you end up with two very different code paths where the distribution between the two can vary wildly; to me that sounds like a recipe for performance-balancing nightmares in many cases... which very much exists with shaders too, but everything sharing the same code path makes it a lot more predictable. Also, let's not forget that while combining SVOs and triangles may sound like a good idea, making them artistically "compatible" can be an equally huge problem.

Polygons are awesome because of their simplicity, yet they are extremely flexible and fast... they are fast enough to be rendered in massive quantities, shaders make them flexible enough that you can render pretty much any effect you like, and their simplicity means it all lends itself to being cheaply animated in all kinds of complex ways. Now add tessellation, and what really stands between us and the perfect end result is project deadlines, experienced developers/artists, really good tools, and hardware performance... triangles and shaders allow for pretty much everything you want; the major issue today is the tools, it would seem. So to me it seems that unless another technique manages to bring flexibility, performance, and quality up to that of our triangle buddies AND provide better tools, I don't see how it would ever catch on. UD only seems to deliver on simpler tools at this point (and arguably better quality, or rather, better details).

You mention that we could see a "huge jump in game graphics"; I really don't see how we could. UD performance is crap right now, as everyone knows... add to that that it scales linearly with the number of pixels, which makes it a very bad idea to combine with triangles unless the performance shoots through the roof (and let's not forget that it is currently CPU-only, so forget about any gameplay). So even if you somehow solve the massive memory issue, then performance must be solved, shading must be solved, and flexibility would need to be addressed too. And still, I don't know what it would actually offer over triangles other than better tools. I've seen plenty of ridiculously cool and detailed demos running at 60 FPS on current high-end hardware (which will be low-end hardware in a year or two), all of which currently far surpass the overall quality of UD.

Again, not trying to be a downer, but at the moment I personally don't see how this could ever work out to become a replacement or a commonly used technique. And let's not forget that triangles actually scale very well too if given enough attention; as far as I can tell, an SVO really has no way of doing that other than lowering the resolution.

##### Share on other sites
Bombshell93    245
Well, from what I'm thinking of, I may be skewing away from conventional methods of SVO; it may crash and burn, it may be great.
I don't see you as "a downer"; any input is good input (so long as it's not something silly). For now I'm just going to give it a go and see what comes out of it; with me only just starting more complex levels of programming, regardless of whether it works or not, it'll be a good theory and coding exercise. The "huge jump in game graphics" bit was in context: IF somehow we overcome the problems of UD, it would more than likely be a huge jump. It's a small knack I have for impossible optimism; I have a programming friend who keeps telling me to stop it, to which I always reply, "if I want to do something great, I've got to aim high".

##### Share on other sites
Antheus    2409
How do you fit 4512 elephants in a refrigerator? Maybe we can find a way to grind them up...

No matter the technology, no matter the magic, we have hard numbers:
- 1 GB memory on GPU
- ~5 GB/sec RAM bandwidth, ~8 GB total
- 25-160 GB disk space (SSDs or Blu-rays)
- 128 kb/sec network, 250 ms latency (reliable broadband is a considerable limitation)

In information theory, there is a very important baseline: data != information.

Various laws and formulas define terms like entropy. These are universal and go beyond any particular implementation.

Whether a model is represented with polygons or voxels is irrelevant; those are data. What matters is the information they carry. An HD cube rendered stereoscopically carries exactly two pieces of information: one stating it's a cube, and the length of the cube's side. It can be represented with words, coordinates, voxels, or anything else; the entropy does not change.

Whether a stone is modeled using voxels or polygons, equal detail will always require the same amount of information. There is nothing preventing polygons from being encoded in a more optimal manner. Add animation, and we get another dimension of data.
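The data-vs-information point is easy to demonstrate: a solid cube voxelized into bytes is almost pure redundancy, and a general-purpose compressor squeezes it back toward its true information content (a toy demo of the principle, not a claim about any particular engine):

```python
import zlib

# 64^3 voxels of a solid cube, one byte each: 262,144 bytes of *data*,
# but only a couple of facts' worth of *information* ("a cube, side 64").
voxels = bytes([1]) * (64 * 64 * 64)
compressed = zlib.compress(voxels, 9)
print(len(voxels), len(compressed))  # compresses to a few hundred bytes
```

The voxel grid and the sentence "a cube with side 64" are different encodings of the same information; the entropy is what it is, regardless of representation.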

By the time hardware advances to the point where we can render enough voxels - polygons will be capable of doing not only the same, but will have 20-30 years of hands-on experience with them.

Building stuff out of voxels isn't inherently better, since it doesn't solve any of the problems advertised here; it's just a different encoding of the same information. Where such a representation would be useful (and is used) is modeling materials: sand, water, fire. Or, in even more detail, heat and pressure. Instead of having sound played as a .wav and muted by distance, how about modeling true pressure propagation?

That is where "voxels" actually solve a problem (i.e. discrete sampling of space). And even there they aren't necessarily the best, since many such solutions use different representations based on grids or graphs, simply because they are more efficient.

Voxels fall into a similar area as genetic programming: it works, it's proven, it is superior. But for every problem, different, slightly less general approaches provide considerably more practical solutions today. One could say that the voxel approach doesn't "scale" with the domain.

##### Share on other sites
Syranide    375
[quote name='bombshell93' timestamp='1312572071' post='4845163']
Well, from what I'm thinking of, I may be skewing away from conventional methods of SVO; it may crash and burn, it may be great.
I don't see you as "a downer"; any input is good input (so long as it's not something silly). For now I'm just going to give it a go and see what comes out of it; with me only just starting more complex levels of programming, regardless of whether it works or not, it'll be a good theory and coding exercise. The "huge jump in game graphics" bit was in context: IF somehow we overcome the problems of UD, it would more than likely be a huge jump. It's a small knack I have for impossible optimism; I have a programming friend who keeps telling me to stop it, to which I always reply, "if I want to do something great, I've got to aim high".
[/quote]

Yeah, perhaps I wasn't really clear when I quoted you on "huge jump in game graphics"... my point was that I don't really agree with it. Even if UD were everything it promised to be in the video, I still wouldn't consider it a huge jump in graphics, or even a jump in graphics at all, considering that we can already render some pretty impressive terrains today (personally I don't even think UD looks all that impressive as-is; as a developer I'm most impressed by the details, because that level of detail is currently "unheard of"). The issue is again that of having to balance between terrain, players, objects, and effects, as well as keeping within respectable memory and storage limits. I'm pretty sure you can do much more artistically impressive graphics today than anything shown in the UD video, as it completely lacks quality in all areas other than being very detailed, even if the performance had been acceptable.

To put what I'm saying in perspective: look at the hugely impressive demos 3DMark was able to render fluidly on the hardware of those days, and the hugely impressive demos that NVIDIA has put out over the years... and you realize that there is/was an enormous gap between what is technically possible and what is feasible and justifiable in a game at the time... and what is possible simply because demos solve a much smaller problem and often hide the real cost (an expensive facial shader is offset by rendering a smaller face and dirt-cheap backgrounds). There's also the hugely important fact that making something visually impressive isn't perhaps the real issue; it's making it scale to hardware other than your own that makes the quality take a dive (again, look at what has been made with the "slow" PS3 and X360).

And then you consider the reasonably compact representation of today's games, with triangles, heightmaps, reused textures, etc... and still they consume huge amounts of storage. I don't see how one could possibly choose a less compact representation and achieve better quality without overshooting performance/memory/storage constraints. If you set aside 10x the storage for "your own technique", then consider what could be done if today's techniques had 10x the storage to work with too, and so on.

What I find fascinating, though, is the approach John Carmack took with RAGE, and how, in my mind, the tools are the issue, not the final representation. Freeing the artists of the burden of "performance troubleshooting" when constructing the world, they sacrifice immediate quality, which the artists make up for because they have much better tools and thus devote more time to being artists and less time to "performance troubleshooting".

##### Share on other sites
rouncer    294
Really, if you wanted unlimited detail, you could render it; animation is iffy, but storing a decent-sized world (like for an MMORPG, for example) would be a TB, or 200 gigs if you didn't mind it being a little smaller.
What would work, though, is if you stopped trying to beat polygons and didn't mind the cubes showing; it would work now, with a lot less to store, and editing would go much faster too.

Also, another idea: avoid the FPS view, because it is too close to the world. Step back a bit and do a top view; then you'd need to store less, and you wouldn't need as much detail because you can't even perceive it through the resolution. That would also make the general edit plot size smaller, and making the world would go faster too.

But it would actually still make a "hi-res" game, just stepped back a little bit.

Remember, Bombshell: even if your idea worked, you'd still have to store all the colours at least, and that would still be a massive amount to store.
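The terabyte figure is easy to sanity-check with rough arithmetic (every number here is an illustrative assumption: world size, resolution, surface fraction, bytes per voxel):

```python
# Back-of-envelope: a 1 km x 1 km x 64 m slab at 1 cm resolution,
# one byte of colour per voxel, assuming ~1% of cells lie on a surface.
cells = 100_000 * 100_000 * 6_400   # total voxel cells in the slab
surface_voxels = cells * 0.01       # only surface voxels are stored
colour_bytes = surface_voxels * 1   # 1 byte of palette colour each
print(colour_bytes / 2**40)         # TiB needed for colour alone
```

Even storing only surfaces, and only a single palette byte per surface voxel, colour alone lands in the half-terabyte range for one modest slab of world, which is the point being made.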

##### Share on other sites
[quote name='rouncer' timestamp='1312641895' post='4845434']
Remember, Bombshell: even if your idea worked, you'd still have to store all the colours at least, and that would still be a massive amount to store.
[/quote]

You could have color data stored in parents in the SVO. There's no reason every single voxel needs to store its own color data when it could just inherit it from its parent if it doesn't need higher detail. There's no reason every voxel has to be at full resolution as though you were 2 inches away from it, either. Those two things alone cut Notch's estimate on data use by a hefty amount. There's no reason to assume we'd need to store any more color data than the data we already store; perhaps not stored quite as efficiently for pure color data, but the amount shouldn't change.

##### Share on other sites
inavat    317
[quote name='way2lazy2care' timestamp='1312828852' post='4846295']
You could have color data stored in parents in the SVO. There's no reason every single voxel needs to store its own color data when it could just inherit it from its parent if it doesn't need higher detail. There's no reason every voxel has to be at full resolution as though you were 2 inches away from it, either. Those two things alone cut Notch's estimate on data use by a hefty amount. There's no reason to assume we'd need to store any more color data than the data we already store; perhaps not stored quite as efficiently for pure color data, but the amount shouldn't change.
[/quote]

Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.

##### Share on other sites
[quote name='A Brain in a Vat' timestamp='1312829669' post='4846300']
Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.
[/quote]
It would work fine using pretty much an existing SVO. You just add another child type that gets its color data from its parent rather than storing it itself. I don't see which part you find impossible. You're essentially casting a ray through the tree, so why could you not grab the color data of a parent and just reuse the same color data for all its children?

It's just like drawing a low-resolution texture on a high-resolution model; you don't need a pixel of texture data for every pixel of screen space taken up by a piece of the model, so why should you expect you'd need more for an SVO?

##### Share on other sites
inavat    317
[quote name='way2lazy2care' timestamp='1312830396' post='4846308']
[quote name='A Brain in a Vat' timestamp='1312829669' post='4846300']
Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.
[/quote]
It would work fine using pretty much an existing SVO. You just add another child type that gets its color data from its parent rather than storing it itself. I don't see which part you find impossible. You're essentially casting a ray through the tree, so why could you not grab the color data of a parent and just reuse the same color data for all its children?

It's just like drawing a low-resolution texture on a high-resolution model; you don't need a pixel of texture data for every pixel of screen space taken up by a piece of the model, so why should you expect you'd need more for an SVO?
[/quote]

To "just add another child type" implies virtual inheritance, which adds 4 bytes (the typical size of a color) to every object. So where exactly have you saved versus just replicating the color in every single child?

##### Share on other sites
[quote name='A Brain in a Vat' timestamp='1312831691' post='4846324']
To "just add another child type" implies virtual inheritance, which adds 4 bytes (the typical size of a color) to every object. So where exactly have you saved versus just replicating the color in every single child?
[/quote]
It only adds data if there's a virtual function to be called. If you're traversing the tree from the top down, you don't need to call anything in the children; you just need to skip them. As far as I know, SVOs generally use compression techniques based on just storing whether or not a child exists, which is why they are so efficient. It doesn't seem like a huge stretch to extend that to flag whether a voxel uses its parent's color data: roughly 6 bits total per voxel for static geometry, if we double the ~3 bits/voxel I've heard is needed for an SVO.

It would get more complex for voxels that do have their own color data, but I'm not going to come up with a full voxel rendering scheme off the top of my head without putting more thought into it. There's still no reason to store color data for every voxel.
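To put numbers on the bit counts quoted above, here is a back-of-the-envelope storage comparison. The 3 and 6 bits/voxel figures are the ones claimed in this thread, not measured values, and the voxel count is arbitrary:

```cpp
#include <cstdint>

// Bytes needed for per-voxel bitmasks (e.g. ~3 bits/voxel for a
// child-existence mask, ~6 bits/voxel if an "inherit parent color"
// mask is added) versus naively storing a 4-byte RGBA color per voxel.
uint64_t maskBytes(uint64_t voxels, uint64_t bitsPerVoxel) {
    return voxels * bitsPerVoxel / 8;
}

uint64_t naiveColorBytes(uint64_t voxels) {
    return voxels * 4;  // 32-bit RGBA for every voxel
}
```

For a billion-voxel scene (2^30 voxels), 6 bits/voxel comes to about 0.75 GiB of masks, versus 4 GiB of naive per-voxel color, which is the kind of saving the post is gesturing at.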

##### Share on other sites
inavat    317
Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail if, when you zoom in, all of a sudden everything is the exact same color?

##### Share on other sites
[quote name='A Brain in a Vat' timestamp='1312834703' post='4846349']
Okay, assuming that's feasible, do you feel that it's realistic to assume that a large number of child voxels will share all lighting attributes with their parents? Do you think that would make for an interesting world? Do you think it's useful to have a high level of detail if, when you zoom in, all of a sudden everything is the exact same color?
[/quote]

Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

[url="http://img858.imageshack.us/img858/5202/32421746.jpg"]Here[/url]'s a simple example pulled from a Google image search. If you look at the cliffs, most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.

##### Share on other sites
inavat    317
[quote name='way2lazy2care' timestamp='1312835320' post='4846354']
Not only do I think it's realistic, I'd consider it the norm when you have that kind of geometry detail. Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.

[url="http://img858.imageshack.us/img858/5202/32421746.jpg"]Here[/url]'s a simple example pulled from a Google image search. If you look at the cliffs, most of their diffuse data is just replicating shadows. With higher geometry detail you could easily get similar results with a single color.
[/quote]

No, you couldn't actually, because every single little pixel that's a slightly different shadow color from the one next to it [b]would have to have its own normal[/b]. Every little pixel that's a slightly different grass or rock color from the one next to it [b]would have to have its own color[/b]. How do you propose to get the detailed color variations if each fine-grained voxel is exactly like its neighbor?

##### Share on other sites
inavat    317
[quote name='way2lazy2care' timestamp='1312835320' post='4846354']
Most of our texture data today is just replicating various lighting and volumetric effects that are totally redundant with voxels.
[/quote]

And what does this even mean? This is absolutely false.

The vast majority of voxel applications don't do any shading, and therefore they don't need to store things like normals, binormals, specularity coefficients, and so on. In games we do have to, unless you're suggesting voxelizing to the level of detail of actual atoms on a surface and simulating physics-based light transport and scattering models.

Is that what you're suggesting?
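For scale, here is a rough comparison of the per-voxel payloads the two sides of this argument imply. The field choices and quantization below are hypothetical, picked only to illustrate how shading attributes multiply the storage:

```cpp
#include <cstdint>

// An unshaded voxel might carry only a packed color; a game-ready
// shaded voxel also needs at least a surface normal and material
// parameters. Layouts are illustrative, not from any real engine.
struct UnshadedVoxel {
    uint32_t rgba;       // 4 bytes: packed color only
};

struct ShadedVoxel {
    uint32_t rgba;       // 4 bytes: packed color
    int16_t  normal[3];  // 6 bytes: quantized surface normal
    uint16_t specular;   // 2 bytes: quantized specularity coefficient
    // tangents, roughness, emissive, etc. would grow this further
};

static_assert(sizeof(ShadedVoxel) >= 3 * sizeof(UnshadedVoxel),
              "shading attributes at least triple the payload");
```

Even this minimal shaded layout triples the per-voxel cost, which is the storage objection being raised against shading every voxel in a game setting.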