## [Theory] Unraveling the Unlimited Detail plausibility


168 replies to this topic

### #1 bombshell93 (Members)

Posted 04 August 2011 - 08:19 PM

Now, I'm not too experienced with graphics programming, so I probably won't be as innovative when thinking about how this could be done, but I'll give it my best.
I'm assuming most of you have seen the Unlimited Detail video? If not, here is a link.
For obvious reasons, many people remain skeptical. I for one won't call it impossible, as most do, until I'm sure there isn't a way to do it.
Hopefully a lot of people here are open thinkers, so give it a shot: think about how efficient you could get point clouds to work in theory.

At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
And to throw a hole in Notch's argument: only surfaces need to be stored. So, using the distance between 2/3/4 points and a value representing a shape, or a series of values representing the form of the surface, a large number of shapes could be described with a small number of bits (think of compressing an image with a colour index). Mixing this with an octree/quadtree, with shapes held within shapes without nesting them too deep, could save massive amounts of space.
Here's a crudely drawn example.

Here, 2 bits identify corners (possibly 3, with one being a bool for "this is a styled corner"), plus the identity of the shape in the index.
Not the best of examples, but it's basically what I'm getting at.
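To make the bit-packing idea above concrete, here is a rough Python sketch. The palette entries, field widths and function names are invented for illustration; they are not taken from any real engine.

```python
# Hypothetical sketch of the "shape palette" idea from the post: instead of
# storing every surface voxel, store a few corner markers plus a small index
# into a table of known surface shapes, much like an image palette maps
# small indices to full colours.

# A tiny shape palette; the entries are made up for illustration.
SHAPE_PALETTE = ["flat", "convex-curve", "concave-curve", "bevel"]

def pack_corner(corner_id: int, styled: bool, shape_index: int) -> int:
    """Pack a corner into one byte: 2 bits of corner id, 1 bit for the
    'this is a styled corner' flag, 5 bits of shape-palette index."""
    assert 0 <= corner_id < 4 and 0 <= shape_index < 32
    return (corner_id & 0b11) | (int(styled) << 2) | (shape_index << 3)

def unpack_corner(packed: int):
    """Reverse of pack_corner: recover (corner_id, styled, shape_index)."""
    return packed & 0b11, bool((packed >> 2) & 1), packed >> 3

packed = pack_corner(2, True, 1)
assert unpack_corner(packed) == (2, True, 1)
print(SHAPE_PALETTE[unpack_corner(packed)[2]])  # -> convex-curve
```

A whole corner fits in one byte here, which is the kind of saving the post is gesturing at, though blending shapes into their neighbours (as raised below) would need more bits than this.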

Anyone more experienced willing to throw in an argument on this, please go ahead.
I'll be researching more into this, so next time I post I don't seem as dumb.
Thanks for reading and/or posting,
Bombshell

### #2 Hodgman (Moderators)

Posted 04 August 2011 - 08:29 PM

Here are some starting points for your research:

- Efficient SVOs
- GigaVoxels
- Animated SVOs
- Point rendering

N.B. these types of renderers actually used to be quite popular in the '90s.

### #3 bombshell93 (Members)

Posted 04 August 2011 - 09:23 PM

Skimming each of these papers (and correct me if I'm wrong), I keep seeing a problem with how the data is stored.
Storing the surface data: good. Storing every voxel of the surface: bad.
Think of it like a triangle: only the key points are needed, circling back to computational estimation to fill in the gaps.
But if true voxels are the key, why store the full surface? If the data were treated as "surface point", with the surface assumed present until defined as "surface not here", that knocks out everything between flat surfaces, leaving room for an int or a pointer to a computational estimate of the surface's shape, or even texture coordinates, letting voxels use textures rather than per-voxel colour/normal/other data.
Thinking this over, it's pushing more towards triangles, but as with other aspects of graphics programming (deferred vs. forward), there has to be a best-of-both-worlds middle ground.

I'm sure that while I'm reading through those links, someone's going to post bashing my idea in with a sledgehammer, but if I'm going to think of anything good I can't rule out the crazy or the stupid.
Thanks for the links, by the way; they're a great help. I'd expand on a way I'm thinking animation could become simple, but I've got to think it through while reading the animated SVO paper.

### #4 Syranide (Members)

Posted 05 August 2011 - 02:44 AM

> Now, I'm not too experienced with graphics programming, so I probably won't be as innovative when thinking about how this could be done, but I'll give it my best.
> I'm assuming most of you have seen the Unlimited Detail video? If not, here is a link.
> For obvious reasons, many people remain skeptical. I for one won't call it impossible, as most do, until I'm sure there isn't a way to do it.
> Hopefully a lot of people here are open thinkers, so give it a shot: think about how efficient you could get point clouds to work in theory.
>
> At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
> And to throw a hole in Notch's argument: only surfaces need to be stored. So, using the distance between 2/3/4 points and a value representing a shape, or a series of values representing the form of the surface, a large number of shapes could be described with a small number of bits (think of compressing an image with a colour index). Mixing this with an octree/quadtree, with shapes held within shapes without nesting them too deep, could save massive amounts of space.
> Here's a crudely drawn example.
>
> Here, 2 bits identify corners (possibly 3, with one being a bool for "this is a styled corner"), plus the identity of the shape in the index.
> Not the best of examples, but it's basically what I'm getting at.
>
> Anyone more experienced willing to throw in an argument on this, please go ahead.
> I'll be researching more into this, so next time I post I don't seem as dumb.
> Thanks for reading and/or posting,
> Bombshell

Unless I misunderstand something, I really don't see the point of this. As far as I know, UD works on the idea of voxels, with each voxel having a colour (and that's where the texture comes from)... so by necessity the objects need to consist of very, very small voxels, or the quality of both the "models and textures" would degrade significantly. You should never be able to see the individual voxels, so their shape should be irrelevant, and if you make them larger the texture quality will suffer... and although your idea seems sound at first, I'm not sure how you would actually make the shapes blend well into their neighbours without even more bits.

And at that level it doesn't really matter if they are square or round, or interpolated somehow... the idea is for the voxels never to be significantly larger than a single pixel, or the texture quality goes out the window immediately. So unless I missed something, I really don't see how this would actually change anything. I would even guess that texturing and materials will always be a weakness of UD. Perhaps it can be overcome by the overall quality of the models, but it seems to me that unless you come up with a really good and fast system for sampling, the "texture" will always have a washed-out or aliased look to it.

### #5 bombshell93 (Members)

Posted 05 August 2011 - 04:12 AM

My idea is not at the voxel level. The crudely drawn example is meant to depict a model, so the computational estimation is meant to calculate the positions of voxels in between key points. Also, on the subject of textures: looking at UD, the colour detail isn't very... well, detailed. It does seem as if most of the power goes into form, and the colour is rather low-resolution in comparison. This is a shame, but if I can think of a way to further reduce memory consumption without creating too much overhead, using textures as with triangles could become a viable replacement. As for sampling, I've got ideas already; I'll need to try them in practice.
Thinking about saving memory: with hierarchical point clouds, inside each level of detail the vector position is not needed, only an identifier of position relative to the parent level of detail. Doing this within an octree costs 3 bits per level-of-detail chunk (12 bits for a single chunk including its lower level), with the model itself holding the position in world space.
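The 3-bits-per-level idea can be sketched in Python: each octree child index encodes one bit per axis, so a node's position falls out of its path from the root and never needs to be stored. Names and layout here are illustrative only, not from any real implementation.

```python
# Sketch of "position from hierarchy": in an octree, a child's position is
# implied by its 3-bit child index (one bit per axis) at each level, so no
# per-node position vector needs to be stored.

def child_offset(child_index: int, half_size: float):
    """Decode a 3-bit octree child index into an (x, y, z) offset."""
    return ((child_index & 1) * half_size,         # x bit
            ((child_index >> 1) & 1) * half_size,  # y bit
            ((child_index >> 2) & 1) * half_size)  # z bit

def position_from_path(path, root_size: float):
    """Reconstruct a node's corner position from the 3-bit child indices
    along its path; four levels cost only 12 bits in total."""
    x = y = z = 0.0
    size = root_size
    for idx in path:
        size /= 2.0                      # each level halves the cell size
        dx, dy, dz = child_offset(idx, size)
        x, y, z = x + dx, y + dy, z + dz
    return (x, y, z), size

# Path 0b101 (x=1, z=1) then 0b010 (y=1) inside an 8-unit root cube:
pos, size = position_from_path([0b101, 0b010], 8.0)
assert pos == (4.0, 2.0, 4.0) and size == 2.0
```

Only the model's root needs a world-space position; everything below it is addressed by these tiny relative indices, which is exactly the saving described above.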

I'm sure that if I keep at this, with the help of these forums, we can come up with a system that's powerful enough.
Bombshell

### #6 Syranide (Members)

Posted 05 August 2011 - 04:30 AM

To my knowledge, the key point of SVO is the simplicity of the voxels and that's what makes it relatively fast, just like triangles.

However, the more complexity you add, the closer you get to raytracing; things get hairy when you have to start evaluating shapes. Without much to back it up, I feel confident that much of the speed in UD comes from the overly apparent grid pattern: there is no overlap and no off-grid rotation, which should make it significantly faster to query (but even more useless in practice).

It would surprise me if you couldn't use SVO as some kind of early "hit test" to speed up realtime raytracing given a reasonably small and static scene... the key being static. My point being, the further you go from the simplicity of the voxel, the further you go into raytracing land.

### #7 bombshell93 (Members)

Posted 05 August 2011 - 06:21 AM

I disagree. You can stray from that simplicity without going into raytracing complexity: rasterization can occur by transforming a chunk at a certain detail level by the world, view and projection matrices, then drawing its contents along its transformed axes (which was actually my idea for animating a voxel octree). Set up properly, culling can occur appropriate to the viewing angle. I mention a chunk at a certain detail level because the camera's offset/size calculation won't affect each individual voxel differently, so computing time can be saved by calculating, at the lower detail levels, to what degree the octree needs to divide before voxels reach pixel level. This could be done at lower detail levels than necessary for lower-quality but faster rendering, and vice versa for higher quality.
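The "divide until voxels reach pixel level" step can be estimated up front. Here is a minimal Python sketch, assuming each subdivision halves a voxel's projected screen size; the function and parameter names are hypothetical.

```python
import math

# Given a chunk whose projected screen size is known, how many octree
# subdivisions are needed before a voxel covers at most one pixel?

def required_depth(projected_size_px: float, max_voxel_px: float = 1.0) -> int:
    """Each octree level halves the voxel's projected size, so the depth
    is the number of halvings needed: ceil(log2(size / target))."""
    if projected_size_px <= max_voxel_px:
        return 0
    return math.ceil(math.log2(projected_size_px / max_voxel_px))

assert required_depth(256.0) == 8        # 256 px chunk: 8 halvings to 1 px
assert required_depth(0.5) == 0          # already sub-pixel: no subdivision
assert required_depth(256.0, 4.0) == 6   # lower quality: stop at 4 px voxels
```

The last line mirrors the post's trade-off: raising the target voxel size gives lower quality but fewer subdivisions, and lowering it does the reverse.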

### #8 SuperVGA (Members)

Posted 05 August 2011 - 07:26 AM

> At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
> And to throw a hole in Notch's argument: only surfaces need to be stored. So, using the distance between 2/3/4 points and a value representing a shape, or a series of values representing the form of the surface,

That sounds like you're thinking of marching cubes. It takes 8 bits (one for the filled/empty value of each cell corner),
and that maps directly into a table of polygons; it's very fast.
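For reference, the 8-bit indexing described above can be sketched as follows. The full edge and triangle tables of marching cubes are omitted; this only shows how the eight corner samples pack into the table index, with names chosen for illustration.

```python
# Eight filled/empty corner samples pack into one byte, which indexes a
# precomputed table of 256 polygon configurations. Real implementations
# use the standard marching-cubes edge/triangle tables for the lookup.

def cube_case_index(corners) -> int:
    """corners: 8 booleans, one per cube corner, in a fixed order."""
    index = 0
    for bit, filled in enumerate(corners):
        if filled:
            index |= 1 << bit
    return index

assert cube_case_index([False] * 8) == 0            # empty cell: no polygons
assert cube_case_index([True] * 8) == 255           # full cell: no polygons
assert cube_case_index([True] + [False] * 7) == 1   # one corner inside
```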

> My idea is not at the voxel level. The crudely drawn example is meant to depict a model, so the computational estimation is meant to calculate the positions of voxels in between key points.

Why would you calculate the position of any voxel? Voxels are regular-sized cells with only a value stored, and solid leaves of a voxel tree typically don't have their position stored, since it is already implied by their place in the tree or grid.

> Also, on the subject of textures: looking at UD, the colour detail isn't very... well, detailed. It does seem as if most of the power goes into form, and the colour is rather low-resolution in comparison.

Heh, we can agree that the colours aren't very varied. Maybe they're looked up in a palette of some kind.

> This is a shame, but if I can think of a way to further reduce memory consumption without creating too much overhead, using textures as with triangles could become a viable replacement. As for sampling, I've got ideas already; I'll need to try them in practice.

I don't believe that (textured) polygons are "ultimately" the solution. My GPU raycaster generally performs better than my voxel-polygon hybrid, but I'd love to see your results. I think Manic Digger etc. live on acceptable framerates because they generally run at a fairly low resolution.

> Thinking about saving memory: with hierarchical point clouds, inside each level of detail the vector position is not needed, only an identifier of position relative to the parent level of detail. Doing this within an octree costs 3 bits per level-of-detail chunk (12 bits for a single chunk including its lower level), with the model itself holding the position in world space.

How about using a k-d tree and sparing the positions/offsets? Or you could use wave surfing like Ken Silverman does. That is, however, very difficult to implement in a regular pixel shader; for me at least, I gave it a try. But perhaps it's doable in CUDA? I never got to mess around with that...

I'm sure Ken already found a system powerful enough, and aside from the fact that I have yet to match those standards, UD doesn't look that impressive. And they're lacking trustworthiness somehow: it's like they're selling something, but have no documented product.
They're not publishing their "findings" for the academic value, that's for sure.

### #9 Ezbez (Members)

Posted 05 August 2011 - 07:54 AM

I think you're missing some of the point of this argument. I don't think many people are disbelieving that the videos they show are real. The problem is with the description of the video. Vague claims of "unlimited detail" are made without any explanation of what they really mean by that. The technique they use is well-known and existing, yet they act as if this is never-before-seen. The advantages and downsides of this technique are also well known, yet they only ever bring up the advantages. They fail to address any of the concerns that people have (sorry but "don't worry angry forum-posters we have animation in the works" doesn't count for much). These are the real concerns, not whether they have been totally faking everything.

Notch's point about memory is to show that sure, they can have this level of detail, but if they do then they need to repeat things. A lot. This has obvious downsides for games - just look how repetitive their world looks.

### #10 bombshell93 (Members)

Posted 05 August 2011 - 09:21 AM

@Ezbez yeah I realized this not long after posting the topic but there's no way to change the topic name. Now I'm considering it more of a way to try and get past these problems, if we manage to get something better than UD in terms of practical use it could be a huge jump in game graphics. I know such an assumption doesn't come easy which is why I'm trying to do my research to understand the concepts and why some of my concepts may not work.
I'm also trying to build this so I can see how ideas work out in practice, I'm currently looking up some techniques I may have overlooked so I can get the best performance out of it. I'm definitely not the best programmer to give it a try but there's no sense in giving up without trying.

@SuperVGA: I'm currently looking through comparisons of CUDA, DirectCompute and OpenCL to see which would give me the freedom and power I'd need to render voxels / point clouds (whichever I settle on; I'm still partial to both) hardware-accelerated, without converting to tris. My current idea is fairly straightforward: a deferred-rendering kind of pipeline. Render the geometry as efficiently as possible, hopefully hardware-accelerated, generating a G-buffer, and leave the rest of the effects in the hands of shaders on a full-screen quad. This could probably make it mixable with triangle-based geometry by using depth to decide which of the renders (triangle vs. voxel/point cloud) is in front. Though "mixable" in this case means "if you want double the memory consumption", so I may try to get around that later.
Marching cubes I've heard of frequently; call me a fool, but I've never looked into it. I'll start reading up soon, though.
As for calculating the position of voxels: there I was thinking of point clouds (my concepts seem to mix both), but in the voxel case it would be calculating what's there and what's not, which could help eliminate the need to label higher detail levels with "is this solid space" bools, at least to a degree.
Texture-wise, I'm thinking I could optimize by assigning blocks of UV to the lower-detail chunks; hard to work with raw, but a simple system for moving the image around and back would make it artist-friendly.

### #11 Syranide (Members)

Posted 05 August 2011 - 12:24 PM

> @Ezbez yeah I realized this not long after posting the topic but there's no way to change the topic name. Now I'm considering it more of a way to try and get past these problems, if we manage to get something better than UD in terms of practical use it could be a huge jump in game graphics. I know such an assumption doesn't come easy which is why I'm trying to do my research to understand the concepts and why some of my concepts may not work.
> I'm also trying to build this so I can see how ideas work out in practice, I'm currently looking up some techniques I may have overlooked so I can get the best performance out of it. I'm definitely not the best programmer to give it a try but there's no sense in giving up without trying.

Not to be a downer, but I really have my doubts about anyone ever finding a good point/voxel replacement for shapes/polygons/triangles. There could well be good alternatives for special circumstances (say, rendering terrain, clouds, hair, etc.), but then you end up with two very different code paths where the distribution between the two can vary wildly; to me that sounds like a recipe for performance-balancing nightmares in many cases. That very much exists with shaders too, but everything sharing the same code path makes it a lot more predictable. Also, while combining SVOs and triangles may sound like a good idea, making them artistically "compatible" can be an equally huge problem.

Polygons are awesome because of their simplicity while still being extremely flexible and fast: they are fast enough to be rendered in massive quantities, shaders make them flexible enough that you can render pretty much any effect you like, and their simplicity means it all lends itself to being cheaply animated in all kinds of complex ways. Now add tessellation, and what really stands between us and the perfect end result is project deadlines, experienced developers and artists, really good tools, and hardware performance. Triangles and shaders allow for pretty much everything you want; the major issue today, it would seem, is the tools. So unless another technique manages to bring flexibility, performance and quality up to that of our triangle buddies AND provide better tools, I don't see how it would ever catch on. UD only seems to deliver on simpler tools at this point (and arguably better quality, or rather, better detail).

You mention that we could see a "huge jump in game graphics"; I really don't see how we could. UD performance is poor right now, as everyone knows. Add to that that it scales linearly with the number of pixels, which makes it a very bad idea to combine with triangles unless performance shoots through the roof (and let's not forget that it is currently CPU-only, so forget about any gameplay). So even if you somehow solve the massive memory issue, then performance must be solved, shading must be solved, and flexibility would need to be addressed too. And still, I don't know what it would actually offer over triangles other than better tools. I've seen plenty of ridiculously cool and detailed demos running at 60 FPS on current high-end hardware (which will be low-end hardware in a year or two), all of which currently far surpass the overall quality of UD.

Again, not trying to be a downer, but at the moment I personally don't see how this could ever work out to become a replacement or a commonly used technique. And let's not forget that triangles actually scale very well too if given enough attention; as far as I can tell, SVOs really have no way of doing that other than lowering the resolution.

### #12 bombshell93 (Members)

Posted 05 August 2011 - 01:21 PM

Well, from what I'm thinking of, I may be skewing away from conventional SVO methods; it may crash and burn, it may be great.
I don't see you as "a downer": any input is good input (so long as it's not something silly). For now I'm just going to give it a go and see what comes out of it; with me only just starting more complex levels of programming, whether it works or not it'll be a good theory and coding exercise. The "huge jump in game graphics" bit was in context: IF we somehow overcome the problems of UD, it would more than likely be a huge jump. It's a small knack I have for impossible optimism. I have a programming friend who keeps telling me to stop it, to which I always reply, "if I want to do something great I've got to aim high".

### #13 Antheus (Members)

Posted 06 August 2011 - 03:12 AM

How do you fit 4512 elephants in a refrigerator? Maybe we can find a way to grind them up...

No matter the technology, no matter the magic - we have hard numbers:
- 1GB memory on GPU
- ~5GB/sec RAM bandwidth, ~8GB total
- 25-160GB disk space (SSDs or blu-rays)
- 128kb/sec network, 250ms latency (reliable broadband is considerable limitation)

In information theory, there is a very important baseline: data != information.

Various laws and formulas define certain terms like entropy. These are universal and go beyond various implementations.

Whether a model is represented with polygons or voxels is irrelevant; those are data. What matters is the information they carry. An HD cube rendered stereoscopically carries exactly two pieces of information: that it's a cube, and the length of its side. It can be represented with words, coordinates, voxels or anything else; the entropy does not change.

Whether a stone is modeled using voxels or polygons, equal detail will always require the same amount of information. There is nothing preventing polygons from being encoded in a more optimal manner. Add animation, and we get another dimension of data.
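A toy Python illustration of the data-versus-information point above: the same cube stored two ways carries identical information but wildly different amounts of data. Both representations here are invented purely for the comparison.

```python
# The same cube can be stored as a parametric description or as a voxel
# grid. The information content is identical; only the data size differs.

def cube_as_description(side: int):
    """Two pieces of information: 'it's a cube' and its side length."""
    return ("cube", side)

def cube_as_voxels(side: int):
    """A solid side^3 occupancy grid: far more data, no more information."""
    return [[[True] * side for _ in range(side)] for _ in range(side)]

side = 16
desc = cube_as_description(side)
voxels = cube_as_voxels(side)
voxel_count = sum(cell for plane in voxels for row in plane for cell in row)
assert desc == ("cube", 16)
assert voxel_count == side ** 3   # 4096 voxels encode what 2 values do
```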

By the time hardware advances to the point where we can render enough voxels, polygons will be capable of doing not only the same, but will come with 20-30 years of hands-on experience behind them.

Building stuff out of voxels isn't inherently better, since it doesn't solve any of the problems advertised here; it's just a different encoding of the same information. Where such a representation would be useful (and is used) is in modeling materials: sand, water, fire. Or, even more detailed, heat and pressure. Instead of having sound played as a .wav and muted by distance, how about modeling true pressure propagation?

That is where "voxels" actually solve a problem (i.e., discrete sampling of space). And even there they aren't necessarily the best, since many such solutions use different representations, such as grids or graphs, simply because they are more efficient.

Voxels fall into a similar area as genetic programming. It works, it's proven, it is superior. But for every problem, different, slightly less general approaches provide considerably more practical solutions today. One could say that the voxel approach doesn't "scale" with the domain.

### #14 Syranide (Members)

Posted 06 August 2011 - 04:13 AM

> Well, from what I'm thinking of, I may be skewing away from conventional SVO methods; it may crash and burn, it may be great.
> I don't see you as "a downer": any input is good input (so long as it's not something silly). For now I'm just going to give it a go and see what comes out of it; with me only just starting more complex levels of programming, whether it works or not it'll be a good theory and coding exercise. The "huge jump in game graphics" bit was in context: IF we somehow overcome the problems of UD, it would more than likely be a huge jump. It's a small knack I have for impossible optimism. I have a programming friend who keeps telling me to stop it, to which I always reply, "if I want to do something great I've got to aim high".

Yeah, perhaps I wasn't really clear when I quoted you on "huge jump in game graphics". My point was that I don't really agree with it: even if UD were everything it promised to be in the video, I still wouldn't consider it a huge jump in graphics, or even a jump at all, considering that we can already render some pretty impressive terrains today (personally I don't even think UD looks all that impressive as-is; as a developer I'm most impressed by the detail because it's currently "unheard of"). The issue, again, is having to balance between terrain, players, objects and effects, as well as keeping within respectable memory and storage limits. I'm pretty sure you can do much more artistically impressive graphics today than anything shown in the UD video, which completely lacks quality in all areas other than being very detailed, even if the performance had been acceptable.

To put what I'm saying in perspective: look at the hugely impressive demos 3DMark was able to render fluidly on the hardware of the day, and the hugely impressive demos nVidia has put out over the years, and you realize there is (and was) an enormous gap between what is technically possible and what is feasible and justifiable in a game at the time. Demos solve a much smaller problem and often hide the real cost (an expensive facial shader is offset by rendering a smaller face and dirt-cheap backgrounds). There's also the hugely important fact that making something visually impressive isn't perhaps the real issue; it's making it scale to hardware other than your own that makes the quality take a dive (again, look at what has been made with the "slow" PS3 and X360).

And then consider the reasonably compact representation of today's games, with triangles, heightmaps, reused textures, etc., and still they consume huge amounts of storage. I don't see how one could possibly choose a less compact representation and achieve better quality without overshooting performance/memory/storage constraints. If you set aside 10x the storage for "your own technique", then consider what could be done if today's techniques had 10x the storage to work with too, and so on.

What I find fascinating, though, is the approach John Carmack took with RAGE, and how, in my mind, the tools are the issue, not the final representation. Freeing the artists of the burden of "performance troubleshooting" when constructing the world sacrifices immediate quality, which the artists make up for because they have much better tools and can devote more time to being artists and less to performance troubleshooting.

### #15 samoth (Members)

Posted 06 August 2011 - 06:26 AM


The major reason the video is unbelievable, in my opinion, is that it is a typical "smoke and mirrors" show, as on 123-buy-cheap-crap TV. The video has pretty much every snake-oil warning sign.

Look how I can open a can with my wonder knife, and it will still <knife is out of camera for a second> cut cleanly through a tomato if I very carefully slice. Compare that to a normal knife, which I press onto the tomato like a clumsy clot instead of cutting. Do not think I am trying to cheat you; see how I even cut through this nail and chop through a cucumber right afterwards.

What's wrong here? Your lawyer would say "nothing", but it's nevertheless total bollocks. You can certainly cut a tomato with a knife after cutting open a can with a different knife. You can certainly chop through a cucumber with any knife. Any blunt piece of plastic will chop through a cucumber, so what. You can cut through a non-tempered nail (of unknown material, might quite possibly be lead or tin) with any tempered knife, so what. The point is that what you are shown is not related in any way to whether the knife you buy will make your day in the kitchen a better day.

Competition framing and the escalating "not X, not Y, but Z" triple are typical warning signs of a scam: you get 5 of my fine knives not for 199 dollars, not for 149 dollars, but for only 99 dollars. Buy quick, we only have 10 left. Now 9.
Compare that to: any video game with twice as many polygons typically totally annihilates its competitors (which, besides, is total bull...). We do not double the number of polys, we do not triple them, we make them infinite. Buy quick before your competitor does.

Words and phrases that typically trigger "dummy mode", such as "technology used in medicine" or "developed for space travel", are a serious warning sign. Very clearly, something that takes 2 minutes to preprocess on a 5-million-dollar machine will run in realtime on your Core i3 with a consumer graphics card. That's because all those idiots working for leading technology companies (and NASA) are useless; I wrote something better, alone, in under a year. In my garage. And dang, none of the leading technology companies wants to buy my awesome invention. Man, they're all stupid.

Unrelated facts paired with unrelated information are a warning sign, such as a scene that looks just like nVidia's volumetric terrain demo, followed by footage from Crysis, and a speech about how your super-awesome technology will revolutionize video games. What does either of the two have to do with the claims? Where are the facts?

OK, so there is footage of a small rock that has been scanned at 64 atoms per cubic millimeter, like every object in the scene (notably, with a quite finite-resolution 3D scanner!), and an elephant that is quite obviously a polygonal model (albeit high-detail). An instancing demo of that elephant, yay. But each of those unique objects, in an absurdly large world, is displayed at infinite resolution. Yay.

See, nobody doubts that you can generate an image with unlimited detail in realtime (or rather, limited by the precision of your floating-point math). Everybody and their aunt had Julia and Mandelbrot fractals in realtime in the 1980s. However, that bears no relation to the ability to show meaningful detail at unlimited scale. Nobody is interested in zooming into Perlin noise or into a Mandelbrot set. Everybody knows this is possible; it's been done before, and it's not something special.

What this video totally fails to show is something truly genuine and innovative. Such as an animated figure walking and leaving a footprint on the terrain; then zoom into the footprint down to the scale of a sand grain. Surely this should not be a problem; at least my footprints are quite finite. Zoom in on the animated human and show me the pores in his skin, and his stubble. We're talking about infinite detail, and stubble is quite finite, so again I assume there is no problem. And then show me 5000 of those unique unlimited-detail rocks in one scene, and zoom in on each of them, and I'd like 20 unique animated characters in the same scene. Oh, and I'd like to see some of that 32 km terrain too. 5000 + 20 is quite a bit smaller than "infinite", so I guess this should be no problem at all, and a viewing distance of 5 km should be sufficient as well. After all, 5 km is quite a bit smaller than infinite, too.

### #16 rouncer (Members)

Posted 06 August 2011 - 08:44 AM

Really, if you wanted unlimited detail, you can render it. Animation is iffy, but storing a decent-sized world (like for an MMORPG, for example) would be a TB, or 200 gigs if you didn't mind it being a little smaller.
What would work, though, is if you stopped trying to beat polygons and didn't mind the cubes showing. It would work now, with a lot less to store, and editing would go much faster too.

Another idea: avoid the FPS view, because it is too close to the world. Step back a bit and do a top-down view; then you'd need to store less, and you wouldn't need as much detail because you can't even perceive it through the resolution. That would also make the general edit plot size smaller, and making the world would go faster too.

But it would actually still make a "hi-res" game, just stepped back a little bit.

Remember, bombshell: even if your idea worked, you'd still have to store all the colours at least, and that alone would be a massive amount to store.

### #17 way2lazy2care (Members)

Posted 08 August 2011 - 12:40 PM

> Remember, bombshell: even if your idea worked, you'd still have to store all the colours at least, and that alone would be a massive amount to store.

You could have color data stored in parents in the SVO. There's no reason every single voxel needs to store its own color data when it could just inherit it from its parent if it doesn't need higher detail. There's no reason every voxel has to be at full resolution as though you were 2 inches away from it, either. Those two things alone cut Notch's estimate of data use by a hefty amount. There's no reason to assume we'd need to store any more color data than the data we already store; perhaps not stored quite as efficiently for pure color data, but the amount shouldn't change.
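One way to sketch this colour inheritance in Python: a child either stores a colour or a marker meaning "use the parent's". The node layout and names are hypothetical, not from any real SVO implementation.

```python
# Sketch of colour inheritance in an SVO-like tree: a child node may
# either carry its own colour or None, meaning "inherit from the parent".

class Node:
    def __init__(self, color=None, children=()):
        self.color = color            # None -> inherit from the parent
        self.children = list(children)

def resolve_color(path_of_nodes):
    """Walk from root to leaf; the effective colour is the deepest one
    actually defined along the path."""
    color = None
    for node in path_of_nodes:
        if node.color is not None:
            color = node.color
    return color

leaf = Node()                               # no colour of its own
mid = Node(color=(200, 80, 30), children=[leaf])
root = Node(color=(90, 90, 90), children=[mid])
assert resolve_color([root, mid, leaf]) == (200, 80, 30)
```

Since a ray cast through the tree already visits the path from root to leaf, picking up the deepest defined colour costs nothing extra, which is the point made in the post.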

### #18 A Brain in a Vat (Members)

Posted 08 August 2011 - 12:54 PM

> You could have color data stored in parents in the SVO. There's no reason every single voxel needs to store its own color data when it could just inherit it from its parent if it doesn't need higher detail. There's no reason every voxel has to be at full resolution as though you were 2 inches away from it, either. Those two things alone cut Notch's estimate of data use by a hefty amount. There's no reason to assume we'd need to store any more color data than the data we already store; perhaps not stored quite as efficiently for pure color data, but the amount shouldn't change.

Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.

### #19 way2lazy2care (Members)

Posted 08 August 2011 - 01:06 PM

> Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.

It would work fine using an existing SVO, pretty much: you just add another child type that gets its color data from its parent rather than storing it itself. I don't see which part you find impossible. You're essentially casting a ray through the tree, so why could you not grab the color data of a parent and just reuse it for all its children?

It's just like drawing a low-resolution texture on a high-resolution model: you don't need a pixel of texture data for every pixel of screen space taken up by a piece of the model, so why should you expect to need more for an SVO?

### #20 A Brain in a Vat (Members)

Posted 08 August 2011 - 01:28 PM

> Eh... how exactly would you do this? Show me a data structure that can have this "optional" color data and not take up the space for it otherwise.
>
> It would work fine using an existing SVO, pretty much: you just add another child type that gets its color data from its parent rather than storing it itself. I don't see which part you find impossible. You're essentially casting a ray through the tree, so why could you not grab the color data of a parent and just reuse it for all its children?
>
> It's just like drawing a low-resolution texture on a high-resolution model: you don't need a pixel of texture data for every pixel of screen space taken up by a piece of the model, so why should you expect to need more for an SVO?

To "just add another child type" implies virtual inheritance, which adds 4 bytes (the typical size of a color) to every object. So where exactly have you saved versus just replicating the color in every single child?
