[Theory] Unraveling the Unlimited Detail plausibility

Started by
166 comments, last by Ben Bowen 11 years, 10 months ago
Now I'm not too experienced with graphics programming, so I probably won't be as innovative in thinking of how this could be done, but I'll give it my best.
I'm assuming most of you have seen the Unlimited Detail video? If not, here is a link:
http://www.youtube.c...h?v=00gAbgBu8R4
For obvious reasons, many people remain skeptical. I for one won't call it impossible, as most do, until I'm sure there isn't a way to do it.
Hopefully a lot of people here are open thinkers, so give it a shot: think about how efficiently you could get point clouds to work in theory.

At the moment the best I can think of is computational estimation, as the memory consumption seems far too great.
And to poke a hole in Notch's argument: only surfaces need to be stored, so using the distance between 2/3/4 points and a value representing a shape (or a series of values representing the form of the surface), a large number of shapes could be interpreted from a small number of bits (think of compressing an image with a colour index). Mixing this with an octree/quadtree, with shapes held within shapes without nesting them too deeply, could save massive amounts of space.
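A minimal sketch of the colour-index analogy above (the tiny palette and all names are hypothetical): patches reference a small shared table of shape templates by a short index, the way an indexed-colour image references a colour table.

```python
# Hypothetical shape palette: patches store a 2-bit index into this
# table plus a 1-bit "styled corner" flag, instead of full geometry.
SHAPE_PALETTE = [
    "flat",           # index 0
    "convex_corner",  # index 1
    "concave_corner", # index 2
    "curved",         # index 3
]

def encode_patch(shape_name, corner_styled):
    """Pack a patch as (2-bit shape index, 1-bit 'styled corner' flag)."""
    index = SHAPE_PALETTE.index(shape_name)
    return (index << 1) | int(corner_styled)

def decode_patch(packed):
    """Unpack a 3-bit patch back into (shape name, styled flag)."""
    index = (packed >> 1) & 0b11
    styled = bool(packed & 1)
    return SHAPE_PALETTE[index], styled

packed = encode_patch("convex_corner", True)
assert decode_patch(packed) == ("convex_corner", True)
```

The point is only that many patches can share a few templates, so the per-patch cost drops to a handful of bits.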
Here's a crudely drawn example:
[image: diagz.png]

Here, 2 bits identify corners (possibly 3, with one being a bool for "this is a styled corner"), plus the identity of the shape in the index.
Not the best of examples, but it's basically what I'm getting at.

Anyone more experienced willing to throw in an argument on this, please go ahead.
I'll be researching this more, so next time I post I won't seem as dumb :P
Thanks for reading and/or posting,
Bombshell
Here are some starting points for your research :)
Efficient SVOs

GigaVoxels

Animated SVOs

Point rendering

http://www.google.com.au/search?q=point-based+rendering+techniques

N.B. these types of renderers actually used to be quite popular in the '90s.
Skimming each of these papers (and correct me if I'm wrong), I keep seeing a problem with how the data is stored.
Storing the surface data: good. Storing every voxel of the surface: bad.
Think of it like a triangle: only key points are needed, circling back to computational estimation to fill in the gaps.
However, if true voxels are key, why store the full surface? If the data were treated as "surface point", and the surface is interpreted as present until it is defined as "surface not here", that knocks out everything between flat surfaces, leaving room for an int or pointer to a computational estimation of the surface's shape, or even texture coordinates, letting voxels use textures rather than per-voxel colour/normal/other data.
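The "surface present until defined not here" idea above is essentially run-length encoding along a voxel row. A hedged sketch, with made-up data:

```python
# Sketch: instead of one entry per voxel, store only the positions
# where the surface starts and how long it runs, as (start, length).

def encode_row(row):
    """Run-length encode a row of filled/empty flags."""
    runs = []
    start = None
    for i, filled in enumerate(row):
        if filled and start is None:
            start = i                      # surface starts here
        elif not filled and start is not None:
            runs.append((start, i - start))  # surface "not here" anymore
            start = None
    if start is not None:
        runs.append((start, len(row) - start))
    return runs

def decode_row(runs, width):
    """Expand (start, length) runs back into a full row."""
    row = [False] * width
    for start, length in runs:
        for i in range(start, start + length):
            row[i] = True
    return row

row = [False, True, True, True, False, False, True, False]
runs = encode_row(row)
assert runs == [(1, 3), (6, 1)]          # two runs instead of 8 flags
assert decode_row(runs, len(row)) == row
```

Long flat stretches collapse to a single pair, which is exactly the saving being described.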
Thinking this over, it's pushing more towards triangles, but as with other aspects of graphics programming (deferred vs. forward) there has to be a best-of-both-worlds middle ground.

I'm sure that while I'm reading through those links, someone's going to post bashing my idea in with a sledgehammer, but if I'm going to think of anything good I can't rule out the crazy or the stupid.
Thanks for the links, by the way; they're a great help. I'd expand on a way I'm thinking animation could become simple, but I've got to think it through while reading the animated SVO paper.

Unless I misunderstand something, I really don't see the point of this. As far as I know, UD works on the idea of voxels, with each voxel having a colour (and that's where the texture comes from), so by necessity the objects need to consist of very, very small voxels or the quality of both the "models and textures" would degrade significantly. You should never be able to see the individual voxels, so their shape should be irrelevant, and if you make them larger the texture quality will suffer. And although your idea seems sound at first, I'm not sure how you would actually make the shapes blend well into their neighbours without even more bits.

And at that level it doesn't really matter if they are square or round, or interpolated somehow: the idea is for voxels to never be significantly larger than a single pixel, or the texture quality goes out the window immediately. So unless I missed something, I really don't see how this would change anything. I would even guess that texturing and materials will always be a weakness of UD. Perhaps it can be overcome by the overall quality of the models, but it seems to me that unless you come up with a really good and fast system for sampling, the "texture" will always have a washed-out or aliased look to it.


My idea is not at the voxel level. The crudely drawn example is meant to depict a model, so the computational estimation is meant to calculate the position of voxels between key points. Also, on the subject of textures: looking at UD, the colour detail isn't very, well, detailed. It seems most of the power goes into form, and the colour is rather low-resolution in comparison. This is a shame, but if I can think of a way to further reduce memory consumption without creating too much overhead, using textures as with triangles could become a viable replacement. As for sampling, I've got ideas already; I'll need to try them in practice.
Thinking about saving memory: with hierarchical point clouds, inside each level of detail the vector position is not needed, only an identifier of position relative to the parent level of detail. Doing this within an octree takes 3 bits per chunk of level of detail (12 bits for a single chunk including its lower level), with the model itself holding the position in world space.
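The 3-bits-per-level idea can be sketched like this: a child only needs an octant index (0-7) relative to its parent, and a cell's position falls out of walking the path from the root, so no per-voxel vector needs storing (function names are illustrative):

```python
# Sketch: 3 bits per octree level identify which of the parent's 8
# children a cell is; positions are reconstructed, never stored.

def octant_index(x, y, z):
    """Pack three 0/1 child coordinates into a 3-bit octant index."""
    return (x << 2) | (y << 1) | z

def position_from_path(path, size):
    """Recover a cell's corner position from its octant path.
    `size` is the root cell's edge length (a power of two); only the
    model itself would hold a world-space origin."""
    x = y = z = 0
    half = size // 2
    for octant in path:
        x += ((octant >> 2) & 1) * half
        y += ((octant >> 1) & 1) * half
        z += (octant & 1) * half
        half //= 2
    return (x, y, z)

path = [octant_index(1, 0, 1), octant_index(0, 1, 0)]
assert position_from_path(path, 8) == (4, 2, 4)
```

This is also why published SVO formats get away with storing only child masks and pointers.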

I'm sure if I keep at this, with the help of these forums, we can think of a system that's powerful enough.
Thanks for the reply and thanks for reading,
Bombshell
To my knowledge, the key point of SVOs is the simplicity of the voxels, and that's what makes them relatively fast, just like triangles.

However, the more complexity you add, the closer you get to raytracing; things get hairy when you have to start evaluating shapes. Without much to back it up, I feel confident that much of the speed in UD comes from the overly apparent grid pattern: there is no overlap and there are no angles, which should make it significantly faster to query (but even more useless in practice).

It would surprise me if you couldn't use an SVO as some kind of early "hit test" to speed up realtime raytracing, given a reasonably small and static scene; the key being static. My point being: the further you go from the simplicity of the voxel, the further you go into raytracing land.


I disagree. Straying from that simplicity without going into raytracing complexity, rasterization can occur by transforming a certain detail-level chunk by the world, view, and projection matrices, then drawing its contents along the transformed axes (which was actually my idea for animating a voxel octree). Set up properly, culling can occur appropriate to the viewing angle. I mention a certain detail-level chunk because the calculated offset/size of a voxel for the camera won't affect each individual voxel differently, so computing time can be saved by calculating, at the lower detail levels, to what degree the octree needs to divide before voxels reach pixel level. This could be done at lower detail levels than necessary for lower-quality but faster rendering, and vice versa for higher quality.
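The "divide until voxels reach pixel level" step can be estimated once per chunk rather than per voxel. A rough sketch, where all camera parameters are illustrative assumptions:

```python
import math

# Sketch: estimate how many octree levels a chunk must be subdivided
# before one voxel projects to roughly one screen pixel. All numbers
# below (FOV, resolution, chunk size) are illustrative.

def required_depth(chunk_size, distance, fov_y, screen_height):
    """Octree depth at which a voxel covers roughly one pixel."""
    # World-space size covered by a single pixel at this distance.
    pixel_size = 2.0 * distance * math.tan(fov_y / 2.0) / screen_height
    if pixel_size <= 0:
        return 0
    depth = 0
    voxel = chunk_size
    while voxel > pixel_size:   # keep subdividing until sub-pixel
        voxel /= 2.0
        depth += 1
    return depth

# A 16-unit chunk needs deeper subdivision up close than far away,
# which is the level-of-detail saving described above.
assert required_depth(16.0, 100.0, math.radians(60), 1080) > \
       required_depth(16.0, 1000.0, math.radians(60), 1080)
```

Lowering the target depth by one level trades a 2x coarser voxel for roughly an 8x reduction in nodes to touch.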

That sounds like you're thinking of marching cubes. It takes 8 bits (one for the filled/empty value of each cell corner), and that maps directly into a table of polygons; it's very fast.
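For reference, the 8-bit index works like this (a sketch; the 256-entry triangle table itself is omitted):

```python
# Sketch of the marching-cubes lookup: each of a cell's 8 corners
# contributes one filled/empty bit, giving an index 0-255 into a
# precomputed table of polygon configurations.

def cube_index(corners):
    """Pack 8 filled/empty corner flags into a marching-cubes index."""
    index = 0
    for bit, filled in enumerate(corners):
        if filled:
            index |= 1 << bit
    return index

assert cube_index([False] * 8) == 0        # empty cell
assert cube_index([True] * 8) == 255       # full cell
assert cube_index([True] + [False] * 7) == 1  # one filled corner
```

Indices 0 and 255 (fully empty/full) emit no polygons, which is why the table lookup is so cheap in practice.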

My idea is not at the voxel level. The crudely drawn example is meant to depict a model, so the computational estimation is meant to calculate the position of voxels between key points.

Why would you calculate the position of any voxel? Voxels are regular-size cells with only a value stored, and solid leaves of a voxel tree typically don't have their position stored, since it is already implicit in their place in the tree or grid.


Also, on the subject of textures: looking at UD, the colour detail isn't very, well, detailed. It seems most of the power goes into form, and the colour is rather low-resolution in comparison.

Heh, we can agree that the colours aren't very varied. Maybe they're looked up in a palette of some kind.


This is a shame, but if I can think of a way to further reduce memory consumption without creating too much overhead, using textures as with triangles could become a viable replacement. As for sampling, I've got ideas already; I'll need to try them in practice.

I don't believe that (textured) polygons are ultimately the solution. My GPU raycaster generally performs better than my voxel-polygon hybrid, but I'd love to see your results. I think Manic Digger etc. lives on acceptable framerates because they generally render at a fairly low resolution.


Thinking about saving memory: with hierarchical point clouds, inside each level of detail the vector position is not needed, only an identifier of position relative to the parent level of detail. Doing this within an octree takes 3 bits per chunk of level of detail (12 bits for a single chunk including its lower level), with the model itself holding the position in world space.

How about using a kd-tree and sparing the positions/offsets? Or you could use wave surfing like Ken Silverman does. That is, however, very difficult to implement in a regular pixel shader; it was for me at least, when I gave it a try. But perhaps it's doable in CUDA? I never got to mess around with that...

I'm sure Ken already found a system powerful enough, and aside from the fact that I have yet to imitate those standards, UD doesn't look that impressive. They're also lacking trustworthiness somehow: it's like they're selling something, but have no documented product.
They're not publishing their "findings" for the academic value, that's for sure.
I think you're missing some of the point of this argument. I don't think many people disbelieve that the videos they show are real. The problem is with the description in the video. Vague claims of "unlimited detail" are made without any explanation of what they really mean by that. The technique they use is well known and already exists, yet they act as if it's never-before-seen. The advantages and downsides of this technique are also well known, yet they only ever bring up the advantages. They fail to address any of the concerns people have (sorry, but "don't worry, angry forum-posters, we have animation in the works" doesn't count for much). These are the real concerns, not whether they have been totally faking everything.

Notch's point about memory is to show that sure, they can have this level of detail, but if they do then they need to repeat things. A lot. This has obvious downsides for games: just look how repetitive their world looks.
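Notch's memory argument is easy to reproduce as back-of-the-envelope arithmetic; the numbers below are illustrative, not taken from the video:

```python
# Rough arithmetic behind the memory objection: storing unique,
# non-repeated voxels at fine resolution explodes quickly. All
# figures here are illustrative assumptions, not UD's real numbers.

voxels_per_metre = 64     # ~1.5 cm voxels
world_edge_m = 1000       # a 1 km cube of unique detail
bytes_per_voxel = 1       # a single palette index, optimistically

total_voxels = (voxels_per_metre * world_edge_m) ** 3
total_bytes = total_voxels * bytes_per_voxel
total_tib = total_bytes / 2 ** 40   # ~238 TiB, even at 1 byte/voxel
```

Sparse storage (surfaces only) and instancing cut this drastically, which is exactly why the demo repeats the same few models everywhere.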
@Ezbez: yeah, I realized this not long after posting the topic, but there's no way to change the topic name. Now I'm considering it more a way to try to get past these problems; if we manage to get something better than UD in terms of practical use, it could be a huge jump in game graphics. I know such an assumption doesn't come easy, which is why I'm trying to do my research to understand the concepts and why some of my concepts may not work.
I'm also trying to build this so I can see how ideas work out in practice; I'm currently looking up some techniques I may have overlooked so I can get the best performance out of it. I'm definitely not the best programmer to give it a try, but there's no sense in giving up without trying.

@SuperVGA: well, I'm currently looking through some comparisons of CUDA, DirectCompute, and OpenCL to see which would give me the freedom/power I'd need to render voxels / point clouds (whichever I settle on; I'm still partial to both) hardware-accelerated without changing to tris. My current idea is fairly straightforward: a deferred-rendering kind of pipeline, rendering the geometry as efficiently as possible (hopefully hardware-accelerated) into a G-buffer, then leaving the rest of the effects in the hands of shaders on a full-screen quad. This could probably make it mixable with triangle-based geometry by using depth to decide which of the renders (triangle and voxel/point cloud) is in front. Though "mixable" in this case means "if you want double the memory consumption", so I may try to get around that later.
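The depth-based mixing described above can be sketched per pixel: whichever render produced the nearer fragment wins (the buffers here are toy stand-ins for real render targets):

```python
# Sketch: merge two G-buffer-style renders (triangle and voxel) by
# comparing per-pixel depth and keeping the nearer fragment.

def composite(tri_colour, tri_depth, vox_colour, vox_depth):
    """Merge two colour+depth buffers; nearest fragment wins."""
    out = []
    for tc, td, vc, vd in zip(tri_colour, tri_depth, vox_colour, vox_depth):
        out.append(tc if td <= vd else vc)
    return out

tri_c, tri_d = ["red", "red"], [0.3, 0.9]    # triangle pass
vox_c, vox_d = ["blue", "blue"], [0.5, 0.2]  # voxel pass
assert composite(tri_c, tri_d, vox_c, vox_d) == ["red", "blue"]
```

On real hardware this is just a depth test against a shared depth buffer, which is also where the doubled-memory concern comes from: both passes need full-resolution colour and depth targets.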
Marching cubes I've heard of frequently; call me a fool, but I've never looked into it. I'll start reading up soon, though.
As for calculating the position of voxels: there I was considering point clouds (my concepts seem to be mixed between both), but in the voxel case it would be calculating what's there and what's not, which could help eliminate the need to label higher detail levels with "is this solid space" bools, at least to a degree.
Texture-wise, I'm thinking I could optimize by assigning blocks of UVs to the lower-detail chunks; hard to work with raw, but a simple system for moving the image around and back would make it easily artist-friendly.
