[Theory] Unraveling the Unlimited Detail plausibility

It's perfectly plausible. That's the topic answered. It just has some drawbacks in some areas and some bonuses in others.

I've been thinking every now and again about how I would engineer some tech to match what Euclideon has, and my algorithm so far is:

1. Store the scene as a sparse octree with duplicate zones removed.
1.a Any place that is empty contains no data; there is no 'transparent' voxel representation.
1.b The scene is a 'box' subdivided into 8 boxes, and those boxes are subdivided until we get down to the voxel level.
1.c Each branch along the tree has a color entry that represents the averaged color of the leaves in that section of the tree.
1.d If the color of a branch is the same as the color of all leaves under it, then that branch is set to that color and the leaves are removed. This also means a region that is not completely solid and uniformly colored cannot be collapsed this way; the empty slots in the tree that make up the shape of an object still count. Maybe just ignore this last bit since it might cause problems.

2. Move along the tree and render out boxes that describe the branches in the tree. The size of each box (and so the depth it goes into the tree) is related to the distance of that part of the tree from the camera, so objects close up are described by more boxes and objects further away by fewer boxes.

3. Now that we have a general idea of the depth of the scene, we use that depth to limit the queries we use to find the actual voxels that make up the scene. The further away a node is from the camera the less we care about actually getting the right value for that part of the screen.

No. 3 is what I'm having problems thinking up. Right now it'll basically be a ray trace whose only saving grace is that the number of items it needs to test against has been greatly reduced by the pre-render depth step. Maybe that'll be fast enough to work. Maybe it won't. I haven't had the time to try this and see if it'd work.
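Here's a rough sketch in C++ of how I picture steps 1 and 2, assuming a plain pointer-based octree; the node layout, the LOD threshold, and all names here are my own guesses for illustration, nothing Euclideon has confirmed:

```cpp
#include <array>
#include <cmath>
#include <cstdint>
#include <memory>
#include <utility>
#include <vector>

struct Color { uint8_t r, g, b; };

struct OctNode {
    Color averaged;                                // average of the leaf colors below (step 1.c)
    std::array<std::unique_ptr<OctNode>, 8> child; // nullptr = empty space, no 'transparent' voxel (step 1.a)
    bool isLeaf() const {
        for (const auto& c : child) if (c) return false;
        return true;
    }
};

struct Box { float cx, cy, cz, halfSize; };        // axis-aligned cube covering one node

// Step 2: walk the tree and emit one colored box per node, descending deeper
// the closer that part of the tree is to the camera.
void collectBoxes(const OctNode* node, Box box,
                  float camX, float camY, float camZ,
                  std::vector<std::pair<Box, Color>>& out)
{
    if (!node) return;                             // empty region: nothing stored, nothing drawn

    float dx = box.cx - camX, dy = box.cy - camY, dz = box.cz - camZ;
    float dist = std::sqrt(dx * dx + dy * dy + dz * dz) + 1e-6f;

    // Crude LOD rule (my assumption): stop descending once the node's size
    // over distance drops below some screen-detail threshold.
    const float kDetail = 0.01f;
    if (node->isLeaf() || box.halfSize / dist < kDetail) {
        out.push_back({box, node->averaged});
        return;
    }

    for (int i = 0; i < 8; ++i) {
        float h = box.halfSize * 0.5f;
        Box childBox{ box.cx + ((i & 1) ? h : -h),
                      box.cy + ((i & 2) ? h : -h),
                      box.cz + ((i & 4) ? h : -h),
                      h };
        collectBoxes(node->child[i].get(), childBox, camX, camY, camZ, out);
    }
}
```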

It may (arguably) be more efficient to represent certain surfaces that way, but then it becomes unclear how to apply standard techniques such as coloring, displacement mapping, or normal mapping on top of this. You can of course represent the surface detail achieved by these techniques with these "atoms" directly, but then the atom count explodes, and the representation is likely far less efficient, not more.



N.B. that if textures can be replaced by creating one atom per pixel in the texture, then the memory requirements of both approaches are going to be similar - you're storing the same data in the end.


In regards to both of these statements, wouldn't the conversion rate of 64 atoms per cubic millimeter take care of that? The atom count would be a controlled conversion, as well as the rate at which color data was distributed.

Until you guys said these things, I never really fully understood what he was talking about in reference to those numbers. It never dawned on me that he *could* have meant textured surfaces and might have figured out a way to represent color data from "converted textured assets".
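Just to put a rough number on that conversion rate (the byte counts here are my own assumptions, not anything stated in the demos):

```cpp
#include <cstdio>

int main() {
    // "64 atoms per cubic millimeter" is 4 x 4 x 4, i.e. 0.25 mm spacing.
    const double surfaceMm2   = 1000.0 * 1000.0;  // one square metre of surface
    const double atomsPerMm2  = 16.0;             // 4 x 4 atoms across a surface one atom thick
    const double bytesPerAtom = 4.0;              // assumed: RGB + flags, position implicit in the structure

    const double atoms = surfaceMm2 * atomsPerMm2;
    std::printf("atoms: %.0f, raw color data: %.1f MB\n",
                atoms, atoms * bytesPerAtom / (1024.0 * 1024.0));
    // ~16 million atoms and ~61 MB uncompressed -- in the same ballpark as a
    // 4096x4096 uncompressed texture, which is the point about similar memory use.
    return 0;
}
```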

Also, while doing more research about Point Cloud once you guys responded, I came across PhotoSynth. Microsoft made a product where you actually do the opposite. It converts high res pictures, into Point Cloud Models. The results are actually quite staggering. I don't know the conversion rate for PhotoSynth, but it seems like a technique similar to that would be ideal for the Unlimited Engine.
Yep, again though, note that this is already in practice. In one project that I worked on, we wanted to base a level off of a real location that we had a video of -- so we extracted all the frames from the video as separate "photographs", fed them through an app like PhotoSynth, and got a point-cloud of that location. We then cleaned up the data-set, built LODs from it and used it in the game.
A little bit off-topic, but every now and then someone says it has been done before... and I don't think so.

I mean, "some but not every one"* thinks it is a Sparse Octree with Raycasting... Then I like to see a Demo which produces 30 FPS at 1024x768 which such a deep Octree Level. I have seen none on a CPU, which they claim to use...


* Fixed that :)
Punika wrote:
"I mean, everybody thinks it is a Sparse Octree with Raycasting"
Wait. Have you even read the topic? Only some (but very popular, you know who) people have spread such an assumption.
One thing I would love to see in their next demo is a perfect quad around 1km in size tilted on every axis while keeping the spatial resolution.
If memory/disk consumption of this object is still 8% of its polygonal counterpart, I'll be impressed.

I'm pretty sure that he also said their engine didn't have LoD.
No LoD on geometry or surface color/normal information is a sure way to get aliasing hell.
We prefilter our textures for a reason, and we would prefilter geometry if it allowed it (like voxels do).
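For anyone wondering what prefiltering geometry would even look like, here's a minimal self-contained sketch of averaging colors bottom-up through an octree, the same way texture mips are averaged; the node layout and averaging scheme are just illustrative assumptions:

```cpp
#include <array>
#include <cstdint>
#include <memory>

struct VoxColor { uint8_t r, g, b; };

struct Node {
    VoxColor color{};                            // stored for leaves, computed for branches
    std::array<std::unique_ptr<Node>, 8> child;  // nullptr = empty space
};

// Post-order walk: a branch's color becomes the average of its occupied children,
// so a distant node can be shaded from its filtered color instead of aliasing.
VoxColor prefilter(Node& node) {
    int r = 0, g = 0, b = 0, n = 0;
    for (auto& c : node.child) {
        if (!c) continue;
        VoxColor avg = prefilter(*c);
        r += avg.r; g += avg.g; b += avg.b; ++n;
    }
    if (n > 0)
        node.color = { uint8_t(r / n), uint8_t(g / n), uint8_t(b / n) };
    return node.color;                           // a leaf just returns its stored color
}
```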
Try thinking more outside the box, guys. Unlimited Detail isn't something that Bruce Dell found in some PDF file on rendering unlimited detail. He invented it, basically going against all other current techniques for rendering point data.

My gist of all this is that the entire world (not just the individual models and objects in the world) is sliced into 2D layers and converted into Cartesian coordinates; these coordinates are then, at run-time, almost magically reverse-transformed to their proper screen-space coordinates after already being sorted through some unknown function.

Don't like that approach? Well, octrees have not been able to yield those results in real time either. So let's try to cut down the number of cycles and the complicated math and keep it simpler; that's the only way he could process that much information in such a small amount of time. Accept it and move towards that: computational theory.

I'd also say that ray tracing is really not the answer here, as he states. Thinking outside the box might be an example of this, based on the Cartesian idea: screen space may be nothing more than some normals, right, and just like when writing a script to do reflection and refraction, you're doing no more than moving a few pixels in the direction of that normal. So let's transform our Cartesian coordinates with some dot product against our screen-space normal; what might happen then? Magical reverse transformation of the exact "atom" we need for that point on the screen, without a lot of cycles or math.

This guy has been developing this thing for a long time and deserves more respect for not having stuck to standards, simply accepting the PDFs or tutorials he finds on GameDev as his ultimatum.

He went above and beyond. I say stop fighting it and embrace it. Just because he hasn't decided to populate it yet does not mean that it isn't there. Give him time to perfect it and make some affiliations with physics companies, who can then compute on the GPU while all graphics processing is being done on the CPU. This type of co-processing is what is going to make next-gen games next-gen. Be patient.
To my above post, which I cannot edit: I meant to say convert the 2D layers to radial coordinates; oh geez, I've been confusing those two a lot lately.
As long as he doesn't tell anyone how it's done, it's utterly useless to anyone.

Skeptical people do not believe something just because someone comes along claiming they have done it. They want you to show it, and they will immediately ask "how?"

If you claim something extraordinary, you should expect a barrage of tough questions.

And if you answer those questions with snake oil claims that it will solve all your problems and put silly words like "unlimited" into it, then you can expect to not be taken very seriously.
There are some papers about perfect hashing of UV sets, to map positions of voxels to texture coordinates etc. (as you'd otherwise have quite a lot of data).

In theory, you could render a box and, based on the UV of a particular face plus the view direction, address a voxel using this kind of perfect hashing. It would of course consume an extreme amount of memory, but it would really allow that constant-time voxel lookup.
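As a rough illustration of that lookup, using an ordinary std::unordered_map as a stand-in for the perfect spatial hash those papers actually build (so lookup here is only amortized constant time, and negative coordinates are ignored for brevity):

```cpp
#include <cstdint>
#include <unordered_map>

struct VoxelColor { uint8_t r, g, b; };

// Pack non-negative integer voxel coordinates into one 64-bit key (21 bits per axis).
inline uint64_t voxelKey(uint32_t x, uint32_t y, uint32_t z) {
    return (uint64_t(x) & 0x1FFFFF) |
          ((uint64_t(y) & 0x1FFFFF) << 21) |
          ((uint64_t(z) & 0x1FFFFF) << 42);
}

struct VoxelTable {
    std::unordered_map<uint64_t, VoxelColor> cells;

    // Given the hit position on a box face, quantize it to the voxel grid and
    // fetch that cell's color in (amortized) constant time.
    bool lookup(float px, float py, float pz, float voxelSize, VoxelColor& out) const {
        const uint64_t key = voxelKey(uint32_t(px / voxelSize),
                                      uint32_t(py / voxelSize),
                                      uint32_t(pz / voxelSize));
        auto it = cells.find(key);
        if (it == cells.end()) return false;     // empty cell
        out = it->second;
        return true;
    }
};
```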

