"Unlimited Detail"


I haven't seen a mention of this on gamedev.net, so I thought I'd post this link to see what you guys think: Unlimited Detail. In short: he's claiming to render point clouds on the CPU at decent framerates (as shown in his videos). From what I understand, the technique is to heavily pre-process a point cloud to compress the data and make searches very fast. I think the emphasis is on the "search" part: when you perform a ray-cast for a pixel, it can quickly find which point in the point cloud is hit first. It all looks very interesting, but I find it hard to believe you can perform searches fast enough for every pixel of a high-resolution image. The videos and images on the site are let down by programmer's art.
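The "fast search" idea can be sketched with a sparse voxel hash: bucket the points once in a pre-process, then march each pixel's ray cell by cell (3D DDA) until it lands in an occupied bucket. This is only my guess at the flavor of data structure involved; `build_voxel_index` and `raycast` are illustrative names, not anything from the actual product.

```python
import math

def build_voxel_index(points, cell=0.1):
    """Pre-process: bucket points into a sparse voxel hash for fast lookup."""
    index = {}
    for p in points:
        key = tuple(int(math.floor(c / cell)) for c in p)
        index.setdefault(key, []).append(p)
    return index

def raycast(index, origin, direction, cell=0.1, max_steps=200):
    """March the ray voxel by voxel (3D DDA); return a point from the first occupied cell."""
    length = math.sqrt(sum(d * d for d in direction))
    d = [c / length for c in direction]
    voxel = [int(math.floor(c / cell)) for c in origin]
    step = [1 if c > 0 else -1 for c in d]
    t_max, t_delta = [], []           # distance to next boundary, and per-cell stride
    for i in range(3):
        if d[i] != 0:
            boundary = (voxel[i] + (step[i] > 0)) * cell
            t_max.append((boundary - origin[i]) / d[i])
            t_delta.append(cell / abs(d[i]))
        else:
            t_max.append(float("inf"))
            t_delta.append(float("inf"))
    for _ in range(max_steps):
        key = tuple(voxel)
        if key in index:
            return index[key][0]      # first point hit for this pixel
        axis = t_max.index(min(t_max))  # advance across the nearest boundary
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]
    return None                       # ray escaped without a hit
```

The per-pixel cost here is a handful of dictionary lookups, independent of how many points the scene holds in total, which is roughly the property the marketing material seems to be claiming.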

I'll believe this when I have a working demo sitting on my computer.

The photos do not show anything that cannot be done with current graphics hardware and the videos could have been done in a ray tracing app.

I am curious, though: assuming this is real, how do you do animation? Do you have to translate millions of points in 3D space to move, say, a character's arm? Even if they can get the rendering done at real-time speeds (itself a vague claim; 10 FPS is "real-time", but not acceptable in a game), the math needed to do any transformations on the point cloud will quickly eat up any speed increase.

Can't see this being used for anything that animates... It does, however, look like a nice solution for terrain/buildings (if any of this is actually doable).

I've always had a soft spot for voxel-style rendering. But it's just like ray-tracing: it may be possible in the "next generation" (but never actually is).

Given that you can do offset mapping / virtual displacement mapping to get similar results on current-gen hardware, I think the polygon will still be around for a little while longer.

I watched the video; unfortunately it's cut short, but I must say it was quite impressive in parts (the little city/jungle bit, for one).

I see nothing impossible in that technique, but if I may remark, games are trying to move away from pre-processing as much as they can, and I think the future will be interactive environments, partially procedural, with complex physics and animations. A technique that relies on massive pre-processing of a static world has little future, IMO.

Y.

There is an important topic related to that: entropy.

Entropy gives a lower bound on the number of bits needed to represent something.

Models displayed in such scenes need to come from somewhere. They could be modeled using NURBS, solids, or some other method. Imagine a sphere. While it could be rendered at "infinite" detail, that would add no extra information. A sphere at (2,4,1) with a radius of 2.5 carries exactly the same amount of information whether rendered 1 pixel wide, 1000 pixels wide, or 1 billion pixels wide.


The problem with polygons demonstrated by conventional engines is not so much the detail rendered, but the size of the scene.

This problem is not solved by a new rendering technique. With adequate pre-processing, polygon-based engines are capable of exactly the same thing - but at adequate detail, the scene description would be terabytes, even petabytes, in size if it were to convey that additional information. A tree that is infinitely zoomable would also need an adequately detailed model representation.


Note that demos rendering the Sierpinski cube in 3D in real time were available back in the '90s - and this is the same concept. Rendering the same simple model billions of times over does not break any boundaries - the storage needed is minimal.
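The storage argument can be made concrete with instancing: you store one model plus a transform per copy, so the rendered point count can vastly exceed the stored data. A minimal sketch (the model and grid layout are arbitrary):

```python
# One tiny point model, instanced many times: storage is the model plus one
# offset per instance, never one entry per rendered point.
model = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]

def instantiate(model, offsets):
    """Yield every instance's points lazily; nothing per-point is stored."""
    for ox, oy, oz in offsets:
        for x, y, z in model:
            yield (x + ox, y + oy, z + oz)

# A 100x100 grid of instances: the stored data is 3 model points plus
# 10,000 offsets, yet 30,000 points reach the renderer.
offsets = [(i * 2.0, j * 2.0, 0.0) for i in range(100) for j in range(100)]
total_rendered = sum(1 for _ in instantiate(model, offsets))
```

Scaling the grid up scales the rendered point count without touching the model data - which is exactly why "billions of points" on screen says little about scene entropy.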


Ultimately, the visuals generated by such methods are very similar to fractal renderings: the same model replicated many times across the scene. Unfortunately, the human brain is incredibly good at pattern matching and will immediately recognize this as artificial.

So the final trick to improving the perceived quality of such rendering would be to properly mutate each instance to give a perception of randomness.
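One cheap way to get such per-instance mutation is to derive variation parameters deterministically from the instance id, so no extra data is stored per copy. A sketch of the idea - the hash choice and the scale range here are arbitrary assumptions of mine:

```python
import hashlib
import struct

def instance_variation(instance_id, lo=0.8, hi=1.2):
    """Map an instance id to a stable pseudo-random scale factor in [lo, hi],
    so each copy looks slightly different without storing anything per copy."""
    digest = hashlib.sha256(str(instance_id).encode()).digest()
    (raw,) = struct.unpack_from("<I", digest)   # first 4 bytes as an unsigned int
    t = raw / 0xFFFFFFFF                        # normalize to [0, 1]
    return lo + t * (hi - lo)
```

The same scheme extends to rotation, hue shifts, or which sub-model to pick; because the variation is a pure function of the id, it adds no entropy to the stored scene while breaking up the visible repetition.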


At the end of the day, just like with the laws of thermodynamics, the perceived quality and complexity of the scene being rendered is directly proportional to the entropy of the data that defines it. And this directly affects the cost of asset production, which is already bordering on prohibitive today.

Using such a technique for hybrid rendering (grass, sand, or similar details which lend themselves well to procedural generation - imagine a field of grass with billions of individual blades, or billions of petals, or trees with billions of leaves each) would probably be considerably more useful. But the viability of this will mostly be affected by other factors, such as lighting or other interaction requirements.

The difficulty of creating art at a high level of detail is connected to how you define the primitives you model with. Deriving point clouds from high-res meshes, which would be the more obvious route, might just not be the adequate technique for this kind of display.

I was thinking more in the direction of entirely procedural modeling: using basic functions to represent spheres, boxes, and a few more primitives parametrically, and combining these only with boolean operations and mathematical modifiers, such that not a single actual vertex ever has to be stored. From a model defined this way, I could fairly easily create a set of points suited to exactly the amount of detail I want. I can also leave out vast sets of points if they're not visible, and instance them as needed.

Combining this with procedural texturing (which could also serve as a source for high-detail "baked out" bump mapping) could reduce the primary storage size considerably. It would of course require artists to work entirely differently than they do today, and the pre-processing would have to be a fair bit faster. The problem of storing the actual points then becomes a question of availability: I may just create as many as the host system can handle, making "quality" a question of RAM rather than raw computing power (though that is still needed to quite some degree, of course).
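The parametric/boolean modeling idea above can be sketched with signed distance functions: primitives are plain functions, boolean operations are min/max combinations, and points are only generated - at whatever density is wanted - by sampling near the surface. This is my own illustration of the concept, not the poster's actual pipeline:

```python
import itertools

# Primitives as signed distance functions: negative inside, positive outside.
def sphere(cx, cy, cz, r):
    return lambda x, y, z: ((x - cx)**2 + (y - cy)**2 + (z - cz)**2) ** 0.5 - r

def box(cx, cy, cz, hx, hy, hz):
    def d(x, y, z):
        qx, qy, qz = abs(x - cx) - hx, abs(y - cy) - hy, abs(z - cz) - hz
        outside = (max(qx, 0)**2 + max(qy, 0)**2 + max(qz, 0)**2) ** 0.5
        return outside + min(max(qx, qy, qz), 0.0)
    return d

# Boolean combinators: no vertex is ever stored, just composed functions.
union = lambda a, b: (lambda x, y, z: min(a(x, y, z), b(x, y, z)))
subtract = lambda a, b: (lambda x, y, z: max(a(x, y, z), -b(x, y, z)))

def sample_points(sdf, lo, hi, n):
    """Sample an n^3 grid and keep cells in a thin shell around the surface,
    so the point density is chosen at sampling time, not modeling time."""
    step = (hi - lo) / n
    pts = []
    for i, j, k in itertools.product(range(n), repeat=3):
        x = lo + (i + 0.5) * step
        y = lo + (j + 0.5) * step
        z = lo + (k + 0.5) * step
        if abs(sdf(x, y, z)) < step:
            pts.append((x, y, z))
    return pts

# A box with a sphere carved out of one face; detail is a sampling parameter.
shape = subtract(box(0, 0, 0, 1, 1, 1), sphere(0, 0, 1, 0.8))
coarse = sample_points(shape, -1.5, 1.5, 16)
fine = sample_points(shape, -1.5, 1.5, 32)
```

Doubling the grid resolution yields a denser point set from the same stored description, which is the "quality is a question of RAM" property described above.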

I am a bit of a friend of any idea that moves away from polys and projection, because it is always a collection of "tricks" to make things look like the actual thing when they mostly are not. Straightforward "real" (virtual) reflections, or shadows arising as a natural outcome of the light computations rather than from shadow volumes - things like these I would really like to see usable in "home" CG someday.

Projects like this one tend to make bigger promises than they can keep in the end, but I am still fond of any effort to cut loose from the poly-rendering world, which has without any doubt served us very well (and still does).
That none of this will arrive today, next year, or within the next three years is not in question. It will take its time, but it would take much longer without ambitious projects like this one pushing things along. They have my support, but they must watch out not to be too disappointed if the industry and hardware just aren't ready, or if the technique wasn't what we've been waiting for after all. The effort is undoubtedly appreciable.

I'd like to point out that the rendering of, and I quote, "trillions of trillions of trillions of points" in real time isn't possible unless they have really invented something extraordinary. Recent scientific research on this topic renders 883k surface points at an average framerate of 16 fps, with no pre-computation of any kind. They also apply some blur to subdue the artifacts caused by point-cloud rendering (without it, the rendering would run at 29 fps). From the look of the videos, "Unlimited Detail" doesn't do that kind of thing (holes in the geometry, holes in the depth buffer of the shadow-mapping technique, etc.).

If they really have invented something, they could certainly make a lot of money out of it. But keep in mind: the geometry used for point-cloud rendering must be of super high quality to cover the screen, which may hurt appearance or performance. Also, artists must create these high-quality models, which takes more time than ordinary polygons.

Emiel

Quote:
Original post by emiel1
I'd like to point out that the rendering of, and I quote, "trillions of trillions of trillions of points" in real-time isn't possible, unless they have really invented something extraordinary.


They only need to render 1024x768 points - one for each visible pixel. The scene is defined by an arbitrary number of points, but most of them are never rendered. It's like ray tracing, but shooting each ray only once and never bouncing it. The rest of the magic is in how to efficiently store the data set.

If lighting and other information is pre-computed, then doing this type of rendering is fairly trivial. But this is also where the method fails, since pre-computing everything severely limits interaction.
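The pixel-bound cost argument can be made explicit: the loop below performs exactly one search per pixel, so its work is fixed by the resolution, not by the number of scene points. `first_hit` is a placeholder for whatever fast search the engine really uses:

```python
def render(width, height, first_hit):
    """One search per pixel, no bounces: cost is width * height lookups,
    regardless of how many points define the scene."""
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            ray = ((x + 0.5) / width, (y + 0.5) / height)  # simplified ray id
            row.append(first_hit(ray))  # one lookup; shading assumed pre-computed
        image.append(row)
    return image

# 1024x768 pixels -> exactly 786,432 searches, however many points exist.
frame = render(1024, 768, lambda ray: 0)
searches = sum(len(row) for row in frame)
```

Swapping in a 10x larger scene changes only what `first_hit` searches through (hopefully logarithmically), never the number of searches - which is why the per-pixel search structure is the whole ballgame.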

Perhaps a more interesting example is FryRender, which can apparently provide interactive scenes.

Now *that* is a technology that would transform real-time rendering if it is ever made into a usable real-time version. Once that happens, rendered worlds will become indistinguishable from live ones.

I give it 10 years tops before someone actually pulls it off.
