"Unlimited Detail"


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

31 replies to this topic

#1 Orbital Fan   Members   -  Reputation: 106


Posted 29 December 2009 - 12:16 AM

I haven't seen a mention of this on gamedev.net, so I thought I'd post this link to see what you guys think: Unlimited Detail

In short: he's claiming to render point clouds on the CPU at decent framerates (as shown in his videos). From what I understand, the technique is to heavily pre-process a point cloud to compress the data and to make searches very fast. I think the emphasis is on the "search" part: when you perform a ray-cast for a pixel, it can quickly find which point in the point cloud is hit first.

It all looks very interesting, but I do find it hard to believe you can perform searches fast enough for every pixel in a high-resolution image. The videos and images on the site are let down by programmer's art.
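The "fast search per pixel" idea can be sketched as a front-to-back traversal of a spatial hierarchy. This is a hypothetical illustration of the general approach (an octree-style tree with ray/AABB slab tests, visiting the nearest child first), not Unlimited Detail's actual algorithm:

```python
# Hypothetical sketch: front-to-back tree traversal so a per-pixel ray
# query can return the first occupied leaf quickly. Not the actual
# Unlimited Detail algorithm -- just the general "fast spatial search" idea.

def ray_box_t(origin, inv_dir, lo, hi):
    """Entry distance of the ray into an AABB via the slab test, or None on a miss."""
    tmin, tmax = 0.0, float("inf")
    for o, inv, l, h in zip(origin, inv_dir, lo, hi):
        t1, t2 = (l - o) * inv, (h - o) * inv
        tmin = max(tmin, min(t1, t2))
        tmax = min(tmax, max(t1, t2))
    if tmin > tmax:
        return None
    return tmin

class Node:
    def __init__(self, lo, hi, children=None, point=None):
        self.lo, self.hi = lo, hi
        self.children = children or []  # up to 8 child boxes in an octree
        self.point = point              # representative point, if a leaf

def first_hit(node, origin, inv_dir):
    """Return the nearest stored point hit by the ray, searching near-to-far."""
    if ray_box_t(origin, inv_dir, node.lo, node.hi) is None:
        return None
    if not node.children:
        return node.point
    # visit children ordered by entry distance so the first hit found wins
    hits = []
    for c in node.children:
        t = ray_box_t(origin, inv_dir, c.lo, c.hi)
        if t is not None:
            hits.append((t, c))
    for _, c in sorted(hits, key=lambda p: p[0]):
        p = first_hit(c, origin, inv_dir)
        if p is not None:
            return p
    return None
```

Because children are visited nearest-first, the search can stop at the first occupied leaf instead of testing every point in the cloud, which is presumably what makes a per-pixel query affordable.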


#2 Orbital Fan   Members   -  Reputation: 106


Posted 29 December 2009 - 12:20 AM

Here's another site which looks like a similar technique:

atomontage

(although they mention using a combination of GPU and CPU).

#3 MarkS   Members   -  Reputation: 180


Posted 29 December 2009 - 04:37 AM

I'll believe this when I have a working demo sitting on my computer.

The photos do not show anything that cannot be done with current graphics hardware and the videos could have been done in a ray tracing app.

I am curious, though: assuming this is real, how do you do animation? Do you have to translate millions of points in 3D space to move, say, a character's arm? Even if they can get the rendering done at real-time speeds (which is itself a slippery term; 10 FPS is "real-time", but not acceptable in a game), the math needed to do any transformations on the point cloud will quickly eat up any speed increase.
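The transformation cost being raised here can be made concrete with a rough sketch (illustrative numbers, not a benchmark of any engine): even one rigid rotation of a million-point limb is a full matrix multiply over the whole cloud, every frame.

```python
# Rough illustration of the animation concern: rigidly transforming a
# large point cloud every frame. The point count is illustrative.
import numpy as np

rng = np.random.default_rng(0)
points = rng.random((1_000_000, 3))          # 1M points making up an "arm"

angle = np.radians(15.0)                     # rotate the limb 15 degrees
c, s = np.cos(angle), np.sin(angle)
rot = np.array([[c, -s, 0.0],
                [s,  c, 0.0],
                [0.0, 0.0, 1.0]])

moved = points @ rot.T                       # one rigid transform, once per frame
print(moved.shape)                           # (1000000, 3)
```

And this is the cheap case: skinned animation would blend several such transforms per point, multiplying the per-frame cost again.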

#4 Orbital Fan   Members   -  Reputation: 106


Posted 29 December 2009 - 05:18 AM

Can't see this being used for anything that animates... It does, however, look like a nice solution for terrain/buildings (if any of this is actually doable).

I've always had a soft spot for voxel-style rendering. But it's just like ray-tracing: it may be possible in the "next generation" (but never actually is).

Given that you can do offset mapping / virtual displacement mapping to get similar results on current-gen hardware, I think the polygon will still be around for a little while longer.

#5 Nanoha   Members   -  Reputation: 300


Posted 29 December 2009 - 05:19 AM

I watched the video; unfortunately it's cut short, but I must say it was quite impressive in parts (the little city/jungle bit, for one).

#6 Ysaneya   Members   -  Reputation: 1241


Posted 29 December 2009 - 05:54 AM

I see nothing impossible in that technique, but if I may remark, games are trying to move away from pre-processing as much as they can, and I think the future will be interactive environments, partially procedural, with complex physics and animations. A technique that relies on massive pre-processing for a static world has little future IMO.

Y.


#7 Antheus   Members   -  Reputation: 2397


Posted 29 December 2009 - 06:09 AM

There is an important topic related to that: entropy.

Entropy sets a lower bound on the number of bits needed to represent something.

Models displayed in such scenes need to come from somewhere. They could be modeled using NURBS, solids, or some other method. Imagine a sphere: while it could be rendered at "infinite" detail, that would add no extra information. A sphere at (2,4,1) with a radius of 2.5 carries exactly the same amount of information whether rendered 1 pixel wide, 1000 pixels wide or 1 billion pixels wide.
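The sphere example can be made concrete: the description is a handful of numbers, and however many points you expand it into for rendering are derived data, not new information. A small sketch (the crude ring sampler is just for demonstration):

```python
# Sketch of the entropy point: the sphere's description is a few numbers,
# no matter how many surface points we expand it into for rendering.
import math

sphere = {"center": (2.0, 4.0, 1.0), "radius": 2.5}   # the actual information

def sample_points(sphere, n):
    """Expand the fixed description into n renderable surface points."""
    cx, cy, cz = sphere["center"]
    r = sphere["radius"]
    pts = []
    for i in range(n):
        theta = 2 * math.pi * i / n     # crude equatorial ring, illustration only
        pts.append((cx + r * math.cos(theta), cy + r * math.sin(theta), cz))
    return pts

# 10 points or 10 million: the information content of the scene is unchanged.
assert len(sample_points(sphere, 10)) == 10
```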


The problem with polygons demonstrated by conventional engines is not so much the detail rendered as the size of the scene.

This problem is not solved by a new rendering technique. With adequate pre-processing, polygon-based engines are capable of exactly the same thing, but with adequate detail the scene description would be terabytes, even petabytes, in size if it were to convey that additional information. A tree that is infinitely zoomable would also need an adequately detailed model representation.
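A back-of-envelope calculation along these lines (all numbers illustrative) shows how fast unique detail blows up: sampling a single flat square kilometre at 1 mm spacing, at 16 bytes per point, already lands in the tens of terabytes.

```python
# Back-of-envelope check on the storage claim (illustrative numbers):
# a 1 km x 1 km area sampled at 1 mm spacing, one surface point per sample,
# with position + colour packed into 16 bytes per point.
points = 1_000_000 ** 2            # 1e6 mm per side, squared: 10^12 samples
bytes_per_point = 16
total = points * bytes_per_point
print(total / 1e12, "TB")          # 16.0 TB for a single flat square km
```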


Note that demos rendering the Sierpinski cube in 3D in real time were available back in the 90s, and this is the same concept. Rendering the same simple model billions of times over and over does not break any boundaries; the storage needed is minimal.


Ultimately, the visuals generated by such methods are very similar to fractal renderings: the same model replicated many times over the scene. Unfortunately, the human brain is incredibly good at pattern matching and will immediately recognize this as artificial.

So the final trick towards improving the perceived quality of such rendering would be to properly mutate each instance to give a perception of randomness.


At the end of the day, just like with the laws of thermodynamics, the perceived quality and complexity of the scene being rendered is directly proportional to the entropy of the data that defines it. And this directly affects the cost of asset production, which is already bordering on prohibitive today.

Using such a technique for hybrid rendering (grass, sand, or similar details which lend themselves well to procedural generation: imagine a field of grass with billions of individual blades, or billions of petals, or trees with billions of leaves each) would probably be considerably more useful. But the viability of this will mostly be affected by other factors, such as lighting or other interaction requirements.

#8 Medium9   Members   -  Reputation: 192


Posted 29 December 2009 - 07:16 AM

The issue of creating art to a high degree of detail is somewhat connected to how you define the primitives you model with. Deriving point clouds from high-res meshes, which would be the more obvious way, might just not be the adequate technique for this kind of display.

I was thinking more in the direction of entirely procedural modeling: using basic functions to represent spheres, boxes and some other primitives parametrically, and combining these using only boolean operations and mathematical modifiers, such that not a single actual vertex ever has to be stored. From a model defined this way, I could comparatively easily create a set of points suitable for the amount of detail I wish. Also, I can leave out vast sets of points if they're not visible, and instance them as needed. Combining this with procedural texturing (which could also serve as a source for high-detail "baked out" bump mapping) could minimize the primary storage size considerably. It would of course require artists to work entirely differently than they do today, and the preprocessing would have to be a bit faster. The problem of storing the actual points then becomes a question of availability: I may just create as many as the host system can handle, making "quality" a question of RAM, not so much of raw computing power (though that is still needed to quite some degree, of course).
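One concrete reading of this idea is modeling with parametric signed distance functions combined by boolean operators, then sampling points at whatever density the viewer needs. This is a hypothetical sketch, not the poster's actual pipeline; the box distance uses a simplified Chebyshev form that is only exact near the faces.

```python
# Hypothetical sketch: parametric primitives as signed distance functions,
# combined with boolean operators -- zero stored vertices. Points are
# generated on demand at whatever density is wanted.
import math

def sphere(cx, cy, cz, r):
    return lambda x, y, z: math.dist((x, y, z), (cx, cy, cz)) - r

def box(cx, cy, cz, half):
    # Chebyshev distance: correct sign everywhere, exact only near the faces
    return lambda x, y, z: max(abs(x - cx), abs(y - cy), abs(z - cz)) - half

def union(a, b):    return lambda x, y, z: min(a(x, y, z), b(x, y, z))
def subtract(a, b): return lambda x, y, z: max(a(x, y, z), -b(x, y, z))

# A unit box with a spherical bite taken out of one corner.
model = subtract(box(0, 0, 0, 1.0), sphere(1, 1, 1, 0.8))

def sample(model, n, eps=0.2):
    """Grid-sample points near the surface; finer grid = more detail,
    same underlying description."""
    pts = []
    step = 2.2 / n
    for i in range(n):
        for j in range(n):
            for k in range(n):
                x, y, z = -1.1 + i * step, -1.1 + j * step, -1.1 + k * step
                if abs(model(x, y, z)) < eps:   # within eps of the surface
                    pts.append((x, y, z))
    return pts
```

The storage argument then holds as described: the model is a closed-form expression, and `sample(model, n)` trades RAM for quality at display time.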

I am a bit of a friend of any idea that moves away from polys and projection, because that is always a collection of "tricks" to make things look like they were the actual thing, when mostly they are not. Straightforward "real" (virtual) reflections, or shadows that arise not from shadow volumes but as a natural outcome of the light computations: things like these I would really like to see usable someday in "home" CG.

Projects like this one tend to make bigger promises than they can keep in the end, but I am still fond of any effort to cut loose from the poly-rendering world, which without any doubt has served us very well (and still does).
That all this won't come today, next year or within the next three years is not in question. It'll take its time, but it would take much longer if there weren't ambitious projects like the one mentioned pushing things. They have my support, but they must watch out not to be too disappointed if the industry and hardware just aren't ready, or the technique wasn't what we've been waiting for after all. The effort is undoubtedly appreciable.

#9 emiel1   Members   -  Reputation: 166


Posted 29 December 2009 - 10:05 AM

I'd like to point out that the rendering of, and I quote, "trillions of trillions of trillions of points" in real time isn't possible, unless they have really invented something extraordinary. There is recent scientific research on this topic: it renders 883k surface points at an average framerate of 16 fps, with no pre-computation of any kind. It also applies some blur to subdue the artifacts caused by point cloud rendering (without it, the rendering would run at 29 fps). From the look of the videos, "Unlimited Detail" doesn't do that kind of thing (there are holes in the geometry, holes in the depth buffer of the shadow-mapping technique, etc.).

If they really have invented something, they could certainly make a lot of money out of it. But bear in mind: the geometry used for point cloud rendering must be of super high quality to cover the screen, which may lower appearance or performance. Also, artists must create these kinds of high-quality models, and that takes more time than ordinary polygons.

Emiel

#10 Antheus   Members   -  Reputation: 2397


Posted 29 December 2009 - 11:26 AM

Quote:
Original post by emiel1
I'd like to point out that the rendering of, and I quote, "trillions of trillions of trillions of points" in real-time isn't possible, unless they have really invented something extraordinary.


They only need to render 1024x768 points, one for each visible pixel. The scene is defined by an arbitrary number of points, but most of them aren't rendered. It's like doing ray-tracing, but shooting each ray only once and not bouncing it. The rest of the magic is about how to efficiently store the data set.
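The per-frame budget this implies is easy to quantify: one search per visible pixel, regardless of how many points define the scene.

```python
# One search per visible pixel, per frame: the workload depends on the
# display resolution and framerate, not on the size of the point cloud.
width, height, fps = 1024, 768, 30
searches_per_frame = width * height
searches_per_second = searches_per_frame * fps
print(searches_per_frame, searches_per_second)   # 786432 23592960
```

About 24 million hierarchical searches per second at 1024x768/30 fps, which is plausible on a CPU if each search touches only a handful of tree nodes.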

If lighting and other information is pre-computed, then doing this type of rendering is fairly trivial. But this is also where the method fails, since precomputing everything severely limits the interaction.

Perhaps a more interesting example is FryRender, which can apparently provide interactive scenes.

Now *that* is a technology that would transform real time rendering if it ever gets made into usable real-time version. Once that happens, rendered worlds will become indistinguishable from live ones.

I give it 10 years tops before someone actually pulls it off.


#11 PolyVox   Members   -  Reputation: 708


Posted 29 December 2009 - 10:55 PM

Quote:
Original post by maspeir
I'll believe this when I have a working demo sitting on my computer.

This is essentially my attitude as well. I am extremely interested in both these technologies, as I consider them the closest competitors to my own engine, but until I actually play an interactive demo on my own machine it is hard to know if they work as claimed. Though I have to admit the videos are pretty impressive (Atomontage at least; I haven't watched the Unlimited Detail ones yet).
Quote:
Original post by Ysaneya
...games are trying to move away from pre-processing as much as they can, and I think the future will be interactive environments...

While I generally agree, the Atomontage videos do show some rather cool examples of truck tyres leaving tyre marks in the ground, and of dynamic smoke being animated in real time. It would be interesting to know how much preprocessing is actually involved.

[Edited by - PolyVox on December 30, 2009 7:55:13 AM]

#12 Lode   Members   -  Reputation: 982


Posted 30 December 2009 - 03:13 AM

Without looking at anything technical at all, I find the website not really convincing at this time. They're quite vague, and quotes like "Unlimited Detail is believed by many to be the one of, and possibly the most significant piece of technology of the decade." don't really convince me unless there's some proof to back it up. They show lots of pretty pictures (strangely enough, in a 256-color palette), and spend a lot of paragraphs explaining the problems of conventional engines, but again I miss the proof that their engine does it better.

Nonetheless, it would be cool if it's true, so hopefully we really get to see this some day.

There is something that disappoints me in their pictures: in some of them there is grass, but you can clearly see a repeating tile pattern. I really hope repeating tile patterns (especially in nature scenes) will be a thing of the past.

[Edited by - Lode on December 30, 2009 9:13:54 AM]

#13 Quat   Members   -  Reputation: 404


Posted 30 December 2009 - 03:35 AM

Interesting, but there are some new DirectX 11 demos that show off the tessellator unit; they get almost pixel-sized triangles and do real displacement mapping (not parallax occlusion mapping).




#14 cignox1   Members   -  Reputation: 723


Posted 30 December 2009 - 09:19 AM

Quote:
Original post by Lode
Without looking at anything technical at all, I find the website not really convincing at this time.


Same here.

Quote:

It really is Unlimited, Infinite, endless power, for 3D graphics.


I might be wrong, but this statement seems quite strange. Anything involving "infinite" (especially where computers are involved) doesn't sound very professional.




#15 Nick Gravelyn   Members   -  Reputation: 851


Posted 30 December 2009 - 10:45 AM

I'm a little skeptical as well. They claim unlimited, but we all know computers are quite finite. They always mention a 1024x768 resolution, but that's, IMO, a thing of the past. If we're really talking about revolutionizing graphics, the target is at least 1920x1080, which is about 2.6x as many points. I'm curious not only to see this in action on my computer, but to see whether their algorithm actually scales to HD, given that it's quite a few more points to put on screen and they never mention any resolution besides 1024x768.

In addition there's all the other issues such as authoring the content, memory usage, animations, etc. Even if you could use this for non-animated objects, would you want to? Mixing something like this with traditional polygonal animated meshes would likely look worse due to the contrast in geometry.

I'm putting this in my bucket along with OnLive and all the other technologies that sound great but need to be proved to me still.

#17 Antheus   Members   -  Reputation: 2397


Posted 30 December 2009 - 02:51 PM

Quote:
Original post by NickGravelyn
I'm putting this in my bucket along with OnLive and all the other technologies that sound great but need to be proved to me still.


See here for an update. Taking into consideration the custom video encoding hardware, custom routing and custom controller, and crunching the numbers they give, it is actually possible for OnLive to have less latency for some players than playing locally with a stock Xbox controller (when multiple controllers are connected).

Guess we'll see soon enough.

#18 Nick Gravelyn   Members   -  Reputation: 851


Posted 30 December 2009 - 03:27 PM

Quote:
Original post by Antheus
Quote:
Original post by NickGravelyn
I'm putting this in my bucket along with OnLive and all the other technologies that sound great but need to be proved to me still.


See here for an update. Taking into consideration the numbers, custom video encoding hardware, custom routing and custom controller, crunching the numbers given by them, it is actually possible for online to have less latency for some than running it via stock x-box controller locally.
That's not possible. If I use a wired 360 controller, OnLive can't possibly beat that. And even when he talks about the wireless controllers, he says there's about 20ms of latency, which is still a quarter of OnLive's target of 80ms.

At the end of the day, they've demoed it with one guy playing 250 miles from a server that is likely only being used by, what, a few hundred beta testers? I live in Seattle where my nearest server would be, according to their map, southern California (i.e. well over 1000 miles away and surrounded by some massively populated areas). I simply won't believe that experience is going to be better than my Xbox 360 sitting in my living room until it's out and proven.

(Not to mention I like my 360 game pad and their controller looks ugly. :P)

EDIT: Whoops, guess this is a tad off topic. Back to Unlimited Detail: I'm still doubtful. :D

#19 MarkS   Members   -  Reputation: 180


Posted 31 December 2009 - 01:11 AM

My problem with this, other than the obvious doubts, is: assuming it's true, what does this mean for games? In the past few years, I have seen an apparent rush towards better graphics at the expense of content. I'm an Elder Scrolls fan; take Morrowind, for example. The graphics sucked, but the story was very in-depth and it took at least 100 hours to get through the main quest alone. Then Oblivion comes along, the graphics are amazing, and I can beat the main quest and most of the side quests in the same amount of time.

I'd much rather have lower-quality polygonal graphics with a good, in-depth story and awesome gameplay than near-realistic graphics that take so much time and resources to develop that the developer cannot spend time on what is important.

#20 Medium9   Members   -  Reputation: 192


Posted 31 December 2009 - 03:20 AM

I have to heavily agree with that. A few weeks ago I was so bored by recent games that I grabbed ScummVM and replayed Monkey Island 1+2 and DOTT with amazing joy, which definitely didn't all come from nostalgia (though that was certainly present too). Good and witty storytelling is without doubt falling short in a large percentage of the games released per year.
The thing is, this isn't the fault of any coder or art department. It quite often is the big publishers, who are often somewhat ignorant and deadline-pushing, which cripples titles that sounded great so often that it hurts. Thus it isn't the mere availability of better GFX, but the marketing-oriented companies, and in the end the customer, who often seems more easily lured by stunning graphics than by good games. Since companies strive for cash, and only with few exceptions for ideals, they of course cater to that, unfortunately at the expense of those who enjoy and create quality games, who seem to be a minority.
Stopping progress in graphics isn't going to change one bit of this machinery, I fear.



