
[Theory] Unraveling the Unlimited Detail plausibility


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

  • You cannot reply to this topic
168 replies to this topic

#61 _moagstar_   Members   -  Reputation: 465


Posted 11 August 2011 - 07:09 AM

I'm not 100% sure I agree with this. I was a 3D animator for a while before I switched to programming and I'm still a potter. Modeling in clay is a good amount easier and more natural to do. There's a lot to be said for the tactile feedback of the clay and interacting with a model in 3D instead of interacting with a model through a variety of 2D interfaces which interact with the 3D model.


Granted, and I'm sure that there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. For example, on a very superficial level: how easy is 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation for a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, just like the rest of his claims, naive and overstated.

I'd think where it would be most useful would be importing architecture.
You can get the whole interior of a building in a relatively short amount of time completely textured and to scale; exterior is a bit trickier, but can still be done in under a day depending on the size and complexity of the building.


I disagree; stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.


#62 szecs   Members   -  Reputation: 2102


Posted 11 August 2011 - 07:14 AM

Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....

#63 ArKano22   Members   -  Reputation: 646


Posted 11 August 2011 - 08:06 AM

I've never used voxel technology, but I've been thinking about UD and an efficient way to store voxels, and I've come up with this. It might be stupid, but oh well...

Let's suppose we are drawing an infinite run of voxels, all in the same direction: along the positive x axis, for example. We can use one bit to tell whether the next voxel is at x+1 or not. If not, it must be at y+1, y-1, z+1 or z-1, so we use 2 bits for that (2^2 = 4).

Now, looking at a model, it seems fairly common to have long runs of voxels in the same direction: walls, trees, bricks, etc. We can use n bits to tell how many voxels ahead of us continue in the same direction, and just not store any information for them.

if voxel starts with 0:
n bits to represent up to 2^n successive voxels after this one (total n+1 bits)

if voxel starts with 1:
2 bits to indicate a new direction (total 3 bits)

Using n = 4:
In the worst case (every voxel changes direction), we can store 1 million voxels in 10^6 * 3 / 8 / 1024 ≈ 366 KB.
In the best case (every voxel has up to 2^n = 16 neighbours facing the same direction), we can store 1 million voxels in just 38 KB. If we knew the best value for n beforehand, it could be even lower.

It would be possible to preprocess a surface and find an optimal representation of it in terms of n (the number of bits used for the average run length in the same direction) and the path followed through the voxels.
Color info could be stored in a similar way, adding bits to indicate a relative displacement over a palette.

Drawbacks: n and the path must be chosen very carefully or you might end up wasting space like crazy. The "worst case" above is not the true worst case: with many short runs of just two voxels, half the voxels waste (n+1) bits just to say that the next one needs no info. Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?

EDIT: On second thought, this really is stupid (lol). Just use one bit per voxel to indicate whether the next one changes direction or not: 3 bits at worst and 1 at best per voxel, an average of 2 bits per voxel, so 1 million voxels in about 244 KB.
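To make the EDIT's scheme concrete, here is a rough Python sketch (my own illustration, not ArKano22's code): each step along a connected voxel chain costs one flag bit (0 = continue in the same direction), plus two bits naming one of the four perpendicular directions when it turns. The direction table and helper names are assumptions.

```python
# Illustrative sketch of the 1-to-3-bits-per-voxel chain encoding (an
# assumption based on the post above, not any published format).

DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def perpendicular(d):
    """The 4 directions orthogonal to d, in a fixed order (2 bits each)."""
    return [p for p in DIRS if p != d and p != tuple(-c for c in d)]

def encode(path):
    """Encode a list of unit steps as a flat bit list."""
    bits, cur = [], path[0]
    for step in path[1:]:
        if step == cur:
            bits.append(0)                            # 1 bit: same direction
        else:
            idx = perpendicular(cur).index(step)
            bits.extend([1, (idx >> 1) & 1, idx & 1])  # 3 bits: turn + which way
            cur = step
    return bits

def decode(first, bits):
    """Rebuild the step list from the first step and the bit stream."""
    path, cur, i = [first], first, 0
    while i < len(bits):
        if bits[i] == 0:
            i += 1
        else:
            cur = perpendicular(cur)[bits[i + 1] * 2 + bits[i + 2]]
            i += 3
        path.append(cur)
    return path

# 8 steps with one turn: 6 one-bit flags + one 3-bit turn = 9 bits total.
steps = [(1, 0, 0)] * 5 + [(0, 1, 0)] * 3
bits = encode(steps)
assert decode(steps[0], bits) == steps
```

A straight run costs 1 bit per voxel and each turn costs 3, matching the 1-3 bit range (and the 2-bit average, ~244 KB per million voxels) quoted above.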

#64 Hodgman   Moderators   -  Reputation: 28518


Posted 11 August 2011 - 08:34 AM

Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?

A really efficient storage format is handy, but it's not key to the technology. The important data-structure is the one that stores the geometry for rendering -- and enables the search algorithm.

From what I can tell, it's got the following requirements:

* Can store a large number of voxels in limited RAM (large enough to call it 'unlimited' in practical usage).
* Can perform a "find 2D array of closest voxels to frustum near-plane" query.

My guess is that the query is broken down like:
* Given a camera frustum (derived from position, direction, FOV and aspect), and a resolution.
* Divide that frustum up into W*H sub-frustums, based on resolution's Width and Height.
* Perform a "find closest voxel to frustum near-plane" query for each sub-frustum.

Then after performing these queries:
* Use the information from the returned voxels to perform shading/lighting (could be used to generate a g-buffer for deferred lighting).


So, the data structure not only needs to store a large amount of data in a very compact form, but it also needs to be able to return the closest data point for any given search frustum.
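As a toy illustration of that query (definitely not Euclideon's algorithm; the naive ray march below merely stands in for whatever hierarchical search they use), one "find closest voxel" query is issued per pixel's sub-frustum. All function names and parameters here are assumptions.

```python
# Hypothetical sketch: one closest-voxel query per screen pixel.
import math

def closest_voxel(voxels, origin, direction, max_dist=100.0, step=0.25):
    """Naive 'closest voxel along this sub-frustum' query via ray marching;
    a real renderer would descend a compact octree instead."""
    t = 0.0
    while t < max_dist:
        p = tuple(int(math.floor(o + d * t)) for o, d in zip(origin, direction))
        if p in voxels:
            return p, t
        t += step
    return None, None

def render(voxels, cam_pos, width, height, fov=math.pi / 2):
    """One 'find closest voxel to the near plane' query per pixel."""
    half = math.tan(fov / 2)
    image = []
    for y in range(height):
        row = []
        for x in range(width):
            # direction through the centre of this pixel's sub-frustum
            dx = (2 * (x + 0.5) / width - 1) * half
            dy = (1 - 2 * (y + 0.5) / height) * half
            n = math.sqrt(dx * dx + dy * dy + 1)
            hit, _ = closest_voxel(voxels, cam_pos, (dx / n, dy / n, 1 / n))
            row.append(hit)
        image.append(row)
    return image

# A wall of voxels at z = 5 in front of the camera: every pixel finds a hit.
wall = {(x, y, 5) for x in range(-8, 8) for y in range(-8, 8)}
hits = render(wall, (0.0, 0.0, 0.0), 4, 4)
assert all(h is not None for row in hits for h in row)
```

A real implementation would replace the linear march with a descent of the compact structure, which is where the frustum-coherency trick discussed below would come in.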

In the previously posted interview, he hints that the method for projecting the 3D points into the 2D array of results is done in a way that exploits the coherency of the frustum, i.e. each 'sub-frustum' (the 3D polytope covered by a 2D screen pixel) somehow benefits from the fact that it is very similar to its neighboring 'sub-frustums'.

In that video, you can also see that there is a near plane being used, when he accidentally intersects with a leaf on the ground. If he were casting rays from a pinhole camera, there'd be no clipping artifact there.


An interesting side note is that in the close-up of the leaf on the ground, the shadows look extremely similar to PCF shadow maps.

#65 way2lazy2care   Members   -  Reputation: 782


Posted 11 August 2011 - 08:41 AM

Granted, and I'm sure that there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. For example, on a very superficial level: how easy is 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation for a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, just like the rest of his claims, naive and overstated.

You missed the most obvious limitation: physics. Some things are just physically impossible with clay.

I am of course not talking about using clay for everything, but there are tons of cases where using clay is very beneficial. You would still get the benefit of modeling in clay, scanning it in, then messing with it in a modeling application for touch-ups. I know tons of ceramic artists that can make a totally realistic bust in under 30 minutes. Then you put it in the computer and anybody can do whatever they want with it; share it, modify it, whatever. The pipeline wouldn't be clay->finished in game model; it would just replace the first step.

I disagree; stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.

I do agree that it should be original for fictional games, but even in that case it still gives you a great starting point to grab or reuse a lot of useful data from. I have to imagine that stitching together point clouds would be easier than stitching together meshes as well since you don't actually have to stitch anything together; just drop in the points you want. You could grab the ceiling from the Sistine chapel and put it in an office building in a couple seconds.

There are also a bunch of games that do want to be accurate to real places. Madden and Fifa both try to replicate their stadiums as accurately as possible. GT and Forza could replicate all their tracks to the centimeter and all their cars simply by scanning. If the parts were scanned individually before being assembled you could get perfectly detailed cars. Not necessarily a major selling point for all games, but certainly a huge selling point for some.

Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....

The same argument could be made for any 3rd party middleware engine that uses polys. That doesn't make them irrelevant to game developers.

#66 _moagstar_   Members   -  Reputation: 465


Posted 11 August 2011 - 08:48 AM

The pipeline wouldn't be clay->finished in game model; it would just replace the first step.


I know, but that is what Dell is trying to claim, he's like "it only takes 15 minutes!" which is of course just utter bullshit.

Madden and Fifa both try to replicate their stadiums as accurately as possible



Fair point, and for those games perhaps 3D scanning (if feasible on such a large scale) may be a good alternative for generating the geometry. It still doesn't make his technology any more revolutionary, though.

#67 Ezbez   Crossbones+   -  Reputation: 1164


Posted 11 August 2011 - 08:59 AM

Guys, have you heard that Notch developed some technology where he can store an entire, unlimited-sized world in a single short string?!

There's a lot that can be done in cool and interesting ways and is worth exploring, but everything has a downside. What's really really disappointing about this guy is that he ignores the downsides, even when he's supposedly addressing concerns other people raised.

Off the top of my head, here are the things that made me go "does this guy know what he's talking about?":
-Misinterpreting Notch's post as saying "This is super easy". The actual words Notch used were "It’s a very pretty and very impressive piece of technology."
-All that talk about just "pushing a button" and now the bitmap is resized for different platforms and that that's all they need to do (I really don't know what he was trying to say here). Clearly the hardest part about developing games for multiple platforms is resizing the graphics.
-His tessellation explanation. Was it just me or was he just describing parallax mapping? TBH, I don't know much about this
-"Level of distance" and the demo that 'proved' they weren't using it (though I do believe them that they're not, just that demo is totally non-conclusive)
-Him counting or estimating the number of polygons in games these days.
-Acting as if the 500K triangle mesh he scanned from the elephant is unfeasible for games and as if normal mapping it to reduce polygons would be particularly difficult
-Comparing triangles to "atoms" isn't fair in the first place. Just as fair would be to compare "atoms" to texels since atoms seem to have to do the work of texels as well as triangles.
-And the big one: claiming it's unlimited but then not saying what he means by that just insisting that things are "infinite" or something.

Also, they should try to find an interviewer who sounds more knowledgeable and unbiased next time.

#68 Krohm   Crossbones+   -  Reputation: 3015


Posted 11 August 2011 - 09:15 AM

Guys, have you heard that Notch developed some technology where he can store an entire, unlimited-sized world in a single short string?!

Yes, it's called a random seed, and it's not unlimited. Notch himself clarified that a few times. After a while it will wrap around. Perhaps you'll be dead by that time, but it will.


Also, they should try to find an interviewer who sounds more knowledgeable and unbiased next time.

It is my understanding that the interviewer was deliberately chosen to be accommodating to Dell. Not doing so might have resulted in some PR trouble.

#69 Sirisian   Crossbones+   -  Reputation: 1678


Posted 11 August 2011 - 09:43 AM

-Misinterpreting Notch's post as saying "This is super easy". The actual words Notch used were "It’s a very pretty and very impressive piece of technology."

It’s a very pretty and very impressive piece of technology, but they’re carefully avoiding to mention any of the drawbacks, and they’re pretending like what they’re doing is something new and impressive. In reality, it’s been done several times before.

-Notch
The point made was that their exact technique hasn't been used yet. Whether that's true or not is up for speculation. They showed the videos that Notch showed and "compared" them to their system. I hate their explanation of Atomontage, which is arguably trying to do something identical to UD.

-His tessellation explanation. Was it just me or was he just describing parallax mapping? TBH, I don't know much about this

Tessellation often uses a displacement map as input. It takes a patch and generates more triangles as the camera gets closer. His explanation was accurate regarding current usage (Unigine uses tessellation in this way).

-Him counting or estimating the number of polygons in games these days.

20 polygons per meter? That's a pretty close estimate. Turn on wireframe in a game and you'll notice how triangulated things really are. Characters are usually the exception to this.

-Acting as if the 500K triangle mesh he scanned from the elephant is unfeasible for games and as if normal mapping it to reduce polygons would be particularly difficult

POM or QDM would really be needed to get the grooves right, including self-shadowing. It's not as cheap as it sounds. I agree it would be nice to see the comparison between the two techniques when it's done.

-And the big one: claiming it's unlimited but then not saying what he means by that just insisting that things are "infinite" or something.

For all practical purposes, I assume. I guess most people read too much into that. They gave numbers for how much data they're rendering in the demo to show how large that number really is. The infinite instancing does skew this number.

#70 szecs   Members   -  Reputation: 2102


Posted 11 August 2011 - 09:57 AM


Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....

The same argument could be made for any 3rd party middleware engine that uses polys. That doesn't make them irrelevant to game developers.


Can't the same scanning be done with polygon models? Or is there no free software that can reduce polygon counts arbitrarily with a mouse click?

#71 A Brain in a Vat   Members   -  Reputation: 313


Posted 11 August 2011 - 10:15 AM

What do you guys make of his claim that "we're not using any rays"?

That part struck me. I don't understand whether he just meant "we are not raytracing", or whether he's really saying they aren't tracing a ray from the camera point into the scene to sample their geometry structure.

I think they're using "rays", whether or not they call it that in code.

#72 A Brain in a Vat   Members   -  Reputation: 313


Posted 11 August 2011 - 10:26 AM

I think we can interpret Bruce Dell's claims of "unlimited" and "infinite" to mean infinite interpolation. That is, they always draw one "atom" per pixel. If their data structure doesn't go that deep, they have some method for interpolation.

Calling that "unlimited detail" is obviously disingenuous.

He also might mean that rendering an object from a given distance isn't dependent upon how detailed that object is. That is, you could make an object infinitely more detailed (ignoring the spatial requirements), and it'll never dig deeper than level N, where level N is the level at which each atom maps to one pixel on the screen.

This last feature is cool. It's the 3D analogue of Carmack's MegaTexture tech in id's latest engine. Watch this video by John Carmack. At 2:46 he says:
"In addition to allowing us to create huge amounts of detail on things..., it also has this additional benefit that any work that's done on the surfaces here is guaranteed to have zero impact on the performance, stability, resource utilization, any of these things."

What he's saying is that you can add an arbitrary (not infinite) amount of detail to any particular object, without bringing down performance in the rest of the world. This is awesome, and Bruce Dell should feel cool about having implemented this in 3D, but this is not the same as "unlimited detail". He is doing himself and his company a disservice by stretching the truth. The truth is that he's implemented something that John Carmack, having coined the term "Sparse Voxel Octree", would have put into his game if it was in any way practical for games. It is not in any way practical for games at the moment, and Bruce Dell doesn't seem to have made any novel contributions that make it practical for games.

If he just started saying "detail that's limited only by how much you can store on your harddrive" rather than "unlimited detail", people wouldn't be reaming him. That doesn't sound as catchy to investors though.
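That "level N" is easy to estimate. A hypothetical back-of-envelope helper (my own sketch; the screen width, field of view, and function name are assumptions, not anything Euclideon has published):

```python
# Estimate the octree depth at which one voxel projects to one pixel.
import math

def pixel_perfect_level(extent, distance, pixels=1920, fov=math.pi / 2):
    """Octree level N for an object `extent` metres across, viewed from
    `distance` metres, on a screen `pixels` wide with the given FOV."""
    # world-space width covered by a single pixel at that distance
    pixel_size = 2 * distance * math.tan(fov / 2) / pixels
    # each octree level halves the voxel size; stop once it fits one pixel
    return max(0, math.ceil(math.log2(extent / pixel_size)))

# A 1 m object viewed from 10 m on a 1920-pixel-wide, 90-degree view
# needs only ~7 levels; anything stored deeper is never traversed.
level = pixel_perfect_level(1.0, 10.0)
```

Detail stored below that depth costs disk space but, as the post says, never costs traversal time for that view.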

#73 way2lazy2care   Members   -  Reputation: 782


Posted 11 August 2011 - 10:43 AM

Can't the same scanning be done with polygon models? Or is there no free software that can reduce polygon counts arbitrarily with a mouse click?


I guess it could, but you'd have to convert it and then check it to make sure it's optimized. Ideally this uses the same data that you get from the scan itself, so there's no need to look at the data after scanning it in.

I think a lot of people are reading way too far into his marketing speak and nitpicking him for it. What he said is no worse than anything any marketing rep/president would say about their company to the public when announcing a new product.

#74 A Brain in a Vat   Members   -  Reputation: 313


Posted 11 August 2011 - 10:53 AM

I guess it could, but you'd have to convert it and then check it to make sure it's optimized. Ideally this uses the same data that you get from the scan itself, so there's no need to look at the data after scanning it in.

I think a lot of people are reading way too far into his marketing speak and nitpicking him for it. What he said is no worse than anything any marketing rep/president would say about their company to the public when announcing a new product.


That's not true. In the video above, John Carmack is completely honest about the abilities of his technology. He could call it "unlimited detail" if he wanted to, but he doesn't. That's because he's a person with enough integrity to tell the truth.

The fact that you so easily give a pass to lies because they're made in order to market a product says a lot about you. You might have a future in business, your ethics are slimy enough for it.

Looking at this from a pure marketing perspective, Dell seems to have done wonders -- everyone is talking about his tech. But we don't know if he's made a critical mistake. If the public at large starts talking about his extreme exaggerations, who knows what his investors will do. I hope they sue him to get their funds back, personally. I'll stop there -- we don't know if he's committed actual fraud because we don't know what he told investors behind closed doors.

#75 szecs   Members   -  Reputation: 2102


Posted 11 August 2011 - 10:55 AM


Can't the same scanning be done with polygon models? Or is there no free software that can reduce polygon counts arbitrarily with a mouse click?


I guess it could, but you'd have to convert it and then check it to make sure it's optimized. Ideally this uses the same data that you get from the scan itself, so there's no need to look at the data after scanning it in.

I think a lot of people are reading way too far into his marketing speak and nitpicking him for it. What he said is no worse than anything any marketing rep/president would say about their company to the public when announcing a new product.


Well, I'm only nitpicking at you :) This stuff is way beyond me.
I only picked on two things: that the colors of the world mostly depend only on the surface topology, and that it's easier to make art with this voxel magic.

#76 D_Tr   Members   -  Reputation: 362


Posted 11 August 2011 - 11:05 AM

@way2lazy2care: Marketing has its limits too. I haven't heard many marketing people claiming "100000 times better graphics" and feeling sorry for poor ATI/AMD and nVidia for pouring millions upon millions of dollars into the ugly triangles... Also, mathematics puts a limit on the compression ratio when you do lossless compression (see information entropy). Replicating the same elephant and tree 1000 times does not increase the entropy much. If you want to replicate a real (not procedurally generated and with instancing all over the place) square kilometer island at millimeter detail, however, you are hopelessly screwed no matter your compression technology.
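The entropy point is easy to demonstrate with any general-purpose lossless compressor; here is a quick Python sketch using zlib (the buffer sizes are arbitrary assumptions, with random bytes standing in for scanned geometry):

```python
# Instanced vs. unique data under lossless compression (illustrative only).
import os, zlib

elephant = os.urandom(4096)           # one unique 'scanned model'
instanced = elephant * 1000           # the same model instanced 1000 times
unique = os.urandom(4096 * 1000)      # 1000 genuinely distinct models

r_instanced = len(zlib.compress(instanced)) / len(instanced)
r_unique = len(zlib.compress(unique)) / len(unique)

# Repetition adds almost no entropy, so it compresses massively;
# unique (here: random) data barely compresses at all.
assert r_instanced < 0.05 and r_unique > 0.95
```

Instancing the same elephant a thousand times is nearly free, but a square kilometre of unique millimetre-scale geometry has entropy that no compressor can remove.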

#77 way2lazy2care   Members   -  Reputation: 782


Posted 11 August 2011 - 11:45 AM

Well, I'm only nitpicking at you :) This stuff is way beyond me.
I only picked on two things: that the colors of the world mostly depend only on the surface topology, and that it's easier to make art with this voxel magic.

The two parts of my reply were separate. Only the first part was directed at you :P

That's not true. In the video above, John Carmack is completely honest about the abilities of his technology. He could call it "unlimited detail" if he wanted to, but he doesn't. That's because he's a person with enough integrity to tell the truth.

The fact that you so easily give a pass to lies because they're made in order to market a product says a lot about you. You might have a future in business, your ethics are slimy enough for it.

John Carmack also isn't marketing his product in most of his demos; he's explaining it. There's a stark difference. I have heard other people from Zenimax marketing it, and it sounds pretty much the same. Similar things have also been said about CryEngine and Unreal Engine regarding different aspects of those engines, i.e. "X is 10000 times better at Y than everything else."

@way2lazy2care: Marketing has its limits too. I haven't heard many marketing people claiming "100000 times better graphics" and feeling sorry for poor ATI/AMD and nVidia for pouring millions upon millions of dollars into the ugly triangles...

I've heard similar things from almost every major publisher and manufacturer.

Also, mathematics puts a limit on the compression ratio when you do lossless compression (see information entropy). Replicating the same elephant and tree 1000 times does not increase the entropy much. If you want to replicate a real (not procedurally generated and with instancing all over the place) square kilometer island at millimeter detail, however, you are hopelessly screwed no matter your compression technology.

That's a valid thing to be curious about, but we haven't seen how their data is compressed for the scenes we've seen, or what could be done if an artist were allowed to just go crazy with it. If I had to model an area that size down to the pebble, I'd probably reuse a ton of stuff too, just to save time. This is something we'll have to wait and see more on: whether they are reusing the same data or using copies of it in different places.

#78 A Brain in a Vat   Members   -  Reputation: 313


Posted 11 August 2011 - 11:57 AM

John Carmack also isn't marketing his product in most of his demos; he's explaining it. There's a stark difference. I have heard other people from Zenimax marketing it, and it sounds pretty much the same. Similar things have also been said about CryEngine and Unreal Engine regarding different aspects of those engines, i.e. "X is 10000 times better at Y than everything else."


How is what John Carmack is doing in this video different from what Bruce Dell is doing in the last video posted? It's the exact same thing. You just say anything you can to win an argument, without caring about truth or validity, don't you? What's the stark difference? You should be ashamed of yourself. Do you work for Euclideon or something?

And you're lying about Zenimax marketing MegaTexturing in a similar way. Show us.

#79 Sirisian   Crossbones+   -  Reputation: 1678


Posted 11 August 2011 - 12:00 PM

Also, mathematics puts a limit on the compression ratio when you do lossless compression (see information entropy). Replicating the same elephant and tree 1000 times does not increase the entropy much. If you want to replicate a real (not procedurally generated and with instancing all over the place) square kilometer island at millimeter detail, however, you are hopelessly screwed no matter your compression technology.

:blink: Wait, you think they're using a lossless compression algorithm? You realize that no computer game normally uses lossless compression for textures, right? This whole time, when imagining their system, I pictured that their compression guy has probably thought about this 100 times more than I have, and concluded that a lossy technique would lose very little data while allowing a much more compact format.

Then again a nice lossless format would be epic. :)

#80 D_Tr   Members   -  Reputation: 362


Posted 11 August 2011 - 12:31 PM


Also, mathematics puts a limit on the compression ratio when you do lossless compression (see information entropy). Replicating the same elephant and tree 1000 times does not increase the entropy much. If you want to replicate a real (not procedurally generated and with instancing all over the place) square kilometer island at millimeter detail, however, you are hopelessly screwed no matter your compression technology.

:blink: Wait, you think they're using a lossless compression algorithm? You realize that no computer game normally uses lossless compression for textures, right? This whole time, when imagining their system, I pictured that their compression guy has probably thought about this 100 times more than I have, and concluded that a lossy technique would lose very little data while allowing a much more compact format.

Then again a nice lossless format would be epic. :)


I actually had the geometry in mind when talking about lossless compression. They almost surely perform lossy compression on colors and normals (assuming they do not calculate them on the fly). I automatically assumed, however, that they do not throw away any geometry information when converting from polygonal to octree, because geometric detail is supposed to be a strong point of voxel technology.



