Bombshell93

[Theory] Unraveling the Unlimited Detail plausibility


A voxel at pixel granularity, in screenspace, is a pixel.
Fractal image compression is just that.
Let's not mince terms.
Let's say you can process an image and compress it into a tree of pixels. Good for you; now apply that to a 3D space at pixel granularity and you have applied a volumetric texture to your world at one-to-one granularity. To fit that in memory, the world has to be very small, and if you scale it up, it looks blocky :)
Perhaps cloud tech can help us around the physical limitations imposed by our current generation architecture.
I'm not sold, but I see potential.
Anyway, on a desktop machine, unlimited detail is exactly as implausible as it sounds.
You cannot fit infinity into a finite space.
Sorry.
You can however map the empty and non empty space in a better way than we do perhaps.
I would accept that.
But that is not unlimited detail.
It is a data structure.


I'm not 100% sure I agree with this. I was a 3D animator for a while before I switched to programming and I'm still a potter. Modeling in clay is a good amount easier and more natural to do. There's a lot to be said for the tactile feedback of the clay and interacting with a model in 3D instead of interacting with a model through a variety of 2D interfaces which interact with the 3D model.


Granted, and I'm sure that there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. On a very superficial level, how easy is 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation for a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, like the rest of his claims, naive and overstated.


I'd think where it would be most useful is importing architecture.
You can get the whole interior of a building in a relatively short amount of time completely textured and to scale; exterior is a bit trickier, but can still be done in under a day depending on the size and complexity of the building.


I disagree; stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.

Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....

I've never used voxel technology, but I've been thinking about UD and an efficient way to store voxels, and I've come up with this. It might be stupid, but oh well...

Let's suppose we are drawing a long array of voxels, all in the same direction: along the positive x axis, for example. We can use one bit to tell whether the next voxel is at x+1 or not. If not, it must be at y+1, y-1, z+1 or z-1, so we use 2 bits for that (2^2 = 4).

Now, looking at a model, it seems fairly common to have long runs of voxels in the same direction: walls, trees, bricks, etc. We can use n bits to tell how many voxels ahead of us continue in the same direction, and simply not store any information for them.

if a voxel's record starts with 0:
n bits to represent up to 2^n successive voxels after this one (total n+1 bits)

if a voxel's record starts with 1:
2 bits to indicate a new direction (total 3 bits)

Using n = 4:
In the worst case (every voxel changes direction), we can store 1 million voxels in 10^6 * 3 bits / 8 / 1024 ≈ 366 KB.
In the best case (every voxel has 2^n = 16 neighbours facing the same direction) we can store 1 million voxels in just 38 KB. If we knew the best value for n beforehand, it could be even lower.

It would be possible to preprocess a surface and find an optimal representation of it in terms of n (the bits used for the average number of voxels in the same direction) and the path followed through the voxels.
Color info could be stored in a similar way, adding bits to indicate relative displacement over a palette.

Drawbacks: n and the path must be chosen very carefully or you might end up wasting space like crazy. The "worst case" is not the WORST case in which you have small arrays of just two voxels, meaning that half the voxels are wasting (n+1) bits just to tell you that the next one does not need any info. Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?

EDIT: On second thought, this really is stupid (lol). Just have one bit per voxel to indicate whether the next one changes direction or not: 3 bits at worst and 1 at best per voxel, 2 bits on average, so 1 million voxels in about 244 KB.
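A minimal Python sketch of that simplified scheme (my own illustration, not anything UD has published): the path starts at a known voxel heading along +x, and each step costs 1 bit if the direction is unchanged, or 3 bits (a flag plus a 2-bit index into the four perpendicular axis directions) if it turns.

```python
# Axis-aligned unit directions; a "turn" picks one of the four
# directions perpendicular to the current one.
DIRS = [(1, 0, 0), (-1, 0, 0), (0, 1, 0), (0, -1, 0), (0, 0, 1), (0, 0, -1)]

def perpendicular(d):
    """The four axis directions perpendicular to d; 2 bits index them."""
    return [p for p in DIRS if p != d and p != tuple(-c for c in d)]

def encode(path):
    """Encode a connected axis-aligned voxel path as a list of bits.
    Assumes a turn is never a full reversal (the 2-bit code can't express it)."""
    bits = []
    direction = (1, 0, 0)  # agreed-upon starting direction
    for prev, cur in zip(path, path[1:]):
        step = tuple(c - p for p, c in zip(prev, cur))
        if step == direction:
            bits.append(0)                  # same direction: 1 bit
        else:
            idx = perpendicular(direction).index(step)
            bits += [1, idx >> 1, idx & 1]  # turn: flag + 2 bits
            direction = step
    return bits

def decode(bits, start=(0, 0, 0)):
    """Rebuild the voxel path from the bit stream."""
    pos, direction, path, i = start, (1, 0, 0), [start], 0
    while i < len(bits):
        if bits[i] == 0:
            i += 1
        else:
            idx = (bits[i + 1] << 1) | bits[i + 2]
            direction = perpendicular(direction)[idx]
            i += 3
        pos = tuple(p + d for p, d in zip(pos, direction))
        path.append(pos)
    return path
```

For a million-step path this lands between 1 and 3 bits per voxel, which is where the 244 KB figure in the EDIT comes from.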

Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?
A really efficient storage format is handy, but it's not key to the technology. The important data-structure is the one that stores the geometry for rendering -- and enables the search algorithm.

From what I can tell, it's got the following requirements:

* Can store a large number of voxels in limited RAM (large enough to call it 'unlimited' in practical usage).
* Can perform a "find 2D array of closest voxels to frustum near-plane" query.

My guess is that the query is broken down like:
* Given a camera frustum (derived from position, direction, FOV and aspect), and a resolution.
* Divide that frustum up into W*H sub-frustums, based on the resolution's Width and Height.
* Perform a "find closest voxel to frustum near-plane" query for each sub-frustum.

Then after performing these queries:
* Use the information from the returned voxels to perform shading/lighting (could be used to generate a g-buffer for deferred lighting).


So, the data structure not only needs to store a large amount of data in a very compact form, but it also needs to be able to return the closest data-point for any given search frustum.
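As a point of reference, here's a deliberately brute-force version of that query (my own illustration of the requirement, certainly not UD's search algorithm): one ray per pixel, marched in unit steps until it hits an occupied voxel.

```python
def closest_voxel(voxels, origin, direction, max_steps=256):
    """Return the first voxel in `voxels` hit when stepping from `origin`
    along `direction`, or None if nothing is hit within max_steps."""
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(max_steps):
        key = (round(x), round(y), round(z))
        if key in voxels:
            return key
        x, y, z = x + dx, y + dy, z + dz
    return None

def render(voxels, width, height, near=1.0):
    """One 'closest voxel' query per pixel of a simple camera at the
    origin looking down +z; returns a 2D grid of hit voxels."""
    frame = []
    for py in range(height):
        row = []
        for px in range(width):
            # map the pixel to a direction through the near plane
            dx = (px - width / 2 + 0.5) / width
            dy = (py - height / 2 + 0.5) / height
            row.append(closest_voxel(voxels, (0.0, 0.0, 0.0), (dx, dy, near)))
        frame.append(row)
    return frame
```

The real system obviously can't afford an independent linear march per pixel; the point of the compact tree structure is presumably to let neighbouring sub-frustums share traversal work.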

In the previously posted interview, he hints that the method for projecting the 3D points into the 2D array of results is done in a way that exploits the coherency of the frustum, i.e. each 'sub-frustum' (the 3D polytope covered by a 2D screen pixel) somehow benefits from the fact that it's very similar to its neighboring 'sub-frustums'.

In that video, you can also see that there is a near-plane being used, when he accidentally intersects with a leaf on the ground. If he was casting rays from a pinhole camera, there'd be no clipping artifact there.


An interesting side-note, is that in the close-up of the leaf on the ground, the shadows look extremely similar to PCF shadow-maps.


Granted, and I'm sure that there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. On a very superficial level, how easy is 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation for a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, like the rest of his claims, naive and overstated.

You missed the most obvious limitation being physics as some things are just physically impossible with clay.

I am of course not talking about using clay for everything, but there are tons of cases where using clay is very beneficial. You would still get the benefit of modeling in clay, scanning it in, then messing with it in a modeling application for touch-ups. I know tons of ceramic artists that can make a totally realistic bust in under 30 minutes. Then you put it in the computer and anybody can do whatever they want with it; share it, modify it, whatever. The pipeline wouldn't be clay->finished in game model; it would just replace the first step.


I disagree; stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.
I do agree that it should be original for fictional games, but even in that case it still gives you a great starting point to grab or reuse a lot of useful data from. I have to imagine that stitching together point clouds would be easier than stitching together meshes as well since you don't actually have to stitch anything together; just drop in the points you want. You could grab the ceiling from the Sistine chapel and put it in an office building in a couple seconds.

There are also a bunch of games that do want to be accurate to real places. Madden and Fifa both try to replicate their stadiums as accurately as possible. GT and Forza could replicate all their tracks to the centimeter and all their cars simply by scanning. If the parts were scanned individually before being assembled you could get perfectly detailed cars. Not necessarily a major selling point for all games, but certainly a huge selling point for some.


Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....

The same argument could be made for any 3rd party middleware engine that uses polys. That doesn't make them irrelevant to game developers.


The pipeline wouldn't be clay->finished in game model; it would just replace the first step.


I know, but that is what Dell is trying to claim; he's like "it only takes 15 minutes!", which is of course just utter bullshit.


Madden and Fifa both try to replicate their stadiums as accurately as possible

Fair point, and for those games perhaps 3D scanning (if feasible on such a large scale) may be a good alternative for generating the geometry. It still doesn't make his technology any more revolutionary, though.

Guys, have you heard that Notch developed some technology where he can store an entire, unlimited-sized world in a single short string?!

There's a lot that can be done in cool and interesting ways and is worth exploring, but everything has a downside. What's really really disappointing about this guy is that he ignores the downsides, even when he's supposedly addressing concerns other people raised.

Off the top of my head, here are the things that made me go "does this guy know what he's talking about?"
-Misinterpreting Notch's post as saying "This is super easy". The actual words Notch used were "It’s a very pretty and very impressive piece of technology."
-All that talk about just "pushing a button" and now the bitmap is resized for different platforms and that that's all they need to do (I really don't know what he was trying to say here). Clearly the hardest part about developing games for multiple platforms is resizing the graphics.
-His tessellation explanation. Was it just me or was he just describing parallax mapping? TBH, I don't know much about this.
-"Level of distance" and the demo that 'proved' they weren't using it (though I do believe them that they're not, just that demo is totally non-conclusive)
-Him counting or estimating the number of polygons in games these days.
-Acting as if the 500K triangle mesh he scanned from the elephant is unfeasible for games and as if normal mapping it to reduce polygons would be particularly difficult
-Comparing triangles to "atoms" isn't fair in the first place. Just as fair would be to compare "atoms" to texels since atoms seem to have to do the work of texels as well as triangles.
-And the big one: claiming it's unlimited but then never saying what he means by that, just insisting that things are "infinite" or something.

Also, they should try to find an interviewer who sounds more knowledgeable and unbiased next time.

Guys, have you heard that Notch developed some technology where he can store an entire, unlimited-sized world in a single short string?!
Yes, it's called a random seed and it's not unlimited. Notch himself clarified that a few times: after a while, it will wrap around. Perhaps you'll be dead by then, but it will.
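To illustrate the point, here's a toy sketch of seed-driven generation in general (not Minecraft's actual algorithm): the "world in a short string" is really just a deterministic function of the seed, so the variety of worlds is bounded by the size of the seed space, not unlimited.

```python
import hashlib

def terrain_height(seed, x, z):
    """Toy seed-driven terrain: each column's height is a pure function
    of (seed, x, z), so the whole 'world' is reproducible from the seed."""
    digest = hashlib.sha256(f"{seed}:{x}:{z}".encode()).digest()
    return int.from_bytes(digest[:2], "big") % 64  # height in [0, 63]

# The same seed always reproduces the same world; there are only as
# many distinct worlds as there are distinct seeds, which is finite.
assert terrain_height("Glacier", 10, -4) == terrain_height("Glacier", 10, -4)
```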


Also, they should try to find an interviewer who sounds more knowledgeable and unbiased next time.
It is my understanding the interviewer was accommodating Dell as a specific choice. Not doing so might have resulted in some PR trouble.


-Misinterpreting Notch's post as saying "This is super easy". The actual words Notch used were "It’s a very pretty and very impressive piece of technology."

It’s a very pretty and very impressive piece of technology, but they’re carefully avoiding to mention any of the drawbacks, and they’re pretending like what they’re doing is something new and impressive. In reality, it’s been done several times before. -Notch
The point made was that their exact technique hasn't been used yet. Whether that's true or not is up for speculation. They showed the videos that Notch showed and "compared" them to their system. I hate their explanation of Atomontage which arguably is trying to do something identical to UD.

-His tessellation explanation. Was it just me or was he just describing parallax mapping? TBH, I don't know much about this.

Tessellation often takes a displacement map as input: it takes a patch and generates more triangles as the camera gets closer. His explanation was accurate to current usage (Unigine uses tessellation in this way).

-Him counting or estimating the number of polygons in games these days.

20 polygons per meter? That's a pretty close estimate. Turn on wireframe in a game and you'll notice how coarsely triangulated things really are. Characters are usually the exception to this.

-Acting as if the 500K triangle mesh he scanned from the elephant is unfeasible for games and as if normal mapping it to reduce polygons would be particularly difficult

POM or QDM would really be needed to get the grooves right, including self-shadowing, and that's not as cheap as it sounds. I agree it would be nice to see a comparison between the two techniques when it's done.

-And the big one: claiming it's unlimited but then not saying what he means by that just insisting that things are "infinite" or something.

For all practical purposes I assume. I guess most people read into that too much. They gave numbers of how much data they're rendering in the demo to show how large that number really was. The infinite instancing does skew this number.
