There is serious research too, without the bullshit.
Which is where I would bet my money, sometime in the future...
http://research.nvidia.com/publication/efficient-sparse-voxel-octrees
[Theory] Unraveling the Unlimited Detail plausibility
I found this particularly strange too, I love his idea that artists 'would go back to more traditional mediums such as clay' :-\ Someone should show him ZBrush! Does he write all his emails by hand too and then scan them in? Because it's so much easier that way! This just demonstrates that he has no idea about a real world content pipeline.
I'm not 100% sure I agree with this. I was a 3D animator for a while before I switched to programming and I'm still a potter. Modeling in clay is a good amount easier and more natural to do. There's a lot to be said for the tactile feedback of the clay and interacting with a model in 3D instead of interacting with a model through a variety of 2D interfaces which interact with the 3D model.
Not that I think it's realistic that everybody would do that, but it's not quite so crazy as it sounds. For things like character busts I could see it being tremendously useful, but the place I think it would be most useful is importing architecture.
You can get the whole interior of a building in a relatively short amount of time completely textured and to scale; exterior is a bit trickier, but can still be done in under a day depending on the size and complexity of the building.
A voxel at pixel granularity, in screenspace, is a pixel.
Fractal image compression is just that.
Let's not mince terms.
Let's say you can process an image and compress it into a tree of pixels. Good for you. Now apply that to a 3D space at pixel granularity and you have applied a volumetric texture to your world at one-to-one granularity. For that to fit in memory, the world has to be very small, and if we scale it up, it looks blocky.
Perhaps cloud tech can help us around the physical limitations imposed by our current generation architecture.
I'm not sold, but I see potential.
Anyway, on a desktop machine, unlimited detail is as impossible as it sounds.
You cannot fit infinity into a finite space.
Sorry.
You can, however, perhaps map the empty and non-empty space in a better way than we do now.
I would accept that.
But that is not unlimited detail.
It is a data structure.
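Mapping empty versus non-empty space more cleverly usually means something like a sparse voxel octree, where empty subtrees are simply not stored. A toy Python sketch of that idea follows; it is a generic SVO occupancy map, not Unlimited Detail's actual structure, and all names are illustrative:

```python
# Toy sparse-voxel-octree occupancy map: empty subtrees are never
# allocated, which is what lets large, mostly-empty worlds fit in memory.
# This is a generic sketch, not Unlimited Detail's real data structure.

class Node:
    __slots__ = ("children",)
    def __init__(self):
        self.children = {}  # octant index (0-7) -> Node; absent = all empty

def octant(x, y, z, level):
    """Pick the child octant for coordinate bits at this tree level."""
    return (((x >> level) & 1) << 2) | (((y >> level) & 1) << 1) | ((z >> level) & 1)

def insert(root, x, y, z, depth):
    """Mark voxel (x, y, z) occupied in a (2^depth)^3 volume."""
    node = root
    for level in range(depth - 1, -1, -1):
        node = node.children.setdefault(octant(x, y, z, level), Node())
    return node

def occupied(root, x, y, z, depth):
    node = root
    for level in range(depth - 1, -1, -1):
        child = octant(x, y, z, level)
        if child not in node.children:
            return False  # a whole empty subtree is skipped at once
        node = node.children[child]
    return True

root = Node()
insert(root, 5, 0, 7, depth=3)  # a single voxel in an 8x8x8 volume
```

The point of the sketch is the asymmetry: occupied voxels cost tree nodes, but arbitrarily large empty regions cost nothing, which is exactly "mapping the empty and non-empty space" rather than unlimited detail.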
[quote]I'm not 100% sure I agree with this. I was a 3D animator for a while before I switched to programming and I'm still a potter. Modeling in clay is a good amount easier and more natural to do. There's a lot to be said for the tactile feedback of the clay and interacting with a model in 3D instead of interacting with a model through a variety of 2D interfaces which interact with the 3D model.[/quote]
Granted, and I'm sure there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. On a very superficial level: how easy are 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation on a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, like the rest of his claims, naive and overstated.
[quote]...the place I'd think it was most useful would be importing architecture. You can get the whole interior of a building in a relatively short amount of time completely textured and to scale; exterior is a bit trickier, but can still be done in under a day depending on the size and complexity of the building.[/quote]
I disagree. Stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.
Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....
I've never used voxel technology, but I've been thinking about UD and an efficient way to store voxels, and I've come up with this. It might be stupid, but oh well...
Let's suppose we are drawing an infinite string of voxels, all in the same direction: along the positive x axis, for example. We can use one bit to tell whether the next voxel is at x+1 or not. If not, it must be at y+1, y-1, z+1 or z-1, so we use 2 bits for that (2^2 = 4).
Now, looking at a model, it seems fairly common to have long runs of voxels in the same direction: walls, trees, bricks, etc. We can use n bits to tell how many voxels ahead of us continue in the same direction, and simply not store any information for them.
if voxel starts with 0:
n bits to represent 2^n successive voxels after this one (total n+1 bits)
if voxel starts with 1:
2 bits to indicate a new direction (total 3 bits)
Using n = 4:
In the worst case (every voxel changes direction), we can store 1 million voxels in 10^6 * 3 / 8 / 1024 ≈ 366 KB.
In the best case (every voxel has up to 2^n = 16 neighbours facing the same direction), we can store 1 million voxels in just 38 KB. If we knew the best value for n beforehand, it could be even lower.
It would be possible to preprocess a surface and find an optimal representation of it in terms of n (the bits used for the average run length in the same direction) and the path followed through the voxels.
Color info could be stored in a similar way, adding bits to indicate relative displacement over a palette.
Drawbacks: n and the path must be chosen very carefully or you'll end up wasting space like crazy. The "worst case" above is not the true worst case: with runs of just two voxels, half the voxels waste (n+1) bits just to tell you that the next one needs no info. Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?
EDIT: On second thought, this really is stupid (lol). Just use one bit per voxel to indicate whether the next one changes direction or not: 3 bits per voxel at worst, 1 at best, 2 on average, so 1 million voxels in about 244 KB.
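The simplified scheme from the EDIT can be sketched in Python. Everything here is illustrative: the 4-entry direction table is one arbitrary choice of 2-bit turn codes, and a real encoder would pack the bits into bytes rather than keep them in a list.

```python
# Sketch of the simplified run encoding from the EDIT above: each step
# along the voxel path emits 1 bit (0 = same direction as before,
# 1 = direction changes) plus, on a change, 2 bits naming the new
# direction. The DIRS table and all names are illustrative only.

DIRS = ["+x", "+y", "-y", "+z"]  # example 4-entry table -> 2-bit codes

def encode(path):
    """path: list of direction strings; returns the bit stream as a list."""
    bits = []
    for prev, cur in zip(path, path[1:]):
        if cur == prev:
            bits.append(0)                   # straight run: 1 bit/voxel
        else:
            bits.append(1)                   # direction-change flag
            code = DIRS.index(cur)           # 2-bit new-direction code
            bits += [(code >> 1) & 1, code & 1]
    return bits

def decode(bits, first_dir, n_steps):
    """Reconstruct the path given the first direction and step count."""
    path = [first_dir]
    i = 0
    while len(path) < n_steps:
        if bits[i] == 0:
            path.append(path[-1])
            i += 1
        else:
            code = (bits[i + 1] << 1) | bits[i + 2]
            path.append(DIRS[code])
            i += 3
    return path

path = ["+x"] * 5 + ["+y"] * 3 + ["+x"] * 2
bits = encode(path)
assert decode(bits, path[0], len(path)) == path
# Cost check: straight steps cost 1 bit, turns cost 3; at an average of
# 2 bits/voxel, 10**6 voxels take 10**6 * 2 / 8 / 1024 ≈ 244 KB.
```

Note the sketch only stores the path's shape; as the post says, colour (and the starting position) would need extra bits on top of this.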
[quote]Traversing this structure to render it is not efficient (load it into an octree at runtime?). Well, lots of other things. What do you think?[/quote]
A really efficient storage format is handy, but it's not the key to the technology. The important data structure is the one that stores the geometry for rendering -- and enables the search algorithm.
From what I can tell, it's got the following requirements:
* Can store a large number of voxels in limited RAM (large enough to call it 'unlimited' in practical usage).
* Can perform a "find 2D array of closest voxels to frustum near-plane" query.
My guess is that the query is broken down like:
* Given a camera frustum (derived from position, direction, FOV and aspect), and a resolution.
* Divide that frustum up into [font="Courier New"]W*H[/font] sub-frustums, based on resolution's [font="Courier New"]W[/font]idth and [font="Courier New"]H[/font]eight.
* Perform a "find closest voxel to frustum near-plane" query for each sub-frustum.
Then after performing these queries:
* Use the information from the returned voxels to perform shading/lighting (could be used to generate a g-buffer for deferred lighting).
So, the data structure not only needs to store a large amount of data in a very compact form, but it also needs to be able to return the closest data point for any given search frustum.
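The query breakdown above can be sketched in code. Everything here is hypothetical: `find_closest_voxel` is a naive brute-force stand-in for whatever search structure the real renderer uses, and the cone test is only a crude approximation of a sub-frustum.

```python
# Hypothetical sketch of the per-pixel "closest voxel" query described
# above. find_closest_voxel scans a plain point list, which is only meant
# to show the interface; a real implementation would search an octree.

import math

def find_closest_voxel(voxels, ray_origin, ray_dir):
    """Placeholder search: nearest voxel near a ray (crude cone test)."""
    best, best_t = None, math.inf
    for v in voxels:
        rel = [v[i] - ray_origin[i] for i in range(3)]
        t = sum(rel[i] * ray_dir[i] for i in range(3))   # distance along ray
        if 0 < t < best_t:
            off = [rel[i] - t * ray_dir[i] for i in range(3)]
            if sum(o * o for o in off) < (0.5 * t) ** 2:  # inside the cone?
                best, best_t = v, t
    return best

def render(voxels, cam_pos, width, height, fov=math.pi / 2):
    """Divide the view frustum into width*height sub-frustums, query each."""
    image = []
    for py in range(height):
        row = []
        for px in range(width):
            # direction through the pixel centre on a z = 1 image plane
            x = (2 * (px + 0.5) / width - 1) * math.tan(fov / 2)
            y = (1 - 2 * (py + 0.5) / height) * math.tan(fov / 2)
            d = [x, y, 1.0]
            norm = math.sqrt(sum(c * c for c in d))
            d = [c / norm for c in d]
            row.append(find_closest_voxel(voxels, cam_pos, d))
        image.append(row)
    return image
```

This brute-force version is O(pixels × voxels); the whole trick the interview hints at is replacing it with a search that exploits the coherency between neighbouring sub-frustums.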
In the previously posted interview, he hints that the method for projecting the 3D points into the 2D array of results is done in a way where the coherency of the frustum is exploited, i.e. each 'sub-frustum' (the 3D polytope covered by a 2D screen pixel) somehow benefits from the fact that it's very similar to its neighboring 'sub-frustums'.
In that video, you can also see that there is a near-plane being used, when he accidentally intersects with a leaf on the ground. If he was casting rays from a pinhole camera, there'd be no clipping artifact there.
An interesting side-note, is that in the close-up of the leaf on the ground, the shadows look extremely similar to PCF shadow-maps.
[quote]Granted, and I'm sure there are many artists who would get a lot out of working with clay rather than a clumsy 2D interface, but don't forget that by working with a computer you gain a lot too. On a very superficial level: how easy are 'undo' and 'redo' when working with clay? Can you look at a history of all your actions to see just how you got that specific shape? How practical is storing different versions of your work? How easy is it to grab parts of different models and merge them to experiment with new things? How easy is it to share your work with others? These are all important considerations for content creation on a professional game. I'm not saying that 3D laser scanning has no applications, just that Dell's claim that it's so much easier is, like the rest of his claims, naive and overstated.[/quote]
You missed the most obvious limitation: physics. Some things are just physically impossible with clay.
I am of course not talking about using clay for everything, but there are tons of cases where using clay is very beneficial. You would still get the benefit of modeling in clay, scanning it in, then messing with it in a modeling application for touch-ups. I know tons of ceramic artists that can make a totally realistic bust in under 30 minutes. Then you put it in the computer and anybody can do whatever they want with it; share it, modify it, whatever. The pipeline wouldn't be clay->finished in game model; it would just replace the first step.
[quote]I disagree. Stock objects like rocks and chairs and the like, perhaps... but depending on the game, I think architecture should be original content.[/quote]
I do agree that it should be original for fictional games, but even in that case it still gives you a great starting point to grab or reuse a lot of useful data from. I have to imagine that stitching together point clouds would be easier than stitching together meshes as well since you don't actually have to stitch anything together; just drop in the points you want. You could grab the ceiling from the Sistine chapel and put it in an office building in a couple seconds.
There are also a bunch of games that do want to be accurate to real places. Madden and Fifa both try to replicate their stadiums as accurately as possible. GT and Forza could replicate all their tracks to the centimeter and all their cars simply by scanning. If the parts were scanned individually before being assembled you could get perfectly detailed cars. Not necessarily a major selling point for all games, but certainly a huge selling point for some.
[quote]Yeah, it would be a lot easier and more feasible for indie and amateur developers to buy a 3D scanner than to use free modelling software....[/quote]
The same argument could be made for any 3rd party middleware engine that uses polys. That doesn't make them irrelevant to game developers.
[quote]The pipeline wouldn't be clay->finished in game model; it would just replace the first step.[/quote]
I know, but that is what Dell is trying to claim, he's like "it only takes 15 minutes!" which is of course just utter bullshit.
[quote]Madden and Fifa both try to replicate their stadiums as accurately as possible[/quote]
Fair point, and for those games perhaps 3D scanning (if feasible on such a large scale) may be a good alternative for generating the geometry. It still doesn't make his technology any more revolutionary, though.