In a way, sampling is used to interpret implicit information (with plenty of ambiguity to resolve) from an incomplete data set, i.e. spatial information but nothing else; hence the need for a just-in-time generative approach driven by that spatial information. A procedural model describes additional features by inserting their associated details on top of, or between, the initial form of the data. So it isn't even necessary, for that matter, to specify a material for each face of every voxel (which would be extremely memory-consuming). I believe there's a huge variety of techniques you could conceive to describe materials. To elaborate on this idea, look at this illustration and think about the way these cliffs work:
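A minimal sketch of that just-in-time idea: derive a material purely from the spatial information already present (height and surface slope), so no material ever has to be stored per face. All the thresholds and material names below are made up for illustration:

```python
def sample_material(height, normal_y, grass_line=40.0):
    """Hypothetical just-in-time material sampler.

    Picks a material from spatial information alone: the vertical
    component of the surface normal gives slope, and height picks
    between low and high walkable ground. Thresholds are illustrative.
    """
    slope = 1.0 - max(0.0, min(1.0, normal_y))  # 0 = flat, 1 = vertical
    if slope > 0.6:
        return "cliff"   # steep faces read as bare rock
    if height > grass_line:
        return "snow"    # high, walkable ground
    return "grass"       # low, walkable ground
```

This is roughly how slope-based cliff texturing behaves: steep faces become rock no matter the altitude, and only near-horizontal surfaces get the ground cover.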

Although this works on the idea of isolines bridging between hierarchical planes of terrain, it's quite similar to Frenetic Pony's solution (a scalar map of grass density), which could be made multichannel to support multiple planes. I'm guessing that an efficient extension of this concept may require some form of spatial hashing, because I'm not really sure how you would integrate it with voxels. As many of you may know, I hate voxels. Though it is in fact their very advantage, the structures' strict Euclidean uniformity heavily sacrifices the data's sense of any spatial character foreign to Euclidean regularity, and especially of non-spatial properties. I believe it's very possible to accomplish the same features voxels are often used for with novel alternatives that perform at least as well as, or better than, voxels. In other words, it's possible to imagine structures which remain as uniform, predictable, and thereby as efficient a medium for spatial dynamics as voxels, without sacrificing the extensibility yielded by procedural definition. If you align procedural definition with procedural execution, then you have a model perfect for your purposes. Think about the way 3D modeling programs represent triangle meshes: not only can they store attributes per vertex, edge, face, etc., but the structure is also optimal for manipulation. Now think about applying normal smoothing to a plain list of vertices. Here are the steps required:

1. For each vertex, find other vertices which have the exact same position.

2. Average their normals (sum the normals and then normalize).

3. Go back through all of these vertices and find where you need to apply this average.

... all without any auxiliary structures or intermediate storage describing the mesh's connectivity. Now that's just completely stupid... but it's an extreme example of the lacking approaches people often take.
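For contrast, the three steps above become trivial once you allow one intermediate structure: a hash map keyed on position, which replaces the O(n²) scan of the plain vertex list. A sketch (the function name and tuple-based layout are my own):

```python
from collections import defaultdict

def smooth_normals(positions, normals):
    """Average the normals of vertices that share a position.

    Step 1+2: bucket normals by exact position and sum them.
    Step 3:   walk the list again, writing back the normalized sum.
    """
    groups = defaultdict(lambda: [0.0, 0.0, 0.0])
    for pos, n in zip(positions, normals):
        acc = groups[pos]
        acc[0] += n[0]; acc[1] += n[1]; acc[2] += n[2]
    out = []
    for pos in positions:
        x, y, z = groups[pos]
        length = (x * x + y * y + z * z) ** 0.5 or 1.0  # guard zero-length
        out.append((x / length, y / length, z / length))
    return out
```

Note this keys on *exact* position; a production version would quantize coordinates (or use the mesh's own vertex/edge/face structure) so nearly-coincident vertices weld too.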

For your case, I recommend you just describe enough for the sampling to know what's appropriate, i.e. a single scalar code for each voxel which corresponds to a six-sided set of materials. For example: some material sets may consist entirely of the cliff texture; others might mix messy grass on top with cliff on the sides, dirt on top with red rock on the sides, or chalky dirt with weeds on top and cobble on the sides, etc.
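Concretely, that scalar code is just an index into a small table of face sets. A sketch, assuming the four sides share one entry (the set names and table layout here are invented examples, not a fixed scheme):

```python
# Hypothetical material-set table: one scalar code per voxel expands
# to materials for all six faces. Sides share a single entry here.
MATERIAL_SETS = {
    0: {"top": "cliff",       "bottom": "cliff",       "sides": "cliff"},
    1: {"top": "messy_grass", "bottom": "dirt",        "sides": "cliff"},
    2: {"top": "dirt",        "bottom": "dirt",        "sides": "red_rock"},
    3: {"top": "weeds",       "bottom": "chalky_dirt", "sides": "cobble"},
}

def face_material(voxel_code, face):
    """Resolve a face name to a material; any non-top/bottom face
    falls through to the shared 'sides' entry."""
    entry = MATERIAL_SETS[voxel_code]
    return entry.get(face, entry["sides"])
```

One byte per voxel then covers every face, instead of six material IDs per voxel.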


### #7Reflexus

Posted 01 October 2012 - 07:12 PM
