Procedural terrain texturing
Just wondering if anyone has any ideas on real-time procedural texturing for landscapes/terrain.
My current system uses tiled textures blended by slope, height and vertex colors. I then multiply that gradient by a black-and-white mask to give per-pixel variation to the blending, so it's not a simple crossfade.
While this looks good, I am now looking for something more dynamic in terms of texturing: something where I can take small source textures and generate variations blended together seamlessly, according to arbitrary factors such as slope, elevation, windward/leeward, and so forth.
Thus, you may have a base sand texture, but there could be a random grouping of rocks interspersed throughout it, breaking up the tiling completely. How can something like this be done without using the standard vertex colors or mask textures? Perhaps some kind of "texture bombing"? Anyway, any ideas are welcome.
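For reference, the mask trick I described above looks roughly like this as a sketch (all names and the sharpness constant are illustrative, not my actual shader code):

```python
# Sketch of mask-modulated blending: a smooth 0..1 gradient derived from
# slope/height/vertex color is shifted by a grayscale mask and re-sharpened,
# so the transition between two tiled textures is not a straight crossfade.

def blend_weight(gradient, mask, sharpness=4.0):
    """gradient and mask are both 0..1 per-pixel values.

    Pixels where the mask is bright flip to the second texture earlier
    than pixels where it is dark, breaking up the fade line."""
    t = (gradient - (1.0 - mask)) * sharpness + 0.5
    return max(0.0, min(1.0, t))  # clamp to [0, 1]

def blend(texel_a, texel_b, gradient, mask):
    """Mix two RGB texels using the mask-modulated weight."""
    w = blend_weight(gradient, mask)
    return tuple(a * (1.0 - w) + b * w for a, b in zip(texel_a, texel_b))
```

In a pixel shader this is just a `lerp` with a weight computed the same way; the per-pixel mask is what keeps the boundary from reading as a clean line.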
Ysaneya has done some rather impressive work in that regard; check out his journal (here on GameDev), specifically December 2005.
linky (the 12th and before) ... Nov 2nd seems to show where he got the detail textures, btw.
Off topic: btw, did you work out a decent solution for the UV mapping of steep slopes?
-Michael g.
Not yet... still thinking about that. I may end up tweaking the texture coords later on in my modelling app, because my terrains are just large meshes.
That stuff by Ysaneya seems interesting, but I wonder how it's done?
Also, I wonder if it looks very good up close; that's my interest, not planetary-scale stuff. Perhaps the same method applies, however.
Ysaneya uses a method similar to Terragen's. This is why many people say his shots are not real-time but Terragen ;-)
Basically, you use the slope, height and some density information (average amount of grass, rock, sand...) in each pixel to mix a final color. Each quantity can be perturbed by Perlin noise to generate a more natural look. But in contrast to Terragen, Ysaneya uses detail textures that are blended together. In Terragen, you can only select a single color for each terrain type (grass, rock etc.).
If you download the program and work through the tutorial, you can learn a lot about how it works.
Lutz
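As a rough sketch of the mixing I described (the ranges, colors and noise scale here are placeholders, not Ysaneya's actual values):

```python
# Per-pixel color mixing: each terrain type gets a weight from height and
# slope ranges, the weight is perturbed by a noise value so the boundaries
# wander naturally, then the weights are normalized and used to mix colors.

def type_weight(value, lo, hi, falloff=0.1):
    """1 inside [lo, hi], fading linearly to 0 over `falloff` outside."""
    if value < lo:
        return max(0.0, 1.0 - (lo - value) / falloff)
    if value > hi:
        return max(0.0, 1.0 - (value - hi) / falloff)
    return 1.0

def mix_color(height, slope, noise):
    """height, slope and noise are all normalized to 0..1."""
    types = [
        # (color,           height range, slope range)
        ((34, 139, 34),   (0.0, 0.5), (0.0, 0.4)),  # grass: low, flat
        ((128, 128, 128), (0.0, 1.0), (0.5, 1.0)),  # rock: steep anywhere
        ((255, 250, 250), (0.7, 1.0), (0.0, 1.0)),  # snow: high
    ]
    weights = []
    for _color, (h0, h1), (s0, s1) in types:
        w = type_weight(height, h0, h1) * type_weight(slope, s0, s1)
        # Perturb the weight with noise so transitions aren't straight lines.
        weights.append(max(0.0, w + 0.2 * (noise - 0.5)))
    total = sum(weights) or 1.0
    return tuple(
        sum(w * c[i] for w, (c, _h, _s) in zip(weights, types)) / total
        for i in range(3)
    )
```

Ysaneya's version mixes detail texture samples instead of flat colors, but the weighting works the same way.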
How are whole planets with that level of detail created? It can't all be stored vertices, can it?
My thoughts are kind of scattered now, I'll try to clean this up later.
Quote:Original post by Lutz
Basically, you use the slope, height and some density information (average amount of grass, rock, sand...) in each pixel to mix a final color. Each quantity can be perturbed by Perlin noise to generate a more natural look.
One of my projects uses similar techniques, but uses modified smoothed simplex noise and Perlin noise to generate procedural texturing for the landscapes. I don't actually use textures, however: everything in the engine uses procedural texturing, or coloring, or whatever you want to call it, that is 100% generated in the pixel and vertex shaders. This is quite expensive in terms of performance, and also has pretty high requirements (an SM3 card), but it is not intended to be distributed as a game or anything (it's a research project). This method gives very good results. All data is generated from a random seed, as in Ysaneya's engine.
I should probably point out that modifying the result of the noise function is an important part of getting good results. Applying a function to the output (producing a non-uniform noise distribution) is often necessary to create a realistic texture.
I sometimes use 3- and 4-dimensional noise functions to generate texturing for dynamic elements of the environment, such as water or the sky. For instance, the clouds move, grow and shrink dynamically. I use a 4D function (x, y, z, time) to color the pixels on the surface of a sphere, and have obtained pretty good results. Using higher-dimensional noise functions is sometimes more expensive (which is why I often use the better dimensionally-scaling simplex noise rather than Perlin noise), but it also eliminates quite a bit of complication. For example, I don't have to worry about texture-mapping a square texture onto a sphere for the sky; there is never a step that requires a two-dimensional flat image.
Often, I end up using several noise function calls in one shader to generate a texture, basically yielding a detail texture type of result. One advantage of this method is that the shader (with knowledge of how far away the camera is) can adjust whether or not the detail-texturish noise function is actually evaluated, yielding a kind of LOD-ish thing (I haven't actually implemented this part yet, though I hope to soon). Another way to achieve a similar result would be to parameterize the number of octaves of noise calculated.
Hope this gives you a few ideas about texture generation.
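The octave-parameterization idea I mentioned could look something like this (a sketch; `noise2` below is a cheap deterministic stand-in for a real Perlin/simplex sampler, and the distance thresholds are made up):

```python
import math

def noise2(x, y):
    # Stand-in for a gradient-noise sampler; returns a value in 0..1.
    return math.sin(x * 12.9898 + y * 78.233) * 0.5 + 0.5

def octaves_for_distance(distance, near=10.0, far=1000.0,
                         max_octaves=8, min_octaves=2):
    """Linearly drop octaves as the camera moves away (the LOD-ish part)."""
    t = min(1.0, max(0.0, (distance - near) / (far - near)))
    return round(max_octaves - t * (max_octaves - min_octaves))

def fbm(x, y, octaves):
    """Sum `octaves` layers of noise, each at half the amplitude and
    twice the frequency of the last (fractional Brownian motion)."""
    total, amplitude, frequency, norm = 0.0, 1.0, 1.0, 0.0
    for _ in range(octaves):
        total += amplitude * noise2(x * frequency, y * frequency)
        norm += amplitude
        amplitude *= 0.5
        frequency *= 2.0
    return total / norm  # normalized back to roughly 0..1
```

Distant pixels then call `fbm` with fewer octaves, skipping exactly the detail-texture-level noise evaluations that wouldn't be visible anyway.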
While that sounds interesting, I doubt completely procedural shaders can look very realistic up close. Noise functions can perturb a surface, but they can't generate real images, like leaves, grass, sticks, stones, etc.
So such a system would be fine for intermediate viewing distances, but may fail up close.
I'm leaning toward some kind of texture bombing approach. However, I would be interested in learning more about your approach, and if you have screenshots that would be cool to see!
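Roughly what I have in mind for texture bombing, as I understand the technique (cell size, density and decal radius here are just illustrative numbers):

```python
import math

def cell_hash(cx, cy, seed=0):
    """Deterministic pseudo-random 0..1 value per integer cell."""
    n = (cx * 374761393 + cy * 668265263 + seed * 974634821) & 0xFFFFFFFF
    n = (n ^ (n >> 13)) * 1274126177 & 0xFFFFFFFF
    return (n & 0xFFFF) / 0xFFFF

def bombed(u, v, density=0.3, radius=0.35):
    """Divide UV space into unit cells; each cell has a hash-based chance
    of containing a decal (e.g. a rock patch) at a jittered position.
    Returns True if (u, v) falls inside that cell's decal."""
    cx, cy = math.floor(u), math.floor(v)
    if cell_hash(cx, cy) >= density:
        return False  # this cell has no decal
    # Jitter the decal center inside the cell using two more hashes.
    ox = cell_hash(cx, cy, seed=1)
    oy = cell_hash(cx, cy, seed=2)
    du, dv = u - cx - ox, v - cy - oy
    return du * du + dv * dv < radius * radius
```

In a fragment shader, instead of returning a bool, you'd sample the rock decal texture at the decal-local coordinates and composite it over the base sand tile; since everything derives from a hash of the cell, no mask texture or vertex colors are needed.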
Quote:Original post by Matt Aufderheide
I doubt completely procedural shaders can look very realistic up close.
... but they cant generate real images, like leaves, grass, sticks, stones etc.
...may fail up close.
You're absolutely right. This is a problem I've been thinking about a bit lately. I don't have a solution now, but it seems like there really should be a way to procedurally generate more texture-like detail such as the things you mentioned above. I think this may be doable, but it will probably be too expensive to do in a shader.
I think we're years and years away from being able to do a shader (without a texture base) that can draw randomly placed leaf shapes (which is probably what you're looking for), but I think stuff like a hay texture, or sticks, or sand, or other simple textures that are not quite so specifically shape-based are actually doable, and are something I'll look at in the future.
Quote:Original post by Matt Aufderheide
However, I would be interested in learning more about your approach, and if you have screenshots that would be cool to see!
I'm at work right now, and won't be able to get to my stuff until at least this weekend, but I'd be glad to show you some of the stuff that I've come up with so far.
[Edited by - foreignkid on July 20, 2006 12:59:54 PM]
Quote:Original post by Matt Aufderheide
Just wondering if anyone has any ideas on real-time procedural texturing for landscapes/terrain.
My current system uses tiled textures blended by slope, height and vertex colors. I then multiply that gradient by a black-and-white mask to give per-pixel variation to the blending, so it's not a simple crossfade.
While this looks good, I am now looking for something more dynamic in terms of texturing: something where I can take small source textures and generate variations blended together seamlessly, according to arbitrary factors such as slope, elevation, windward/leeward, and so forth.
Thus, you may have a base sand texture, but there could be a random grouping of rocks interspersed throughout it, breaking up the tiling completely. How can something like this be done without using the standard vertex colors or mask textures? Perhaps some kind of "texture bombing"? Anyway, any ideas are welcome.
I use an approach similar to the one described above, where I take the colors of pixels from images based on height and slope. I took the technique from Trent Polak's Focus on 3D Terrain Programming.
At startup, create an empty RGB image array, say 512x512, and generate the pixel (color) data for it from base images: grass, dirt, rock, etc. For the current pixel, determine the height at the corresponding vertex on the mesh (or the interpolated height if your texture size doesn't match the mesh size and you're between two vertices), and then pull in the corresponding pixel from the base images (more interpolation is required if you are using small 64x64 base textures). The percentage of color you use from each base image's pixels is determined by ranges and percentages. If your heightmap is in the range 0 to 255 (convenient because it matches RGB color ranges), then you set the ranges over which each base image is active and the percentage of color used. For example: water from height 0-30, sand 20-60, green grass 40-160, brown grass 130-220, rock 200-255. If the height is 50 for the current pixel, then your final pixel color will be about an equal percentage of sand and grass. I then make a second pass where I factor in other things like slope (the angle at the current location from horizontal) and blend in colors from three different rock images.
The final image then has a texture created from it and is stretched across the mesh. (The image below also does multi-texturing with a detail map.) The thing I like about this approach is taking the pixel from the base images based on where you are on the mesh and the terrain texture. This means that if your base image is rich in detail and color, that detail is nicely distributed across the whole terrain, adding to the effect.
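The range-blending pass above can be sketched like this, using the same example ranges (base images are reduced to flat colors here to keep the sketch short; a real implementation samples the corresponding pixel of each base image instead):

```python
RANGES = [
    # (stand-in color for the base image,  lo,  hi)
    ((0, 60, 180),      0,  30),   # water
    ((210, 190, 140),  20,  60),   # sand
    ((60, 160, 60),    40, 160),   # green grass
    ((150, 130, 60),  130, 220),   # brown grass
    ((120, 120, 120), 200, 255),   # rock
]

def weight(i, h):
    """1 inside range i, crossfading linearly where neighbors overlap."""
    _c, lo, hi = RANGES[i]
    if h < lo or h > hi:
        return 0.0
    w = 1.0
    if i + 1 < len(RANGES):            # fade out into the next range
        next_lo = RANGES[i + 1][1]
        if h >= next_lo:
            w = min(w, (hi - h) / (hi - next_lo))
    if i > 0:                          # fade in out of the previous range
        prev_hi = RANGES[i - 1][2]
        if h <= prev_hi:
            w = min(w, (h - lo) / (prev_hi - lo))
    return w

def terrain_color(h):
    """Mix the active base colors for height h (0..255)."""
    weights = [weight(i, h) for i in range(len(RANGES))]
    total = sum(weights) or 1.0
    return tuple(
        sum(w * c[ch] for w, (c, _lo, _hi) in zip(weights, RANGES)) / total
        for ch in range(3)
    )
```

At height 50 this gives exactly the half-sand, half-grass mix described above; running it once per texel at startup fills the 512x512 image, and the slope/rock pass works the same way with slope ranges instead of height ranges.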
I highly recommend the book above. There are many variations that can be made on this implementation. Doing everything up-front at startup means you only need one pass to display it.
hth
F451