Terrain multitexturing problem

Hi there:
I'm currently working on a terrain rendering project and have two problems with it.
1: The terrain uses 4 textures for multitexturing (grass, sand, rock, ...), plus a 4-channel (RGBA) detail texture whose channels give the weight of each texture (e.g. 128,0,127,0 means roughly 50% grass and 50% rock). With a shader this is easy: final = a*R + b*G + c*B + d*A, where a, b, c, d are the colours sampled from the four textures. But how can I implement it without a shader? For example, could glTexEnv be used for the multitexturing?
Is it possible to render this without a shader, in a single pass?
2: If the terrain has up to 8 textures, that makes 10 texture units (8 textures + 2 detail textures), so it seems impossible to render in a single pass even with a shader (most video cards only support 8 texture units for multitexturing).
Is there any way to render it in a single pass?

Thanks for your help!
To the best of my knowledge, what you are trying to do isn't really possible using fixed-function alone, and especially not in a single pass. I remember implementing something similar a very long time ago using glVertexAttribPointerARB to specify an array of vertex attributes per blend layer to perform the blending, but even still I had to set up a vertex program (old school) to do the alpha replacement.

Maybe you could figure something out if you split the blending RGBA texture up into separate GL_ALPHA format textures, but this of course would eat up available binding slots.
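Off the top of my head, splitting it up would look something like this -- just a rough sketch, with w, h and rgba (the blend map's pixel data) assumed to already exist:

/* Split the single RGBA weight texture into four GL_ALPHA weight textures,
   one per terrain layer. 'rgba' is the w*h*4 byte blend map, assumed loaded. */
GLuint weightTex[4];
glGenTextures(4, weightTex);
for (int layer = 0; layer < 4; ++layer)
{
    unsigned char *alpha = (unsigned char *)malloc(w * h);
    for (int i = 0; i < w * h; ++i)
        alpha[i] = rgba[i * 4 + layer];   /* pick out R, G, B or A */

    glBindTexture(GL_TEXTURE_2D, weightTex[layer]);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, w, h, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, alpha);
    free(alpha);
}
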
A2: Texture atlasing is key: just put the four textures into one and sample it four times (once for each detail channel).

To the best of my knowledge, what you are trying to do isn't really possible using fixed-function alone, and especially not in a single pass. I remember implementing something similar a very long time ago using glVertexAttribPointerARB to specify an array of vertex attributes per blend layer to perform the blending, but even still I had to set up a vertex program (old school) to do the alpha replacement.

Maybe you could figure something out if you split the blending RGBA texture up into separate GL_ALPHA format textures, but this of course would eat up available binding slots.


Thanks a lot for the help.
Could it be done with multiple passes? Use the depth function set to equal for the second pass?

A2: Texture atlasing is key: just put the four textures into one and sample it four times (once for each detail channel).


Thanks for the reply, Murdocki.
Could you give some details on putting the 4 textures into one? I really have no idea how to do that (each texture has at least 3 channels (RGB), and to enable the textures I have to call glActiveTexture() 4 times).
"Is it could be done by multipass? Use depth function to set equal for second pass?"

Yes, this is typically how it would be done.

Make sure you write solidly for the first pass (to populate the depth buffer), turn off depth writing for the subsequent passes (for speed), set equal as the depth test, and then use chained texture blend units to feed the R, G or B component from one stage in as the alpha of the next.
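
Very roughly, the pass structure looks like this (drawTerrainLayer() and numLayers are made-up names, and the per-layer alpha is assumed to come out of the texture stages described below):

/* Pass 1: base layer, written solidly so the depth buffer gets filled. */
glDepthFunc(GL_LEQUAL);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawTerrainLayer(0);

/* Passes 2..N: only touch pixels at exactly the depth we already wrote,
   don't write depth again, and blend by the layer's weight (alpha). */
glDepthFunc(GL_EQUAL);
glDepthMask(GL_FALSE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
for (int layer = 1; layer < numLayers; ++layer)
    drawTerrainLayer(layer);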

The key phrase to look up here is "texture combiners".

Basically, the fixed functionality has a bunch of little units in it whose inputs can be fed from a selection of different things -- constant colours, a texture lookup, the glColor or (crucially here) the output from one of the earlier stages.

Setting them up is faffy and fiddly and involves a big stack of glTexEnvi(GL_TEXTURE_ENV,...) calls; it's a good idea to plan out carefully what needs setting up before starting to write the code.
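
As a sketch of one possible wiring (this sidesteps the channel-into-alpha routing by assuming you've split the weights into per-layer GL_ALPHA textures, as suggested above; tileTex and weightTex are made-up handles): unit 0 carries the layer's tile texture, unit 1 carries its weight map, passes the RGB straight through and replaces the alpha:

/* Unit 0: the tiling grass/sand/rock texture for this layer. */
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tileTex[layer]);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* Unit 1: the layer's GL_ALPHA weight map.  Keep the RGB from the
   previous stage, take the alpha from this texture. */
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, weightTex[layer]);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB,    GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB,    GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB,   GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA,  GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA,  GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);

With that in place, the framebuffer blend (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) from the pass setup above does the actual mixing between layers.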

You'll need to do some jiggery pokery with the texture coordinate scales, because one of your textures is big (landscape sized) and the others are the tiling grass/sand/rock textures. I *think* you do this by choosing the active texture unit and then manipulating the texture matrix, and it'll apply to just that unit. So you can put a scaling on one of the lookups.
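
Something like this, assuming unit 0 holds the tiling texture and 32 is whatever repeat count looks right:

glActiveTexture(GL_TEXTURE0);   /* the unit with the tiling grass/rock texture */
glMatrixMode(GL_TEXTURE);       /* the texture matrix is per-unit */
glLoadIdentity();
glScalef(32.0f, 32.0f, 1.0f);   /* tile it 32 times across the terrain */
glMatrixMode(GL_MODELVIEW);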

You could, I guess, possibly do this in one go. Maybe. You'd have to look really carefully at the texture combiner modes available.

Is there any reason why you can't just do this in a modern fragment shader? It seems a lot of work to go to just to do this ye olde fashionedey wayey.
If you've got four images of 256*256, you can create one 512*512 image which contains one of the images in each corner; you can do this with Paint / GIMP or any other image editing tool.

Something like this:

[attached image: base1.png -- the four textures packed one per quadrant of a single atlas]


This image is then loaded and bound as a single texture; in your fragment program you can calculate the texture coordinates for each individual image with something like this:
// texCoord is the usual [0,1] terrain coordinate; each image occupies half
// of the [0,1] range in each direction, offset to its own quadrant
vec2 t1Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 );
vec2 t2Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 );
vec2 t3Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 + 0.5 );
vec2 t4Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 + 0.5 );
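
If you'd rather build the atlas at load time instead of in an image editor, a rough sketch (tex0..tex3 are assumed to be the raw w*h RGBA pixel data of the four textures):

/* Pack four w x h RGBA images into one 2w x 2h atlas, one per quadrant. */
unsigned char *atlas = (unsigned char *)malloc(2 * w * 2 * h * 4);
const unsigned char *src[4] = { tex0, tex1, tex2, tex3 };
for (int i = 0; i < 4; ++i)
{
    int ox = (i % 2) * w;        /* left or right half  */
    int oy = (i / 2) * h;        /* bottom or top half  */
    for (int y = 0; y < h; ++y)
        memcpy(&atlas[((oy + y) * 2 * w + ox) * 4],
               &src[i][y * w * 4], w * 4);
}
/* then upload 'atlas' as a single 2w x 2h texture with glTexImage2D */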
Just as an aside, I'm finding it hard to think of a justification these days for not using shaders. If it's a "supporting older hardware" thing, you might keep this in mind: the original ARB_vertex_program and ARB_fragment_program extensions were created in 2002 or thereabouts. That's almost a decade ago. Even with those ancient and decrepit extensions, the texture splatting problem is an order of magnitude easier to solve. All the "jiggery pokery", as Katie so awesomely described it, is meant to "fake" an actual programmable shader by setting up a complex set of cascading texture stage states. I honestly believe that you can safely assume a very high percentage of gamers will have hardware that supports at least a basic level of programmability, and forego all the glTexEnv voodoo.
These days I'd be very surprised if you had to go below 'DX9 level' hardware, which basically means shaders; I have a hard time imagining there's much hardware 'in the wild' these days which wouldn't support GLSL 1.0, and what didn't probably wouldn't have the fill rate to pull off other hacks or multi-pass tricks at an acceptable speed anyway.

I'd advise, unless you know for certain that a large percentage of your target audience can't run even basic shaders, going in the shader direction.

If you are doing it 'just to learn' then stop right now. The API is dead, there is nothing to learn which would be useful and you would be better served picking up shaders instead.

If you've got four images of 256*256, you can create one 512*512 image which contains one of the images in each corner; you can do this with Paint / GIMP or any other image editing tool.

Something like this:

[attached image: base1.png -- the four textures packed one per quadrant of a single atlas]


This image is then loaded and bound as a single texture; in your fragment program you can calculate the texture coordinates for each individual image with something like this:
// texCoord is the usual [0,1] terrain coordinate; each image occupies half
// of the [0,1] range in each direction, offset to its own quadrant
vec2 t1Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 );
vec2 t2Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 );
vec2 t3Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 + 0.5 );
vec2 t4Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 + 0.5 );




Okay, it sounds like that would work. I'll try it later. Thanks a lot for the idea.

