
Terrain multitexturing problem


12 replies to this topic

#1 wlb2001   Members   -  Reputation: 123


Posted 01 October 2011 - 03:03 AM

Hi there:
I am currently working on a terrain rendering project, and I have two problems with it.
1: The terrain has 4 textures for multitexturing (grass, sand, rock, ...), and also a 4-channel (RGBA) detail texture that gives the weight of each texture (e.g. 128, 0, 127, 0 roughly means 50% grass and 50% rock). With a shader this is easy: final = a*R + b*G + c*B + d*A, where a, b, c, d are the colors sampled from the four textures and R, G, B, A are the weights. But how can it be implemented without a shader? For example, could glTexEnv be used for the multitexturing? Is it possible to render this without a shader and in a single pass?
2: If the terrain has up to 8 textures, that makes 10 texture units (8 textures, 2 detail maps), so it seems impossible to render in a single pass even with a shader (most video cards only support 8 texture units for multitexturing).
Is there any way to render it in a single pass?
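
For reference, here is that weighting written out as a minimal CPU-side sketch; the layer names, and the assumption that the four weights sum to roughly 255, are just for illustration.

typedef struct { unsigned char r, g, b; } RGB;

/* Blend one texel from four layer colors using one RGBA weight texel. */
RGB blendTexel(RGB grass, RGB sand, RGB rock, RGB dirt, const unsigned char w[4])
{
    RGB out;
    out.r = (grass.r * w[0] + sand.r * w[1] + rock.r * w[2] + dirt.r * w[3]) / 255;
    out.g = (grass.g * w[0] + sand.g * w[1] + rock.g * w[2] + dirt.g * w[3]) / 255;
    out.b = (grass.b * w[0] + sand.b * w[1] + rock.b * w[2] + dirt.b * w[3]) / 255;
    return out;
}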

Thanks for your help!


#2 JTippetts   Moderators   -  Reputation: 8663


Posted 01 October 2011 - 07:39 AM

To the best of my knowledge, what you are trying to do isn't really possible using fixed-function alone, and especially not in a single pass. I remember implementing something similar a very long time ago using glVertexAttribPointerARB to specify an array of vertex attributes per blend layer to perform the blending, but even still I had to set up a vertex program (old school) to do the alpha replacement.

Maybe you could figure something out if you split the blending RGBA texture up into separate GL_ALPHA format textures, but this of course would eat up available binding slots.
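
If you go that route, splitting a channel out on the CPU is straightforward. A minimal sketch, assuming an 8-bit RGBA weight image already in memory (makeWeightTexture is just an illustrative name):

#include <stdlib.h>
#include <GL/gl.h>

/* Copy one channel (0=R, 1=G, 2=B, 3=A) of an RGBA image into a GL_ALPHA texture. */
GLuint makeWeightTexture(const unsigned char *rgba, int w, int h, int channel)
{
    unsigned char *alpha = malloc((size_t)w * h);
    for (int i = 0; i < w * h; ++i)
        alpha[i] = rgba[i * 4 + channel];

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, w, h, 0,
                 GL_ALPHA, GL_UNSIGNED_BYTE, alpha);
    free(alpha);
    return tex;
}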

#3 Murdocki   Members   -  Reputation: 274


Posted 01 October 2011 - 10:16 AM

A2: Texture atlasing is key: just put the four textures into one and sample it four times (once for each detail channel).

#4 wlb2001   Members   -  Reputation: 123


Posted 01 October 2011 - 01:34 PM

To the best of my knowledge, what you are trying to do isn't really possible using fixed-function alone, and especially not in a single pass. I remember implementing something similar a very long time ago using glVertexAttribPointerARB to specify an array of vertex attributes per blend layer to perform the blending, but even still I had to set up a vertex program (old school) to do the alpha replacement.

Maybe you could figure something out if you split the blending RGBA texture up into separate GL_ALPHA format textures, but this of course would eat up available binding slots.


Really, thanks for the help.
Could it be done with multiple passes, using the depth function set to equal for the second pass?

#5 wlb2001   Members   -  Reputation: 123


Posted 01 October 2011 - 01:39 PM

A2: Texture atlasing is key: just put the four textures into one and sample it four times (once for each detail channel).


Thanks for the reply, Murdocki.
Would you give some details on putting 4 textures into one? I really have no idea how to do it (each texture has at least 3 channels (RGB), and to enable the textures I have to call glActiveTexture() 4 times).

#6 Katie   Members   -  Reputation: 1375


Posted 01 October 2011 - 02:36 PM

"Is it could be done by multipass? Use depth function to set equal for second pass?"

Yes, this is typically how it would be done.

Make sure you write solidly for the first pass (to populate the depth buffer), turn off depth writing for the subsequent passes (for speed), set equal as the depth test, and then use chained texture blend units to feed the R, G, or B component from one stage as the alpha into the next.
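
As a rough outline of that pass structure (drawTerrainLayer is a hypothetical helper that binds layer i's tile texture and weight map and submits the terrain geometry):

/* Pass 1: draw the base layer normally; this populates the depth buffer. */
glDepthFunc(GL_LESS);
glDepthMask(GL_TRUE);
glDisable(GL_BLEND);
drawTerrainLayer(0);

/* Passes 2..n: one blended layer each, only where pass 1 wrote depth. */
glDepthFunc(GL_EQUAL);   /* touch only the already-drawn surface  */
glDepthMask(GL_FALSE);   /* depth is already correct; skip writes */
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
for (int i = 1; i < numLayers; ++i)
    drawTerrainLayer(i);

/* Restore state. */
glDepthMask(GL_TRUE);
glDepthFunc(GL_LESS);
glDisable(GL_BLEND);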

The key phrase to look up here is "texture combiners".

Basically, the fixed functionality has a bunch of little units in it whose inputs can be fed from a selection of different things -- constant colours, a texture lookup, the glColor, or (crucially here) the output from one of the earlier stages.

Setting them up is faffy and fiddly and involves a big stack of glTexEnvi(GL_TEXTURE_ENV,...) calls; it's a good idea to plan out carefully what needs setting up before starting to write the code.
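
For illustration, a minimal sketch of one layer's combiner setup, assuming this layer's weight has been split into its own GL_ALPHA texture as suggested above (tileTex and weightTex are hypothetical handles):

/* Unit 0: the tiling layer texture (grass, rock, ...).
   GL_MODULATE instead of GL_REPLACE would keep vertex lighting. */
glActiveTexture(GL_TEXTURE0);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, tileTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);

/* Unit 1: the GL_ALPHA weight map. Pass unit 0's colour through
   unchanged, and take alpha from this texture instead. */
glActiveTexture(GL_TEXTURE1);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, weightTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_COMBINE);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_RGB, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_RGB, GL_PREVIOUS);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_RGB, GL_SRC_COLOR);
glTexEnvi(GL_TEXTURE_ENV, GL_COMBINE_ALPHA, GL_REPLACE);
glTexEnvi(GL_TEXTURE_ENV, GL_SOURCE0_ALPHA, GL_TEXTURE);
glTexEnvi(GL_TEXTURE_ENV, GL_OPERAND0_ALPHA, GL_SRC_ALPHA);

With the glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) framebuffer blend from the pass skeleton above, each pass then composites its layer on top of the accumulated result, weighted by its map.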

You'll need to do some jiggery pokery with the texture coordinate scales, because one of your textures is big (landscape sized) and one of them is the grass/sand/rock tile. I *think* you do this by choosing the active texture unit and then manipulating the texture matrix, and it'll apply to just that unit. So you can put a scaling on one of the lookups.
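
(That is in fact how it works: the texture matrix is per-unit state. A minimal sketch, scaling only the tiling unit's coordinates; the repeat factor is arbitrary:)

glActiveTexture(GL_TEXTURE0);   /* the tiling grass/sand/rock unit */
glMatrixMode(GL_TEXTURE);
glLoadIdentity();
glScalef(64.0f, 64.0f, 1.0f);   /* e.g. 64 repeats across the terrain */
glMatrixMode(GL_MODELVIEW);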

You could, I guess, possibly do this in one go. Maybe. You'd have to look really carefully at the texture combiner modes available.

Is there any reason why you can't just do this in a modern shader? It seems a lot of work to go to, to do this ye olde fashionedey wayey.

#7 Murdocki   Members   -  Reputation: 274


Posted 01 October 2011 - 03:10 PM

If you have four images of 256x256 pixels, you can create one 512x512 image that contains one of them in each corner; you can do this with Paint, GIMP, or any other image editing tool.

Something like this:

[Image: a 512x512 atlas with one of the 256x256 textures in each corner]


This image is then loaded and bound as a single texture. In your fragment program you can calculate the texture coordinates for each individual image with something like this:
// Each quadrant of the atlas holds one tile; texCoord is in [0,1].
vec2 t1Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 );       // bottom-left
vec2 t2Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 );       // bottom-right
vec2 t3Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 + 0.5 ); // top-left
vec2 t4Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 + 0.5 ); // top-right
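
If you would rather build the atlas in code than in an image editor, here is a minimal sketch along the same lines (four same-size RGBA tiles assumed; packAtlas is just an illustrative name):

#include <string.h>

/* Pack four w*h RGBA tiles into one 2w*2h atlas, one tile per quadrant. */
void packAtlas(unsigned char *atlas, const unsigned char *tiles[4], int w, int h)
{
    for (int t = 0; t < 4; ++t) {
        int ox = (t % 2) * w;   /* quadrant column */
        int oy = (t / 2) * h;   /* quadrant row    */
        for (int y = 0; y < h; ++y)
            memcpy(atlas + (((size_t)(oy + y) * 2 * w) + ox) * 4,
                   tiles[t] + (size_t)y * w * 4,
                   (size_t)w * 4);
    }
}

One caveat with atlases: with bilinear filtering or mipmaps enabled, samples near a quadrant border can bleed into the neighbouring tile, so some padding (or a small inset on the coordinates) helps.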


#8 JTippetts   Moderators   -  Reputation: 8663


Posted 01 October 2011 - 04:59 PM

Just as an aside, I'm finding it hard to think of a justification these days for not using shaders. If it's a "supporting older hardware" thing, you might keep this in mind: the original ARB_vertex_program and ARB_fragment_program extensions were created in 2002 or thereabouts. That's almost a decade ago. Even with those ancient and decrepit extensions, the texture splatting problem is an order of magnitude easier to solve. All the "jiggery pokery", as Katie so awesomely described it, is meant to "fake" an actual programmable shader by setting up a complex set of cascading texture stage states. I honestly believe that you can safely assume a very high percentage of gamers will have hardware that supports at least a basic level of programmability, and forgo all the glTexEnv voodoo.


#9 phantom   Moderators   -  Reputation: 7593


Posted 01 October 2011 - 05:34 PM

These days I'd be very surprised if you had to go below 'DX9 level' hardware, which basically means shaders; I have a hard time imagining there is much hardware 'in the wild' these days which wouldn't support GLSL 1.0, and what doesn't probably wouldn't have the fill rate to pull off the other hacks or multi-pass tricks at an acceptable speed anyway.

I'd advise going in the shader direction unless you know for certain that a large percentage of your target audience can't run even basic shaders.

If you are doing it 'just to learn' then stop right now. The API is dead; there is nothing to learn that would be useful, and you would be better served picking up shaders instead.

#10 wlb2001   Members   -  Reputation: 123


Posted 02 October 2011 - 07:13 AM

If you have four images of 256x256 pixels, you can create one 512x512 image that contains one of them in each corner; you can do this with Paint, GIMP, or any other image editing tool.

Something like this:

[Image: a 512x512 atlas with one of the 256x256 textures in each corner]


This image is then loaded and bound as a single texture. In your fragment program you can calculate the texture coordinates for each individual image with something like this:

// Each quadrant of the atlas holds one tile; texCoord is in [0,1].
vec2 t1Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 );       // bottom-left
vec2 t2Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 );       // bottom-right
vec2 t3Coords = vec2( texCoord.x / 2.0,       texCoord.y / 2.0 + 0.5 ); // top-left
vec2 t4Coords = vec2( texCoord.x / 2.0 + 0.5, texCoord.y / 2.0 + 0.5 ); // top-right



Okay, that sounds like it would work. I'll try it later. Really, thanks for the idea.



#11 wlb2001   Members   -  Reputation: 123


Posted 02 October 2011 - 07:17 AM

These days I'd be very surprised if you had to go below 'DX9 level' hardware, which basically means shaders; I have a hard time imagining there is much hardware 'in the wild' these days which wouldn't support GLSL 1.0, and what doesn't probably wouldn't have the fill rate to pull off the other hacks or multi-pass tricks at an acceptable speed anyway.

I'd advise going in the shader direction unless you know for certain that a large percentage of your target audience can't run even basic shaders.

If you are doing it 'just to learn' then stop right now. The API is dead; there is nothing to learn that would be useful, and you would be better served picking up shaders instead.


Yeah, I am sure most of my target machines can't run shaders. I hate those on-board video chips; most of them only support up to OpenGL 1.4 (maybe 1.5). That's why I have to figure out a way to do it without shaders. Anyway, thanks again for the advice.

#12 V-man   Members   -  Reputation: 805


Posted 03 October 2011 - 08:33 AM

Yeah, I am sure most of my target machines can't run shaders. I hate those on-board video chips; most of them only support up to OpenGL 1.4 (maybe 1.5). That's why I have to figure out a way to do it without shaders. Anyway, thanks again for the advice.


Those would be Intel, and they most likely CAN run shaders. Use DirectX 9 and shaders and it will work great.
As for GL, Intel doesn't update or put much effort into their GL driver, and their drivers tend to be buggy (search these forums and you'll find plenty of posts).

Good luck.
Sig: http://glhlib.sourceforge.net
an open source GLU replacement library. Much more modern than GLU.
float matrix[16], inverse_matrix[16];
glhLoadIdentityf2(matrix);
glhTranslatef2(matrix, 0.0, 0.0, 5.0);
glhRotateAboutXf2(matrix, angleInRadians);
glhScalef2(matrix, 1.0, 1.0, -1.0);
glhQuickInvertMatrixf2(matrix, inverse_matrix);
glUniformMatrix4fv(uniformLocation1, 1, GL_FALSE, matrix);
glUniformMatrix4fv(uniformLocation2, 1, GL_FALSE, inverse_matrix);

#13 xXDreamXx   Members   -  Reputation: 114


Posted 03 October 2011 - 09:23 AM

I am new to this as well, but when texturing my terrain I build the terrain in a modeling program like Blender. From there I usually convert my NURBS surface into a mesh, then unwrap the mesh into a UV map. Then I export my UV maps to a painting program such as GIMP or Photoshop. If you then export your file as a .obj or something similar, you will get texture coordinates for your UV maps, vertex coordinates, and vertex normals. From there all you have to do is read the .obj file, and it can load your textures onto your model for you. I don't know if this is the best approach, but it's the approach I am using to texture my terrain.
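
For what it's worth, a minimal sketch of reading those records out of an .obj file (only the v/vt/vn lines are handled here; face indexing is omitted, and the printfs are just for illustration):

#include <stdio.h>

/* Print the positions, texture coordinates, and normals in an .obj file. */
void dumpObj(const char *path)
{
    FILE *f = fopen(path, "r");
    if (!f) return;

    char line[256];
    float x, y, z;
    while (fgets(line, sizeof line, f)) {
        if (sscanf(line, "vt %f %f", &x, &y) == 2)
            printf("texcoord %f %f\n", x, y);
        else if (sscanf(line, "vn %f %f %f", &x, &y, &z) == 3)
            printf("normal %f %f %f\n", x, y, z);
        else if (sscanf(line, "v %f %f %f", &x, &y, &z) == 3)
            printf("position %f %f %f\n", x, y, z);
    }
    fclose(f);
}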








