Multi-texturing question


Hello,

Can someone please explain how multiple textures can be alpha-blended on a single face with D3DX? I've been reading and searching on this, repeat x 5, and I still just don't see how it's done.

I'm working with terrain, and I've already been able to box-filter for smoothness, sectorize for performance with larger heightmaps, colorize vertices based on elevation, etc. I have a fairly good idea of how the texture stage states work, and I've been able to "overlay" a detail texture onto a base texture with no problem, since that is simply merging two textures across the face's entire surface.

What I want is to blend from one texture to another within the same face, based on alpha values. I've read countless articles and tutorials on doing this, but the problem is that they all say essentially the same thing: store an alpha value for each texture type per vertex, and the texture blending stages will take it from there. That makes sense and looks fantastic on paper, but implementation is a different story, for one reason: the vertex formats supported by D3DX only contain one alpha value per vertex, which of course lives in the diffuse color. I have no way of storing multiple alpha values per vertex. I could press the specular component into service to store one more alpha value, but not only is that a crude workaround, it still only allows two textures, which obviously isn't going to work for a nice terrain.

I hope I'm making sense with this and that someone can shed some light on how it should be done. Thank you very much in advance,

Scorp

D3DTOP_BLENDTEXTUREALPHA, D3DTOP_BLENDDIFFUSEALPHA ?

something like this:


// assuming tex1 and tex2 are LPDIRECT3DTEXTURE9
d3dDevice->SetTexture(0, tex1);
d3dDevice->SetTexture(1, tex2);

d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
d3dDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
d3dDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE); // multiply texture by diffuse

d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_TEXTURE); // tex2
d3dDevice->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_CURRENT); // output of stage 0 (tex1 * diffuse)
d3dDevice->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_BLENDTEXTUREALPHA); // blend by tex2's alpha


The docs say that D3DTOP_BLENDTEXTUREALPHA does the following: take the first argument (tex2 in this case), multiply it by the texture's alpha, then add the second argument (tex1) multiplied by (1 - alpha).

I think it works =/
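In case the formula is easier to read as code, here is a plain C++ sketch of the per-channel arithmetic that stage performs (just the math, not actual D3D calls):

```cpp
// What D3DTOP_BLENDTEXTUREALPHA computes per color channel:
// result = Arg1 * alpha + Arg2 * (1 - alpha),
// where alpha is sampled from the texture bound to that stage.
float blendTextureAlpha(float arg1, float arg2, float alpha) {
    return arg1 * alpha + arg2 * (1.0f - alpha);
}
```

So with alpha = 1 you get pure tex2, with alpha = 0 pure tex1, and anything in between is a linear mix.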

Thank you for responding, xissburg. I wanted to say that I had already tried that stage setup, but thinking about it, I tried so many different combinations to make this concept work that it all became blurry in my mind. So I tried that method once again and rediscovered the issue I have with that series of stage states.

You're right that it blends the two textures correctly across the face, but the problem is that once the second texture has been blended in, the triangle is completely opaque, so any subsequent textures have no effect as far as linearly interpolating between what's already there and the new texture.
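For reference, the weighting you're after would work out fine if each stage had its own alpha source; the snag is only that the fixed-function pipeline gives you one diffuse alpha per vertex. A plain C++ arithmetic sketch (not D3D code, names made up) of the chained blend:

```cpp
// Linear interpolation: a + (b - a) * t.
float lerpf(float a, float b, float t) {
    return a + (b - a) * t;
}

// If stage 1 blended toward tex1 by a1 and stage 2 toward tex2 by a2,
// every texture keeps a share of the result: with a1 = a2 = 0.5 the base
// texture still contributes (1 - a1) * (1 - a2) = 0.25 of the final color.
float blendThree(float base, float tex1, float tex2, float a1, float a2) {
    float c = lerpf(base, tex1, a1); // stage 1: base -> tex1 by a1
    return lerpf(c, tex2, a2);       // stage 2: result -> tex2 by a2
}
```

So the chain itself isn't the obstacle; the single per-vertex alpha is.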

I have my primary ground-type structures each containing their own vertex buffer, so the entire terrain patch can be rendered without drawing quads multiple times. With that, performance is still fairly good at around 600 fps, and I could introduce more ground types to bring in more textures. I really hate to do that, though, because I'd like to keep my buffer count to a minimum if at all possible.

Thanks again, though. Even if I do need to go the "added ground types" route, I'll still be using this combo I'm sure.

I haven't worked with OpenGL at all myself, but from what I've read I believe it supports multiple alpha values per vertex, which would make the kind of texture blending I'm talking about feasible. Is that true? Can someone verify that? (I don't want to start an API war, obviously, but I'm curious whether I should shift gears and go with it instead.)

Scorp

I have actually been tossing around the thought of going that route, because the fact that MS is dropping the FFP completely from DX10 clearly indicates, to me, that shaders are the better way to go. Plus I'd be working with the most current rendering technology. The only reason I haven't done so yet is that I haven't started learning much about shaders. At first look, HLSL seemed quite a ways over my skill level. Maybe I should look at it again and reconsider.

What is your take on the learning curve for shaders versus the payoff? Is it really the way to go, in your opinion? The idea of defining exactly how I want things to blend, versus trying to brute-force it through the FFP, does sound appealing, after all.
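For what it's worth, the exact blend you've been fighting the FFP over is only a few lines in a pixel shader. A minimal texture-splatting sketch in HLSL (the sampler names and the idea of packing per-texture weights into a "blend map" are assumptions for illustration, not something from this thread):

```hlsl
// Hypothetical terrain splatting pixel shader (shader model 2.0 style).
// BlendMap's R/G/B channels hold the per-texel weights for three detail
// textures layered over a base texture.
sampler BaseTex    : register(s0);
sampler DetailTex1 : register(s1);
sampler DetailTex2 : register(s2);
sampler DetailTex3 : register(s3);
sampler BlendMap   : register(s4);

float4 SplatPS(float2 uv : TEXCOORD0) : COLOR
{
    float3 w = tex2D(BlendMap, uv).rgb;      // one weight per detail texture
    float4 c = tex2D(BaseTex, uv);           // start from the base layer
    c = lerp(c, tex2D(DetailTex1, uv), w.r); // blend in each detail layer
    c = lerp(c, tex2D(DetailTex2, uv), w.g);
    c = lerp(c, tex2D(DetailTex3, uv), w.b);
    return c;
}
```

No per-vertex alpha juggling at all; the weights live in a texture, so they can also vary per texel rather than per vertex.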

Thanks!
Scorp
