Multitexturing through a Cg shader


Hi, could someone please explain to me how GL_MODULATE blends two textures? For example:
// Multi-texturing:
// - Texture1
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TextureID1);
//
// - Texture2
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, TextureID2);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

What will the result be? Tex1 + Tex2? Tex1 * Tex2? If I had to do the same thing in a Cg fragment shader, I would write something like:
float4 main(float2 text1Coords : TEXCOORD0,
            float2 text2Coords : TEXCOORD1,
            uniform sampler2D texture1 : TEXUNIT0,
            uniform sampler2D texture2 : TEXUNIT1) : COLOR
{
    float3 color1 = tex2D(texture1, text1Coords).rgb;
    float3 color2 = tex2D(texture2, text2Coords).rgb;

    return float4(color1 + color2, 1); // this one?
    return float4(color1 * color2, 1); // or this one?
    // ...or some other combination?
}

What should I return? Am I on the right track? Thanks!

Let me try to explain it in a simpler way.
Suppose I render something with multitexturing and n textures, using only OpenGL calls, no shaders.
Now, if you wanted to add a very simple fragment shader, for example one that just adds 0.1 to the red channel of each pixel, what would you do?
Do I have to pass all the textures, coordinates, etc. to the shader and recreate the multitexturing on my own? Or is there a way to preserve OpenGL's ability to perform multitexturing while working with shaders?
I hope that's clearer :)
Thanks

GL_MODULATE multiplies the incoming colour by the texture colour. In this case I think you'll have to use GL_MODULATE on both texture units to get: Color * Tex1 * Tex2.

Now, if Color = <0.1, 0.0, 0.0>,
you get what you're looking for.
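
For reference, that fixed-function cascade (GL_MODULATE on both units) corresponds to something like the following Cg fragment program. This is only a sketch, assuming the interpolated vertex colour arrives through the COLOR0 semantic:

// Cg equivalent of GL_MODULATE on two texture units:
// result = primary colour * texture0 * texture1
float4 main(float4 primary : COLOR0,
            float2 text1Coords : TEXCOORD0,
            float2 text2Coords : TEXCOORD1,
            uniform sampler2D texture1 : TEXUNIT0,
            uniform sampler2D texture2 : TEXUNIT1) : COLOR
{
    // each GL_MODULATE stage multiplies the incoming colour by its texture
    return primary * tex2D(texture1, text1Coords)
                   * tex2D(texture2, text2Coords);
}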

Thanks for your answer, but I don't think my question was clear :(
I don't really need to add 0.1 to the red channel, that was just an example.
What I'm trying to say is: suppose you have already coded some wonderful effects with plain OpenGL calls, such as multitexturing and other marvelous things.
Now you need to write a very simple fragment shader to achieve a very simple effect, like the 0.1 example.
What would you do?

I mean, if you wrote (roughly, in Cg):
float4 fragment_program(float4 color : COLOR) : COLOR
{
    color.r = color.r + 0.1;
    return color;
}
would you lose everything that was set up with the OpenGL calls?
For example, if you had 5 textures and just wanted to add 0.1 (or apply any other simple effect), would you have to do:
float4 fragment_program(float4 color : COLOR,
                        float2 coords1 : TEXCOORD0, float2 coords2 : TEXCOORD1, // ...etc.
                        uniform sampler2D text1 : TEXUNIT0,
                        uniform sampler2D text2 : TEXUNIT1 /* ...etc. */) : COLOR
{
    float4 outColor = tex2D(text1, coords1) * tex2D(text2, coords2); // * ...the rest...
    outColor.r = outColor.r + 0.1;
    return outColor;
}
and recreate what was already achieved automatically without the shader?

"and recreate what was automatically already achieved without the shader?"

You have to understand what parts of the fixed function the fragment shader overrides.
If you want to modulate 2 textures, that code you had is fine

float4 main(float2 text1Coords : TEXCOORD0,
            float2 text2Coords : TEXCOORD1,
            uniform sampler2D texture1 : TEXUNIT0,
            uniform sampler2D texture2 : TEXUNIT1) : COLOR
{
    float3 color1 = tex2D(texture1, text1Coords).rgb;
    float3 color2 = tex2D(texture2, text2Coords).rgb;

    return float4(color1 * color2, 1);
}



This stuff is useless when you are using fragment shaders:
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

Quote:
Original post by V-man
This stuff is useless when you are using fragment shaders:
glEnable(GL_TEXTURE_2D);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

Exactly.
Now suppose you have already achieved multitexturing with those OpenGL calls, and you need to add a fragment shader; those lines become useless. Is there a way to let OpenGL keep managing multitexturing through those lines, so that I can write my fragment shader without thinking about multitexturing?
My problem is: I have some OpenGL code with no shaders, and in one part of it many textures are applied, with lots of effects, blending, etc.
Now I need to add a fragment shader for other purposes, but I want the multitexturing to keep working! If I have to recreate the multitexturing inside the shader it's going to be very complicated, and I'm not even sure I'd be able to do it.
Anyway, surely other people have run into this problem before?

"is there a way to let opengl manage multitexturing with those lines so that I can write my fragment shader without thinking about multitexturing?"
nope

The programmable pipeline completely overrides the fixed-function pipeline. They can't work together (well, strictly speaking they can, but not for this case).

So, to add fragment shader effects, you also need to implement the other effects, such as multitexturing, in your shaders. Seems like a good time to learn shaders :).

And remember, on all modern cards there is no fixed-function pipeline anyway; it is emulated with shaders inside the graphics driver.
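
As a rough sketch of what that means in practice (the two textures and the +0.1 red tweak are taken from the earlier example, not a definitive version), the multitexturing and the extra effect simply live in the same fragment program:

// Do what the texture environment used to do (modulate the two textures),
// then apply the extra per-fragment effect on top.
float4 main(float2 coords1 : TEXCOORD0,
            float2 coords2 : TEXCOORD1,
            uniform sampler2D text1 : TEXUNIT0,
            uniform sampler2D text2 : TEXUNIT1) : COLOR
{
    float4 outColor = tex2D(text1, coords1) * tex2D(text2, coords2);
    outColor.r = outColor.r + 0.1;   // the "very simple effect" from the question
    return outColor;
}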

Thanks for your answer, that's what I was looking for.
I'm currently reading the Cg book from the NVIDIA website, and it's pretty good for understanding how to implement many effects.
However, it's essentially a collection of tutorials, and I lack the knowledge (is that a correct English expression? :D) of how to work with shaders in a project bigger than a single effect.
For example, if I want one object to be rendered with effect "A", and another one with two effects, "A" and "B", can I reuse the shader already written for effect "A"? It seems to me that I can only bind one fragment and/or vertex program at a time. Does this mean I have to put everything inside one single .cg file? What about code management, reuse, etc.?
I looked for a more comprehensive book without success.
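
For what it's worth, one common pattern (just a sketch; the function and entry-point names below are invented for illustration) is to keep shared code in ordinary Cg functions and expose several entry points from the same .cg file, then compile each object's program from the entry point it needs:

// effects.cg -- shared helper plus two entry points
float3 effectA(float3 baseColor)
{
    // ... the "A" effect computation goes here ...
    return baseColor;
}

// entry point for objects rendered with effect "A" only
float4 mainA(float4 color : COLOR) : COLOR
{
    return float4(effectA(color.rgb), color.a);
}

// entry point for objects rendered with "A" followed by "B"
float4 mainAB(float4 color : COLOR) : COLOR
{
    float3 c = effectA(color.rgb);
    // ... the "B" effect computation on c goes here ...
    return float4(c, color.a);
}

The Cg runtime lets you choose the entry point by name when you create a program, so the same file can back several different materials.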

Staying on topic: I've just implemented, in two different shaders, the Phong illumination model and environment mapping.
How would you combine the results?
Suppose the colours computed by the two programs end up in two variables such as:
float3 colorFromPhong = ambient + emissive + diffuse + specular + ...;
float3 colorFromEnv   = /* cube map lookup, etc. */;
What should I return?
colorFromPhong * colorFromEnv?
colorFromPhong + colorFromEnv?
lerp(colorFromPhong, colorFromEnv, alpha)?
None of them gives me good results...
Thanks! :)
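
Purely as an illustration of the lerp option (the reflectivity weight is an assumption, not something established in this thread), a per-material blend factor is one common way to weight the two terms:

// Blend the lit colour and the reflected colour with a material-dependent
// reflectivity in [0,1]: 0 = pure Phong result, 1 = pure environment map.
float3 combineLighting(float3 colorFromPhong, float3 colorFromEnv, float reflectivity)
{
    return lerp(colorFromPhong, colorFromEnv, reflectivity);
}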

