OpenGL: Using one texture as the alpha mask of another



If I'm drawing one texture and want to use a second texture as its alpha channel, while permitting the alpha mask texture to be rotated in 90-degree increments and flipped horizontally or vertically, how would I go about doing that?

 

I can pass the mask texture into the fragment shader, along with a float holding the rotation in degrees, and manually rotate the texture coordinates in GLSL like this:

vec2 RotateTexCoord(vec2 pos, float rotation, bool flipHorizontally, bool flipVertically)
{
    //Mirror (horizontally, vertically, or both).
    if(flipHorizontally)
    {
        pos.x = (1.0 - pos.x);
    }
    if(flipVertically)
    {
        pos.y = (1.0 - pos.y);
    }

    //Rotate in 90 degree increments, snapping to the nearest one.
    if(rotation > (270.0 - 45.0))
    {
        //270 degrees: swap x and y, then mirror x.
        float temp = pos.y;
        pos.y = pos.x;
        pos.x = temp;

        pos.x = (1.0 - pos.x);
    }
    else if(rotation > (180.0 - 45.0))
    {
        //180 degrees: mirror both axes.
        pos.x = (1.0 - pos.x);
        pos.y = (1.0 - pos.y);
    }
    else if(rotation > (90.0 - 45.0))
    {
        //90 degrees: swap x and y (the image is turned on its side), then mirror y.
        float temp = pos.x;
        pos.x = pos.y;
        pos.y = temp;

        pos.y = (1.0 - pos.y);
    }

    return pos;
}

Questions:

A) Does OpenGL already have functionality built-in for using one texture as the alpha mask of another?

B) When passing in a second texture to GLSL, are there features built in for rotating the texture coordinate for the second texture separate from the texture coordinate of the first texture?

C) This simple RotateTexCoord() function has 5 if() statements... that's 5 branches per fragment drawn for a trivial operation. How could it be optimized?



A) Does OpenGL already have functionality built-in for using one texture as the alpha mask of another?

B) When passing in a second texture to GLSL, are there features built in for rotating the texture coordinate for the second texture separate from the texture coordinate of the first texture?

C) This simple RotateTexCoord() function has 5 if() statements... that's 5 branches per fragment drawn for a trivial operation. How could it be optimized?

 

A) Don't think so

B) Don't think so

C) Use a matrix to represent the transformation. Calculate the 3x2 transformation matrix on the CPU and pass it into the vertex shader as a couple of vec3s; transforming the UVs should then boil down to a couple of dot products. If you're not already doing so, make sure you do the transformation on the UVs in the vertex shader, not in the pixel shader.
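
For illustration, here's a minimal sketch of what that can look like on the shader side, assuming the two rows of the 3x2 matrix arrive as hypothetical uniforms named uMaskRow0 and uMaskRow1 (computed on the CPU):

uniform vec3 uMaskRow0; //First row of the 3x2 matrix: (m00, m01, tx).
uniform vec3 uMaskRow1; //Second row: (m10, m11, ty).

attribute vec4 aPosition;
attribute vec2 aTexCoord;

varying vec2 vMaskCoord;

void main()
{
    gl_Position = aPosition;

    //Append 1.0 so the translation column of the 3x2 matrix applies.
    vec3 uv = vec3(aTexCoord, 1.0);

    //Transforming the UV is just two dot products.
    vMaskCoord = vec2(dot(uMaskRow0, uv), dot(uMaskRow1, uv));
}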


C) Use a matrix to represent the transformation. Calculate the 3x2 transformation matrix on the CPU and pass it into the vertex shader as a couple of vec3s; transforming the UVs should then boil down to a couple of dot products.

 

I don't know matrix math yet - what's the matrix for a 180 degree rotation (around the Z axis I guess, since they are texture coordinates)? Can matrices be used for flipping horizontally or vertically as well? Is that a 180 rotation around the X axis for vertical flipping and 180 around Y axis for horizontal?

 

If you're not already doing so, make sure you do the transformation on the UVs in the vertex shader, not in the pixel shader.

 

Well, the texture being drawn is already rotated by OpenGL - I don't yet have a need for a vertex shader. Are you saying rotate the texture coordinates of the alpha mask image in the vertex shader? So the vertex shader would rotate two separate textures at once?



I don't know matrix math yet


I can't stress enough how much you should stop everything you're doing and learn this. A game developer who doesn't have at least a decent grasp of linear algebra in Euclidean space is much akin to a surgeon who doesn't know how to hold a scalpel.
 

what's the matrix for a 180 degree rotation (around the Z axis I guess, since they are texture coordinates)?


Assuming 2D texture coordinates and no need to apply an offset, you can use a 2x2 matrix here (offsets would require a 3x3 affine transformation matrix):

| -1,  0 |
|  0, -1 |
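
For reference, that is just the general 2D rotation matrix with θ = 180° plugged in (cos 180° = -1, sin 180° = 0):

| cos θ, -sin θ |   | -1,  0 |
| sin θ,  cos θ | = |  0, -1 |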
 

Can matrices be used for flipping horizontally or vertically as well? Is that a 180 rotation around the X axis for vertical flipping and 180 around Y axis for horizontal?


Yes.

vertical mirror:

|  1,  0 |
|  0, -1 |

or

horizontal mirror:

| -1,  0 |
|  0,  1 |

You can perform a great many operations with linear transformations, which is what a matrix models.
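
As a quick illustrative sketch (my own GLSL, not anything built into OpenGL), composing operations is just matrix multiplication. Note that a bare 2x2 rotates about the UV origin (0, 0); rotating about the texture's center (0.5, 0.5) needs the translation part of the affine form mentioned above.

//GLSL mat2 constructors are column-major, so each pair below is one column.
const mat2 ROTATE_90 = mat2( 0.0, 1.0,   //Column 0: ( cos 90, sin 90).
                            -1.0, 0.0);  //Column 1: (-sin 90, cos 90).
const mat2 MIRROR_H  = mat2(-1.0, 0.0,
                             0.0, 1.0);

vec2 TransformUV(vec2 uv)
{
    //Multiplying the matrices composes the operations: rotate first, then mirror.
    return (MIRROR_H * ROTATE_90) * uv;
}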
 

Well, the texture being drawn is already rotated by OpenGL - I don't yet have a need for a vertex shader. Are you saying rotate the texture coordinates of the alpha mask image in the vertex shader? So the vertex shader would rotate two separate textures at once?


You must have a vertex shader to use a pixel shader. Your vertex shader is likely just doing passthru (copying its inputs directly to the outputs). The pixel shader is then receiving the original UVs post-interpolation and rotating them. There's usually no reason you can't do that rotation in the vertex shader so that you don't have to re-evaluate the rotation for every pixel fragment.

The vertex shader can use whatever vertex attributes it wants, so there's no reason it couldn't rotate both or either of the incoming UVs (assuming they're separate). If the source texture and mask are both rotated the same way, then you only need a single UV attribute.

Vertex shader pseudocode:

vsout_position = vsin_position
vsout_texcoord0 = rotate(vsin_texcoord0)
vsout_texcoord1 = rotate(vsin_texcoord1)

Pixel shader pseudocode:

psout_color = sample(source_texture, psin_texcoord0)
psout_color *= sample(mask_texture, psin_texcoord1)

Again, just using a single texcoord if both textures use the same coordinates.
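
In concrete GLSL, that pseudocode might look like the following sketch (the names and the mat2 rotation uniforms are illustrative; and where the pseudocode multiplies the whole color, a pure alpha mask would typically modulate just the alpha channel):

Vertex shader:

uniform mat2 uRotate0;
uniform mat2 uRotate1;

attribute vec4 aPosition;
attribute vec2 aTexCoord0;
attribute vec2 aTexCoord1;

varying vec2 vTexCoord0;
varying vec2 vTexCoord1;

void main()
{
    //Pass the position through and transform both sets of UVs.
    gl_Position = aPosition;
    vTexCoord0 = uRotate0 * aTexCoord0;
    vTexCoord1 = uRotate1 * aTexCoord1;
}

Fragment shader:

uniform sampler2D uSourceTexture;
uniform sampler2D uMaskTexture;

varying vec2 vTexCoord0;
varying vec2 vTexCoord1;

void main()
{
    //Sample the source color, then apply the mask texture's alpha to it.
    vec4 color = texture2D(uSourceTexture, vTexCoord0);
    color.a *= texture2D(uMaskTexture, vTexCoord1).a;
    gl_FragColor = color;
}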
