OpenGL: Cutting a piece out of a texture with a different texture


Hello everybody!


Okay, let's say that my application screen looks like this:





And I want it to look like this:




The scene in the first picture consists of two planes stacked on top of each other, each with a different texture.


What I want is what happens in the second picture: I want to cut away part of the top plane wherever I choose, based on this texture:



I am using SDL for event handling and OpenGL for drawing.


Thanks in advance :)



It appears you're already using alpha blending on the surface you want to "cut" with the "hole" texture. To add the "hole", you can use multi-texturing. Here's a simple OpenGL tutorial: . In addition to what is explained there, you will also have to apply an offset and a scale to the texture coordinates of the second texture (e.g., (gl_TexCoord[1].st + offset) * scale), where the offset is the position where you want the "hole" to appear and the scale controls the "size" of the hole (etc., etc.), and clamp the result to [0, 1] (OpenGL texture coordinates run from 0 to 1), or use the "clamp" texture-addressing mode (that's what it's called in D3D; in OpenGL the equivalent is the GL_CLAMP_TO_EDGE wrap mode).


You will also have to modify the gl_FragColor returned from the fragment shader to something like this: gl_FragColor = texval1 * texval2; (just an "off the top of my head" example; if it doesn't work, try modulating only the alpha value of texval1 by the average of the r, g and b values of texval2, and so on, depending on the color format of your second texture).
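A minimal fragment-shader sketch of the idea above, assuming the "hole" mask is white where the surface should stay and dark where it should be cut; the uniform names (baseTex, holeTex, holeOffset, holeScale) are placeholders, not from the original post:

```glsl
#version 120

uniform sampler2D baseTex;   // the plane's own texture
uniform sampler2D holeTex;   // the "hole" mask texture
uniform vec2 holeOffset;     // where the hole should appear (hypothetical name)
uniform vec2 holeScale;      // controls the size of the hole (hypothetical name)

void main()
{
    vec4 texval1 = texture2D(baseTex, gl_TexCoord[0].st);

    // Offset and scale the coordinates used to sample the mask, then clamp
    // to [0, 1] (or set GL_CLAMP_TO_EDGE on holeTex instead of clamp() here).
    vec2 maskCoord = clamp((gl_TexCoord[0].st + holeOffset) * holeScale, 0.0, 1.0);
    vec4 texval2 = texture2D(holeTex, maskCoord);

    // Use the mask's brightness to knock out the surface's alpha.
    float mask = (texval2.r + texval2.g + texval2.b) / 3.0;
    gl_FragColor = vec4(texval1.rgb, texval1.a * mask);
}
```

With alpha blending already enabled, a mask value of 0 makes the fragment fully transparent, which reveals the plane underneath.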

Edited by tonemgub


Thank you very much :D! +1 for you. But what if I had a vector of 2D points and I wanted all of them to make holes in "the texture", with the number of points stored in a variable so it can change at any time? Can I add multiple "hole" textures to the multi-texture? Or is it possible to combine all of the holes the points make into one texture and use that in the multi-texturing?


Thanks in advance :)


Can I add multiple "hole" textures to the multi-texture? Or is it possible to combine all of the holes the points make into one texture and use that in the multi-texturing?


You can do both.


The first method: you can send your vector of 2D points into the fragment shader as a 1D texture, then for each point sample the "hole" texture, blend all of the "hole" samples together (multiply them, or add them and clamp the final result), and use the final value in place of texval2. You fetch the points from the 1D texture with texelFetch(), and the loop should run from 0 to the size of the points vector, which you also pass to the fragment shader in a separate uniform variable (or you could store it in the first texel of the 1D texture...).
You'll have to re-create the 1D texture every time one of your 2D points changes, so it might be slow if they change often and/or you have a lot of points. There is also an implementation-defined limit on the size of a 1D texture (queryable via GL_MAX_TEXTURE_SIZE).

Also, instead of a single offset and size for the "hole", you now have to send an array of offsets and sizes, one per hole, and use them as before.
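A sketch of this first method as a fragment shader (GLSL 1.30 or later for texelFetch()); the packing of each hole's offset into .xy and its scale into .zw of one RGBA texel, and all the uniform names, are assumptions for illustration:

```glsl
#version 130

uniform sampler2D baseTex;    // the plane's own texture
uniform sampler2D holeTex;    // the "hole" mask texture
uniform sampler1D pointsTex;  // one texel per hole: xy = offset, zw = scale
uniform int numPoints;        // current size of the 2D points vector

in vec2 texCoord;
out vec4 fragColor;

void main()
{
    vec4 texval1 = texture(baseTex, texCoord);

    // Start fully opaque and multiply in each hole's mask value.
    float mask = 1.0;
    for (int i = 0; i < numPoints; ++i) {
        vec4 p = texelFetch(pointsTex, i, 0);
        vec2 maskCoord = clamp((texCoord + p.xy) * p.zw, 0.0, 1.0);
        mask *= texture(holeTex, maskCoord).r;
    }

    fragColor = vec4(texval1.rgb, texval1.a * mask);
}
```

Multiplying the samples corresponds to the "blend all of the hole samples together" step; adding and clamping would work similarly.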


The second method (combining all of the "hole" textures into one) can be done with render-to-texture, drawing the "hole" at the different spots. First, clear everything to white (glClearColor, glClear), set up an orthographic projection (glOrtho), then iterate over your vector of 2D points and, for each point, draw a quad with the "hole" texture applied to it at that point's position. You may have to transform the points into OpenGL's normalized screen coordinates [-1, 1] first, if they aren't already. The texture you rendered to (let's call it "holes") can then be used for multi-texturing as before.

For this method, you have to re-draw the "holes" texture every time your 2D points change, and it will likely be slower than the first method (instead of just sending the 2D points to the programmable pipeline, you are sending four vertices per point), and you are also doing more work on the CPU (the iteration over the 2D points).
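A fixed-function C sketch of the second method; this assumes an existing GL context, a framebuffer object with the "holes" texture already attached, and a loaded hole texture, and every name here (vec2, holesFbo, holeTex, holeSize) is a placeholder:

```c
typedef struct { float x, y; } vec2; /* hypothetical point type */

/* Re-draw the combined "holes" texture from the current set of points. */
void redraw_holes_texture(const vec2 *points, int numPoints,
                          GLuint holesFbo, GLuint holeTex, float holeSize)
{
    glBindFramebuffer(GL_FRAMEBUFFER, holesFbo);

    /* White means "no hole", so start from an all-white texture. */
    glClearColor(1.0f, 1.0f, 1.0f, 1.0f);
    glClear(GL_COLOR_BUFFER_BIT);

    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-1.0, 1.0, -1.0, 1.0, -1.0, 1.0);

    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, holeTex);
    for (int i = 0; i < numPoints; ++i) {
        float x = points[i].x, y = points[i].y; /* already in [-1, 1] */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(x - holeSize, y - holeSize);
        glTexCoord2f(1, 0); glVertex2f(x + holeSize, y - holeSize);
        glTexCoord2f(1, 1); glVertex2f(x + holeSize, y + holeSize);
        glTexCoord2f(0, 1); glVertex2f(x - holeSize, y + holeSize);
        glEnd();
    }

    glBindFramebuffer(GL_FRAMEBUFFER, 0);
}
```

The resulting "holes" texture is then bound as the second texture unit and sampled exactly like texval2 in the single-hole case.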

Edited by tonemgub


I think that's called "texture painting". It's probably similar to the second method I described above, but without keeping track of all the mouse positions in a vector, and with some fancy blend effects. :)

