OpenGL: Transparent, Not Translucent


Recommended Posts

Me again. OK, I have a bitmap of the cursor I want to use in my GUI. It loads fine and everything, so that's not the problem. The problem is that I want the white lines that make up the cursor to be totally solid, so they don't blend with whatever's behind them. That sounds simple, but I also want the background around the cursor to be totally see-through, so the cursor isn't a big block, and whatever is in the middle of my cursor to be translucent, so it blends with whatever's behind it. Take the normal Windows cursor: it has black lines defining it, it's filled in with white, and the surroundings are transparent. I want to take the white and make it translucent, and the black lines (mine are white, but who cares) to be totally solid so they are always black (white in my case). I was thinking of creating a really ugly color for what I want transparent and another ugly one for translucent (like colorkeying in DX). Is there a way to do that in OpenGL? Thanks.

Edited by - Spanky on 6/16/00 2:20:58 AM

Hi,
I've played around with transparency effects in OpenGL before.

As far as I understand it, you're using a quad with a texture on it. Load the bitmap into a program like Adobe's Photoshop and add a fourth channel there; this channel is the alpha component. The transparent parts should be black (0) and the translucent parts somewhere between 0 and 255, because 255 would be totally solid. Then save it as a TGA file or any other format you can read. When you load it into OpenGL texture memory, don't forget to specify the fourth channel!
The OpenGL state machine should have GL_BLEND enabled before you draw the icon. You also have to set the blending function this way:
glBlendFunc( GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA );

Finally, you have to ensure that the icon/cursor is drawn last.
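Something like this, roughly (just a sketch; cursorTex, cursorPixels, width, height, x and y are placeholder names for whatever your loader and GUI actually use):

// Upload the cursor once, at load time, as an RGBA texture.
// cursorPixels holds width*height RGBA bytes, with the alpha
// channel you painted in Photoshop.
GLuint cursorTex;
glGenTextures(1, &cursorTex);
glBindTexture(GL_TEXTURE_2D, cursorTex);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, cursorPixels);

// Every frame, draw the cursor last with blending enabled.
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glBindTexture(GL_TEXTURE_2D, cursorTex);
glBegin(GL_QUADS);
    glTexCoord2f(0.0f, 0.0f); glVertex2f(x,         y);
    glTexCoord2f(1.0f, 0.0f); glVertex2f(x + width, y);
    glTexCoord2f(1.0f, 1.0f); glVertex2f(x + width, y + height);
    glTexCoord2f(0.0f, 1.0f); glVertex2f(x,         y + height);
glEnd();
glDisable(GL_BLEND);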

I do the same thing in Photoshop (add an alpha channel), only I use glDrawPixels() to put it on the screen. You could also do it using stenciling. There are a couple of tutorials you can get to through opengl.org that detail this.
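For what it's worth, the glDrawPixels route looks roughly like this (just a sketch; cursorPixels, width, height and the cursor position are placeholders for your own data):

// Draw RGBA pixel data straight into the framebuffer;
// with blending enabled, the alpha channel takes effect.
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glRasterPos2i(cursorX, cursorY);
glDrawPixels(width, height, GL_RGBA, GL_UNSIGNED_BYTE, cursorPixels);
glDisable(GL_BLEND);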

-BacksideSnap-

You have to load your cursor as a texture map on a quad and set the texturing to decal or something like that.

quote:
Original post by -BacksideSnap-

I do the same thing in Photoshop (add an alpha channel), only I use glDrawPixels() to put it on the screen. You could also do it using stenciling. There are a couple of tutorials you can get to through opengl.org that detail this.



That would work, but it's slow. glDrawPixels is too slow because the data to be drawn is kept in system memory; if you load it as a texture instead, it stays on the card and is much faster to draw. The stencil buffer could work too, but it's far more complicated than TheMummy's solution, and few cards accelerate stenciling.

An alternative to TheMummy's solution is to create the cursor image with colorkeying and, before you pass the data to glTexImage2D, build another array of pixels containing RGBA color values. Loop through your data, copy the RGB values from your original data, and check whether the current pixel has the color-key value. If it does, set its alpha to 0; otherwise set it to full opacity (255 for byte data).
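A rough sketch of that conversion (assuming 24-bit RGB source data in src and a key color keyR/keyG/keyB; adapt the names to whatever your loader gives you):

// Expand color-keyed 24-bit RGB data into RGBA before uploading:
// pixels matching the key color get alpha 0 (invisible),
// everything else gets alpha 255 (opaque).
unsigned char *rgba = (unsigned char *)malloc(width * height * 4);
for (int i = 0; i < width * height; ++i)
{
    unsigned char r = src[i * 3 + 0];
    unsigned char g = src[i * 3 + 1];
    unsigned char b = src[i * 3 + 2];
    rgba[i * 4 + 0] = r;
    rgba[i * 4 + 1] = g;
    rgba[i * 4 + 2] = b;
    rgba[i * 4 + 3] = (r == keyR && g == keyG && b == keyB) ? 0 : 255;
}
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, rgba);
free(rgba);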

Hope that helps,




Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

With OpenGL you can do everything!

First of all, don't use glDrawPixels (it's too slow on some hardware implementations).
The simplest solution is to use an RGBA (32-bit) texture for your image: RGB stores the color and the A byte stores the alpha for each pixel.
You can decide (this is the default) to make alpha=0 the transparent value and alpha=255 full opacity.

You have to apply the texture to your quad and enable:

GL_TEXTURE_2D (of course)
GL_BLEND (if you want a blending... 'semitransparent' effect);
use glBlendFunc( GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA )
or GL_ALPHA_TEST (if you just want a simple pixel visible/invisible test)

NOTE: if you only use alpha=0 or alpha=255, ALPHA_TEST and BLEND are equivalent (on some cards ALPHA_TEST is faster).

Use GL_REPLACE for your texture environment.
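In code, that setup is roughly this (just a sketch, assuming the RGBA texture is already created and bound):

// Take the texture's RGBA as-is (ignore glColor / lighting).
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glEnable(GL_TEXTURE_2D);

// Either blend using the alpha channel...
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);

// ...or, if every pixel is either fully opaque or fully transparent,
// an alpha test alone is enough (faster on some cards):
// glEnable(GL_ALPHA_TEST);
// glAlphaFunc(GL_GREATER, 0.5f);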

-------------------------

Another solution (useful for bitmap fonts) is to create a simple alpha map (8-bit) and apply that texture to a quad.
You can then control the color using glColor()... (If I remember correctly, you use a simple GL_REPLACE with your GL_ALPHA texture and OpenGL combines your glColor RGB with your alpha texture.)
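Roughly like this (again just a sketch; alphaData stands for whatever 8-bit alpha map you built):

// Alpha-only texture: RGB comes from the current glColor,
// alpha comes from the texture.
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, width, height, 0,
             GL_ALPHA, GL_UNSIGNED_BYTE, alphaData);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor3f(1.0f, 1.0f, 1.0f);   // the color your text/cursor will be drawn in
// ...then draw the textured quad as usual.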

-------------------------

The only drawback: you have to create your alpha map!
You can load a 24-bit bitmap into a 32-bit array and set the alpha of every pixel depending on whether its RGB color equals the color you want to be transparent.
In the same manner you can load an alpha map directly from a 256-color bitmap and replace every pixel with 0 or 0xFF (ignoring the palette, of course).

-------------------------

Of course, if you play around with the texture environment you can get stranger effects.

Edited by - Andrea on June 18, 2000 12:01:18 PM

Shit, sorry guys, I forgot that I had a decent graphics card. If you're going to be cutting features out of the API due to hardware support then use fucking DirectX. glDrawPixels works fine on my GeForce, my friend's TNT, a TNT2 and even an ATI Rage Fury. Don't use a Voodoo for GL; it's like trying to bail out the Pacific with a teaspoon.
By the way, what I'm doing still works if you want to be lame enough to apply it to a texture and move a quad around that doubles as a mouse cursor. Personally I don't like doing things backwards, even if that's the fastest way. And texture mapping things onto quads is backwards when all you're trying to do is simple raster graphics.
A lot of Direct3D people love to rip on OGL because of this.

BTW, what Andrea said would work the best.

-BacksideSnap-

-BacksideSnap-, could you explain WHY drawing a texture-mapped quad is backwards, or lame? Why do you think glDrawPixels is better, or less lame? Since OpenGL has no way to allocate video memory directly on the card for raster graphics, the best solution IS using textures. I don't think ANY professional game uses glDrawPixels, simply because it's TOO slow for anything. Transferring system memory to the card EVERY frame is slow, and what's lame in my opinion is DOING it when you have another option. And the Direct3D people are right about this topic: there's no way to allocate video memory in OpenGL! The only way is to use textures! That's the advantage Direct3D has over OpenGL: its DirectDraw support, which, btw, Microsoft could've implemented for OpenGL too, but NO! They want to take on the world! BacksideSnap, try to draw a 256x256 image using glDrawPixels, then try loading it as a texture and drawing it on a quad to see the difference... If you don't want to support older cards, well, that's your problem, because not everyone has a GeForce...

Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

Hi!


Regarding texturing a quad... it's MUCH better than using glDrawPixels(). Actually, I would propose texturing just a triangle, but one triangle more isn't going to hurt performance, so quads are fine. BTW, comparing OpenGL with DirectDraw/Direct3D is not fair and only leads to stupid "Which API is better?" threads.

Nicodemus ... what do you mean by "that btw, microsoft could've implemented for OpenGL too"? I don't think Microsoft developed OpenGL; I believe it was SGI. On their platforms (high-end workstations) this glDrawPixels() problem also isn't such a big issue, because they have a UMA (unified memory architecture), where you don't distinguish between system and video memory.

Andrea: 8-bit alpha is a bit of overkill for a cursor, unless you want cool fading effects.

Spanky: Just use the textured quad/triangle approach. Be sure to set your blending mode correctly (I think TheMummy got it right).

Peace,

MK42

OK, let me clarify here. First of all, I'm not trying to start a D3D/OGL battle; I've been down that route before. If I want to criticize a part of the API (and no decent raster graphics is a legitimate complaint) then it's my god-given right to do so. If it pains you so much to read posts in a flame war, then don't. I find it humorous when people cry about arguments... just don't read the posts and get on with your life.
Second, I agree that texturing a quad DOES work better on most implementations. glDrawPixels runs very fast for me; I took no performance hit with it. The reason I think it's backwards is that I don't think you should have to texture a quad just to get some raster graphics on the screen. What I said was more of a bitch at the hardware vendors and something that needs to be corrected in OGL.
If anyone doubts that I'm telling the truth, check out the demo on my site (click on my name). Not a ton of raster graphics, but certainly a substantial amount. Actually, I'd appreciate it if people would try it and let me know how it runs on their system/OGL implementation.
Shit, maybe I will go back to textured quads if DrawPixels is causing this kind of chaos.

Good call about Microsoft too, Nicodemus. I don't know wtf is up with not implementing DD with OGL as well as D3D. Couldn't be that hard for a multi-billion-dollar corp.

-BacksideSnap-

quote:
Original post by MK42

Nicodemus ... what do you mean by "that btw, microsoft could've implemented for OpenGL too" I don't think Microsoft developed OpenGL. I believe it was SGI. On their platforms (high-end workstations) this glDrawPixels()-problem also isn't such a big issue, because they have a UMA (unified memory architecture), where you don't distinguish between system and video memory.



I know that OpenGL was developed by SGI, but I mean that Microsoft could've implemented a way to use OpenGL with DirectDraw. Sometimes you want to write directly to the frame buffer, and there's currently no way to do that on Windows. It looks like some people have done it (using DD with OpenGL), but it's too much of a hack and it has performance hits.


quote:
Original post by -BacksideSnap-

OK, let me clarify here. First of all, I'm not trying to start a D3D/OGL battle; I've been down that route before. If I want to criticize a part of the API (and no decent raster graphics is a legitimate complaint) then it's my god-given right to do so. If it pains you so much to read posts in a flame war, then don't. I find it humorous when people cry about arguments... just don't read the posts and get on with your life.


Yeah, I agree that you can criticize the API, but that doesn't give you the right to call people "lame" if they don't do things the way you do.


quote:
Original post by -BacksideSnap-

Good call about Microsoft too, Nicodemus. I don't know wtf is up with not implementing DD with OGL as well as D3D. Couldn't be that hard for a multi-billion-dollar corp.


Actually, it isn't hard. It's just that Microsoft doesn't want to help OpenGL, since they want D3D to come out on top. So they've slowed OpenGL updates down a lot (I think they plan to release version 1.2 for Windows at the end of the year, but I don't believe it) in favor of D3D... Micro$oft sucks big time!



Edited by - Nicodemus on June 20, 2000 11:57:57 PM

