Archived

This topic is now archived and is closed to further replies.

Spanky

OpenGL Transparent, Not Translucent

Recommended Posts

Me again. OK, I have a bitmap of the cursor I wish to use in my GUI. It loads fine and everything, so that's not the problem. The problem is that I want the white lines that make up the cursor to be totally solid, so they don't blend with whatever's behind them. This sounds simple, but I also want the background around the cursor to be totally see-through, so the cursor isn't a big block, and whatever is in the middle of my cursor to be translucent, so it blends with whatever's behind it. Take the normal Windows cursor: it has black lines defining it, and it's filled in with white. The surroundings are transparent. I want to take the white and make it translucent, and the black lines (mine are white, but who cares) to be totally solid so they are always black (white in my case). I was thinking of creating a really ugly color for what I want transparent and another ugly one for translucent (like colorkeying in DX). Is there a way to do that in OpenGL? Thanks. Edited by - Spanky on 6/16/00 2:20:58 AM

Hi,
I've already played around with transparency effects in OpenGL.

As far as I understand, you use a quad with a texture on it. Load the bitmap into a program like Adobe's Photoshop and add a fourth channel there; this channel is the alpha component. The transparent parts should be black (0), and the translucent parts somewhere between 0 and 255, because 255 would be totally solid. Save that as a TGA file or any other format you can read. When loading it into OpenGL texture memory, don't forget to specify the fourth channel!
The OpenGL state machine should have GL_BLEND enabled before you draw the icon. You also have to set the blending function this way:
glBlendFunc( GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA );

Finally, you have to ensure that the icon/cursor is drawn last.
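Put together, the steps above might look roughly like this (a sketch only, untested against a real loader; the function names and the `pixels`/`width`/`height` parameters are placeholders for whatever your own code provides):

```c
#include <GL/gl.h>

/* Sketch: upload the 4-channel cursor image and set up blending.
 * `pixels` is assumed to hold width*height tightly packed RGBA bytes
 * read from the TGA file. */
static void upload_cursor_texture(GLuint cursorTex, int width, int height,
                                  const unsigned char *pixels)
{
    glBindTexture(GL_TEXTURE_2D, cursorTex);
    /* GL_RGBA (not GL_RGB) so the fourth channel is kept */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}

static void set_cursor_blend_state(void)
{
    glEnable(GL_TEXTURE_2D);
    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    /* ...then draw the textured cursor quad last, after the scene */
}
```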

I do the same thing in Photoshop (add an alpha channel), only I use glDrawPixels() to put it on the screen. You could also do it using stenciling. There are a couple of tutorials reachable through opengl.org that detail this.

-BacksideSnap-

You have to load your cursor as a texture map on a quad and set the texture environment to decal, or something like that.

quote:
Original post by -BacksideSnap-

I do the same thing in photoshop (add an alpha channel), only I use glDrawPixels() to put it to the screen. You could also do it using stenciling. There are a couple of tutorials that you can get to through opengl.org that detail this.



That would work, but it's slow. glDrawPixels is too slow because the data to be drawn is kept in system memory, while if you load it as a texture it stays on the card, making it much faster to draw. The stencil approach could work too, but it's far more complicated than TheMummy's solution, and few cards accelerate stenciling.

An alternative to TheMummy's solution is to create the cursor image with colorkeying and, before passing the data to glTexImage2D, build another array of pixels containing RGBA color values. Then you loop through your data, assign the RGB values according to your original data, and check whether the current pixel has the colorkey value. If it does, you set the A value to 0; otherwise you set it to full opacity (255 for byte data).
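That colorkey pass can be sketched like this (a sketch under assumptions: the input is tightly packed 24-bit RGB, the output is 8-bit-per-channel RGBA so "full" alpha is 255, and the function and parameter names are mine):

```c
#include <stddef.h>

/* Expand 24-bit RGB pixels to RGBA, setting alpha to 0 wherever the
 * pixel matches the colorkey and 255 (fully opaque) everywhere else.
 * `src` holds `count` RGB triplets; `dst` must hold count*4 bytes. */
void colorkey_to_rgba(const unsigned char *src, unsigned char *dst,
                      size_t count,
                      unsigned char key_r, unsigned char key_g,
                      unsigned char key_b)
{
    for (size_t i = 0; i < count; ++i) {
        unsigned char r = src[i * 3 + 0];
        unsigned char g = src[i * 3 + 1];
        unsigned char b = src[i * 3 + 2];
        dst[i * 4 + 0] = r;
        dst[i * 4 + 1] = g;
        dst[i * 4 + 2] = b;
        dst[i * 4 + 3] =
            (r == key_r && g == key_g && b == key_b) ? 0 : 255;
    }
}
```

The resulting RGBA array is what you'd then hand to glTexImage2D with a GL_RGBA format.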

Hope that helps,




Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

With OpenGL you can do everything!

First of all, don't use glDrawPixels (it's too slow on some hardware implementations).
The simplest solution is to use an RGBA (32-bit) texture for your image: RGB stores the color and the A byte stores the alpha for each pixel.
You can decide (this is the default) to make alpha=0 the transparent value and alpha=255 full opacity.

You have to apply the texture to your quad and enable:

GL_TEXTURE_2D (of course)
GL_BLEND (if you want a blending... 'semitransparent' effect)
use glBlendFunc( GL_SRC_ALPHA , GL_ONE_MINUS_SRC_ALPHA )
or GL_ALPHA_TEST (if you want a simple pixel visible/invisible effect)

NOTE: if you use only alpha=0 or alpha=255, GL_ALPHA_TEST and GL_BLEND are equivalent (on some cards GL_ALPHA_TEST is faster)

use GL_REPLACE for your texture environment
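The alpha-test path could be sketched like this (the function name is mine; this assumes the texture's alpha really is a hard 0-or-255 mask):

```c
#include <GL/gl.h>

/* Sketch: reject fully transparent fragments instead of blending them.
 * Only sensible when alpha is a hard 0-or-255 mask. */
static void set_cursor_alpha_test_state(void)
{
    glDisable(GL_BLEND);
    glEnable(GL_ALPHA_TEST);
    glAlphaFunc(GL_GREATER, 0.5f);  /* keep fragments with alpha > 0.5 */
    glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_REPLACE);
}
```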

-------------------------

Another solution (useful for bitmap fonts) is to create a simple alpha map (8-bit) and apply the texture to a quad.
You can control the color using glColor(). (If I remember correctly, you use a simple GL_REPLACE with your GL_ALPHA texture, and OpenGL will combine your color's RGB with your texture's alpha.)

-------------------------

Only drawback: you have to create your alpha map!
You can load a 24-bit bitmap into a 32-bit array and set the alpha to 0 for every pixel whose RGB color equals the color you want to be transparent.
In the same manner, you can load an alpha map directly from a 256-color bitmap and replace every pixel with 0 or 0xff (ignoring the palette, of course).

-------------------------

Of course, if you play around with the texture environment you can get stranger effects.

Edited by - Andrea on June 18, 2000 12:01:18 PM

Shit, sorry guys, I forgot that I had a decent graphics card. If you're going to be cutting features out of the API due to hardware support then use fucking DirectX. glDrawPixels works fine on my GeForce, my friend's TNT, a TNT2, and even an ATI Rage Fury. Don't use a Voodoo for GL; it's like trying to bail out the Pacific with a teaspoon.
By the way, what I'm doing still works if you want to be lame enough to apply it to a texture and move a quad around that doubles as a mouse cursor. Personally I don't like doing things backwards, even if that's the fastest way. And texture mapping things to quads is backwards when all you're trying to do is simple raster graphics.
A lot of Direct3D people love to rip on OGL because of this shit.

BTW, what Andrea said would work the best.

-BacksideSnap-

-BacksideSnap-, could you explain WHY drawing a texture-mapped quad is backwards, or lame? Why do you think glDrawPixels is better, or less lame? Since OpenGL has no way to allocate video memory directly on the card for doing raster graphics, the best solution IS using textures. I don't think ANY professional game uses glDrawPixels, simply because it's TOO slow for anything. Transferring system memory to the card EVERY frame is slow, and what's lame in my opinion is doing it when you have another option.

And the Direct3D people are right about this topic: there's no way to allocate video memory in OpenGL! The only way is to use textures! That's the advantage Direct3D has over OpenGL: its DirectDraw support, which, by the way, Microsoft could've implemented for OpenGL too, but NO! They want to take on the world! BacksideSnap, try drawing a 256x256 image using glDrawPixels, then try loading it as a texture and drawing it with a quad, and see the difference... If you don't want to support older cards, well, that's your problem, because not everyone has a GeForce...

Nicodemus.

----
"When everything goes well, something will go wrong." - Murphy

Hi!


Regarding texturing a quad... it's MUCH better than using glDrawPixels(). Actually, I would propose texturing just a triangle, but one triangle more isn't going to hurt performance, so quads are fine. BTW, comparing OpenGL with DirectDraw/Direct3D isn't fair and only leads to stupid "Which API is better?" threads.

Nicodemus ... what do you mean by "that btw, microsoft could've implemented for OpenGL too"? I don't think Microsoft developed OpenGL; I believe it was SGI. On their platforms (high-end workstations) this glDrawPixels() problem also isn't such a big issue, because they have a UMA (unified memory architecture), where you don't distinguish between system and video memory.

Andrea: 8-bit alpha is a bit of overkill for a cursor, unless you want cool fading effects.

Spanky: Just use the textured quad/triangle approach. Be sure to set your blending mode correctly (I think TheMummy got it right).

Peace,

MK42

Ok, let me clarify here. First of all, I'm not trying to start a D3D/OGL battle; I've been down that route before. If I want to criticize a part of the API (and no decent raster graphics is a legitimate complaint) then it's my god-given right to do so. If it pains you so much to read posts in a flame war, then don't. I find it humorous when people cry about arguments... just don't read the posts and get on with your life.
Second, I agree that texturing a quad DOES work better on most implementations. glDrawPixels runs very fast for me; I took no performance hit with it. The reason I think it's backwards is that I don't think you should have to texture a quad just to get some raster graphics on the screen. What I said was more of a bitch about the hardware vendors and something that needs to be corrected in OGL.
If anyone doubts that I'm telling the truth, check out the demo on my site (click on my name). Not a ton of raster graphics, but certainly a substantial amount. Actually, I'd appreciate it if people would try it and let me know how it runs on their system/OGL implementation.
Shit, maybe I will go back to textured quads if DrawPixels is causing this kind of chaos.

Good call about Microsoft too, Nicodemus. I don't know wtf is up with not implementing DD with OGL as well as D3D. Couldn't be that hard for a multi-billion-dollar corp.

-BacksideSnap-

quote:
Original post by MK42

Nicodemus ... what do you mean by "that btw, microsoft could've implemented for OpenGL too" I don't think Microsoft developed OpenGL. I believe it was SGI. On their platforms (high-end workstations) this glDrawPixels()-problem also isn't such a big issue, because they have a UMA (unified memory architecture), where you don't distinguish between system and video memory.



I know that OpenGL was developed by SGI, but I meant that Microsoft could've implemented a way to use OpenGL with DirectDraw. Sometimes you want to write directly to the frame buffer, and there's currently no way to do that on Windows. It looks like some people have done this (using DD with OpenGL), but it's too much of a hack and has performance hits.


quote:
Original post by -BacksideSnap-

Ok, let me clarify here. First of all, I'm not trying to start a D3D/oGL battle, I've been down that route before. If I want to criticize a part of the api (and no decent raster graphics is a legitimate complaint) then it's my god given right to do so. If it pains you so much to read posts in a flame war, then don't do so. I find it humerous when people cry about arguments... just don't read the posts and get on with your life.


Yeah, I agree that you can criticize the API, but that doesn't give you the right to call people "lame" when they don't do things the way you do.


quote:
Original post by -BacksideSnap-

Good call about Microsoft too, Nicodemus. I don't know wtf is up with not implementing DD in with oGL as well as D3D. Couldn't be that hard for a multi-billion dollar corp.


Actually, it isn't hard. It's just that Microsoft doesn't want to help OpenGL, since they want D3D on top. So they've slowed down OpenGL updates a lot (I think they plan to release version 1.2 for Windows at the end of the year, but I don't believe it) in favor of D3D... Micro$oft sucks big time!



Edited by - Nicodemus on June 20, 2000 11:57:57 PM

