OpenGL: Am I blending this correctly?


Bozebo    108
I am in the process of learning OpenGL. I have been using various resources (including the outdated NeHe tutorials, which are a good starting point; otherwise I would not know how things used to work) to figure everything out. At the moment I have a basic understanding of texturing, vertex manipulation and a few other vital techniques.

My next step is to get to grips with blending. I took a look at the particles example, which looks nice and which I understand, but my problem is that it only blends properly against a black background. So I took a look at the next tutorial, which shows off masking. I now essentially have a circular gradient disc which blends correctly in front of a textured/coloured opaque surface, which is good. But to do so I need two textures and a double pass, as NeHe's masking tutorial does. Is that not bad practice? Should I be looking at other blending methods to get the job done, and if so, can somebody point me in the right direction?

I have had a play with the different values for glBlendFunc, but I can't get it to appear correctly without a double pass like NeHe's. I am aware I could modify my texture loading routine to create a mask for the image, which would remove the need for two source bitmaps, but I want to understand fully what is going on so I can learn at a steadier rate and get to grips with things before I delve into the world of fragment shaders and other more complicated operations.

Take a look at my current source (a Dev-C++ project, but it shouldn't be hard to get it working in something else). Thanks for your time so far.

Erik Rufelt    5901
Try using glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA). I think NeHe uses GL_ONE for destination blending, as otherwise you can get artifacts when particles aren't drawn in front to back order. It also depends on what effect you want to achieve. Explosions etc. often look good with additive blending (GL_ONE), as everything gets brighter where particle density is higher.
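As a plain-arithmetic illustration of what those two blend modes compute per colour channel (no GL context needed; the helper names are mine, not OpenGL's):

```c
/* Standard alpha blending, per colour channel in [0,1]:
   out = src * srcAlpha + dst * (1 - srcAlpha)
   -- what glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) computes. */
float blend_alpha(float src, float dst, float srcAlpha) {
    return src * srcAlpha + dst * (1.0f - srcAlpha);
}

/* Additive blending: out = src * srcAlpha + dst, clamped to 1
   -- what glBlendFunc(GL_SRC_ALPHA, GL_ONE) computes. */
float blend_additive(float src, float dst, float srcAlpha) {
    float out = src * srcAlpha + dst;
    return out > 1.0f ? 1.0f : out;
}
```

A half-transparent white particle (src = 1, srcAlpha = 0.5) over a mid-grey background (dst = 0.5) gives 0.75 with alpha blending, but saturates to 1.0 with additive blending, which is why additive particle effects grow brighter where density is higher.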

Bozebo    108
I am not trying to freeload code off people or anything, but this is my frame's drawing code.

I tried a few adjustments and couldn't get any results I was happy with.

I tried a single pass with various blending methods; none of them looked correct over the opaque background geometry, though a few gave nice effects over the black background. With a double pass I tend to end up with bad-looking blending unless I stick to GL_DST_COLOR, GL_ZERO for the mask and GL_ONE, GL_ONE for the particle:

//start of frame

//object step code etc

//enter the drawing phase
//clear the screen and z-buffer

//draw events

//glTranslatef(0,-16,-128); (this is the same as -camera.x etc, just added a class to it below for ease)

//not working with the frustum yet, I will leave that for another time

glTranslatef(-camera.x,-camera.y,-camera.z); //basic camera implementation

glPushMatrix(); //don't really need this do I?
glCallList(ground); //coloured + checkered platform to test blending

//draw particles
glDepthMask(false); //disable updating the z-buffer
glEnable(GL_BLEND); //enable blending

//test particle
glTranslatef(0,partY,0); //positioning (no rotation yet, make it "2d"?)

glBlendFunc(GL_DST_COLOR,GL_ZERO); //blend the mask
glCallList(part); //draws the quad

//the next bit is sort of what I mean by double pass (the non-single pass bit being the mask above)

glColor4f(1,0,0,0.2); //translucent + red
glBindTexture(GL_TEXTURE_2D,partOrb.texture); //inverted form of the mask
glBlendFunc(GL_ONE,GL_ONE); //blend the particle
glCallList(part); //draws the quad

glDisable(GL_BLEND); //no blending (back to opaque)
glDepthMask(true); //did I need to change this? or is it causing issues?
glColor4f(1,1,1,1); //white, opaque
glPopMatrix(); //matches the glPushMatrix above

//end of frame
//get input etc

"orb" particle texture:

(the mask is an inverted form of this, i.e. black in the centre, radiating in a gradient to a white exterior)

What are your thoughts?

I have added various comments, so if you want to say anything about my code it would help me a lot to hear it. Most of my work is "blind", as I can't find any proper tutorials that combine effects in a practical manner (e.g. NeHe's particles don't work with anything other than black behind them).

Erik Rufelt    5901
Does your texture actually have an alpha channel, or are you using that image as the color?
If it's the color, you could try GL_ONE, GL_ONE_MINUS_SRC_COLOR. Usually you probably want that image to represent the alpha channel, and keep the RGB channel as all white, if you want to colorize it with glColor4f. How do you load your textures?
To use alpha you need a format that supports it, for example PNG. Alpha channels can be saved in BMPs but whether they're actually used depends on how you load the image, as there's no alpha in the BMP specification.

Bozebo    108
There are no alpha channels; the images are loaded from raw 24bpp data.

So, OpenGL textures support alpha channels directly?

Here is my texture class:

class texture{
  GLuint texture;
  int width,height;

  bool loadRAW(char* path,int w = 128,int h = 128){
    width = w;
    height = h;

    BYTE* data = new BYTE[width * height * 3]; //3 bytes per pixel
    FILE* file;

    //open and read texture data
    file = fopen(path,"rb"); //attempt to open the file
    if(!file){ //check the file could be opened
      delete[] data;
      return false;
    }

    fread(data, width * height * 3, 1, file); //copy file contents into the buffer
    fclose(file); //close the file

    //allocate a texture resource
    if(repeat){ //if it repeats
      //set as repeating
    }

    //set filtering

    //set as target
    //set texture data

    delete[] data; //free buffer

    return true;
  }
};
(I think in some of my other workings I have a version of loadRAW which uses edge size instead of width/height as all the textures are going to be square)

I don't like using that buggy/dated/flawed glut library ^_^ so I opted for raw textures.

Would alpha channels solve my problems?

So where I use glColor4f(1,0,0,0.2);
it would render as the alpha of the pixel in the texture multiplied by 0.2?

So in the end I would only need glColor3f, as the alpha is declared more sensibly?

The nature of NeHe's tutorials doesn't exactly help my understanding of other techniques.

Am I looking towards a different use of gluBuild2DMipmaps, or another function for similar purposes? I thought that to implement an alpha map you had to use blending, and that was the idea behind the double pass.

Erik Rufelt    5901
Use 4 instead of 3, and GL_RGBA instead of GL_RGB for gluBuild2DMipmaps, and specify a data array with 4 bytes per pixel, where the last byte will be the alpha.
You could do it in your load function, by setting the alpha channel to the RGB average and setting RGB to 255, or save a RAW with an alpha channel if the program you use supports it.
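As a rough sketch of the first option (the function name and the tightly packed RGB8 buffer layout are my assumptions), expanding a 24bpp buffer into RGBA where alpha is the RGB average and the colour is forced to white:

```c
#include <stddef.h>

/* Expand a tightly packed 24bpp RGB buffer into a 32bpp RGBA buffer:
   RGB is forced to white so glColor4f can tint it freely, and alpha
   becomes the average of the original R, G and B. */
void rgb_to_white_rgba(const unsigned char *rgb, unsigned char *rgba,
                       size_t pixelCount) {
    for (size_t i = 0; i < pixelCount; i++) {
        unsigned int avg = (rgb[i*3] + rgb[i*3 + 1] + rgb[i*3 + 2]) / 3;
        rgba[i*4]     = 255;                /* R */
        rgba[i*4 + 1] = 255;                /* G */
        rgba[i*4 + 2] = 255;                /* B */
        rgba[i*4 + 3] = (unsigned char)avg; /* A */
    }
}
```

The resulting buffer can then be passed to gluBuild2DMipmaps with GL_RGBA and 4 components instead of GL_RGB and 3.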

> So where I use glColor4f(1,0,0,0.2);
> it would render as the alpha of the pixel in the texture multiplied by 0.2?

Yes, this is correct.

Bozebo    108
Great, thanks for the help!

I think I will form the alpha channel from the source image itself (finding the "lightness" in an HSL fashion), since for particles the effect I want is basically a shape coloured according to the magnitude of an alpha.
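The HSL "lightness" mentioned above is (max + min) / 2 of the three channels; a minimal sketch (the helper name is mine, not from any library):

```c
/* HSL-style lightness of an 8-bit RGB pixel: (max(R,G,B) + min(R,G,B)) / 2.
   A hypothetical helper for deriving an alpha channel from colour data. */
unsigned char hsl_lightness(unsigned char r, unsigned char g, unsigned char b) {
    unsigned char max = r, min = r;
    if (g > max) max = g;
    if (b > max) max = b;
    if (g < min) min = g;
    if (b < min) min = b;
    return (unsigned char)((max + min) / 2); /* promoted to int, no overflow */
}
```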

[Edited by - Bozebo on March 13, 2010 10:18:20 AM]


