Alpha blending with OpenGL

Started by
5 comments, last by sakky 19 years ago
Hello, I've been working on this little engine of mine. It uses OpenGL! I haven't used OpenGL in a long time; I've mostly used Direct3D. But I feel like a change and want to code with a different API.

Back on topic, I've decided that there are two types of polygons that I need to render: one for models and one for special effects. The model polygons don't really use any blending, but the special-effect ones do. The special-effect polygons are used for particles and such things. Anyway, I figured I would only use one vertex format for everything so I would only have to work with one type of vertex structure. So I've come to the conclusion that all I need is: position, normal, color, and texture coordinates.

Each model polygon may use two different textures: one for a basic image and the other for detail (sort of like a bump map, but not). But because the detail texture will be the same size as the base texture, I wouldn't need another set of texture coordinates; I just use the same ones as the base. I can blend them with multi-pass or multi-texturing; either way it's the same vertex structure I use for everything.

So my question is, what are the blending operations that I need to set up for source and destination? I should tell you that all textures will have an alpha channel. The base textures will be RGBA and the detail texture will be 16-bit grayscale with an alpha channel. I use the alpha channels to blend the textures. That way, if I want the special effects to be blended, I only specify the blend factor with the alpha. That being said, I want the alpha value to dictate how the polygons with their textures will be blended. I'm using this: glBlendFunc( GL_SRC_ALPHA, GL_ONE )! I haven't tested it with any polygons yet because I'm still in the design stage; I'm just writing out the basic framework.

I want to set all the render states for OpenGL once and not have to mess with them again at all. Unless of course I have to refresh the application because the user task-switched out of it. But you get the basic idea: one-time initialization. Now that everything is done with polygons (triangles), I only have three types of primitives to draw: single triangles, triangle strips, and triangle fans. That's a pretty easy way of doing things, I think. I really don't need a whole lot to make a complex scene, or I don't want a whole lot. Only using one vertex structure for everything in the engine is really efficient, in my eyes at least.

So are my blending operations correct? Should I not use one type of structure for all vertex information? How do I use the alpha channel to govern the transparency / translucency of my polygons? I think I had it right, do I? Just simple questions for all you pros out there. I hope you can help me.
To the OpenGL forum with this, I think...
Quote:I've decided that there are two types of polygons that I need to render. One for models and ones for special effects.
Separate the model from the material, i.e. do

struct MM {
    int modelID;
    int materialID;
};

and not

struct Model {
    vector *verts;
    /* etc */
    int textureID;
};

Quote: I want to set all the render states for OpenGL once and not have to mess with them again at all. Unless of course I have to refresh the application because the user task-switched out of it. But you get the basic idea: one-time initialization.
Unless everything is drawn with the same material (OpenGL states/textures etc.), you will have to change something; e.g. trees wouldn't use the same material as cars.

Quote: Now that everything is done with polygons (triangles), I only have three types of primitives to draw: single triangles, triangle strips, and triangle fans. That's a pretty easy way of doing things, I think.
Even easier: just use GL_TRIANGLES for everything. A lot of apps do this, e.g. Quake 3.

Quote: So are my blending operations correct? Should I not use one type of structure for all vertex information? How do I use the alpha channel to govern the transparency / translucency of my polygons? I think I had it right, do I?
Often-used blend modes are (GL_ONE, GL_ONE) and (GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA), i.e. one size does NOT fit all.
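For reference, a rough software sketch of the per-channel arithmetic those two modes perform (function names are mine; channels are floats in [0, 1], as the framebuffer effectively stores them):

```c
#include <assert.h>

/* glBlendFunc(src_factor, dst_factor) computes, per channel:
   result = src * src_factor + dst * dst_factor                  */

/* Additive blending: glBlendFunc(GL_ONE, GL_ONE) */
float blend_additive(float src, float dst) {
    float r = src + dst;
    return r > 1.0f ? 1.0f : r;   /* the framebuffer clamps to [0,1] */
}

/* Standard alpha blending:
   glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) */
float blend_alpha(float src, float dst, float src_alpha) {
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

Additive blending only ever brightens (good for particles and glows); standard alpha blending interpolates toward the source color, which is what you want for translucent models.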
I really didn't mean that the actual model's and material's data would be combined in the same structure. Sorry, I should have clarified better.

What I meant was one vertex format that is used for all polygons. Hence, the world mesh, model meshes, and particles will all use the same vertex format. The actual polygon is just a simple structure like so:

// Vertex definition
//
typedef struct _GLVERTEX {
    FLOAT X, Y, Z;
    BYTE  R, G, B, A;
    FLOAT S, T;
} GLVERTEX, *LPGLVERTEX;
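An interleaved layout like this works directly with the gl*Pointer calls: the stride is sizeof the vertex and the offsets come from offsetof. A sketch with the Windows typedefs spelled out as plain C types (the struct name here is mine, and the offsets assume the usual 4-byte float alignment):

```c
#include <stddef.h>

/* Same layout as GLVERTEX above, in plain C types. */
typedef struct {
    float         x, y, z;    /* position          */
    unsigned char r, g, b, a; /* color             */
    float         s, t;       /* texture coords    */
} GLVertex;

/* With an array of these, the pointer setup would look like:
   glVertexPointer(3, GL_FLOAT, sizeof(GLVertex),
                   (char *)verts + offsetof(GLVertex, x));
   glColorPointer(4, GL_UNSIGNED_BYTE, sizeof(GLVertex),
                  (char *)verts + offsetof(GLVertex, r));
   glTexCoordPointer(2, GL_FLOAT, sizeof(GLVertex),
                     (char *)verts + offsetof(GLVertex, s));       */
```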

// Polygon definition
//
typedef struct _GLPOLYGON {
    DWORD dwVertices[ 3 ];
    UINT  uBaseTexID;
    UINT  uBumpTexID;
} GLPOLYGON, *LPGLPOLYGON;

That's it! Basically I use these same structures throughout the entire engine for every 3D object.

My real question is how I should set up blending. The textures will all have an alpha channel. Also, alpha testing as well as alpha blending will be enabled. I use both so that the edges that pass the test can also look smooth.
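As a rough software sketch of what that combination does to one fragment channel, assuming glAlphaFunc(GL_GREATER, cutoff) for the test and glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA) for the blend (the function name is mine):

```c
#include <assert.h>

/* One fragment channel through alpha test, then alpha blend.
   Returns the new framebuffer value for that channel.         */
float shade_fragment(float src, float src_alpha,
                     float dst, float cutoff) {
    if (src_alpha <= cutoff)   /* alpha test fails: fragment discarded */
        return dst;
    /* alpha blend: interpolate toward the source color */
    return src * src_alpha + dst * (1.0f - src_alpha);
}
```

The alpha test kills fully transparent cutout texels outright (so they don't write depth), while the blend smooths the partially transparent edge texels the test lets through.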

Anyway, I'll stick with GL_TRIANGLES then, as you suggested. About your material idea, well, I'm already doing something similar with the polygon structure; the only difference is that it contains vertex indices too. The 3D object or entity will be represented a lot differently from those two simple structures. I'm sorry if you misunderstood me and thought I meant representing (data- and logic-wise) 3D objects using the same structure. I meant same as in the same primitive geometry.

I will explain a little further so that you may better understand what I'm trying to accomplish here.

Basically, special effects will be blended (i.e. particles and such) and I want to control their blending using the alpha channel. Also, characters and other 3D objects will appear translucent at times. So I figured that I would control that with the alpha channel too.

So my setup uses the color component to blend color with the textures, and also uses the alpha component for translucency. I use the alpha channels in the textures to clip out areas that I don't want to be seen. I do this so that I can get better-looking 3D objects with fewer polygons. Meanwhile, the alpha component of the color will blend the polygon if needed.

My problem is that I'm seeing that I will not be able to use one blending operation for everything. I may be able to use the alpha channel to blend particles and such, but blending models to make them look ghostly or something might not work the same way. I figured that I would be able to blend specific parts of a texture if the alpha were, say, blurred in the image area around the edges of something that I needed to clip out. This would make it look smoother and not so jaggy, and it would do this because of the alpha components in the texture. But I want the alpha component in the color to override or add to the effect when I decrease it. I'm guessing that this isn't possible.
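For what it's worth, under the fixed-function GL_MODULATE texture environment (glTexEnvi with GL_TEXTURE_ENV_MODE set to GL_MODULATE) the vertex color's alpha does multiply the texture's alpha, so lowering the color alpha scales the whole cutout effect, blurred edges included. A minimal sketch of that arithmetic (function name is mine):

```c
#include <assert.h>

/* GL_MODULATE combines texel and vertex color per component:
   final = texel * color, so final alpha = texel_a * vertex_a.
   Dropping the vertex alpha therefore fades the entire texture,
   including its soft cutout edges.                              */
float modulate_alpha(float texel_alpha, float vertex_alpha) {
    return texel_alpha * vertex_alpha;
}
```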
typedef struct _GLPOLYGON {
    DWORD dwVertices[ 3 ];
    UINT  uBaseTexID;   // <- this doesn't belong here but in the material
    UINT  uBumpTexID;   // <- ditto
} GLPOLYGON, *LPGLPOLYGON;

Keep the mesh and material structures totally separate. Say the polygon/mesh suddenly changes material, e.g. the player wears a green shirt instead of a polka-dot one; with your example you will have to go through all the polygons changing various IDs.

Quote: My real question is how I should set up blending. The textures will all have an alpha channel. Also, alpha testing as well as alpha blending will be enabled. I use both so that the edges that pass the test can also look smooth.
Only enable them if they're needed, else performance will suffer.
Okay, I think I see what you mean now. Using a pointer to a material instead of pointers to texture identities. That way, if I want to change textures on the object, all I have to do is reassign the pointer. Better yet, I only use material pointers for objects and don't have them associated with the polygons at all. So then I would have:

// New polygon definition
//
typedef DWORD GLPOLYGON[ 3 ];
typedef GLPOLYGON *LPGLPOLYGON;

// Material definition
//
typedef struct _GLMATERIAL {
    UINT uBaseTextureID;
    UINT uBumpTextureID;
    UINT uBlendFlags;
} GLMATERIAL, *LPGLMATERIAL;

// Basic object definition
//
typedef struct _OBJECT {
    LPGLPOLYGON pMesh;
    GLMATERIAL  glMaterial;

    ...

} OBJECT, *LPOBJECT;

That looks a lot better! Then all I have to do is set the material and draw the GL_TRIANGLES. And by keeping the blend flags with the material, I can use them to set the blend operations.
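A sketch of how uBlendFlags could pick the blend factors. The flag values, the function, and the stand-in enums are all made up for illustration; real code would pass actual GL enums to glBlendFunc (and glDisable(GL_BLEND) for the opaque case):

```c
#include <assert.h>

/* Stand-ins for the GL blend-factor enums, just for this sketch. */
enum { BF_ONE, BF_ZERO, BF_SRC_ALPHA, BF_ONE_MINUS_SRC_ALPHA };

/* Hypothetical values for GLMATERIAL.uBlendFlags. */
enum { BLEND_NONE, BLEND_ADDITIVE, BLEND_ALPHA };

typedef struct { int src, dst; } BlendFactors;

/* Map a material's blend flags to (src, dst) factors; the caller
   would then call glBlendFunc(f.src, f.dst) with the real enums. */
BlendFactors factors_for(unsigned blendFlags) {
    BlendFactors f;
    switch (blendFlags) {
    case BLEND_ADDITIVE:   /* particles, glows */
        f.src = BF_SRC_ALPHA; f.dst = BF_ONE;
        break;
    case BLEND_ALPHA:      /* translucent models */
        f.src = BF_SRC_ALPHA; f.dst = BF_ONE_MINUS_SRC_ALPHA;
        break;
    default:               /* BLEND_NONE: equivalent to no blending */
        f.src = BF_ONE; f.dst = BF_ZERO;
        break;
    }
    return f;
}
```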

By the way, the bump texture identity is used as a detail texture (more or less a bump). It's blended over the polygon to give it more detail, a little bump of sorts, without making expensive bump calculations. Also, the normal is used for specular highlights and a little lighting as well. But the real lighting comes from dynamic lighting with light maps and such.

So I'll take your advice and use the material object. I guess I wasn't really thinking on that one, huh? Or I just wasn't thinking ahead. Maybe I should extend the material structure a little so that I can also specify which colors to alpha-test for.

Thanks!
Actually, I'm going to change something again. I will change the polygon structure to just an array of three LPGLVERTEXs, because if I use an array of three DWORDs then I'm limited in the number of polygons I can represent, but if I use vertex pointers then I'm not limited at all. Besides system resources, I can have huge worlds that have like 12 million polygons as opposed to a little over 4 million with a DWORD.
