OpenGL Bleeding Color in Linear Filtered Textures

JonW    173
Hi, I'm working on a 2D game with Direct3D version 2. I create all my textures as 32-bit with an alpha channel and use bilinear filtering for both scaling down and scaling up. The textures have an opaque portion surrounded by a black field that has an alpha of 0. When I display the textures with the nearest filter, they look OK, but when I use the bilinear filter the opaque parts of the image get blended with the black, so they have a dark border around them. In OpenGL, I was able to fix the color bleeding by making the surrounding pixels black (a=0, r=0, g=0, b=0). In Direct3D it still bleeds anyway; apparently they must use different algorithms for bilinear filtering. I want to be able to have an alpha gradient in images, rather than just all-or-nothing transparency. Is there any way I can keep the border from showing?

ViLiO    1326
Adding a 1-pixel border around your sprite textures should be enough in Direct3D as well (I'm not 100% sure on this btw [smile])

The only thing I can think of is that if you are using the D3DXSprite interface and aren't passing it a RECT that reflects the added pixel border, you would still be facing the same bleeding problems. The same goes for screen-aligned quads and their texture coordinates.

All the best,
ViLiO

JonW    173
I'm just drawing a textured quad with DrawPrimitive.

The border is a result of the opaque color blending with the transparent background color when linear filtering is used.

ViLiO    1326
Quote:
Original post by JonWoyame
Sorry if I was unclear, but I'm actually trying to get rid of the border that's around the sprites.

The border is a result of the opaque color blending with the transparent background color when linear filtering is used.

Assuming I'm still not misunderstanding this [wink] ...I'll expand on what I said previously.

I am suggesting that in your paint program you add a 1-pixel border around your texture (i.e. the edge colour and alpha are extended out by 1 pixel in all directions). Then you shift your texture coords (or RECT for D3DXSprite) in by one pixel. This now means that when filtering, the outermost pixels of your sprite will get filtered against the added 1-pixel border.
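If it helps, here's a rough sketch of what the coordinate shift looks like in code (my own illustration, not from this thread; the vertex struct and texture sizes are hypothetical):

struct SpriteVertex
{
    float x, y, z, rhw; // pre-transformed position (D3DFVF_XYZRHW)
    float u, v;         // texture coordinates (D3DFVF_TEX1)
};

// Inset the quad's UVs by one texel so the 1-pixel padding border is
// sampled by the filter but never mapped onto the quad's edge.
void InsetUVsByOneTexel(SpriteVertex quad[4], int texWidth, int texHeight)
{
    const float du = 1.0f / texWidth;  // one texel step in u
    const float dv = 1.0f / texHeight; // one texel step in v
    quad[0].u = du;        quad[0].v = dv;        // top-left
    quad[1].u = 1.0f - du; quad[1].v = dv;        // top-right
    quad[2].u = du;        quad[2].v = 1.0f - dv; // bottom-left
    quad[3].u = 1.0f - du; quad[3].v = 1.0f - dv; // bottom-right
}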

Hope this helps, [smile]
ViLiO

JonW    173
I shouldn't really say "border"; it's a black outline around the image contents. For example, I have a hand cursor texture, and the cursor has a dark outline around it because it is blending with the neighboring black pixels, which have an alpha of 0.

ViLiO    1326
Ok, diagram time [lol]

Let's say we have a sprite ...we'll call him whiskers [wink]
[image: the sprite, whiskers]
When whiskers is filtered, the outer ring of pixels will get filtered with some fully black, opaque pixels (assuming this is correct for Direct3D [smile]). This will of course cause some artifacts, as we want the white pixels to be transparent.

One solution is to add a border (or padding) of pixels around the outside of the sprite (this can be done in your paint program, or you can modify the image yourself when you load it as a texture into Direct3D).
Here's one we prepared earlier ...
[image: whiskers with a 1-pixel padding border added]
Now if we were to load and use this texture exactly as before, the artifacts would still be there, as the outer ring of white pixels would still be getting filtered with some black ones. So the solution is to shift your UV coords in by 1 pixel on all sides.
Like so....
[image: the padded texture with the UV rectangle inset by one pixel, marked as a red line]
This now means that the very outer ring of pixels will still get filtered with some black ones, but they won't be part of our visible texture. And the ring of pixels just inside our UV area (the red line) will have valid neighbouring pixels all around them, so zero artifacts [grin]
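If you'd rather add the padding when you load the image instead of in a paint program, here's a minimal sketch of the idea (plain C++ on a raw RGBA buffer; the helper is mine, not a D3DX call):

#include <algorithm>
#include <cstdint>
#include <vector>

// Copy a w x h RGBA8 image into a (w+2) x (h+2) buffer, replicating the
// edge texels into the new 1-pixel border so the filter always has valid
// neighbours to sample.
std::vector<uint32_t> AddOnePixelPadding(const uint32_t* src, int w, int h)
{
    const int pw = w + 2, ph = h + 2;
    std::vector<uint32_t> dst(static_cast<size_t>(pw) * ph);
    for (int y = 0; y < ph; ++y)
    {
        const int sy = std::min(std::max(y - 1, 0), h - 1); // clamp source row
        for (int x = 0; x < pw; ++x)
        {
            const int sx = std::min(std::max(x - 1, 0), w - 1); // clamp source column
            dst[static_cast<size_t>(y) * pw + x] = src[static_cast<size_t>(sy) * w + sx];
        }
    }
    return dst;
}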

As I said, this is one solution; there are others [wink] ...

You could just shift the UV coords in by half a pixel without adding any padding pixels ...this works, but can still lead to the same artifact problems when you scale sprites

or

You could clamp the texture ...this won't work if you want to have more than one sprite per texture (sprite sheets)
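(For completeness, clamping in Direct3D 9 would just be a sampler state; a two-line sketch, assuming a device pointer pDevice:)

// Clamp addressing: samples outside [0,1] reuse the edge texels instead of
// wrapping around to the opposite side of the texture.
pDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
pDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);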

Seriously hope this helps [lol]
ViLiO

[Edited by - ViLiO on May 19, 2006 4:33:00 PM]

don    431
ViLiO,

He isn't getting bleeding from sampling at the edges of the texture - it's because his transparent areas within the texture are black. If you take the image you posted and imagine that the white area around the object (cat?) is black and has an alpha of 0, that is what I think his image looks like. It's hard to tell, but it almost sounds like he doesn't have alpha blending enabled.


JohnnyCasil    373
You want to set up an alpha test on the pixels, so that any pixel below a certain alpha threshold is tossed before it even gets processed. This should solve your bleeding issues, because the edges will have nothing to bleed with.
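In Direct3D 9 that would look something like this (a minimal sketch, assuming a device pointer pDevice; the threshold value is just an example):

// Discard any fragment whose sampled alpha falls below the reference value.
pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ALPHAREF, 0x08);
pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);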

ViLiO    1326
Quote:
Original post by JonWoyame
I shouldn't really say "border"; it's a black outline around the image contents. For example, I have a hand cursor texture, and the cursor has a dark outline around it because it is blending with the neighboring black pixels, which have an alpha of 0.

Oh, do you mean something like this?
[image: a light-coloured pointer sprite on a dark background]
...and you don't want the white pointer to have a dark border around it when filtering and alpha-blending?

Cause if that is the case, then the padding could be added around the pointer itself and not the edges of the whole texture. You could take the colour from a neighbouring light-coloured pointer pixel and the alpha value from a neighbouring dark mask pixel, and it would probably work.

If this ain't it then maybe you should provide some screenshots yourself [wink]

All the best,
ViLiO

JohnnyCasil    373
That isn't the problem. The problem is that he is using linear filtering on the texture. This causes the texture's colors to blend together in order to get rid of the pixelation. Since he is using alpha-blending, the edges of the sprite that touch the transparent part of the texture are blending. Think of it like this: you have an image that has a color of red, and right beside it you have a completely transparent color of black. Because of the linear filtering, instead of just ending at the red like he wants, the colors blend from the red to the transparent black, and it leaves an outline. To get rid of this you can add an alpha test that will discard those pixels so they will not be processed in the linear filter.
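To put rough numbers on that (my example, not from the thread): sample exactly halfway between an opaque red texel, RGBA = (1, 0, 0, 1), and a transparent black texel, RGBA = (0, 0, 0, 0). Bilinear filtering averages every channel, giving (0.5, 0, 0, 0.5): a half-bright red at 50% opacity. Alpha-blending that onto the scene draws a darkened red fringe where the sprite should either be pure red or invisible.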

ViLiO    1326
Well arguably he did say...

"I want to be able to have an alpha gradient in images, rather than just all-or-nothing transparency"

...and alpha-testing does result in a sharp cut-off, not the smooth gradient that alpha-blending provides [smile]

Of course, a picture paints a thousand words ...so if the answer hasn't already been provided, then screenshots of the problem are in order [wink]

Regards,
ViLiO

JonW    173
Just out of interest, does anyone know why OpenGL doesn't have this problem with bilinear filtering when the border pixels are black? I know the Direct3D algorithm is using a box filter.

Quote:
Original post by JohnnyCasil
That isn't the problem. The problem is that he is using linear filtering on the texture. This causes the texture's colors to blend together in order to get rid of the pixelation. Since he is using alpha-blending, the edges of the sprite that touch the transparent part of the texture are blending.


Yep, right on the money.

I don't have the code in front of me right now, so I can't try alpha testing just yet. I wasn't aware that the alpha-tested pixels are totally thrown out before the texture filtering stage.

I'll try it out and get back with how it worked.

JohnnyCasil    373
Quote:
Original post by ViLiO
...and alpha-testing does result in a sharp cut-off, not the smooth gradient that alpha-blending provides [smile]


You are right. I forgot to clarify that alpha-testing will only solve the problem of the edge pixels, but if you set the alpha test to only throw out small alpha values, blending should still be performed with the rest of the data.
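Putting the two together (again a minimal sketch, assuming a D3D9 device pointer pDevice; the low reference value keeps the alpha gradient intact):

// Alpha-blend as usual, but discard the nearly-transparent fragments so the
// dark, mostly-transparent fringe never reaches the framebuffer.
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ALPHAREF, 0x04); // low cut-off only
pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);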

Quote:
Original post by JonWoyame
I don't have the code in front of me right now, so I can't try alpha testing just yet. I wasn't aware that the alpha-tested pixels are totally thrown out before the texture filtering stage.


I'm pretty sure it will work. I've personally done it before, but this is all going off the top of my head. I don't remember if there is more to it than this or not, but I am pretty sure that the data will get thrown out beforehand.

don    431
Yes, it will work. The pixels that pass the alpha test will be linearly filtered, and the pixels that fail will be transparent.

Does OpenGL have an alpha test state? It could be that it doesn't have this specific state and instead always tests if the alpha value is 0 before performing the filtering operation.
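(For what it's worth, fixed-function OpenGL does expose an equivalent state; a minimal sketch:)

// Fixed-function OpenGL: discard fragments whose alpha is 0.
glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f);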

LeGreg    754
Premultiplied alpha is the way to go.

If you don't know how, a little bit of maths will get you there.

What you really want is for your blending to be something like this:

final color = 
w1 * (alpha1 * color1 + (1 - alpha1) * dest)
+ w2 * (alpha2 * color2 + (1 - alpha2) * dest)
+ w3 * (alpha3 * color3 + (1 - alpha3) * dest)
+ w4 * (alpha4 * color4 + (1 - alpha4) * dest);



Here wi is the bilinear weight of the i-th texel (sum(wi) = 1), alphai is 0 or 1 depending on where you stand, and dest is the destination color. There is only one dest color per pixel, obviously.

The goal of the formula above is that only the texels with alpha != 0 contribute to the final color, so that no black bleeds into your picture. Of course, the above formula is too complicated for fixed-function hardware, and even a pixel shader would be horribly slow if we tried to implement it as such.

Now, how can we simplify this to make it work on all hardware?

First you can distribute wi:

final color =   
w1 * (alpha1 * color1) + w1 * (1 - alpha1) * dest
+ w2 * (alpha2 * color2) + w2 * (1 - alpha2) * dest
+ w3 * (alpha3 * color3) + w3 * (1 - alpha3) * dest
+ w4 * (alpha4 * color4) + w4 * (1 - alpha4) * dest;



Then factor out dest:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 * (1 - alpha1) + w2 * (1 - alpha2) + w3 * (1 - alpha3) + w4 * (1 - alpha4)) * dest;



Distribute wi again:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 - w1 * alpha1 + w2 - w2 * alpha2 + w3 - w3 * alpha3 + w4 - w4 * alpha4) * dest;



Then use the property that sum(wi) = 1:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 + w2 + w3 + w4
- (w1 * alpha1 + w2 * alpha2 + w3 * alpha3 + w4 * alpha4)) * dest;
final color =
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (1 - (w1 * alpha1 + w2 * alpha2 + w3 * alpha3 + w4 * alpha4)) * dest;



This looks familiar.

So from here you can guess the right solution:

Replace your texture with a new premultiplied texture; that is,
premulti = alphai * colori, and copy alphai from your original texture to your new texture unchanged.
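A minimal sketch of that conversion on a raw A8R8G8B8 buffer (the helper is mine, not a D3DX function):

#include <cstdint>

// Multiply each texel's color channels by its alpha; the alpha channel
// itself is copied through unchanged.
void PremultiplyAlpha(uint32_t* texels, int count)
{
    for (int i = 0; i < count; ++i)
    {
        const uint32_t a = (texels[i] >> 24) & 0xFF;
        const uint32_t r = ((texels[i] >> 16) & 0xFF) * a / 255;
        const uint32_t g = ((texels[i] >>  8) & 0xFF) * a / 255;
        const uint32_t b = ( texels[i]        & 0xFF) * a / 255;
        texels[i] = (a << 24) | (r << 16) | (g << 8) | b;
    }
}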

Then set the following render states:

pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE); // <- don't use SRCALPHA because the alpha is already factored in.
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA); // <- this isn't pure additive; we still have to fade the destination color for opaque texels.
pDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);



LeGreg

