JonW

OpenGL Bleeding Color in Linear Filtered Textures


Hi, I'm working on a 2D game with Direct3D. I create all my textures as 32-bit with an alpha channel and use bilinear filtering for both scaling down and scaling up. Each texture has an opaque portion surrounded by a black field with an alpha of 0. When I display the textures with the nearest filter they look OK, but when I use the bilinear filter the opaque parts of the image get blended with the black, so they end up with a dark border around them. In OpenGL I was able to fix this color bleeding by making the surrounding pixels black (a=0, r=0, g=0, b=0); in Direct3D it still bleeds anyway, so apparently the two use different algorithms for bilinear filtering. I want to be able to have an alpha gradient in images, rather than just all-or-nothing transparency. Is there any way I can keep the border from showing?
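For reference, this is roughly how I have the filtering and blending set up (a minimal sketch from memory, assuming D3D9-style sampler and render states):

// Bilinear filtering for both minification and magnification
pDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_LINEAR);
pDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_LINEAR);

// Conventional alpha blending
pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);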

Adding a 1-pixel border around your sprite textures should be enough in Direct3D as well (I'm not 100% sure on this btw [smile])

The only thing I can think of is that if you are using the D3DXSprite interface and don't pass it a RECT that reflects the added pixel border, you would still face the same bleeding problems. The same goes for screen-aligned quads and their texture coords.

All the best,
ViLiO

I'm just drawing a textured quad with DrawPrimitive.

The border is a result of the opaque color blending with the transparent background color when linear filtering is used.

Quote:
Original post by JonWoyame
Sorry if I was unclear, but I'm actually trying to get rid of the border that's around the sprites.

The border is a result of the opaque color blending with the transparent background color when linear filtering is used.

Assuming I'm still not misunderstanding this [wink] ...I'll expand on what I said previously.

I am suggesting that in your paint program you add a 1-pixel border around your texture (i.e. the edge colour and alpha are extended out by 1 pixel in all directions). Then you shift your texture coords (or RECT for D3DXSprite) in by one pixel. This means that when filtering, the outermost pixels of your sprite will get filtered with the added 1-pixel border.

Hope this helps, [smile]
ViLiO

I shouldn't really say "border"; it's a black outline around the image contents. For example, I have a hand cursor texture, and the cursor has a dark outline around it because it is blending with the neighboring black pixels, which have an alpha of 0.

Ok, diagrammatics time [lol]

Let's say we have a sprite ..we'll call him whiskers [wink]

When whiskers is filtered, the outer ring of pixels will get filtered with some full black opaque pixels (assuming this is correct for Direct3D [smile])
This will of course cause some artifacts as we want to have the white pixels transparent.

One solution is to add a border (or padding) of pixels around the outside of the sprite (this can be done in your paint program or you can modify the image yourself when you load it as a texture into Direct3D).
Here's one we prepared earlier ...
[image: the sprite with a 1-pixel padding border added around it]

Now if we were to load and use this texture exactly as before, the artifacts would still be there, as the outer ring of white pixels would still be getting filtered with some black ones. So the solution is to shift your uv coords in by 1 pixel on all sides.
Like so....
[image: the same padded texture with the uv rectangle, shown as a red line, inset by one pixel on all sides]
This now means that the very outer ring of pixels will still get filtered with some black ones, but they won't be part of our visible texture. And the ring of pixels just inside our uv area (the red line) will have valid neighbouring pixels all around them, so zero artifacts [grin]
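In code, the shift is just a one-texel inset on the uv coords (a rough sketch; the texture size here is a made-up example and the names are my own):

// Hypothetical padded texture size
const float texWidth  = 64.0f;
const float texHeight = 64.0f;

// Inset the uv rectangle by one texel on each side
const float u0 = 1.0f / texWidth;
const float v0 = 1.0f / texHeight;
const float u1 = (texWidth  - 1.0f) / texWidth;
const float v1 = (texHeight - 1.0f) / texHeight;
// ...then map (u0,v0)-(u1,v1) onto the quad instead of (0,0)-(1,1)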

As I said this is one solution, there are others [wink] ...

You could just shift the uv coords in by half a pixel without adding any padding pixels ...this works, but it can still lead to the same artifact problems when you scale sprites

or

You could clamp the texture ...this won't work if you want more than one sprite per texture (sprite sheets)
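(For what it's worth, clamping is just the sampler address mode ...a sketch assuming D3D9:)

pDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_CLAMP);
pDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_CLAMP);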

Seriously hope this helps [lol]
ViLiO

[Edited by - ViLiO on May 19, 2006 4:33:00 PM]

ViLiO,

He isn't getting bleeding from sampling at the edges of the texture - it's because the transparent areas within his texture are black. If you take the image you posted and imagine that the white area around the object (cat?) is black and has an alpha of 0, that is what I think his image looks like. It's hard to tell, but it almost sounds like he doesn't have alpha blending enabled.


You want to set up an alpha test on the pixels, so that any pixel below a certain alpha threshold is tossed before it even gets processed. This should solve your bleeding issues, because the edges will have nothing to bleed with.
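In D3D9 that is just a few render states (a sketch; the reference value here is a made-up threshold, tune it to your art):

pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ALPHAREF, (DWORD)0x08); // made-up threshold
pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);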

Quote:
Original post by JonWoyame
I shouldn't really say "border"; it's a black outline around the image contents. For example, I have a hand cursor texture, and the cursor has a dark outline around it because it is blending with the neighboring black pixels, which have an alpha of 0.

Oh, do you mean something like this?
[image: a light-coloured pointer sprite on a dark mask background]
...and you don't want the white pointer to have a dark border around it when filtering and alpha-blending?

Cause if that is the case, then the padding could be added around the pointer itself and not the edges of the whole texture. You could take the colour from a neighbouring light-coloured pointer pixel and the alpha value from a neighbouring dark mask pixel, and it would probably work.
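A load-time pass along these lines could do it (a rough sketch; the Rgba struct and BleedColours name are my own, and it assumes an 8-bit RGBA pixel buffer in system memory):

struct Rgba { unsigned char r, g, b, a; };

// For each fully transparent texel, steal the colour (but not the alpha)
// of the first opaque neighbour found, so the filter blends with a
// sensible colour instead of black.
void BleedColours(Rgba* pixels, int width, int height)
{
    for (int y = 0; y < height; ++y)
    {
        for (int x = 0; x < width; ++x)
        {
            Rgba& p = pixels[y * width + x];
            if (p.a != 0)
                continue; // only rewrite fully transparent texels

            bool found = false;
            for (int dy = -1; dy <= 1 && !found; ++dy)
            {
                for (int dx = -1; dx <= 1 && !found; ++dx)
                {
                    int nx = x + dx;
                    int ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;

                    const Rgba& n = pixels[ny * width + nx];
                    if (n.a != 0)
                    {
                        p.r = n.r; p.g = n.g; p.b = n.b; // alpha stays 0
                        found = true;
                    }
                }
            }
        }
    }
}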

If this ain't it then maybe you should provide some screenshots yourself [wink]

All the best,
ViLiO

That isn't the problem. The problem is that he is using a linear filter on the texture. This causes the texture's colors to blend together in order to get rid of the pixelation. Since he is using alpha-blending, the edges of the sprite that touch the transparent part of the texture are blending. Think of it like this: you have an image that has a color of red, and right beside it you have a completely transparent color of black. Because of the linear blending, instead of stopping at the red like he wants, the colors blend from the red into the transparent black, and that leaves an outline. To get rid of this you can add an alpha test that will discard those pixels so they will not be processed in the linear filter.
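To put rough numbers on it (a made-up example): sample exactly halfway between an opaque red texel and a transparent black texel, then alpha-blend the result onto a white background:

filtered = 0.5 * (r=1, g=0, b=0, a=1) + 0.5 * (r=0, g=0, b=0, a=0)
         = (r=0.5, g=0, b=0, a=0.5)

final = a * filtered.rgb + (1 - a) * dest
      = 0.5 * (0.5, 0, 0) + 0.5 * (1, 1, 1)
      = (0.75, 0.5, 0.5)

Half-covered pure red over white should come out as (1, 0.5, 0.5); the black that got averaged in drags the red channel down, and that darkening is the outline.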

Well arguably he did say...

"I want to be able to have an alpha gradient in images, rather than just all-or-nothing transparency"

..and alpha-testing does result in a sharp cut-off and not the smooth gradient alpha-blending provides [smile]

Of course, a picture paints a thousand words ...so if the answer hasn't already been provided, then screenshots of the problem are in order [wink]

Regards,
ViLiO

Just out of interest, does anyone know why OpenGL doesn't have this problem with bilinear filtering when the border pixels are black? I know the Direct3D algorithm is using a box filter.

Quote:
Original post by JohnnyCasil
That isn't the problem. The problem is that he is using a linear filter on the texture. This causes the texture's colors to blend together in order to get rid of the pixelation. Since he is using alpha-blending, the edges of the sprite that touch the transparent part of the texture are blending.


Yep, right on the money.

I don't have the code in front of me right now, so I can't try alpha testing just yet. I wasn't aware that the alpha-tested pixels are totally thrown out before the texture filtering stage.

I'll try it out and get back with how it worked.

Quote:
Original post by ViLiO
..and alpha-testing does result in a sharp cut-off and not the smooth gradient alpha-blending provides [smile]


You are right. I forgot to clarify that alpha-testing will only solve the problem of the edge pixels, but if you set the alpha test to only throw out small alpha values, blending should still be performed with the rest of the data.
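So both sets of states end up enabled at once ...something like this (a sketch; the low reference value is a guess, tune it for your art):

pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
pDevice->SetRenderState(D3DRS_ALPHATESTENABLE, TRUE);
pDevice->SetRenderState(D3DRS_ALPHAREF, (DWORD)0x04); // only throw out nearly invisible pixels
pDevice->SetRenderState(D3DRS_ALPHAFUNC, D3DCMP_GREATEREQUAL);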

Quote:
Original post by JonWoyame
I don't have the code in front of me right now, so I can't try alpha testing just yet. I wasn't aware that the alpha-tested pixels are totally thrown out before the texture filtering stage.


I'm pretty sure it will work. I've personally done it before, but this is all going off the top of my head. I don't remember if there is more to it than this or not, but I am pretty sure that the data will get thrown out beforehand.

Yes, it will work. The pixels that pass the alpha test will be linear filtered and the pixels that fail will be transparent.

Does OpenGL have an alpha test state? It could be that it doesn't have this specific state and instead always tests if the alpha value is 0 before performing the filtering operation.
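If it does have one, I'd expect the fixed-function calls to look something like this (an untested guess on my part):

glEnable(GL_ALPHA_TEST);
glAlphaFunc(GL_GREATER, 0.0f); // reject fragments whose alpha is 0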

Premultiplied alpha is the way to go.

If you don't know how, with a little bit of maths you can land back on your feet.

What you really want is for your blending to be something like this:

final color = 
w1 * (alpha1 * color1 + (1 - alpha1) * dest)
+ w2 * (alpha2 * color2 + (1 - alpha2) * dest)
+ w3 * (alpha3 * color3 + (1 - alpha3) * dest)
+ w4 * (alpha4 * color4 + (1 - alpha4) * dest);



wi is the bilinear weight of the ith texel (sum(wi) = 1); alphai is 0 or 1 depending on the texel; dest is the destination color. There is only one dest color per pixel, obviously.

The goal of the formula above is that only the texels with alpha != 0 contribute to the final color, so that no black bleeds into your picture. Of course, the formula is too complicated for fixed-function hardware, and even a pixel shader would be horribly slow if we tried to implement it as such.

Now, how can we simplify that to make it work on all hardware?

First you can distribute wi:

final color =   
w1 * (alpha1 * color1) + w1 * (1 - alpha1) * dest
+ w2 * (alpha2 * color2) + w2 * (1 - alpha2) * dest
+ w3 * (alpha3 * color3) + w3 * (1 - alpha3) * dest
+ w4 * (alpha4 * color4) + w4 * (1 - alpha4) * dest;



Then factorize dest:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 * (1 - alpha1) + w2 * (1 - alpha2) + w3 * (1 - alpha3) + w4 * (1 - alpha4)) * dest;



Distribute wi again:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 - w1 * alpha1 + w2 - w2 * alpha2 + w3 - w3 * alpha3 + w4 - w4 * alpha4) * dest;



Then use the property that sum(wi) = 1:

final color = 
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (w1 + w2 + w3 + w4
- (w1 * alpha1 + w2 * alpha2 + w3 * alpha3 + w4 * alpha4)) * dest;
final color =
w1 * (alpha1 * color1)
+ w2 * (alpha2 * color2)
+ w3 * (alpha3 * color3)
+ w4 * (alpha4 * color4)
+ (1 - (w1 * alpha1 + w2 * alpha2 + w3 * alpha3 + w4 * alpha4)) * dest;



This looks familiar.

So from here you can guess the right solution:

Replace your texture with a new premultiplied texture. That means premulti = alphai * colori, and copy alphai from your original texture to your new texture unchanged.
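Building that premultiplied texture at load time is a single pass over the pixels (a sketch; assumes 8-bit RGBA texels already in system memory, and the names are my own):

struct Rgba { unsigned char r, g, b, a; };

// Multiply each colour channel by its own alpha; leave alpha unchanged.
void Premultiply(Rgba* pixels, int count)
{
    for (int i = 0; i < count; ++i)
    {
        pixels[i].r = (unsigned char)(pixels[i].r * pixels[i].a / 255);
        pixels[i].g = (unsigned char)(pixels[i].g * pixels[i].a / 255);
        pixels[i].b = (unsigned char)(pixels[i].b * pixels[i].a / 255);
    }
}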

Then set the following render states:

pDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
pDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_ONE); // <- don't use srcAlpha because it is already factored in
pDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA); // <- this isn't pure additive; we still have to fade the destination color for opaque texels
pDevice->SetRenderState(D3DRS_BLENDOP, D3DBLENDOP_ADD);



LeGreg
