# OpenGL Seams revisited

## Recommended Posts

There has been discussion about this before, but I have a case I can't fix. As can be seen in the attached picture, there is a white line above the flowers that I can't get rid of. The details are as follows:

1. I draw the flowers on a simple quad. The bitmap I send to OpenGL contains no visible white.
2. I use mipmaps with GL_NEAREST_MIPMAP_NEAREST.
3. The bitmap is of the format GL_RGBA8. The alpha channel is either 0 or 255 (yes, I will probably switch to GL_RGB5_A1).
4. The fragment shader doesn't use blending; it uses the alpha to discard pixels.
5. I use GL_CLAMP_TO_EDGE for S and T.
6. The color of the transparent areas is pure white (Photoshop makes it that way). I think some of this white is bleeding through, although a little bit of the line is sometimes black (flickering depending on how the camera turns).
7. If I don't use mipmaps, the problem goes away.
What am I missing?

##### Share on other sites
I would bet that you put multiple pictures on a single texture and your flowers are mapped to only a limited part of it (i.e. a texture atlas)?

In that case, read this [url="http://www.gamedev.net/topic/597244-seams-between-tiles/"]thread[/url]. Even if it sounds different at first, it is most likely the same problem and should be solved in a similar way (add a border).

##### Share on other sites
Maybe the problem is in the texture?
Are you sure that all white pixels have an alpha below the discard threshold in your fragment shader?

##### Share on other sites
Those areas shouldn't be white. When your mipmaps are created, the mipmap pixels are set to the average of several pixels in the larger mip-levels, which causes part of the white pixels to be included in non-transparent pixels. Also, alpha will no longer be exactly 0 or 255, but will be averaged over several pixels. Depending on your discard threshold, this can change behavior.
Change those white pixels to mirror the color of the edges of the flower. There are Photoshop plugins that can do this automatically.
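The "mirror the edge color" fix can be done in a preprocessing pass on the bitmap before it is uploaded. A minimal sketch in C (`bleed_pass` is a hypothetical helper, not from any library): one pass copies the color of an adjacent opaque pixel into each fully transparent pixel, pushing the white out of a 1-pixel border around the flowers. Wider bleeds would need repeated passes with a mask of already-filled pixels.

```c
/* One pass of "color bleeding" over a tightly packed RGBA8 image:
 * every fully transparent pixel that borders an opaque pixel takes
 * that neighbor's color (alpha is left untouched), so box-filtered
 * mipmaps no longer pull white into the flower edges.
 * Returns 1 if any pixel changed. A sketch, not production code. */
static int bleed_pass(unsigned char *img, int w, int h)
{
    int changed = 0;
    for (int y = 0; y < h; y++) {
        for (int x = 0; x < w; x++) {
            unsigned char *p = img + 4 * (y * w + x);
            if (p[3] != 0)
                continue;  /* only fill fully transparent pixels */
            static const int dx[4] = {1, -1, 0, 0};
            static const int dy[4] = {0, 0, 1, -1};
            for (int i = 0; i < 4; i++) {
                int nx = x + dx[i], ny = y + dy[i];
                if (nx < 0 || ny < 0 || nx >= w || ny >= h)
                    continue;
                const unsigned char *q = img + 4 * (ny * w + nx);
                if (q[3] == 255 &&
                    (p[0] != q[0] || p[1] != q[1] || p[2] != q[2])) {
                    p[0] = q[0]; p[1] = q[1]; p[2] = q[2];
                    changed = 1;
                    break;
                }
            }
        }
    }
    return changed;
}
```

Photoshop "solidify"/"dilate" plugins do essentially this, just with a smarter fill.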

##### Share on other sites
[quote name='HolyDel' timestamp='1341308368' post='4955214']
Maybe the problem is in the texture?
Are you sure that all white pixels have an alpha below the discard threshold in your fragment shader?
[/quote]
Thanks, this enabled me to fix the problem! In my shader, I used the condition "if (alpha == 0.0) discard;". I changed the test to "alpha < 0.5", and the extra lines are gone. Obviously, not all transparent areas have alpha exactly 0 when mipmaps are used. This is a little funny, as I use GL_NEAREST_MIPMAP_NEAREST, and the original bitmap only has alpha 0 or 1. Something odd happens when mipmaps are used; if I turn them off, the lines disappear, which would indicate that there are no half-way values of alpha in the original bitmap.

Of course, comparing a float for exact equality with 0 is sometimes dangerous. But in this case, the threshold turned out to be around 0.33 (after some trial-and-error testing).
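For reference, the resulting alpha test looks roughly like this (a sketch of the fix described above; `uTexture` and the in/out names are placeholders, not from the original code):

```glsl
#version 330 core

uniform sampler2D uTexture;  // placeholder name
in vec2 vTexCoord;
out vec4 fragColor;

void main()
{
    vec4 color = texture(uTexture, vTexCoord);
    // A strict "== 0.0" misses the in-between alpha values that
    // mipmap generation introduces; a threshold catches them.
    if (color.a < 0.5)
        discard;
    fragColor = color;
}
```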
[quote name='Erik Rufelt' timestamp='1341316706' post='4955251']
When your mipmaps are created the mipmap pixels are set to the average of several pixels in the larger mip-levels, ...
[/quote]
Thanks for the suggestion, which would be the obvious answer, but it is probably not relevant, as I don't use interpolation or averaging.

##### Share on other sites
[quote name='larspensjo' timestamp='1341340692' post='4955394']
Of course, comparing a float for exact equality with 0 is sometimes dangerous. But in this case, the threshold turned out to be around 0.33 (after some trial-and-error testing).
[quote name='Erik Rufelt' timestamp='1341316706' post='4955251']
When your mipmaps are created the mipmap pixels are set to the average of several pixels in the larger mip-levels, ...
[/quote]
Thanks for the suggestion, which would be the obvious answer, but it is probably not relevant, as I don't use interpolation or averaging.
[/quote]

OpenGL uses averaging when creating mipmaps. Your problem has nothing to do with floating-point comparison (in which case 0.33 would be a ridiculously high threshold for a number between 0 and 1), but with the fact that mipmap *creation* uses averaging even when you sample with GL_NEAREST_MIPMAP_NEAREST. Mipmaps are defined that way: each level is half as wide and high as the level above it, and each of its pixels is the average of 4 pixels in that larger level. That way your alpha values can be 0, 0.25, 0.5, 0.75, or 1.0 in the first generated mipmap, and in the next level any possible average of 4 of those values.
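That arithmetic is easy to verify in isolation. A tiny sketch in C (`box_filter_2x2` is a hypothetical helper) of the 2x2 box filter that mipmap generation effectively applies to a binary alpha channel:

```c
/* Average a 2x2 block of 8-bit alpha values, as the box filter used
 * for mipmap generation would. With binary input (0 or 255), the
 * output can only be one of five values: 0, 63, 127, 191, 255,
 * i.e. roughly 0, 0.25, 0.5, 0.75, and 1.0. */
static unsigned char box_filter_2x2(unsigned char a, unsigned char b,
                                    unsigned char c, unsigned char d)
{
    return (unsigned char)(((unsigned)a + b + c + d) / 4);
}
```

This is why an exact `alpha == 0` test starts failing as soon as a mip level other than 0 is used.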

##### Share on other sites
[quote name='Erik Rufelt' timestamp='1341352326' post='4955459']
OpenGL uses averaging when creating mipmaps.
[/quote]

I see, I had misunderstood how mipmaps are created. GL_NEAREST_MIPMAP_NEAREST controls how the mipmaps are *sampled*, not how they are created. I also found, as could be expected, that using 0.5 as the alpha threshold made some pixels disappear that shouldn't. In that case, the problem was the other way around.

The problem is that I get alpha values between 0 and 1, but I have a pixel culling algorithm in the fragment shader that depends on the alpha being either 0 or 1.

I am not sure whether this design can be combined with the use of mipmaps.

##### Share on other sites
[quote name='larspensjo' timestamp='1341387098' post='4955551']
The problem is that I get alpha values between 0 and 1, but I have a pixel culling algorithm in the fragment shader that depends on the alpha being either 0 or 1.

I am not sure whether this design can be combined with the use of mipmaps.
[/quote]

You can create the mipmaps manually, using an algorithm that doesn't do that. For example, instead of averaging 4 pixels, you can take the maximum value, the minimum value, the median, the average of only the pixels with alpha = 1, or similar.
How do you create your mipmaps today?
glTexImage2D can be used to specify a separate image for each mip level (via its level parameter), which lets you control how the different mip levels look.
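A sketch of such a manual downsample in C (`downsample_rgba` is a hypothetical helper; it assumes tightly packed RGBA8 data with even dimensions and ignores error handling): alpha is taken as the maximum of each 2x2 block, so it stays exactly 0 or 255, and color is averaged over only the opaque source pixels, as suggested above.

```c
/* Downsample one RGBA8 mip level to the next, halving each dimension.
 * Instead of averaging alpha, take the maximum of each 2x2 block so
 * alpha stays binary; average color over the opaque pixels only,
 * falling back to a plain average when all four are transparent. */
static void downsample_rgba(const unsigned char *src, int w, int h,
                            unsigned char *dst)
{
    int dw = w / 2, dh = h / 2;
    for (int y = 0; y < dh; y++) {
        for (int x = 0; x < dw; x++) {
            const unsigned char *p[4] = {
                src + 4 * ((2 * y)     * w + 2 * x),
                src + 4 * ((2 * y)     * w + 2 * x + 1),
                src + 4 * ((2 * y + 1) * w + 2 * x),
                src + 4 * ((2 * y + 1) * w + 2 * x + 1),
            };
            unsigned sum[3] = {0, 0, 0};
            unsigned char maxa = 0;
            int opaque = 0;
            for (int i = 0; i < 4; i++) {
                if (p[i][3] > maxa) maxa = p[i][3];
                if (p[i][3] == 255) {
                    opaque++;
                    for (int c = 0; c < 3; c++) sum[c] += p[i][c];
                }
            }
            unsigned char *out = dst + 4 * (y * dw + x);
            if (opaque > 0) {
                for (int c = 0; c < 3; c++)
                    out[c] = (unsigned char)(sum[c] / opaque);
            } else {
                for (int c = 0; c < 3; c++) {
                    unsigned s = 0;
                    for (int i = 0; i < 4; i++) s += p[i][c];
                    out[c] = (unsigned char)(s / 4);
                }
            }
            out[3] = maxa;
        }
    }
}
```

Each resulting level would then be uploaded with glTexImage2D(GL_TEXTURE_2D, level, GL_RGBA8, w, h, 0, GL_RGBA, GL_UNSIGNED_BYTE, data). Taking the maximum does make the flowers grow slightly in the small mips; the median or a coverage-weighted rule would shrink that effect.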

##### Share on other sites
[quote name='Erik Rufelt' timestamp='1341489184' post='4955933']
You can create the mipmaps manually
[/quote]
That seems to be the way to go, and it looks like the natural way, as I have my own special requirements for the mipmaps. The algorithm that averages only the pixels with alpha = 1 looks promising.

Today, I am simply using glGenerateMipmap(GL_TEXTURE_2D);
