Blending & Polygon Smoothing = Wireframe

6 comments, last by tomek_zielinski 19 years, 7 months ago
Hi guys. This is my first post here, but I just can't figure out what the problem is. I did a search, but couldn't find any info on it. If I turn on Blending in my OpenGL code, everything looks good. But if I enable Polygon Smoothing, I see the textures as well as a wireframe of the model. It does this even if I disable textures and just use colors. Here's part of the code I'm using:

    glEnable(GL_BLEND);
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
    glEnable(GL_POLYGON_SMOOTH);
    glHint(GL_POLYGON_SMOOTH_HINT, GL_NICEST);

And here's a picture of the problem: http://www.leadingedgesim.com/mkaprocki/Smoothing.jpg

The smoothing looks real good, so it's doing its job; I just can't figure out how to get rid of that wireframe. Any ideas? Thanks!
It's not that a wireframe is being drawn. With polygon smoothing turned off, any given pixel along an edge is drawn from either one polygon or its neighbor, never both.

When you activate smoothing, the two neighboring polygons overlap with alpha along their shared edge, and both contribute to the final color there; that's what shows up as a "wireframe". The practical solution is to not use polygon smoothing at all and go for FSAA/multisampling instead, or perhaps to experiment with different blending functions.
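If it helps, here's a rough sketch of the multisampling route. I'm assuming a GLUT-style setup here; SDL and WGL expose the same idea through their own pixel-format flags:

    #include <GL/glut.h>

    /* Sketch: request a multisampled framebuffer instead of relying on
       GL_POLYGON_SMOOTH.  GLUT_MULTISAMPLE only asks -- the driver may
       still hand back a non-multisampled format. */
    int main(int argc, char **argv)
    {
        glutInit(&argc, argv);
        glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB | GLUT_DEPTH | GLUT_MULTISAMPLE);
        glutCreateWindow("multisample test");

        glDisable(GL_POLYGON_SMOOTH);  /* no per-edge alpha blending, so no "wireframe" */
        glEnable(GL_MULTISAMPLE);      /* GL_MULTISAMPLE_ARB on pre-1.3 headers */

        /* ... register display/reshape callbacks and call glutMainLoop() as usual ... */
        return 0;
    }

For the record, the blend function the GL documentation pairs with GL_POLYGON_SMOOTH is glBlendFunc(GL_SRC_ALPHA_SATURATE, GL_ONE) with polygons sorted front to back, but that's usually more hassle than multisampling.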
Ok, I did as you suggested and switched to multisampling. The results look real good, but now I have a problem with the depth buffer drawing incorrectly unless I'm very close to the object. Up close it works correctly, but from a distance parts of the far side of the object show through, and they flicker when I move the camera. I guess that's because of depth buffer precision?
Perhaps you are using coordinates that are too large, or your far-to-near clip plane distance ratio is greater than, let's say, 1000. A 32-bit z-buffer could perform better, but try scaling your world down by an order of magnitude, adjusting everything accordingly so that in the end it all looks the same.
www.tmreality.com
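To put a rough number on it, here's a small sketch of how the near/far choice limits depth resolution. It assumes the standard gluPerspective-style depth mapping and a 24-bit integer depth buffer; the numbers are only illustrative:

    #include <stdio.h>

    /* eye-space distance -> window-space depth in [0,1] */
    static double win_z(double n, double f, double d)
    {
        return f * (d - n) / ((f - n) * d);
    }

    /* window-space depth -> eye-space distance (inverse of win_z) */
    static double eye_z(double n, double f, double zw)
    {
        return f * n / (f - zw * (f - n));
    }

    int main(void)
    {
        double n = 1.0, f = 50000.0;   /* e.g. 1 m near plane, 50 km far plane */
        double steps = 1 << 24;        /* distinct values in a 24-bit depth buffer */
        double d = 10000.0;            /* object 10 km from the camera */

        double zw   = win_z(n, f, d);
        double next = eye_z(n, f, zw + 1.0 / steps);  /* one depth step further away */
        printf("resolvable depth separation at %.0f m: about %.2f m\n", d, next - d);
        return 0;
    }

What matters is the ratio of far to near, and most of the precision is spent close to the near plane, so pulling the near plane in costs far more than pushing the far plane out.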
My coords are pretty big, but I'm not sure of any other way to do it. I'm using meters as my coordinate system, with a visibility of 50 km. If I scale the coords down to kilometers, I'd be working with really small numbers, which makes it a pain in the butt. Would scaling everything down really make any difference in precision? I'd still be just as far away from the model as before, only with coords 1000 times smaller.
The depth buffer issues only appeared after activating multisampling? The only way I can explain that is if, because of the multisampling, you're getting a smaller depth buffer due to memory constraints.

What library are you using to create the context? (GLUT, SDL, WGL, etc.) Most of these do not guarantee that you get a context matching the resources you asked for; they only try to get as close as possible.

What 3D card are you using (and how much VRAM)? What resolution are you running at? What multisampling mode did you request?

Explicitly check how many bits of depth precision you got after creating the context, and post that here (along with how many you originally asked for).
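A quick way to check, once the context is current (GL_DEPTH_BITS and GL_SAMPLES are standard queries; the little report function is just for illustration):

    #include <stdio.h>
    #include <GL/gl.h>

    /* Sketch: query what the driver actually granted.  GL_SAMPLES comes
       from ARB_multisample / GL 1.3-era headers. */
    void report_context(void)
    {
        GLint depth_bits = 0, samples = 0;
        glGetIntegerv(GL_DEPTH_BITS, &depth_bits);  /* depth precision of the framebuffer */
        glGetIntegerv(GL_SAMPLES,    &samples);     /* multisample count actually granted */
        printf("depth bits: %d, samples: %d\n", depth_bits, samples);
    }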
Actually, the code I had to enable multisampling set the depth buffer to 16 bit, oops. ;-) I changed it back to 24 bit and everything looks good again. :-D
BTW: Scaling everything down MATTERS a lot. Floating-point numbers have better precision around 1 than around 10000. Yes, the distances would stay the same, but the precision would increase.
www.tmreality.com
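A concrete way to see it, assuming 32-bit floats for the vertex coordinates:

    #include <math.h>
    #include <stdio.h>

    /* Sketch: the gap between adjacent 32-bit floats grows with magnitude,
       so coordinates around 1 are represented far more finely than
       coordinates around 10000. */
    int main(void)
    {
        printf("step near 1:     %g\n", nextafterf(1.0f, 2.0f) - 1.0f);             /* ~1.2e-7 */
        printf("step near 10000: %g\n", nextafterf(10000.0f, 20000.0f) - 10000.0f); /* ~9.8e-4 */
        return 0;
    }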

This topic is closed to new replies.
