Fixing pixel imperfections in 2D OpenGL

6 comments, last by Roots 15 years, 6 months ago
First, I have read the FAQ and this thread about 2D OpenGL programming. My question is about how to achieve pixel-perfect 2D texture drawing. We have a map full of tiles and sprites, and often there will be columns or rows of duplicated pixels. Here's what I mean: we have a debug tileset that looks like this [tileset image], which, when used in the game, produces a map like this [in-game map screenshot]. Other visual objects also share this problem [screenshot].

I'm still not certain of the cause of the problem, but I have come up with a temporary solution. Our screen dimensions are stored in a coordinate system object with floating point values representing screen width and screen height. What I've discovered is that if I draw each object so that it is aligned perfectly on a pixel boundary, the drawing is always correct. For example, if my screen resolution is 1024x768 and my coordinate system is 64x48, then one pixel length is 64/1024 = 0.0625, and I adjust the draw cursor to be a multiple of that value prior to drawing. (BTW, this fix works wonderfully on Linux and Windows, but not on OS X for some reason.)

But this doesn't fix the fundamental problem, which is that our (custom made) graphics engine is rendering 2D images like this and I'm not sure why. I've tried playing with texture properties such as GL_LINEAR and GL_NEAREST, but that hasn't seemed to help. Is there some GL property that has to be set in order to not distort the texture when drawing it? Thanks for any advice you can give. (BTW: I'm not very familiar or experienced with OpenGL.)
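Here's roughly what that workaround looks like (a simplified sketch, not our actual engine code; the names are made up):

#include <cmath>

// Snap a draw coordinate to the nearest pixel boundary.
// With a 1024x768 screen and a 64x48 coordinate system, one pixel is
// 64/1024 = 0.0625 units wide and 48/768 = 0.0625 units tall, so drawing
// only at multiples of that length keeps every quad on a pixel boundary.
float SnapToPixel(float coord, float coord_range, float pixel_range) {
	float pixel_length = coord_range / pixel_range; // e.g. 64.0f / 1024.0f = 0.0625f
	return pixel_length * floorf(coord / pixel_length + 0.5f); // round to nearest boundary
}

// Usage before drawing an object:
// draw_cursor_x = SnapToPixel(draw_cursor_x, 64.0f, 1024.0f);
// draw_cursor_y = SnapToPixel(draw_cursor_y, 48.0f, 768.0f);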

Hero of Allacrost - A free, open-source 2D RPG in development.
Latest release June, 2015 - GameDev announcement

What kind of OpenGL code are you using to draw your tiles?

Are you constructing polygons (quads) for each tile, and if so, how do you calculate the size of the quad?
Graphics Engine Code

The project is open source so there's a link to the graphics code if you wish to take a look. Relevant files include image.* and texture.*, and to a lesser extent image_base.* and texture_controller.*. I think the primary image/texture draw function is this one below:

void ImageDescriptor::_DrawTexture(const Color* draw_color) const {
	// Array of the four vertexes defined on the 2D plane for glDrawArrays()
	// This is no longer const, because when tiling the background for the menus
	// sometimes you need to draw part of a texture
	float vert_coords[] = {
		_u1, _v1,
		_u2, _v1,
		_u2, _v2,
		_u1, _v2,
	};

	// If no color array was passed, use the image's own vertex colors
	if (draw_color == NULL)
		draw_color = _color;

	// Set blending parameters
	if (VideoManager->_current_context.blend) {
		glEnable(GL_BLEND);
		if (VideoManager->_current_context.blend == 1)
			glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Normal blending
		else
			glBlendFunc(GL_SRC_ALPHA, GL_ONE); // Additive blending
	}
	else if (_blend) {
		glEnable(GL_BLEND);
		glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // Normal blending
	}
	else {
		glDisable(GL_BLEND);
	}

	// If we have a valid image texture pointer, setup texture coordinates and the texture coordinate array for glDrawArrays()
	if (_texture != NULL) {
		// Set the texture coordinates
		float s0, s1, t0, t1;
		s0 = _texture->u1 + (_u1 * (_texture->u2 - _texture->u1));
		s1 = _texture->u1 + (_u2 * (_texture->u2 - _texture->u1));
		t0 = _texture->v1 + (_v1 * (_texture->v2 - _texture->v1));
		t1 = _texture->v1 + (_v2 * (_texture->v2 - _texture->v1));

		// Swap x texture coordinates if x flipping is enabled
		if (VideoManager->_current_context.x_flip) {
			float temp = s0;
			s0 = s1;
			s1 = temp;
		}

		// Swap y texture coordinates if y flipping is enabled
		if (VideoManager->_current_context.y_flip) {
			float temp = t0;
			t0 = t1;
			t1 = temp;
		}

		// Place the texture coordinates in a 4x2 array mirroring the structure of the vertex array for use in glDrawArrays()
		float tex_coords[] = {
			s0, t1,
			s1, t1,
			s1, t0,
			s0, t0,
		};

		// Enable texturing and bind texture
		glEnable(GL_TEXTURE_2D);
		TextureManager->_BindTexture(_texture->texture_sheet->tex_id);
		_texture->texture_sheet->Smooth(_texture->smooth);

		// Enable and setup the texture coordinate array
		glEnableClientState(GL_TEXTURE_COORD_ARRAY);
		glTexCoordPointer(2, GL_FLOAT, 0, tex_coords);

		if (_unichrome_vertices == true) {
			glColor4fv((GLfloat*)draw_color[0].GetColors());
		}
		else {
			glEnableClientState(GL_COLOR_ARRAY);
			glColorPointer(4, GL_FLOAT, 0, (GLfloat*)draw_color);
		}
	} // if (_texture != NULL)

	// Otherwise there is no image texture, so we're drawing pure color on the vertices
	else {
		// Use a single call to glColor for unichrome images, or setup a gl color array for multiple colors
		if (_unichrome_vertices == true) {
			glColor4fv((GLfloat*)draw_color[0].GetColors());
			glDisableClientState(GL_COLOR_ARRAY);
		}
		else {
			glEnableClientState(GL_COLOR_ARRAY);
			glColorPointer(4, GL_FLOAT, 0, (GLfloat*)draw_color);
		}

		// Disable texturing as we're using pure colour
		glDisable(GL_TEXTURE_2D);
	}

	// Use a vertex array to draw all of the vertices
	glEnableClientState(GL_VERTEX_ARRAY);
	glVertexPointer(2, GL_FLOAT, 0, vert_coords);
	glDrawArrays(GL_QUADS, 0, 4);

	if (VideoManager->_current_context.blend || _blend == true)
		glDisable(GL_BLEND);

	if (VideoManager->CheckGLError() == true) {
		IF_PRINT_WARNING(VIDEO_DEBUG) << "an OpenGL error occurred: " << VideoManager->CreateGLErrorString() << endl;
	}
} // void ImageDescriptor::_DrawTexture(const Color* color_array) const



So it looks like we're drawing quads and using vertex/texture arrays. Both arrays are computed based on the underlying texture's UV coordinates. (FYI: we store each texture in a texture sheet, which is nothing more than a large texture containing a collection of smaller textures, such as tiles.)
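For reference, the UVs of a sub-texture within a sheet are computed roughly along these lines (a simplified sketch with made-up names, not the engine's exact code; the results end up in _texture->u1/u2/v1/v2, which _DrawTexture() then interpolates between):

// Sketch: derive a tile's UV rectangle inside a texture sheet.
struct TileUV { float u1, v1, u2, v2; };

TileUV ComputeTileUV(int tile_x, int tile_y, int tile_size, int sheet_width, int sheet_height) {
	TileUV uv;
	uv.u1 = static_cast<float>(tile_x * tile_size) / sheet_width;
	uv.v1 = static_cast<float>(tile_y * tile_size) / sheet_height;
	uv.u2 = static_cast<float>((tile_x + 1) * tile_size) / sheet_width;
	uv.v2 = static_cast<float>((tile_y + 1) * tile_size) / sheet_height;
	return uv;
}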

Hero of Allacrost - A free, open-source 2D RPG in development.
Latest release June, 2015 - GameDev announcement

This may not be what you want, but here goes:

Typical 2D rendering with OpenGL. To set up the matrices:
glMatrixMode(GL_PROJECTION);
glLoadIdentity();
gluOrtho2D(0, res_width, 0, res_height);
glMatrixMode(GL_MODELVIEW);
glLoadIdentity();



to render a sprite with corners (ax,ay), (bx,by) do:


int ax, ay, bx, by;
int atx, aty, btx, bty;
int tex_x_res, tex_y_res;
float fatx, faty, fbtx, fbty;

//set values of ax,ay,bx,by (verts)
//set values of tex corners atx,aty,btx,bty
fatx = static_cast<float>(atx) / static_cast<float>(tex_x_res);
faty = static_cast<float>(aty) / static_cast<float>(tex_y_res);
fbtx = static_cast<float>(btx) / static_cast<float>(tex_x_res);
fbty = static_cast<float>(bty) / static_cast<float>(tex_y_res);

//bind your texture
//now draw the quad:
glBegin(GL_QUADS);
glTexCoord2f(fatx, faty);
glVertex2f(static_cast<float>(ax) + 0.25f, static_cast<float>(ay) + 0.25f);
glTexCoord2f(fatx, fbty);
glVertex2f(static_cast<float>(ax) + 0.25f, static_cast<float>(by) + 0.25f);
glTexCoord2f(fbtx, fbty);
glVertex2f(static_cast<float>(bx) + 0.25f, static_cast<float>(by) + 0.25f);
glTexCoord2f(fbtx, faty);
glVertex2f(static_cast<float>(bx) + 0.25f, static_cast<float>(ay) + 0.25f);
glEnd();


note the "fudging", this is to guarantee that the the vertex of the quad gets sent to the correct pixel (because of rounding that can happen along the way).


edit:

I wrote the above before you posted your reply, but the part that looks fishy to me is:

Quote:
s0 = _texture->u1 + (_u1 * (_texture->u2 - _texture->u1));
s1 = _texture->u1 + (_u2 * (_texture->u2 - _texture->u1));
t0 = _texture->v1 + (_v1 * (_texture->v2 - _texture->v1));
t1 = _texture->v1 + (_v2 * (_texture->v2 - _texture->v1));


Are you using the position of the sprite on screen to determine the texture corners?




[Edited by - kRogue on October 13, 2008 10:25:36 PM]
Close this Gamedev account, I have outgrown Gamedev.
The fudging is something that I recently tried, but in a different way. This is the function that is called whenever our coordinate system is changed (function is in the file video.cpp).

void GameVideo::SetCoordSys(const CoordSys& coordinate_system) {
	_current_context.coordinate_system = coordinate_system;

	glMatrixMode(GL_PROJECTION);
	glLoadIdentity();
	glOrtho(_current_context.coordinate_system.GetLeft(), _current_context.coordinate_system.GetRight(),
		_current_context.coordinate_system.GetBottom(), _current_context.coordinate_system.GetTop(), -1, 1);

	glMatrixMode(GL_MODELVIEW);
	glLoadIdentity();

	// This small translation is supposed to help with pixel-perfect 2D rendering in OpenGL.
	// Reference: http://www.opengl.org/resources/faq/technical/transformations.htm#tran0030
	glTranslatef(0.375, 0.375, 0);
}


Specifically, what I added a couple of days ago was that glTranslatef call, but it doesn't seem to have helped at all with this issue.
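One thing I'm unsure of: the FAQ's 0.375 offset appears to assume a projection where one unit equals one pixel, whereas our coordinate system is 64x48, so translating by a raw 0.375 units would shift things by several whole pixels. A sketch of the offset expressed in our coordinate units instead (untested, and the GetWidth()/GetHeight() accessors here are hypothetical):

// Untested sketch: scale the FAQ's half-pixel nudge into coordinate-system units.
// With a 64x48 coordinate system on a 1024x768 screen, one pixel is 64/1024 units
// wide and 48/768 units tall, so the nudge has to be scaled accordingly.
float pixel_width  = _current_context.coordinate_system.GetWidth()  / 1024.0f;
float pixel_height = _current_context.coordinate_system.GetHeight() / 768.0f;
glTranslatef(0.375f * pixel_width, 0.375f * pixel_height, 0.0f);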

Hero of Allacrost - A free, open-source 2D RPG in development.
Latest release June, 2015 - GameDev announcement

Use GL_CLAMP_TO_EDGE.
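Something along these lines wherever your textures are created (just a sketch; tex_id stands in for whatever texture or sheet handle you bind):

// Set the wrap mode on the texture (or texture sheet) at creation time.
// GL_CLAMP_TO_EDGE stops the sampler from wrapping to the opposite border
// (or bleeding into a neighbouring tile) when filtering near the edges.
glBindTexture(GL_TEXTURE_2D, tex_id);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);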
Black Sky A Star Control 2/Elite like game
I'm guessing that the data being used to create vert_coords is slightly wrong.

You mentioned that you use a coordinate system where the screen is 64x48 (so on a 1024x768 screen, 1 unit = 16 pixels)... When you're deciding how large each tile should be, you might be making some mistakes.

In my engine I use a system where the screen is 1x1 units (so 1 horizontal unit = 1024px and 1 vertical unit = 768px).

Pixels can be interpreted in a lot of different ways (they're just sample 'points')... When you're doing pixel-perfect 2D stuff, though, it's best to think of them as a series of rectangles.
The vertices that make up your quads should always lie at the corners of these pixel-rectangles. As the following picture shows, vertex positions should correspond with the edges of a pixel, not the centers.

Imaginary 4x1 pixel display, with a "1 unit = 4px" vertex coordinate system.
 0  .25  .5 .75  1  <-- Vertex coordinates
 |---|---|---|---|
 | 0 | 1 | 2 | 3 |  <-- Pixel center coordinates
 |---|---|---|---|


If we wanted to draw a quad that covered pixels #1 and #2 only, the leftmost vertex would be positioned at 0.25 and the rightmost at 0.75.
Note that when finding the coordinates for a leftmost pixel, you have to find the location of that pixel's left-hand edge. When finding the coordinates for a rightmost pixel, you have to find the location of that pixel's right-hand edge.

This is the code that I use to convert back and forth between pixel coordinates (0..1023, 0..767) and normalized coordinates (0..1, 0..1). Normalize/Denormalize are used for any coordinates that represent widths/heights and bottom/left positions. NormalizeTR/DenormalizeTR are used for any top/right positions:
void Normalize( float& pos, float range )
{
	pos /= range;
}

void Denormalize( float& pos, float range )
{
	pos *= range;
}

void NormalizeTR( float& pos, float range )
{
	pos++;
	Normalize( pos, range );
}

void DenormalizeTR( float& pos, float range )
{
	Denormalize( pos, range );
	pos--;
}
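For example, to get the quad from the imaginary 4-pixel display above (covering pixels #1 and #2 only):

// Using the helpers above on the 4px-wide display:
// the quad covering pixels #1 and #2 should span vertex coordinates 0.25 .. 0.75.
float left  = 1.0f;         // index of the leftmost pixel covered
float right = 2.0f;         // index of the rightmost pixel covered
Normalize( left, 4.0f );    // left  -> 1/4     = 0.25
NormalizeTR( right, 4.0f ); // right -> (2+1)/4 = 0.75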
Thanks ViperG & Hodgman. I'll look into what you both suggested and see if I can find a remedy to this problem.

Hero of Allacrost - A free, open-source 2D RPG in development.
Latest release June, 2015 - GameDev announcement

This topic is closed to new replies.
