Dark Dude

Tiles Rendering Badly?


So I've spent a while developing a tile system in OpenGL, and it generally works fine... except that it's displaying random gridlines that I can't seem to get rid of. I only found out about this when testing it on another machine, so you'll see what I mean.

Here's what it looks like on my machine: [image: legrj5.png]

Here's what it looks like on my friend's machine: [image: legaciumlulz01qg7.jpg]

The system basically works like this: start at X=0, Y=0, read the tile data stored at that point, draw a square and bind the tile bitmap to it as a texture, then increase X, glTranslate across by 2.0f and draw the next, and so on until the end of the row; then increase Y and put X back to 0 again. Fairly basic procedure, but it doesn't seem to work on other computers. Even when I tell each tile to overlap the preceding one by 0.01f, it still renders faultily. Anyone know what the problem is?
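Roughly, the drawing loop is structured like this (a simplified sketch with made-up names like TileMap, MAP_WIDTH and MAP_HEIGHT, not my exact code):

// Simplified version of the drawing loop described above.
for (int y = 0; y < MAP_HEIGHT; y++)
{
    for (int x = 0; x < MAP_WIDTH; x++)
    {
        glBindTexture(GL_TEXTURE_2D, Tile[TileMap[y][x]]);   // bind this tile's bitmap
        glBegin(GL_QUADS);                                   // draw a 2x2 quad at the current position
        glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -33.0f);
        glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -33.0f);
        glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -33.0f);
        glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -33.0f);
        glEnd();
        glTranslatef(2.0f, 0.0f, 0.0f);                      // step one tile to the right
    }
    glTranslatef(-2.0f * MAP_WIDTH, -2.0f, 0.0f);            // back to column 0, down one row
}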

It's certainly something to do with the fill convention, or possibly the effects of filtering.

Basically, the "bottom" or "right" triangle edges are not filled, by convention, because it is assumed that adjacent polygons share vertices. That conflicts with a traditional blitter-based tile engine, where one tile ends at x and the next begins at x+1 pixels. Google "top-left fill rule OpenGL".
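To make "sharing vertices" concrete, here's a rough sketch (tileSize and tileX are made-up names, not the original poster's code):

/* The right edge of tile tileX is computed from the exact same expression as the
   left edge of tile tileX + 1, so the top-left fill rule rasterizes every pixel
   along that shared edge exactly once: no gaps and no double-drawn seams. */
const float tileSize = 2.0f;
int   tileX = 5;                        /* example column index                     */
float left  = tileX * tileSize;         /* this tile's left edge                    */
float right = (tileX + 1) * tileSize;   /* this tile's right edge == next tile's left */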

If it comes down to filtering, try disabling filtering and anti-aliasing and see if there's any difference.
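Concretely, disabling filtering means something along these lines for each tile texture (standard GL calls, shown purely as an illustration; tileTexture is a placeholder name):

/* Point sampling, so texels aren't blended across tile borders or mipmap levels. */
glBindTexture(GL_TEXTURE_2D, tileTexture);                         /* whichever tile texture you're setting up */
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);

glDisable(GL_MULTISAMPLE);                                         /* and switch off MSAA while testing */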

OK, I disabled anti-aliasing and filtering, and apparently it's still not working.

Here's the necessary code, if it helps identify where I'm going wrong:

int InitGL(GLvoid)                                       // All Setup For OpenGL Goes Here
{
    if (!LoadGLTextures())
    {
        return FALSE;
    }

    glEnable(GL_TEXTURE_2D);
    glDisable(GL_LINE_SMOOTH);
    glShadeModel(GL_SMOOTH);                             // Enable Smooth Shading
    glClearColor(0.0f, 0.0f, 0.0f, 0.5f);                // Black Background
    glClearDepth(1.0f);                                  // Depth Buffer Setup
    glEnable(GL_DEPTH_TEST);                             // Enables Depth Testing
    glDepthFunc(GL_LESS);                                // The Type Of Depth Testing To Do
    glHint(GL_PERSPECTIVE_CORRECTION_HINT, GL_NICEST);   // Really Nice Perspective Calculations
    return TRUE;                                         // Initialization Went OK
}


int LoadGLTextures()   // Load the textures
{
    int Status = 1;    // Status indicator

    StartScr = SOIL_load_OGL_texture("Graphics\\Splash.PNG", SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID,
                                     SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y | SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_COMPRESS_TO_DXT);
    if (StartScr == 0) ReportGLError(SOIL_last_result(), &Status);

    for (int x = 0; x < 6; x++)
    {
        Tile[x] = SOIL_load_OGL_texture(TilePath[x], SOIL_LOAD_AUTO, SOIL_CREATE_NEW_ID,
                                        SOIL_FLAG_MIPMAPS | SOIL_FLAG_INVERT_Y | SOIL_FLAG_NTSC_SAFE_RGB | SOIL_FLAG_COMPRESS_TO_DXT | SOIL_FLAG_POWER_OF_TWO);
        if (Tile[x] == 0) ReportGLError(SOIL_last_result(), &Status);
        if (Status == 0) break;
    }

    return Status;
}

int DrawGLTileMap(void)   // Draw The Background Scenery Tiles Here
{
    signed int CurrentX = 0, CurrentY = 0;

    CurrentX = Player.XCoord;
    CurrentY = Player.YCoord;

    glTranslatef(-21.15f, 14.1f, 0.0f);

    while (CurrentY < Player.YCoord + 16)
    {
        if (CurrentY < Area[Player.Area].YWidth)
        {
            while (CurrentX < Player.XCoord + 21)
            {
                glTranslatef(2.0f, 0.0f, 0.0f);

                if (Area[Player.Area].YWidth - CurrentY <= Area[Player.Area].YWidth)
                {
                    if (CurrentX < Area[Player.Area].XWidth)
                    {
                        if (Area[Player.Area].XWidth - CurrentX <= Area[Player.Area].XWidth)
                        {
                            Area[Player.Area].TileTypes = (Area[Player.Area].TileTypes + (CurrentY * Area[Player.Area].XWidth)) + CurrentX;
                            glBindTexture(GL_TEXTURE_2D, Tile[*Area[Player.Area].TileTypes]);

                            glBegin(GL_QUADS);
                            glTexCoord2f(0.0f, 0.0f); glVertex3f(-1.0f, -1.0f, -33.0f);   // Bottom left
                            glTexCoord2f(1.0f, 0.0f); glVertex3f( 1.0f, -1.0f, -33.0f);   // Bottom right
                            glTexCoord2f(1.0f, 1.0f); glVertex3f( 1.0f,  1.0f, -33.0f);   // Top right
                            glTexCoord2f(0.0f, 1.0f); glVertex3f(-1.0f,  1.0f, -33.0f);   // Top left
                            glEnd();
                        }
                    }
                }

                Area[Player.Area].TileTypes = Area[Player.Area].TileTypesR;

                CurrentX++;
            }
        }

        CurrentX = Player.XCoord;
        glTranslatef(-42.0f, -2.0f, 0.0f);
        CurrentY++;

        Area[Player.Area].TileTypes = Area[Player.Area].TileTypesR;
    }

    Area[Player.Area].TileTypes = Area[Player.Area].TileTypesR;

    return 0;
}

Main problem one:
Vista. ( j/k ) ;p

For two:
Oh, the syntax and the dot operators! So many! My eyes! Not spaced enough! (j/k ;p )

For three:
This might work. I'm taking a guess:

glDepthFunc( GL_EQUAL );

For four:
Why the random z coordinates? -33.0f? Why not something simpler, like 30.0f?

Other than that, after going through what I could, there doesn't seem to be much that would cause the problem. Does it look like that when you move around, too? Or does it flicker in and out when you move?

In future, use the [ source][/ source] tags, please. And on these boards, images are embedded with good ol' < img src="" />.

As far as your problem goes... it looks to me like a simple rounding error. Try not translating by (2.0f, 0, 0) for each tile; instead, compute the vertex positions in each row directly from the CurrentX variable.
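For example, something along these lines (a rough sketch using the names from your posted code; the exact offsets and signs may need adjusting to match your view setup):

/* Instead of glTranslatef(2.0f, 0.0f, 0.0f) per tile, derive each quad's corners
   straight from the tile indices, so every tile's right edge is literally the same
   number as the next tile's left edge. */
float x0 = (CurrentX - Player.XCoord) * 2.0f;     /* left edge of this tile  */
float y0 = -(CurrentY - Player.YCoord) * 2.0f;    /* top edge (rows go down) */

glBegin(GL_QUADS);
glTexCoord2f(0.0f, 0.0f); glVertex3f(x0,        y0 - 2.0f, -33.0f);   /* bottom left  */
glTexCoord2f(1.0f, 0.0f); glVertex3f(x0 + 2.0f, y0 - 2.0f, -33.0f);   /* bottom right */
glTexCoord2f(1.0f, 1.0f); glVertex3f(x0 + 2.0f, y0,        -33.0f);   /* top right    */
glTexCoord2f(0.0f, 1.0f); glVertex3f(x0,        y0,        -33.0f);   /* top left     */
glEnd();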

Yeah, looking at the code (and yes, please use the source tags next time), it's probably a rounding or accumulation error. Floating-point numbers aren't exact, so '2.0f' might actually end up as 1.999873 or 2.000126 or something. Combine that with the fact that you are repeatedly adding these not-quite-right coordinates incrementally, and the error term can quickly become significant. Different video cards sometimes have different internal precision and rounding conventions as well, which is probably why the issue only shows up on the one card.

Actually, 2.0f had damn well better equal exactly 2.0f; it is a power of two. However, 16.0f + 2.0f is theoretically 18.0f, but might just round to 18.0021 or somesuch. And then you subtract 42 from that. And then you add 2 a whole ton. And then you subtract 42 again... it's potentially glitchy.

Sorry, but that's just wrong. The IEEE-754 standard mandates that the result of every arithmetic operation on floating-point numbers be represented exactly whenever that is possible. The maximum permissible error is 0.5 ulp, i.e. half the distance between two successive representable values; so if your result is an integer whose magnitude is less than 2^24 in single precision or 2^53 in double precision, it is guaranteed to be exact. If 2.0 + 16.0 ever fails to compare equal to 18.0, your processor's floating-point math is buggy.

Of course, if you start using values like 3.0/7.0 that can't be represented exactly in binary, then you're asking for trouble; in particular, a value you might expect to be stored as a 32-bit float may actually be held in an 80-bit FPU register, so intermediate results can carry more precision than you planned for. But you can't get less precision that way, and in any case the difference would be far smaller than in your example.
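If you want to convince yourself, here's a quick standalone check (nothing specific to the code in this thread; 0.1 + 0.2 is just the standard textbook example of a non-representable sum):

#include <stdio.h>

int main(void)
{
    float a = 16.0f + 2.0f;        /* 16, 2 and 18 are all exactly representable      */
    printf("%d\n", a == 18.0f);    /* prints 1 on IEEE-754 hardware                   */

    double b = 0.1 + 0.2;          /* 0.1 and 0.2 are not exactly representable...    */
    printf("%d\n", b == 0.3);      /* ...so this prints 0; b is 0.30000000000000004   */
    return 0;
}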
