Sean_Seanston

Trouble Drawing Sprites with Orthographic Projection...


What I'm trying to do is simply set up a sprite sheet system where I can load an image into a texture and draw part of it onto a quad. I'm having some issues though, particularly with the use of an orthographic matrix and how that works compared with using a perspective matrix on a 3D object.

 

If anyone has a link to any implementation of this kind of thing, that might be very helpful since I haven't been able to find an example that actually used shaders and an orthographic matrix etc.

 

What I'm currently doing is based on the orthographic projection example at the bottom of this page:

http://www.arcsynthesis.org/gltut/Positioning/Tutorial%2004.html

In which he just outputs gl_Position in the vertex shader as the position of the object added to an offset vector.

(I assume that in practice an orthographic matrix would be used, but what exactly is the difference between that offset method and using a proper orthographic matrix? Am I right in assuming that an orthographic matrix lets you position objects in actual screen coordinates, which seems like what most people would want when drawing orthographically, rather than in "world" coordinates where even 5.0f might be almost off the screen?)

 

At the moment I'm able to display a quad in solid white (I've disabled textures until I get the positioning right), but when I try to use an orthographic matrix I get a blank screen.

 

Maybe I'm doing some part of it completely wrong, but basically I just copied what I did for rendering 3D. The vertex shader code that doesn't work looks like this:

gl_Position = projMatrix*modelViewMatrix*vec4( inPosition, 1.0 );

Where projMatrix is a mat4 uniform that I set with an orthographic matrix from within my C++ program. That gets set using:

glm::ortho( 0.0f, ( float ) width, 0.0f, ( float ) height );

every time the reshape() function is called, where width and height are the dimensions of the screen.
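Put together, the C++ side looks roughly like this (simplified; assuming projMatrix is uploaded with the same setUniform() helper I use for the model-view matrix below, which would wrap glUniformMatrix4fv):

// Rough sketch of reshape(): rebuild the projection whenever the window size changes.
void reshape( int width, int height )
{
    glViewport( 0, 0, width, height );

    // Pixel coordinates: (0,0) at the bottom left, (width,height) at the top right,
    // with glm::ortho's default near/far of -1..1.
    glm::mat4 proj = glm::ortho( 0.0f, ( float ) width, 0.0f, ( float ) height );

    sProgram.setUniform( "projMatrix", proj );
}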

 

Does that seem to make sense in theory? The model-view matrix is set like this:

glm::vec2 spritePos( posX, posY );
// Translate the quad to the sprite's position. Note the z value of 1.0f: with glm::ortho's
// default near/far of -1..1, that lands right on the edge of the visible depth range.
glm::mat4 modelView = glm::translate( glm::mat4( 1.0f ), glm::vec3( spritePos, 1.0f ) );
sProgram.setUniform( "modelViewMatrix", modelView );

Is there anything wrong there? I saw something similar in an example, but perhaps I missed something important somewhere.

 

Another problem I've noticed happens even when I use the matrix-less position code in the shader...

gl_Position = vec4( inPosition.x + offset.x, inPosition.y + offset.y, inPosition.z, 1.0 );

If I try to enable texturing, the texture isn't displayed for some reason, and the white quad that was drawn fine previously gets distorted and comes out as a smaller triangle. I THINK I've set up the stride and offset parameters etc. correctly, from comparing them to examples that did work, but another thought I've had is that I'm setting up the position attribute in a different function from the texture coordinate attribute. I don't think I've ever really seen that done before, but I don't see why it would cause a problem.

Is it bad to bind a VBO and VAO, call glEnableVertexAttribArray( 0 ) and glVertexAttribPointer(), then unbind, and later bind again and try to set up attribute 1?

 

Does this seem to make sense?

Position:

glEnableVertexAttribArray( 0 );
glVertexAttribPointer( 0, 3, GL_FLOAT, GL_FALSE, sizeof( glm::vec3 ) + sizeof( glm::vec2 ), 0 );

Then later texture:

glEnableVertexAttribArray( 1 );
glVertexAttribPointer( 1, 2, GL_FLOAT, GL_FALSE, sizeof( glm::vec3 ) + sizeof( glm::vec2 ), reinterpret_cast<void*>( sizeof( glm::vec3 ) ) );
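For what it's worth, those stride and offset values assume the VBO itself is interleaved as [position vec3][texcoord vec2] per vertex, i.e. something like this (a sketch only; the vertex values and the vbo handle are placeholders):

// Sketch of the interleaved layout the attribute pointers above expect:
// each vertex is sizeof( glm::vec3 ) + sizeof( glm::vec2 ) = 20 bytes.
struct SpriteVertex
{
    glm::vec3 pos;
    glm::vec2 uv;
};

SpriteVertex quad[4] =
{
    { glm::vec3( 0.0f, 0.0f, 0.0f ), glm::vec2( 0.0f, 0.0f ) },
    { glm::vec3( 1.0f, 0.0f, 0.0f ), glm::vec2( 1.0f, 0.0f ) },
    { glm::vec3( 1.0f, 1.0f, 0.0f ), glm::vec2( 1.0f, 1.0f ) },
    { glm::vec3( 0.0f, 1.0f, 0.0f ), glm::vec2( 0.0f, 1.0f ) }
};

glBindBuffer( GL_ARRAY_BUFFER, vbo );
glBufferData( GL_ARRAY_BUFFER, sizeof( quad ), quad, GL_STATIC_DRAW );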

I've spent the whole day changing things and trying to figure it out but I haven't been able to and I'm out of ideas.


All the code you posted looks good.

 

1. Verify that the texture coordinates are correct by using (tx, ty) to generate a gradient, e.g.:

color = vec4(texCoord.x, 0.0, texCoord.y, 1.0);

 

If the mesh still has a solid color, your texture coordinates are wrong. The range is [0, 1], unless you are repeating the texture.

 

2. Check for GL errors with glGetError() for each call during texture creation.

 

3. Verify that the texture is correctly loaded by, for example, reading directly from it:

http://www.opengl.org/sdk/docs/man/html/glGetTexImage.xhtml

 

Compare the first pixel and the first pixel of the second row against your image, or if you want to decode RGBA values:

#include <cstdint>
#include <cstdio>

// note that I'm just writing this off the top of my head
uint32_t RGBA8( uint32_t r, uint32_t g, uint32_t b, uint32_t a )
{
	return r | (g << 8) | (b << 16) | (a << 24);
}
void printColor( uint32_t color )
{
	int r = color & 255; color >>= 8;
	int g = color & 255; color >>= 8;
	int b = color & 255; color >>= 8;
	int a = color;
	
	printf("RGBA(%d, %d, %d, %d)\n", r, g, b, a);
}

// Note that your image might be BGRA if it's a BMP, so just use Photoshop to read the colors manually.
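The readback itself would be something like this (just a sketch; texWidth, texHeight and textureId stand in for whatever you created the texture with):

#include <vector>

std::vector<uint32_t> pixels( texWidth * texHeight );
glBindTexture( GL_TEXTURE_2D, textureId );
glGetTexImage( GL_TEXTURE_2D, 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data() );

printColor( pixels[0] );        // first pixel (bottom row in GL's convention)
printColor( pixels[texWidth] ); // first pixel of the second row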

4. Use math to determine the values in the shader, to make sure they are actually getting there properly.

For example, if the position is supposed to be (5, 5), then the correct color is mid-gray (0.5, 0.5, 0.5), and black or white means the value arriving in the shader is wrong:

color = vec4(position.xy / 5.0 - vec2(0.5), 0.5, 1.0);

 

5. Verify that your reshape() function actually gets called.

 

And so on.




 

After messing around with things for a while I managed to get the orthographic matrix working, and as far as I can remember the code itself was correct. One thing I was doing was passing in the same coordinates I had used before switching to the orthographic matrix, so everything would have been rendered much, much smaller and I probably just missed the few pixels in the corner...

 

So that's good.

 

As for the textures, it turned out I was interleaving the attributes wrong in the buffer. I had set up the VAO correctly, but I hadn't actually interleaved the data in the VBO itself.

 

Then I could load some JPEGs fine, such as one from a tutorial I downloaded, but others were all distorted. After re-saving them differently (without interlacing/progressive encoding, I think) they wound up working. The loading functions must just not have been expecting that format... frustrating.

 

One more problem remains though: I made it so I can move the sprites around the screen, and implemented an interpolation effect like in this guide here (which I've used in the past):

http://dewitters.koonsolo.com/gameloop.html

 

That's fine and all, but the result is that the sprites are being rendered at screen coordinates that fall between whole integer values, and I think that's causing a problem, because it doesn't happen when I turn interpolation off.

 

What's happening now is that when I move the sprites around (currently just a big group of font sheet letter sprites I was testing with), there's often an ugly texture bleeding effect where texels from one edge of a sprite's region of the sheet show up along the opposite edge. That's definitely what's happening, because I put a single-pixel line of a different colour along one edge of the sprite sheet and the bleed became that colour.

 

I've read up on this... but nothing has particularly helped. I tried GL_CLAMP_TO_EDGE but that didn't work, and neither did GL_CLAMP_TO_BORDER. I'm already using GL_NEAREST for sampling.

 

I figure this must be a very common problem and you'd expect there to be some standard solution. I've seen people suggest adding border areas to texture atlases but that seems very elaborate.

I've seen a reference to texelFetch():

http://stackoverflow.com/questions/19323188/texture-coordinates-near-1-behave-oddly/19323954

 

But maybe I'll just try making it sample from the centre of each texel, like it mentions there and in some other places, if I can manage to figure that out...


Trying to figure it out now...

 

This page is helpful in understanding the concept:

http://gamedev.stackexchange.com/questions/46963/how-to-avoid-texture-bleeding-in-a-texture-atlas

 

Right now my code looks like this:

    //Set up texture coordinates (Vertically inverted to start from top left)
    const float tw = float( spriteWidth ) / ( float ) texWidth; //Width of sprite relative to texture
    const float th = float( spriteHeight ) / ( float ) texHeight; //Height of sprite relative to texture
    const int numPerRow = texWidth / spriteWidth;
    const float tx = ( float )( frameIndex % numPerRow ) * tw; //X texture coordinate
    const float ty = ( 1.0f - th ) - ( float )( frameIndex / numPerRow ) * th; //Y texture coordinate

    glm::vec2 texVerts[] =
    {
        glm::vec2( tx, ty ),
        glm::vec2( tx + tw, ty ),
        glm::vec2( tx + tw, ty + th ),
        glm::vec2( tx, ty + th )
    };

Does that look OK in the first place? It seems to be working, at least apart from the bleeding. Notice that I've set it up to start sampling the texture from the top left instead of the bottom left.

 

Now I'm trying to account for this centre of pixel sampling, but I haven't been quite able to figure it out yet... none of my attempts looked quite right.

 

tw and th must stay the same I imagine... but if I'm adding an offset to tx and ty so that they sample from the centre of the pixel, will it matter that what was previously 1.0f will be higher than that? Or does GL_CLAMP_TO_EDGE fix that problem?


Use texture arrays if you have sprites with constant sizes, like tiles. Otherwise you need to create a proper texture atlas with "enough" distance between sprites.

If you are going to use a texture atlas, you will have some problems with mipmapping, if you need it, unless you create the mipmap levels yourself.

 

Clamp to edge won't help you here (and note that the default wrap mode is actually GL_REPEAT, not clamp).

Just shift your texture coordinates inwards by 1/4 of a pixel (or less), which is 0.25 / texWidth and 0.25 / texHeight respectively.

 

You can observe a working implementation here:

https://github.com/fwsGonzo/dm2/blob/master/src/spritegen.cpp
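Applied to the texture coordinate code from earlier in the thread, the idea looks roughly like this (just a sketch, not the linked code; insetX/insetY are made-up names):

    // Shrink the sprite's texture rectangle inwards by a fraction of a texel so that
    // GL_NEAREST never samples the neighbouring region of the atlas.
    const float insetX = 0.25f / ( float ) texWidth;
    const float insetY = 0.25f / ( float ) texHeight;

    glm::vec2 texVerts[] =
    {
        glm::vec2( tx + insetX,      ty + insetY ),
        glm::vec2( tx + tw - insetX, ty + insetY ),
        glm::vec2( tx + tw - insetX, ty + th - insetY ),
        glm::vec2( tx + insetX,      ty + th - insetY )
    };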



Well I worked it out in the end by shifting the texture coordinates in. Strange though, 0.5 of a pixel wouldn't work right because it was cutting out a pixel of texture, and I think even 0.25 had some problem or other... perhaps the movement looked distorted or something; either way it seemed to get better with smaller values. Seems fine now.

 

One last thing though... if I want to avoid unnecessary draw calls by batching together sprites that use the same texture, is instancing the standard way to go about that? I've only looked into it a little bit but it seems to be what I'd need.



 

If you are just using quads, just brute-force it and call it a day. You aren't going to be bandwidth limited, you aren't going to be CPU-limited (except on potatoes, and they are thankfully close to extinct now), and the GPU is more or less going to be idle.

 

About "perhaps the movement looked distorted or something"

Depends. If you're drawing without interpolation then yes - there's going to be subpixel movement/aliasing. Nothing you can do except tighten the shift more, or use supersampling.

For a regular 2D game with nothing special going on 2x supersampling is probably not going to faze the GPU. All that means is rendering to an offscreen buffer that is twice the size of the original screen, then use glFramebufferBlit to downsample to normal size.
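Roughly like this (a sketch only; error checking and any depth attachment omitted, and width/height are the window size):

// One-time setup: a colour texture and FBO at 2x the window resolution.
GLuint ssTex, ssFbo;
glGenTextures( 1, &ssTex );
glBindTexture( GL_TEXTURE_2D, ssTex );
glTexImage2D( GL_TEXTURE_2D, 0, GL_RGBA8, width * 2, height * 2, 0,
              GL_RGBA, GL_UNSIGNED_BYTE, nullptr );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );

glGenFramebuffers( 1, &ssFbo );
glBindFramebuffer( GL_FRAMEBUFFER, ssFbo );
glFramebufferTexture2D( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, ssTex, 0 );

// Each frame: render the scene into ssFbo with glViewport( 0, 0, width * 2, height * 2 ),
// then downsample into the default framebuffer.
glBindFramebuffer( GL_READ_FRAMEBUFFER, ssFbo );
glBindFramebuffer( GL_DRAW_FRAMEBUFFER, 0 );
glBlitFramebuffer( 0, 0, width * 2, height * 2,
                   0, 0, width, height,
                   GL_COLOR_BUFFER_BIT, GL_LINEAR );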

 

It all depends what you are doing. If you have no choice but to render 3k quads per frame, then you will need to render efficiently and keep the shaders minimal. Otherwise, just write good code and make sure you're in a good spot, architecturally, with your thingamajig.

I'd say just sorting by shader -> texture and sending everything that is actually on screen to the GPU each frame is the simplest flexible-enough approach, as it lets you animate tiles without effort.
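As a sketch of that (Sprite and drawQuad() are made-up placeholders, not anything from your code):

#include <algorithm>
#include <vector>

struct Sprite
{
    GLuint texture;
    glm::vec2 position;
};

void drawSprites( std::vector<Sprite>& visible )
{
    // Sort so sprites sharing a texture are drawn back to back,
    // then only rebind when the texture actually changes.
    std::sort( visible.begin(), visible.end(),
               []( const Sprite& a, const Sprite& b ) { return a.texture < b.texture; } );

    GLuint bound = 0;
    for ( const Sprite& s : visible )
    {
        if ( s.texture != bound )
        {
            glBindTexture( GL_TEXTURE_2D, s.texture );
            bound = s.texture;
        }
        drawQuad( s ); // hypothetical helper: sets the model-view uniform and draws the quad
    }
}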

If your screen is very large and there are say 500+ tiles on-screen at all times, you may want to consider only updating tiles when something actually changes.

But I think that is largely futile, as mechanically good players generally move fast; it feels like optimizing for a scenario that isn't the worst case (and it just adds more if() statements). I guess it also depends on the physics vs. rendering update ticks. If you can prove that you only get one camera update per two frames, it will pay off, but then that just sounds like a frameskip scenario, which is simpler to implement. I guess.

 

Bah. Just do what you think you need to do.


K. I think I'll just try to remove the worst and most obvious inefficiencies, for my own satisfaction at least... and probably implement sprites with this instancing thing just to see what it's like and learn more about OpenGL. Thanks.

