
About this blog

From the view of an artist trapped inside a programmer's body... or is it the other way around...

Entries in this blog

So, I've been spending a lot of time working on sprite rendering for my framework. Rectangles with images on them: 4 vertices which make a plane and are rendered as a flat 2D object. Duh.

I set up a test scenario using a few objects, each doing a particular thing: rotating, moving position, animating (sprite animation), responding to interaction, and rendering a long string of characters.

The goals:
To support OpenGL ES1 and ES2 through a transparent interface
To support sprite font rendering
To support scalable sprites (source and destination rectangles, much like XNA's SpriteBatch)

Target Platforms:
iTouch - with support for ES1 only
iPhone - with ES1 and ES2 support

I used 2 methods to see which would give me the best performance.

1) Scaling the texture matrix to the source image area, and scaling the model matrix to the size of the destination rectangle: applying local transformations, then world space transformations, to a 0-to-1 sized plane.

2) Generating a dynamic vertex buffer, generating the vertices based on the parameters passed, then rendering a triangle strip containing degenerate triangles.
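Method one's texture-matrix trick boils down to a simple UV mapping: translate into the sub-rectangle's origin, then scale by its size, all normalized by the texture dimensions. A rough C sketch of that mapping (the helper name is illustrative, not framework code):

```c
#include <assert.h>

/* Maps a vertex UV in [0,1] onto the sub-rectangle (srcX, srcY, srcW, srcH)
 * of a texW x texH texture -- the same thing the texture matrix does in
 * method one (translate to the source origin, then scale to the source size). */
static void map_uv(float u, float v,
                   float srcX, float srcY, float srcW, float srcH,
                   float texW, float texH,
                   float *outU, float *outV)
{
    float scalarX = 1.0f / texW;   /* normalize texel units to [0,1] */
    float scalarY = 1.0f / texH;
    *outU = (srcX + u * srcW) * scalarX;
    *outV = (srcY + v * srcH) * scalarY;
}
```

For example, the UV (1, 1) of a 32x64 region at (64, 0) in a 256x256 texture maps to (0.375, 0.25).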

Firstly, I focused on ES1 and the iTouch, using method one to render each sprite. This was done using the fixed function calls glTranslate, glRotate and glScale. Each sprite required 4 matrix mode switches (GL_TEXTURE, GL_MODELVIEW, GL_TEXTURE, GL_MODELVIEW) in order to maintain the current matrices using glPushMatrix and glPopMatrix. For a small number of sprites it seemed to work quite well, getting a steady frame rate of 60. But when applying this method to sprite font rendering, the frame rate dropped to around 30 to 35 frames per second.

-(void)drawSprite:(Texture2D*)texture destRect:(CGRect)destRect srcRect:(CGRect)srcRect
position:(CGPoint)position origin:(CGPoint)origin rotation:(float)angle scale:(float)scale
depth:(float)depth color:(color4)color
{
    // apply the current texture if it needs to be changed
    [self applyTexture:texture];

    // switch into the texture matrix mode
    glMatrixMode(GL_TEXTURE);

    // push the current texture matrix (identity)
    glPushMatrix();

    // work out the area we want to use from the texture
    float scalarX = 1.0f / texture.width;
    float scalarY = 1.0f / texture.height;

    // translate into the texture origin
    glTranslatef(srcRect.origin.x * scalarX, srcRect.origin.y * scalarY, 0.0f);

    // scale the UV coords to isolate the area of the texture we want to use
    glScalef(srcRect.size.width * scalarX, srcRect.size.height * scalarY, 1.0f);

    // switch back and push the current model matrix (identity)
    glMatrixMode(GL_MODELVIEW);
    glPushMatrix();

    float scaleX = 1.0f / (destRect.size.width * scale);
    float scaleY = 1.0f / (destRect.size.height * scale);

    // scale the plane to the size of the image to be displayed
    glScalef((destRect.size.width * scale), (destRect.size.height * scale), scale);

    // move into world space position (the translation happens in the
    // already-scaled space, hence dividing by the destination size)
    glTranslatef(position.x * scaleX, position.y * scaleY, depth);

    // rotate the object
    glRotatef(angle, 0.0f, 0.0f, 1.0f);

    // offset to the origin
    glTranslatef(-origin.x * scaleX, -origin.y * scaleY, 0.0f);

    // set the color
    glColor4f(color.r, color.g, color.b, color.a);

    // draw the plane
    glDrawArrays(GL_TRIANGLE_STRIP, 0, 4);

    // pop both matrices back to identity
    glPopMatrix();
    glMatrixMode(GL_TEXTURE);
    glPopMatrix();
    glMatrixMode(GL_MODELVIEW);
}

This led me to method 2: allocating a dynamic vertex buffer; generating, binding and updating it as a VBO; and doing the transformations myself. This, however, also meant I was required to write a vertex processor to apply the rotation values to each vertex.

There were 3 vertex buffers: one for the positions, one for the texture coordinates, and another for the color. Why color? So I could tint the sprite to whatever color I desired.

For each sprite to be rendered, if the sprite image was the same as the previously rendered sprite's, it would add that sprite to the buffer and update the number of stored sprites. If the image was different, it would render the current sprites in the buffer, reset the buffer information, and then add the new sprite to the buffer.

Each sprite was processed individually before being stored into the vertex buffer for rendering. It was basically the same as calling glScale to source the sub-image and scale it to the destination size, without the matrix calls. As for rotation, a vertex processor was used which took a pointer to a list of vertices, a matrix, the vertex size and the number of vertices. At first I thought this would be really inefficient, but it turns out it wasn't, probably because only 4 vertices were being processed per sprite. The whole process used pointer arithmetic to squeeze as much power out of it as possible.

(Sorry, no source code for this, but enough details to let people figure it out. HEY, it's the fun part of the all the code, I don't want to ruin it all by giving you the answer : D)
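That said, here's a generic sketch of the general idea only (not the actual implementation): quads join into one triangle strip by duplicating the last vertex of the previous quad and the first vertex of the next, producing zero-area "degenerate" triangles that the GPU skips. The rotation step is just a 2D rotate applied to each vertex in place. The names and layout here are illustrative:

```c
#include <assert.h>
#include <string.h>

typedef struct { float x, y; } vec2;

#define MAX_VERTS 1024

typedef struct {
    vec2 verts[MAX_VERTS];
    int  count;
} strip_batch;

/* Append a 4-vertex quad (strip order) to the batch, inserting two
 * duplicated vertices between quads to form degenerate triangles. */
static void batch_push_quad(strip_batch *b, const vec2 quad[4])
{
    if (b->count > 0) {
        b->verts[b->count] = b->verts[b->count - 1]; /* repeat last vertex */
        b->count++;
        b->verts[b->count++] = quad[0];              /* repeat next first vertex */
    }
    memcpy(&b->verts[b->count], quad, sizeof(vec2) * 4);
    b->count += 4;
}

/* The "vertex processor": rotate n vertices around the local origin. */
static void rotate_verts(vec2 *v, int n, float cosA, float sinA)
{
    int i;
    for (i = 0; i < n; ++i) {
        float x = v[i].x, y = v[i].y;
        v[i].x = x * cosA - y * sinA;
        v[i].y = x * sinA + y * cosA;
    }
}
```

Two quads batched this way take 10 vertices (4 + 2 degenerate + 4) but still render in a single draw call.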

The final result on the iTouch was a constant frame rate of 60. However, this version on the iPhone ran at 24 to 30 fps.

For the iPhone, ES2 support was written using a shader which performed the same tasks as method one, rendering each sprite individually. For sprite font rendering I thought it would be overkill. However, it turns out it wasn't, getting a constant frame rate of 60.

Then you would simply pass the parameters to the shader, and render.

uniform mat4 u_viewProjection;

uniform vec4 u_position;
uniform vec2 u_origin;
uniform float u_rotation; // in radians
uniform float u_scale;

uniform vec4 u_srcRect;
uniform vec4 u_destRect;
uniform vec2 u_imageDimensions;

attribute vec4 a_position;
attribute vec2 a_texCoord0;

varying highp vec2 v_texCoord0;

// Z rotation around origin
mat4 mat4_ZRotation(float radians) {
    float cosrad = cos(radians);
    float sinrad = sin(radians);
    return mat4(cosrad, -sinrad, 0.0, 0.0,
                sinrad, cosrad, 0.0, 0.0,
                0.0, 0.0, 1.0, 0.0,
                0.0, 0.0, 0.0, 1.0);
}

void main() {
    // work out the image dimensions scalar
    vec2 imageScalar = vec2(1.0 / u_imageDimensions.x, 1.0 / u_imageDimensions.y);

    // work out the UV area offset values
    vec2 uvOffset = vec2(u_srcRect.x, u_srcRect.y) * imageScalar;
    vec2 uvScale = vec2(u_srcRect.z, u_srcRect.w) * imageScalar;

    // set the texture coordinates:
    // multiply the current vertex value by the UV scale and then offset it
    // into the texture space position
    v_texCoord0 = a_texCoord0 * uvScale + uvOffset;

    // scale the vertex position to the size of the image
    vec4 posOffset = vec4(a_position.x * (u_destRect.z * u_scale), a_position.y * (u_destRect.w * u_scale), 0.0, 1.0);

    // offset to the origin
    posOffset -= vec4(u_origin, 0.0, 0.0);

    // rotate the image around the origin
    if (u_rotation != 0.0) {
        posOffset *= mat4_ZRotation(u_rotation);
    }

    // add the specified position as a world space offset and project it into view
    gl_Position = u_viewProjection * (posOffset + u_position);
}

After all this, I can say that on an iPhone with ES2, rendering each individual sprite is far more effective than using the batching method used on ES1.

With batching of sprites into a triangle strip on ES1, it is far more efficient to perform the calculations yourself than it is to perform matrix transformations on each individual sprite through the fixed function pipeline.

Haven't really written here at all, so I figure now is probably a good time to start using this thing.

So, I've been bashing out code for my iPhone stuff and have been looking at post processing as part of my engine's architecture. After much reading and browsing, the answer to it all was to render to a texture applied to a plane the size of the screen, which you would then run through a fragment shader (of course).

I spent a substantial amount of time screwing around trying to get it to work, also looking at what other methods could be used. Eventually it led me to using a Frame Buffer Object (FBO).

For the iPhone it seemed to be the ideal choice, considering it renders directly to the texture, with no need for copying pixels around using glCopyTexImage2D() or glCopyTexSubImage2D(). And it's relatively easy to set up once you figure it out.

To save some people the heartache of screwing around for days, here's the code to generate the buffers. I used the GLES2Sample that you can get from the Apple Dev site to test out the procedure, in case you were wondering.
(For ES1, just put the OES suffix in the right places.)

// The buffer we will use to present our render to texture plane in.
// We need this buffer, because it's what will be used to present
// our renders onto the iPhone screen.

glGenFramebuffers(1, &defaultFramebuffer);
glGenRenderbuffers(1, &colorRenderbuffer);

glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);
glBindRenderbuffer(GL_RENDERBUFFER, colorRenderbuffer);

// (storage for the color render buffer comes from the CAEAGLLayer,
// via renderbufferStorage:fromDrawable:, as in GLES2Sample)

// attach the color render buffer to the frame buffer
glFramebufferRenderbuffer(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_RENDERBUFFER, colorRenderbuffer);


// This is the buffer we will be rendering to and using as a texture
// on our screen plane.

// create a frame buffer that allows us to tie it to the texture
glGenFramebuffers(1, &textureFramebuffer);
glBindFramebuffer(GL_FRAMEBUFFER, textureFramebuffer);

// create the texture object
glGenTextures(1, &textureName);
glBindTexture(GL_TEXTURE_2D, textureName);

// set the texture parameter filtering (feel free to use other TexParams);
// you have to do this, forgetting to do this will make it not work
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

// define the texture storage with NULL data -- no image loading needed
// (textureWidth/textureHeight need to be powers of 2)
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

// attach the texture object to the frame buffer
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, textureName, 0);


// reset back to the main buffer
glBindTexture(GL_TEXTURE_2D, 0);
glBindFramebuffer(GL_FRAMEBUFFER, defaultFramebuffer);

When messing around with this thing and trying to find out how to get it working, I came across quite a few people who could not get the texture to bind unless they loaded an external image into it. The main reason is that if you do not specify glTexParameteri(), it just won't work. I had previously done the same, and was kind of disgusted in myself for doing so, and I just would not have it. After reading some more, and trying things out, it now works with no need for loading any texture data. Hooray.

If you're wondering how to render, here's the procedure:

1) Bind the texture frame buffer object to the frame buffer
2) Set up your scene
3) Render
4) Bind the main frame buffer object to the frame buffer
5) Bind the texture object (that has just been rendered to)
6) Render the screen plane
7) Present the buffer

If nothing is coming through, check that you have called glEnable(GL_TEXTURE_2D).

It's becoming a common occurrence that lots of talented people, wooed by the prospect of riches or just the chance to do something they really want to do, are exploited for their talents and abilities.

In the past few months I have been in a continuous battle trying to stop people from being exploited by one such known "company". To avoid any legal finger-pointing and such, I'll leave the company's name out.

If you're new to the games industry, just finished studying, or a freelance artist looking to break into the industry: be very careful, be very VERY careful. Your first game studio job can make or break how you feel about working in the games industry. Not only that, you could work away with not much to show for it.

Here's some steps you should take before you go for any job, or job interview.

Research the company - do they have a website? Have they released any games? Are they owned by a large or well-known publisher? Do they have people working for them already? Are they a new company? How many years of experience do they have?

Ask people who have been in the industry or are currently in the industry - Your best resource for information is knowing someone who is in the industry already. News travels fast, so they may hear the inside news about a particular company/person that isn't public knowledge.

How realistic is their vision - how much work do they need done, at what price, and in what time? If they expect you to work for free, with promises of reimbursement, don't fall into the trap. Leave politely and never see them again.

Did they give you a job through MSN - this may sound stupid, but believe me when I say it has been done. Any company posing as "serious" that offers you a job through MSN is a good sign of how disorganised they are. Most of the time - at least 99% of it - a company will contact you via email or telephone once they have reviewed your folio.

If the studio you are going for speaks nothing of the future, and barely mentions anything about present or past achievements - what they have already achieved and what is currently in place - run... run for your life. Because usually they're delusional and egotistical maniacs who really don't have much of a clue about what they're doing, looking to exploit others to get their work done for them, only to shaft them later on.

I myself have been exploited in the past, and those experiences with the business were essentially a waste of time. Not only was I not able to list them as experience in my resume, I was also out of pocket for the work that I did do.

So be careful. It's better to spend your time doing your own thing or continuing with your studies than to take a risk with a business/studio/company promising you fame, glory and riches. Because doing all the hard work on your folio and making yourself appealing to real studios usually lands you one of the greatest jobs in the world!!!