OpenGL 2D Rendering Engine Architecture

Hey,

 

I'm new to this forum, and to OpenGL as well, but I'm hoping to learn more about OpenGL by developing my own 2D rendering engine. I was just curious whether anyone had any good references on how such an engine should be designed from a high-level architectural standpoint. Scouring the web, I found very few examples of 2D OpenGL rendering engines, and those I did find had little to no design value. I would like to point out that I am purely interested in a rendering engine based on OpenGL, not a game engine.

 

Any help would be appreciated.

 

Thanks!

I would start by thinking of all the abstractions you want from a high level. Then think of the abstractions required to support those, and keep working your way down until you hit OpenGL itself.

 

I would think that for a 2D engine you would need the concept of layers: background and foreground layers. I would guess they need to be z-sorted as well. The layers are probably bigger than the screen, so they need some dimensions, and they might need to rotate, so a layer would need a rotation parameter. You probably want sprites to live on the layers, so layers need a list of sprites and where to put them (sprite position might be part of the sprite). You probably also need some kind of container for the layers to keep them in the right order for rendering (although simply sorting the list of layers every time you render probably won't show up in a profiler).

 

Sprites probably contain some kind of graphic, a position, a scale, and a rotation. If a sprite is animated, it might also contain an animation time, a looping flag, and a list of graphics to animate between.

 

Your renderer might take a list of layers, sort them, then iterate over each layer's sprites, rendering the graphic for each one. Try to give each object you design one and only one job.
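
In C++, those pieces might start to look something like this (purely a sketch of the shape, with made-up names and fields, not a prescribed design):

#include <vector>

// A sprite only knows what to draw and where (within its layer).
struct Sprite {
    unsigned int texture;    // OpenGL texture handle
    float x, y;              // position within the layer
    float scaleX, scaleY;
    float rotation;          // rotation around the sprite's centre
};

// A layer owns its sprites plus its own transform and ordering.
struct Layer {
    float width, height;     // can be larger than the screen
    float rotation;
    int   zOrder;            // used to sort layers back-to-front
    std::vector<Sprite> sprites;
};

// The renderer's one job: turn sorted layers into OpenGL calls.
class Renderer {
public:
    void render(std::vector<Layer>& layers);  // sort by zOrder, then draw each layer's sprites
};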

 

Hope this gets you thinking in the right direction. 

 

cheers, 

 

Bob

Okay, this is my perspective on game engine design (at least the graphics part). I'm assuming you have already mastered your desired programming language (i.e. C/C++, C#, Java, Objective-C, whatever) and already know how to use your compiler to build programs/apps for your target platform (if you don't know these things, please specify), so IMO the best thing to do when starting off is to do a bit of brainstorming about what functionality you'd like in your 2D rendering engine. Examples: 2D sprites, animated sprites, scaling, rotation, z-sorting, sprites that flash a certain colour when you shoot them, transparency, colour keying, etc. After you have all of that down, create a flow chart or some pseudocode to get a visual idea of what your engine and its APIs would look like. Then you'll be better prepared to write your 2D engine.
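
For example, a first pass at the engine's public API might be nothing more than a header sketch like this (all of these names are hypothetical placeholders, just to make the brainstorming concrete):

// Hypothetical API surface for a small 2D renderer: a planning sketch, not a final design.
struct Texture;   // opaque handle to an image uploaded to the GPU
struct Sprite;    // position, size, rotation, colour, source rectangle, etc.

Texture* texture_load( const char* path );
Sprite*  sprite_create( Texture* tex );
void     sprite_set_position( Sprite* s, float x, float y );
void     sprite_set_rotation( Sprite* s, float degrees );
void     renderer_begin_frame();
void     renderer_draw( Sprite* s );
void     renderer_end_frame();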

 

What I won't assume is that you are already familiar with OpenGL (or at least that you're an advanced user yet), though it would help if you could share how much programming knowledge and/or OpenGL experience you already have... so I'll go ahead and give you some source examples. I'll start off with a fixed-pipeline example of how to set up a 2D orthographic view:

 

//--------------------------------------------------------------------------------------
// Name: glEnable2D
// Desc: Enables 2D rendering via Ortho projection.
//--------------------------------------------------------------------------------------
GLvoid glEnable2D( GLvoid )
{
	GLint iViewport[4];

	// Get a copy of the viewport
	glGetIntegerv( GL_VIEWPORT, iViewport );

	// Save a copy of the projection matrix so that we can restore it 
	// when it's time to do 3D rendering again.
	glMatrixMode( GL_PROJECTION );
	glPushMatrix();
	glLoadIdentity();

	// Set up the orthographic projection. Note that top and bottom are
	// swapped so that (0,0) ends up at the top-left corner of the screen,
	// which is usually what you want for 2D.
	glOrtho( iViewport[0], iViewport[0]+iViewport[2],
			 iViewport[1]+iViewport[3], iViewport[1], -1, 1 );
	glMatrixMode( GL_MODELVIEW );
	glPushMatrix();
	glLoadIdentity();

	// Make sure depth testing and lighting are disabled for 2D rendering until
	// we are finished rendering in 2D
	glPushAttrib( GL_DEPTH_BUFFER_BIT | GL_LIGHTING_BIT );
	glDisable( GL_DEPTH_TEST );
	glDisable( GL_LIGHTING );
}


//--------------------------------------------------------------------------------------
// Name: glDisable2D
// Desc: Disables the ortho projection and returns to the 3D projection.
//--------------------------------------------------------------------------------------
void glDisable2D( GLvoid )
{
	glPopAttrib();
	glMatrixMode( GL_PROJECTION );
	glPopMatrix();
	glMatrixMode( GL_MODELVIEW );
	glPopMatrix();
}

 

This code should be really easy to understand once you know OpenGL well enough. Starting in glEnable2D, it grabs a copy of the current viewport and uses it to set up the orthographic projection. Before setting up that projection, it saves a copy of the previous projection matrix (assuming it's your 3D perspective matrix) by pushing it onto the stack. Then it does the same for the modelview matrix, pushing it onto the stack before loading the identity matrix. Lastly, it disables lighting and depth testing so that they don't interfere with the 2D rendering you're about to do on top of the 3D scene (e.g. a HUD). After that you're ready to start drawing in 2D! When finished, call glDisable2D. It restores the render states that glEnable2D modified and pops the projection and modelview matrices back off the stack, returning them to their original state.
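
In practice a frame might look roughly like this (render_3d_scene and the buffer swap are placeholders for whatever your own setup uses):

// Per-frame usage sketch: 3D scene first, then the 2D overlay on top.
glClear( GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT );

render_3d_scene();   // your usual perspective-projected 3D rendering

glEnable2D();        // switch to the orthographic projection set up above
// ... draw your 2D elements here (sprites, HUD, text, etc.) ...
glDisable2D();       // restore the 3D projection and the saved render states

// Swap buffers with whatever your windowing layer provides
// (SwapBuffers, SDL_GL_SwapWindow, glutSwapBuffers, ...)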

 

I'll give you an example of how to draw a 2D sprite also.

 

//--------------------------------------------------------------------------------------
// Name: draw_quad
// Desc: Renders a 2D textured quad to the screen (using GL_QUADS).
//--------------------------------------------------------------------------------------
void draw_quad( GLuint texture, float x, float y, float w, float h )
{
	float vertices[] = { x, y, x+w, y, x+w, y+h, x, y+h };
	float tex[] = { 0.0f, 1.0f, 1.0f, 1.0f, 1.0f, 0.0f, 0.0f, 0.0f };

	glBindTexture( GL_TEXTURE_2D, texture );

	glEnableClientState( GL_VERTEX_ARRAY );
	glEnableClientState( GL_TEXTURE_COORD_ARRAY );
	glTexCoordPointer( 2, GL_FLOAT, 0, tex );
	glVertexPointer( 2, GL_FLOAT, sizeof(float)*2, vertices );
	glDrawArrays( GL_QUADS, 0, 4 );
	glDisableClientState( GL_TEXTURE_COORD_ARRAY );
	glDisableClientState( GL_VERTEX_ARRAY );
}

//--------------------------------------------------------------------------------------
// Name: draw_quad2
// Desc: Renders a 2D textured quad to the screen (using GL_QUADS), but allows you to specify
//		 the texture coordinates yourself.
//--------------------------------------------------------------------------------------
void draw_quad2( GLuint texture, float* tex, float x, float y, float w, float h )
{
	float vertices[] = { x, y, x+w, y, x+w, y+h, x, y+h };

	glBindTexture( GL_TEXTURE_2D, texture );

	glEnableClientState( GL_VERTEX_ARRAY );
	glEnableClientState( GL_TEXTURE_COORD_ARRAY );
	glTexCoordPointer( 2, GL_FLOAT, 0, tex );
	glVertexPointer( 2, GL_FLOAT, sizeof(float)*2, vertices );
	glDrawArrays( GL_QUADS, 0, 4 );
	glDisableClientState( GL_TEXTURE_COORD_ARRAY );
	glDisableClientState( GL_VERTEX_ARRAY );
}

 

Keep in mind that these two functions aren't optimized, so I encourage you to do better! They should be rather straightforward; the difference between them is that the first simply maps the entire texture to the quad, while the second takes custom texture coordinates, e.g. for a sprite sheet. One thing I recommend you NOT do is constantly enable/disable the client states for vertex arrays. Second, if possible, implement a basic sprite-batching routine to draw multiple sprites in one draw call if your game has very heavy sprite usage. Also, keep in mind that if you're using OpenGL ES, you can't use GL_QUADS (which kinda sucks IMO), so you'll have to change it to GL_TRIANGLE_FAN, or split each quad into 2 triangles if you're going to batch sprites.
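
Here's a very rough sketch of the batching idea (untested, assuming the usual GL headers plus C++11, but it shows the shape of it): accumulate two triangles per sprite, then submit everything that shares a texture in a single draw call.

#include <vector>

struct SpriteBatch {
    std::vector<float> verts;   // x,y per vertex
    std::vector<float> tex;     // u,v per vertex

    // tc points to 8 floats: u,v pairs in the same order as the quad
    // vertices (top-left, top-right, bottom-right, bottom-left).
    void add( float x, float y, float w, float h, const float* tc )
    {
        // Triangle 1: top-left, top-right, bottom-right
        push( x,   y,   tc[0], tc[1] );
        push( x+w, y,   tc[2], tc[3] );
        push( x+w, y+h, tc[4], tc[5] );
        // Triangle 2: top-left, bottom-right, bottom-left
        push( x,   y,   tc[0], tc[1] );
        push( x+w, y+h, tc[4], tc[5] );
        push( x,   y+h, tc[6], tc[7] );
    }

    // Submits everything queued so far with one glDrawArrays call.
    void flush( GLuint texture )
    {
        if( verts.empty() ) return;
        glBindTexture( GL_TEXTURE_2D, texture );
        glVertexPointer( 2, GL_FLOAT, 0, verts.data() );
        glTexCoordPointer( 2, GL_FLOAT, 0, tex.data() );
        glDrawArrays( GL_TRIANGLES, 0, (GLsizei)( verts.size() / 2 ) );
        verts.clear();
        tex.clear();
    }

private:
    void push( float x, float y, float u, float v )
    {
        verts.push_back( x ); verts.push_back( y );
        tex.push_back( u );   tex.push_back( v );
    }
};

Note that flush doesn't touch the client states at all; enable GL_VERTEX_ARRAY and GL_TEXTURE_COORD_ARRAY once at startup rather than per draw, per the advice above.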

 

Also, before I forget: always remember that OpenGL texture coordinates have their origin at the bottom-left corner instead of the top-left corner like Direct3D. I wrote a basic function that takes care of generating OpenGL-compatible texture coords for me.

 

// Converts a pixel rectangle within a texture into the 8 texture coordinates
// expected by draw_quad2 (top-left, top-right, bottom-right, bottom-left).
// The Y coordinates are flipped to account for OpenGL's bottom-left origin.
void get_tex_coords( struct texture_t* t, struct Rect* r, float* tc )
{
    tc[0] = r->x1 / t->width;                    // top-left
    tc[1] = 1.0f - ( r->y1 / t->height );
    tc[2] = r->x2 / t->width;                    // top-right
    tc[3] = 1.0f - ( r->y1 / t->height );
    tc[4] = r->x2 / t->width;                    // bottom-right
    tc[5] = 1.0f - ( r->y2 / t->height );
    tc[6] = r->x1 / t->width;                    // bottom-left
    tc[7] = 1.0f - ( r->y2 / t->height );
}

 

The first two structure parameters are custom, but the necessary fields should be common sense (forgive me if I'm overwhelming you already). So with all this you can render 2D textured sprites in OpenGL. But what about rotation and scaling? That's a good question. If you plan on rotating your sprites, rotate them around the Z axis. But before rotating or scaling anything, you should translate to your point of origin and "draw around" that point, not "draw at" it; in other words, center your sprite around the destination point. If you don't, it's not going to work right and will likely piss you off. Here's another example.

 

Given some random sprite dimensions:

Position (X = 300, Y = 200)

Size (Width = 64, Height = 64)

 

draw_quad( texture, X-(Width/2.0), Y-(Height/2.0), Width, Height );

 

This will center your sprite and it will rotate properly.  When I was new to writing 2D OpenGL games, I had to figure this out on my own, so learn from my trial and error.
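
Putting that together with the modelview matrix, a rotated and scaled sprite draw might look something like this (fixed-function pipeline, an untested sketch building on the draw_quad function above):

// Draws a sprite rotated/scaled around its own centre.
void draw_sprite_rotated( GLuint texture, float x, float y, float w, float h,
                          float angleDegrees, float scale )
{
	glMatrixMode( GL_MODELVIEW );
	glPushMatrix();

	glTranslatef( x, y, 0.0f );                  // move to the sprite's centre point
	glRotatef( angleDegrees, 0.0f, 0.0f, 1.0f ); // rotate around the Z axis
	glScalef( scale, scale, 1.0f );

	// Draw around the origin so the rotation pivots on the centre.
	draw_quad( texture, -w * 0.5f, -h * 0.5f, w, h );

	glPopMatrix();
}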

 

Now, there's one more thing I want to share with you. If you decide you want to use OpenGL ES 2.0, OpenGL 3.x, or OpenGL 4.x, then the above implementation of glEnable2D/glDisable2D isn't going to work on its own. You'll have to build the orthographic matrix manually and feed it to your vertex program. You can use this function to create a custom orthographic projection:

 

#include <string.h>   // for memcpy

// Builds a column-major orthographic projection matrix, equivalent to what
// glOrtho produces. Renamed from glOrtho so it doesn't clash with the function
// declared in gl.h; zNear/zFar avoid the near/far macros on Windows.
void build_ortho_matrix( float* out, float left, float right,
                         float bottom, float top, float zNear, float zFar )
{
    float a = 2.0f / (right - left);
    float b = 2.0f / (top - bottom);
    float c = -2.0f / (zFar - zNear);

    float tx = -(right + left) / (right - left);
    float ty = -(top + bottom) / (top - bottom);
    float tz = -(zFar + zNear) / (zFar - zNear);

    float ortho[16] = {
        a,  0,  0,  0,
        0,  b,  0,  0,
        0,  0,  c,  0,
        tx, ty, tz, 1
    };

    memcpy( out, ortho, sizeof(float) * 16 );
}

 

Just multiply the resulting matrix against your vertices in your vertex program, the same way you'd use a perspective matrix for 3D. I don't know if you've touched vertex or fragment programs (aka shaders) with GLSL yet, but if you haven't, don't worry about it too much just yet and focus on the basics until you're ready. The code and examples I've given you should be enough to get started (at least I hope so).
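
As a minimal sketch of what that looks like (ES 2.0-style GLSL; desktop core profiles use in/out instead of attribute/varying, and program, screenWidth and screenHeight are placeholders from your own code):

// Minimal vertex program that applies the orthographic matrix.
const char* vs_source =
    "attribute vec2 a_position;                                    \n"
    "attribute vec2 a_texcoord;                                    \n"
    "uniform   mat4 u_projection;                                  \n"
    "varying   vec2 v_texcoord;                                    \n"
    "void main()                                                   \n"
    "{                                                             \n"
    "    v_texcoord  = a_texcoord;                                 \n"
    "    gl_Position = u_projection * vec4( a_position, 0.0, 1.0 );\n"
    "}                                                             \n";

// After compiling/linking the program, build and upload the matrix:
float ortho[16];
build_ortho_matrix( ortho, 0.0f, (float)screenWidth, (float)screenHeight, 0.0f, -1.0f, 1.0f );

glUseProgram( program );
glUniformMatrix4fv( glGetUniformLocation( program, "u_projection" ),
                    1, GL_FALSE, ortho );   // already column-major, so no transpose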

 

I use this code in my own 2D OpenGL games (and I have a handful of them), and so far it's portable enough that I don't have to modify it much. OpenGL ES 1.1 was a challenge, though. I hope this helps you in some way (or at least a random person who finds this on Google), and feel free to ask about anything I might have left out. Also, if possible, please share your skill level as a programmer (if you haven't already) to avoid any possible redundancy.

 

Shogun

 

EDIT: I wrote an article about 2D sprite rendering for OpenGL back in November 2007. It uses a specific extension called GL_NV_texture_rectangle. If this is relevant to you, I recommend using the ARB version instead, unless you've checked your extension list first; even if your card does support the NV extension, it's best to have a more compatible fallback. I didn't use it in my latest game engine because I was initially trying to keep compatibility with OpenGL ES, but if your game is for Windows, Mac OS X or Linux, it should be beneficial.
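
Checking the extension list is just a matter of something like this (the classic pre-3.0 glGetString approach; core 3.x+ contexts would use glGetStringi instead):

#include <string.h>

// Quick-and-dirty extension check against the classic extension string.
// Note: strstr can match prefixes of longer names; fine for a sketch.
int has_extension( const char* name )
{
    const char* ext = (const char*)glGetString( GL_EXTENSIONS );
    return ext != NULL && strstr( ext, name ) != NULL;
}

// Usage: prefer the ARB rectangle extension, fall back to the NV one.
// if( has_extension( "GL_ARB_texture_rectangle" ) ) { ... }
// else if( has_extension( "GL_NV_texture_rectangle" ) ) { ... }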

"Scouring the web, I found very few examples of 2D OpenGL rendering engines, and those I did find had little to no design value. I would like to point out that I am purely interested in a rendering engine based on OpenGL, not a game engine."

 

Have you seen libgdx? It's more than just a rendering engine, but the renderer is modularized into its own package so it's easy to study in isolation. It's built on top of OpenGL and OpenGL ES and uses modern rendering techniques. The 2D renderer is quite performant and, more importantly, has a good architecture. Great place to look for ideas.

Wow, thank you all so much for your help. That gives me a lot to look into. I should be busy for a while.

 

Thanks again!
