OpenGL Outlining character animation - methods?

Hello guys. Sorry for my English in advance.

In brief: I’d like to get my game characters outlined like in the game MyBrute (http://www.mybrutecheats.com/wp-content/uploads/2009/10/team-simulator.jpg)
The character animation consists of a number of images that move around (it all runs on OpenGL ES 2.0, iOS). I want to outline the animation dynamically while it’s playing, with all the moving parts.
I’ve already tried:
1. Making a virtual texture out of the animation, then reading back information about that texture to build the stroke (but FPS drops a lot), and then drawing the texture in the scene.
2. Making a virtual texture out of the animation and then applying a pixel shader to it while rendering.
I have noticed that drawing a texture lowers FPS by 10%, which seems quite strange to me.
Can anyone think of a solution? And why do you think drawing a texture affects FPS so much?
Thanks a lot in advance!

You can probably do this in a shader.

Something like: if this pixel is transparent, test the pixels around it to see if any are NOT transparent. If any neighbor is NOT transparent, then you are probably a border pixel and can draw it as black instead of its alpha color.

Keep in mind you will need alpha testing off for this, so that the transparent pixels actually get drawn.
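
A minimal GLSL ES 2.0 fragment shader sketch of that neighbor test (just an illustration; u_texture, u_texelSize and v_texCoord are assumed placeholder names, not anything defined in this thread):

[code]
precision mediump float;

uniform sampler2D u_texture;   // the sprite / animation texture
uniform vec2 u_texelSize;      // 1.0 / texture width and height
varying vec2 v_texCoord;

void main()
{
    vec4 color = texture2D(u_texture, v_texCoord);
    if (color.a > 0.0)
    {
        gl_FragColor = color;  // pixel has some opacity: draw it as-is
        return;
    }

    // Transparent pixel: check the four direct neighbors.
    float neighborAlpha =
          texture2D(u_texture, v_texCoord + vec2( u_texelSize.x, 0.0)).a
        + texture2D(u_texture, v_texCoord + vec2(-u_texelSize.x, 0.0)).a
        + texture2D(u_texture, v_texCoord + vec2(0.0,  u_texelSize.y)).a
        + texture2D(u_texture, v_texCoord + vec2(0.0, -u_texelSize.y)).a;

    if (neighborAlpha > 0.0)
        gl_FragColor = vec4(0.0, 0.0, 0.0, 1.0);  // border pixel: draw black
    else
        discard;                                  // fully transparent and not a border
}
[/code]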

Jeff.

[quote name='Inness' timestamp='1350290164' post='4990308']
Hello guys. Sorry for my English in advance.
[/quote]

You only need to apologize if it's your native language :)

[quote name='Inness' timestamp='1350290164' post='4990308']
In brief: I’d like to get my game characters outlined like in the game MyBrute ([url="http://www.mybrutecheats.com/wp-content/uploads/2009/10/team-simulator.jpg"]http://www.mybrutech...m-simulator.jpg[/url])
The character animation consists of a number of images that move around (it all runs on OpenGL ES 2.0, iOS). I want to outline the animation dynamically while it’s playing, with all the moving parts.
I’ve already tried:
1. Making a virtual texture out of the animation, then reading back information about that texture to build the stroke (but FPS drops a lot), and then drawing the texture in the scene.
2. Making a virtual texture out of the animation and then applying a pixel shader to it while rendering.
I have noticed that drawing a texture lowers FPS by 10%, which seems quite strange to me.
Can anyone think of a solution? And why do you think drawing a texture affects FPS so much?
Thanks a lot in advance!
[/quote]

Never, never, ever use FPS to measure performance-- it [i]doesn't scale linearly[/i]! The difference between 1,000 FPS and 900 FPS is on the order of microseconds. The difference between 60 FPS and 30 FPS is literally hundreds of times that.

For your actual problem-- I would suggest doing things in three steps:
1. At asset creation time, create a mask texture for each piece that consists of a scaled-up/blurred alpha mask ([url="http://www.valvesoftware.com/publications/2007/SIGGRAPH2007_AlphaTestedMagnification.pdf"]This[/url] could be an interesting read).
2. At runtime, draw all your regular pieces as normal.
3. Next, draw all the bits again, but this time bias the depth value slightly and use the blurred/fake distance field texture (a rough sketch of this outline pass is below).

So long as you have depth testing enabled, fillrate consumption should still be at manageable levels, since the original sprites will mask out areas already drawn to (early depth-stencil testing is out, but we don't care since this is an ultra-cheap shader) and we can skip all the complex outline shader work.
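
A rough sketch of what that biased outline pass could look like in GLSL ES 2.0 (just an illustration; the uniform/attribute names u_mvp, u_depthBias, u_outlineMask and u_outlineColor are placeholder assumptions):

[code]
// --- Outline pass, vertex shader ---
attribute vec4 a_position;
attribute vec2 a_texCoord;
uniform mat4 u_mvp;
uniform float u_depthBias;     // small positive value pushing the outline behind the sprite
varying vec2 v_texCoord;

void main()
{
    vec4 pos = u_mvp * a_position;
    pos.z += u_depthBias * pos.w;  // bias depth so the already-drawn sprite wins the depth test
    gl_Position = pos;
    v_texCoord = a_texCoord;
}

// --- Outline pass, fragment shader ---
precision mediump float;

uniform sampler2D u_outlineMask;  // pre-blurred / dilated alpha mask baked at asset-creation time
uniform vec4 u_outlineColor;
varying vec2 v_texCoord;

void main()
{
    float mask = texture2D(u_outlineMask, v_texCoord).a;
    if (mask < 0.5)
        discard;                   // outside the dilated silhouette
    gl_FragColor = u_outlineColor; // flat outline color, no expensive per-pixel work
}
[/code]

This assumes the regular sprite pass was drawn first with depth writes enabled, so the biased quad only survives the depth test in the dilated border region around the character.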

EDIT: Figures are approximate :) Edited by InvalidPointer

A bit off-topic, but:
The difference between 1000 FPS and 900 FPS is 1 ms. The difference between 60 FPS and 30 FPS is 16.6 ms.

[quote name='Ripiz' timestamp='1350405819' post='4990792']
A bit off-topic, but:
The difference between 1000 FPS and 900 FPS is 1 ms. The difference between 60 FPS and 30 FPS is 16.6 ms.
[/quote]
(1/30-1/60)/(1/900-1/1000) = 150

I guess you meant 0.1 ms.

If I understand correctly and you have a texture sprite-sheet for your characters already, I'd just load it into a texture object, bind an FBO to write to a second texture, and use a pixel shader to run a Sobel edge-detection filter on it (these are pretty well documented, but if there's a problem just ask around here). Then just blit the result texture into your original one. That should leave you with one texture with all of your outlines baked into the animation already.
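
A rough GLSL ES 2.0 sketch of such a Sobel pass over the sprite sheet's alpha channel, rendered into the FBO's color texture (u_spriteSheet and u_texelSize are assumed placeholder names):

[code]
precision mediump float;

uniform sampler2D u_spriteSheet;  // original sprite sheet
uniform vec2 u_texelSize;         // 1.0 / sheet width and height
varying vec2 v_texCoord;

float alphaAt(vec2 offset)
{
    return texture2D(u_spriteSheet, v_texCoord + offset * u_texelSize).a;
}

void main()
{
    // 3x3 Sobel kernels applied to the alpha channel only.
    float gx = -alphaAt(vec2(-1.0, -1.0)) - 2.0 * alphaAt(vec2(-1.0, 0.0)) - alphaAt(vec2(-1.0, 1.0))
             +  alphaAt(vec2( 1.0, -1.0)) + 2.0 * alphaAt(vec2( 1.0, 0.0)) + alphaAt(vec2( 1.0, 1.0));
    float gy = -alphaAt(vec2(-1.0, -1.0)) - 2.0 * alphaAt(vec2(0.0, -1.0)) - alphaAt(vec2(1.0, -1.0))
             +  alphaAt(vec2(-1.0,  1.0)) + 2.0 * alphaAt(vec2(0.0,  1.0)) + alphaAt(vec2(1.0,  1.0));
    float edge = clamp(length(vec2(gx, gy)), 0.0, 1.0);

    vec4 original = texture2D(u_spriteSheet, v_texCoord);
    // Keep the original pixel, blending a black outline on top where the alpha gradient is strong.
    gl_FragColor = vec4(mix(original.rgb, vec3(0.0), edge), max(original.a, edge));
}
[/code]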

This is something you should only need to do once, when the game starts up (or even offline, and save the results to a new sprite-sheet for the game to use).

[quote name='jeffkingdev' timestamp='1350306902' post='4990367']
Something like: if this pixel is transparent, test the pixels around it to see if any are NOT transparent. If any neighbor is NOT transparent, then you are probably a border pixel and can draw it as black instead of its alpha color.
[/quote]
This is the proper way to build outlines; to do it in a preprocessing step you need to leave enough transparent pixels between adjacent sprites in your sprite sheet and account for outline size in your draw calls, but it isn't a big deal.
Do you have variable scaling, or other features that require using a shader to generate throwaway outlines every frame?
