OpenGL Text-rendering and forward-compatibility

I am creating my own game engine, and I am trying to make it as forward-compatible with OpenGL as possible. This means not using deprecated features of OpenGL like the modelview matrix, display lists, and the like. While I'm OK with using vertex attribute arrays for everything, I'm at a loss as to how to reproduce the functionality that display lists give me.

For example, text rendering. The way I currently render text is to store the font faces as a bunch of textures (or perhaps one big one) and map that texture onto a quad, one quad per character. I draw each character at a 'cursor position' and then advance the cursor by the width of the character. I currently store these textured-quad calls and the cursor advancement in a display list, one for each glyph in the font. This makes it convenient to draw text, because all I have to do is call glCallLists() with the text string itself.

But now display lists are deprecated, and there is no longer a way to store uniform matrix multiplication operations (i.e. how I advance the 'cursor' after every character) in a sort of buffer you can call repeatedly. What am I to do to keep this forward-compatible, yet fast?

Perhaps I could use instanced draw calls: put every glyph's dimensions and texture offsets into a uniform buffer, pass the text string as a uniform array, and use gl_InstanceID to determine which character is being rendered in each instance. But then the font would have to be monospace. Something tells me I'm overthinking this and perhaps I'm trying too hard to avoid legacy coding.

[Edited by - Cathbadh on February 8, 2010 1:28:16 PM]
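(For reference, the deprecated path described above looks roughly like this; glyph_tex, glyph_w, and glyph_h are illustrative names for the per-glyph texture and metrics:)

    GLuint lists = glGenLists(256);
    for (int c = 0; c < 256; ++c) {
        glNewList(lists + c, GL_COMPILE);
        glBindTexture(GL_TEXTURE_2D, glyph_tex[c]);  /* per-glyph texture (or one atlas) */
        glBegin(GL_QUADS);
        glTexCoord2f(0, 0); glVertex2f(0, 0);
        glTexCoord2f(1, 0); glVertex2f(glyph_w[c], 0);
        glTexCoord2f(1, 1); glVertex2f(glyph_w[c], glyph_h[c]);
        glTexCoord2f(0, 1); glVertex2f(0, glyph_h[c]);
        glEnd();
        glTranslatef(glyph_w[c], 0.0f, 0.0f);  /* advance the 'cursor' */
        glEndList();
    }
    /* drawing a whole string is then a single call: */
    glListBase(lists);
    glCallLists((GLsizei)strlen(text), GL_UNSIGNED_BYTE, text);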

You could create a vertex buffer and an index buffer. Load the index buffer such that every 6 indices draw two triangles, and load the vertex buffer so that every 4 vertices (indexed by those 6 indices) draw one character quad. Loop over the vertex buffer, adjusting the positions of each set of 4 vertices to be where you want that character drawn; the next 4 vertices are character 2, and so on. Also adjust the texture coordinates to pull the correct character's cell from one large texture. The index buffer wouldn't need to be updated. You could allocate the vertex buffer with enough space to draw a single 64- or 128-character line at a time; if you need more, less, or a different position, adjust your draw calls and counts. Once the first 4 vertices are in the correct place, the second character's position and size can be derived from the one before it.
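In code, that might look something like this sketch (the Glyph metrics table, glyphs[], MAX_CHARS, vbo, and text are assumed to exist; the index buffer was filled once at startup):

    typedef struct { float x, y, u, v; } Vertex;
    typedef struct { float width, height, advance, u0, v0, u1, v1; } Glyph;

    /* fill 4 vertices per character; the static index buffer already
       holds 6 indices (two triangles) per quad */
    Vertex verts[MAX_CHARS * 4];
    float pen_x = 0.0f, pen_y = 0.0f;
    int n = 0;
    for (const char *p = text; *p && n < MAX_CHARS; ++p, ++n) {
        Glyph g = glyphs[(unsigned char)*p];  /* metrics + atlas coords */
        Vertex *q = &verts[n * 4];
        q[0] = (Vertex){ pen_x,           pen_y,            g.u0, g.v0 };
        q[1] = (Vertex){ pen_x + g.width, pen_y,            g.u1, g.v0 };
        q[2] = (Vertex){ pen_x + g.width, pen_y + g.height, g.u1, g.v1 };
        q[3] = (Vertex){ pen_x,           pen_y + g.height, g.u0, g.v1 };
        pen_x += g.advance;  /* per-glyph advance, so proportional fonts work */
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferSubData(GL_ARRAY_BUFFER, 0, n * 4 * sizeof(Vertex), verts);
    glDrawElements(GL_TRIANGLES, n * 6, GL_UNSIGNED_SHORT, 0);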

This won't be particularly easy, but I would recommend something along the following lines:

  1. Upload your font texture.
  2. Upload your string of text as a 1D texture.
  3. Use the instancing extension to render a single quad N times, where N is the number of characters.
  4. Use the instance ID to calculate the position the quad should be rendered (in the vertex shader).
  5. Use the instance ID to find the character in the string texture (in the fragment shader).
  6. Use the character to render the correct portion of the font texture.

I am pretty sure that this is the fastest possible approach to text rendering, although variations are possible; for instance, using a geometry shader or histopyramid expansion rather than the instancing extension.
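A sketch of the shader side, assuming a 16x16 glyph grid in the font texture, one GL_R8UI texel per character in the string texture, and monospace advance for clarity (all names are illustrative):

    // vertex shader (steps 3-4)
    #version 140
    in vec2 corner;          // unit-quad corner in [0,1]^2
    uniform vec2 origin;     // pen position of the first character
    uniform vec2 cell_size;  // size of one glyph cell on screen
    uniform mat4 projection;
    out vec2 cell_uv;
    flat out int char_index;
    void main() {
        vec2 pos = origin + vec2(float(gl_InstanceID) * cell_size.x, 0.0)
                 + corner * cell_size;
        gl_Position = projection * vec4(pos, 0.0, 1.0);
        cell_uv = corner;
        char_index = gl_InstanceID;
    }

    // fragment shader (steps 5-6)
    #version 140
    uniform usampler1D string_tex;  // one texel per character
    uniform sampler2D  font_tex;    // 16x16 grid of glyphs
    in vec2 cell_uv;
    flat in int char_index;
    out vec4 frag_color;
    void main() {
        uint c = texelFetch(string_tex, char_index, 0).r;
        vec2 cell = vec2(float(c % 16u), float(c / 16u));
        frag_color = texture(font_tex, (cell + cell_uv) / 16.0);
    }

The whole string is then one glDrawArraysInstanced call with 4 vertices and N instances.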

That's just it-- I know how to use element arrays and vertex buffers. I just miss being able (after some lengthy setup generating font bitmaps and display lists) to give OpenGL my text string and nothing more, and have it draw the text to the screen correctly. I'd like the CPU not to be involved in anything more at runtime. Is this a pipe dream? I know I can get monospace fonts to work as I stated above, because all the characters are the same width, so I can figure out where to place a quad simply by gl_InstanceID * glyph_width; but the moment glyph_width is not constant across all characters, all bets are off.

Quote:
Original post by Cathbadh
That's just it-- I know how to use element arrays and vertex buffers. I just miss being able (after some lengthy setup generating font bitmaps and display lists) to give OpenGL my text string and nothing more, and have it draw the text to the screen correctly.
My method will do that: upload the string as a 1D texture, bind the shader, and issue a single instanced draw call.
Quote:
I'd like the CPU not to be involved in anything more at runtime. Is this a pipe dream?
Nope, just a fair amount of work to implement.
Quote:
I know I can get monospace fonts to work as I stated above, because all the characters are the same width, so I can figure out where to place a quad simply by gl_InstanceID * glyph_width; but the moment glyph_width is not constant across all characters, all bets are off.
Use a second texture containing the width of each glyph.

I think I am already saving a lot of cycles by rendering the text to an offscreen framebuffer and just plastering that framebuffer on top of the view as a single big quad. On-screen text doesn't change very often, so I figure it would be redundant to draw the same string of text over and over again. The text is processed and rendered to the FBO only when it changes.

It just bugs me that I can get legacy OpenGL to do variable-width fonts, and I can't figure out a way to get the core spec to do the same with the same runtime complexity.

Quote:
Original post by swiftcoder
Use a second texture containing the width of each glyph.


Your method is essentially what I was thinking about in the OP, but using uniform buffers and uniform blocks instead of 1D textures. I can specify character widths in a uniform block just fine; the problem is that I cannot sum up the widths of all the previous characters to determine where the shader should place the instanced quad. It'd be great if I could somehow store the progress of the cursor across the screen in a uniform during shader execution, but as we all know, uniforms are read-only in shaderland.

Quote:
Original post by Cathbadh
I can specify character widths in a uniform block just fine; the problem is that I cannot sum up the widths of all the previous characters to determine where the shader should place the instanced quad. It'd be great if I could somehow store the progress of the cursor across the screen in a uniform during shader execution, but as we all know, uniforms are read-only in shaderland.
Uniforms aren't the only way to pass data around. Either geometry shaders or a variation on histopyramid expansion would allow you to sum up the variable distances for each character.

If you want an even simpler solution, and you know the length of the string on the CPU, use a multi-pass approach. Attach an empty 1D texture to a framebuffer object, and in the first pass, render the width of each character into its location. Then in consecutive passes, shift the texture containing the string one character to the right, and add to the existing value. For a string of length N, after N-1 passes you will have the correct offset of each character.
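One possible reading of that accumulation pass, assuming additive blending (glBlendFunc(GL_ONE, GL_ONE)) into a float 1D texture cleared to zero, plus a second 1D texture of per-glyph advances (all names illustrative). Each pass draws one full-width quad over the 1D render target:

    // fragment shader for pass 'shift' (run for shift = 1 .. N-1);
    // texel k ends up holding the summed advances of characters 0 .. k-1
    #version 140
    uniform usampler1D string_tex;  // character codes
    uniform sampler1D  width_tex;   // advance per glyph, indexed by code
    uniform int shift;
    out float added_width;
    void main() {
        int i = int(gl_FragCoord.x) - shift;
        if (i < 0) discard;  // no character this far to the left
        uint code = texelFetch(string_tex, i, 0).r;
        added_width = texelFetch(width_tex, int(code), 0).r;
    }

(A prefix sum with doubling shifts would cut this to roughly log2(N) passes, though for UI-length strings the simple version is already cheap.)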

Since the text changes rarely, I think it's better to calculate those offsets on the CPU. It's indeed simpler to implement and easier to understand than the GPU-based methods above. Moreover, you only have to do it when the text changes, and only once, so it will never be your bottleneck.

EDIT: It's also a lot simpler to implement multi-line layouts or features like kerning on the CPU.
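A minimal CPU-side sketch of that (the advance table and names are assumed; a kerning lookup would slot in where noted):

    /* compute per-character pen offsets once, whenever the text changes */
    void compute_offsets(const unsigned char *text, size_t len,
                         const float advance[256], float *offset)
    {
        float pen = 0.0f;
        for (size_t i = 0; i < len; ++i) {
            offset[i] = pen;          /* where character i starts */
            pen += advance[text[i]];  /* add a kerning-pair lookup here */
        }
    }

The resulting offsets can then go into the uniform block (or a 1D texture) that the instanced vertex shader indexes with gl_InstanceID.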

Quote:
Original post by apatriarca
Since the text changes rarely, I think it's better to calculate those offsets on the CPU.

Not only the offsets, but the entire text rendering. Rendering bitmap fonts on the CPU is dirt cheap, and vector fonts aren't slow either. Both are highly multithreadable. And since text usually doesn't change multiple times per frame, it's a rather low-frequency operation. It's a waste to use GPU geometry shaders or the like to recomposite something 60 times per second that changes once every ten minutes or so.

Just render all text on the CPU (if possible, in parallel with the GPU doing something else) into a cache texture, and upload the new data to the GPU on demand. The GPU then renders entire words or rows as a single quad each.
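A sketch of the upload step, assuming the CPU rasteriser has produced an 8-bit bitmap for the string and cache_tex is a single-channel cache texture (names illustrative):

    /* copy the CPU-rendered string into its slot in the cache texture */
    glBindTexture(GL_TEXTURE_2D, cache_tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);  /* tightly packed 8-bit rows */
    glTexSubImage2D(GL_TEXTURE_2D, 0, slot_x, slot_y, w, h,
                    GL_RED, GL_UNSIGNED_BYTE, bitmap);
    /* the whole string is then drawn as one textured quad */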

BTW, just to clear up a misconception here: the 'old style' one-display-list-per-glyph thing was maybe convenient for the developer, but it was anything but fast. It was (and is) one of the least efficient ways to render text, short of plotting glyphs using large amounts of GL_POINTS... (which, ironically, could even be faster than the display list approach on modern GPUs!)

Thanks guys, I'll just break down and send the string as well as character positions to the shader and then do an instanced draw call whenever the text changes.
