
OpenGL Text-rendering and forward-compatibility


Cathbadh    100
I am creating my own game engine, and I am trying to make it as forward-compatible with OpenGL as possible. That means not using deprecated features like the modelview matrix stack, display lists, and the like. While I'm OK with using vertex attribute arrays for everything, I'm at a loss as to how to reproduce the functionality that display lists give me.

Take text rendering, for example. The way I currently render text is to store the font faces as a bunch of textures (or perhaps one big one) and map that texture onto a quad, one quad per character. I draw each character at a 'cursor position' and then advance the cursor by the width of the character. I currently store these textured-quad calls and the cursor advancement in a display list, one for each glyph in the font. This makes drawing text convenient, because all I have to do now is call glCallLists() with the text string itself.

But now display lists are deprecated, and there is no longer a way to store matrix operations (i.e., how I advance the 'cursor' after every character) in a sort of buffer you can call repeatedly. What am I to do to keep this forward-compatible, yet fast? Perhaps I could use instanced draw calls: put every glyph's dimensions and texture offsets into a uniform buffer, pass the text string itself as a uniform array, and use gl_InstanceID to determine which character is being rendered in each instance. But then the font would have to be monospace. Something tells me I'm overthinking this, and perhaps I'm trying too hard to avoid legacy coding.

[Edited by - Cathbadh on February 8, 2010 1:28:16 PM]

NumberXaero    2624
You could create a vertex buffer and an index buffer. Load the index buffer so that every 6 indices draw two triangles, and load the vertex buffer so that every 4 vertices (indexed by those 6 indices) draw one character quad. Loop over the vertex buffer, adjusting the positions of each set of 4 vertices to wherever you want that character drawn; the next 4 vertices become character 2, and so on. Also adjust the texture coordinates so each quad pulls the correct character from one large texture. The index buffer wouldn't need to be updated. You could allocate the vertex buffer with enough space to draw a single 64- or 128-character line at a time; if you need more, less, or a different position, adjust your draw calls and counts. Once the first 4 vertices are in the correct place, the second character's position and size can be computed from the one before it.
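Something like the following, a minimal sketch of the dynamic-buffer idea; the Glyph and Vertex structs, the metrics, and the function name are all invented for illustration:

```cpp
// Minimal sketch of the dynamic-buffer approach. The Glyph/Vertex structs,
// the metrics, and the function name are invented for illustration.
#include <GL/glew.h>
#include <string>
#include <vector>

struct Glyph  { float u0, v0, u1, v1; float width, height, advance; };
struct Vertex { float x, y, u, v; };

// Fills CPU-side arrays: 4 vertices per character, 6 indices per quad.
void buildTextMesh(const std::string& text, const Glyph* glyphs,
                   float penX, float penY,
                   std::vector<Vertex>& vertices, std::vector<GLuint>& indices)
{
    for (unsigned char c : text) {
        const Glyph& g = glyphs[c];
        GLuint base = (GLuint)vertices.size();
        vertices.push_back({ penX,           penY,            g.u0, g.v0 });
        vertices.push_back({ penX + g.width, penY,            g.u1, g.v0 });
        vertices.push_back({ penX + g.width, penY + g.height, g.u1, g.v1 });
        vertices.push_back({ penX,           penY + g.height, g.u0, g.v1 });
        // Two triangles per character quad.
        indices.insert(indices.end(), { base, base + 1, base + 2,
                                        base, base + 2, base + 3 });
        penX += g.advance; // advance the 'cursor' on the CPU
    }
}
```

Each time the text changes you would re-run this, re-upload with glBufferSubData, and then draw everything with one glDrawElements call.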

swiftcoder    18437
This won't be particularly easy, but I would recommend something along the following lines:

  1. Upload your font texture.
  2. Upload your string of text as a 1D texture.
  3. Use the instancing extension to render a single quad N times, where N is the number of characters.
  4. Use the instance ID to calculate the position at which the quad should be rendered (in the vertex shader).
  5. Use the instance ID to find the character in the string texture (in the fragment shader).
  6. Use the character to render the correct portion of the font texture.

I am pretty sure that this is the fastest possible approach to text rendering, although variations are possible: for instance, using a geometry shader or histopyramid expansion rather than the instancing extension.
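As a rough illustration of steps 3 through 5 (my sketch, not swiftcoder's code), a vertex shader for the monospace case might look like the following, embedded as a C++ string. All names (aCorner, uString, uGlyphSize, uAtlasCols) are invented, and the character fetch is done in the vertex shader here rather than the fragment shader for simplicity:

```cpp
// Rough GLSL sketch of steps 3-5 for the monospace case.
const char* textVS = R"GLSL(
#version 330 core
layout(location = 0) in vec2 aCorner;  // unit-quad corner in (0,0)..(1,1)

uniform usampler1D uString;    // one texel per character of the text
uniform vec2 uGlyphSize;       // glyph size in clip-space units
uniform vec2 uOrigin;          // start of the text line
uniform int  uAtlasCols;       // glyphs per row in the font atlas

out vec2 vTexCoord;

void main()
{
    // Step 5: look up which character this instance draws.
    uint code = texelFetch(uString, gl_InstanceID, 0).r;

    // Step 4: place the quad from the instance ID (monospace layout).
    vec2 pos = uOrigin
             + vec2(float(gl_InstanceID) * uGlyphSize.x, 0.0)
             + aCorner * uGlyphSize;
    gl_Position = vec4(pos, 0.0, 1.0);

    // Step 6: select that glyph's cell in the font atlas.
    vec2 cell = vec2(float(code % uint(uAtlasCols)),
                     float(code / uint(uAtlasCols)));
    vTexCoord = (cell + aCorner) / float(uAtlasCols);
}
)GLSL";
```

The whole string is then drawn with a single instanced call, e.g. glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, numChars); instancing is core since GL 3.1, or available via the draw_instanced extensions on older drivers.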

Cathbadh    100
That's just it: I know how to use element arrays and vertex buffers. I just miss being able (after some lengthy setup generating font bitmaps and display lists) to give OpenGL my text string and nothing more, and have it draw the text to the screen correctly. I'd like the CPU not to be involved in anything more at runtime. Is this a pipe dream? I know I can get monospace fonts to work as I described above, because all characters are the same width, so I can place a quad simply at gl_InstanceID * glyph_width; but the moment glyph_width is not constant across all characters, all bets are off.

swiftcoder    18437
Quote:
Original post by Cathbadh
That's just it: I know how to use element arrays and vertex buffers. I just miss being able (after some lengthy setup generating font bitmaps and display lists) to give OpenGL my text string and nothing more, and have it draw the text to the screen correctly.

My method will do that: upload the string as a 1D texture, bind the shader, and make a single instanced draw call.

Quote:
I'd like the CPU not to be involved in anything more at runtime. Is this a pipe dream?

Nope, just a fair amount of work to implement.

Quote:
I know I can get monospace fonts to work as I described above, because all characters are the same width, so I can place a quad simply at gl_InstanceID * glyph_width; but the moment glyph_width is not constant across all characters, all bets are off.

Use a second texture containing the widths of each glyph.
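One possible shape for that second texture, sketched under the assumption of a 256-glyph font; the function name and parameters are made up for illustration:

```cpp
// A 256-texel 1D float texture of per-glyph advances, read in the shader
// with texelFetch. Assumes a GL 3.x context (GL_R32F).
#include <GL/glew.h>

GLuint createWidthTexture(const float advances[256]) // from your font metrics
{
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_1D, tex);
    glTexImage1D(GL_TEXTURE_1D, 0, GL_R32F, 256, 0, GL_RED, GL_FLOAT, advances);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_1D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    return tex;
}
```

Note that widths alone aren't quite enough: each instance needs the sum of all the widths before it, which is exactly the snag the rest of the thread turns on.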

Cathbadh    100
I think I am already saving a lot of cycles by rendering the text to an offscreen framebuffer and just plastering that framebuffer over the view as a single big quad. On-screen text doesn't change very often, so it would be redundant to draw that string of text over and over again. The text is processed and rendered to the FBO only when it changes.
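For reference, the FBO setup for that kind of text cache could look roughly like this; the size and format are arbitrary choices, not from the thread:

```cpp
// Rough sketch of a text-cache FBO. Assumes a GL 3.x context.
#include <GL/glew.h>

GLuint createTextCache(int w, int h, GLuint* colorTexOut)
{
    GLuint fbo, tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                           GL_TEXTURE_2D, tex, 0);
    // Render the text quads here whenever the string changes, then rebind
    // the default framebuffer and draw 'tex' as one big quad per frame.
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
    *colorTexOut = tex;
    return fbo;
}
```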

It just bugs me that I can get legacy OpenGL to do variable-width fonts, but I can't figure out a way to get the core spec to do the same with the same runtime complexity.

Cathbadh    100
Quote:
Original post by swiftcoder
Use a second texture containing the widths of each glyph.

Your method is essentially what I was thinking about in the OP, but using uniform buffers and uniform blocks instead of 1D textures. I can specify character widths in a uniform block just fine; the problem is that I cannot sum the widths of all the previous characters to determine where the shader should place each instanced quad. It would be great if I could somehow store the cursor's progress across the screen in a uniform during shader execution, but as we all know, uniforms are read-only in shaderland.

swiftcoder    18437
Quote:
Original post by Cathbadh
I can specify character widths in a uniform block just fine; the problem is that I cannot sum the widths of all the previous characters to determine where the shader should place each instanced quad.

Uniforms aren't the only way to pass data around. Either geometry shaders or a variation on histopyramid expansion would let you sum up the variable widths for each character.

If you want an even simpler solution, and you know the length of the string on the CPU, use a multi-pass approach. Attach an empty 1D texture to a framebuffer object, and in the first pass, render the width of each character into its location. Then in consecutive passes, shift the texture containing the string one character to the right, and add to the existing value. For a string of length N, after N-1 passes you will have the correct offset of each character.
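For what it's worth, here is my reading of one of those shift-and-add passes as a fragment shader over a one-texel-high render target, ping-ponged between two 1D textures; uAccum, uWidths, and uShift are invented names, with uShift running from 1 to N-1:

```cpp
// One shift-and-add pass of the multi-pass prefix-sum scheme, as I read it.
const char* sumPassFS = R"GLSL(
#version 330 core
uniform sampler1D uAccum;   // running sums from the previous pass
uniform sampler1D uWidths;  // per-character widths of the string
uniform int uShift;         // how far this pass shifts (the pass index)
out vec4 fragColor;

void main()
{
    int i = int(gl_FragCoord.x);
    float sum = texelFetch(uAccum, i, 0).r;
    if (i - uShift >= 0)
        sum += texelFetch(uWidths, i - uShift, 0).r; // add the shifted value
    fragColor = vec4(sum);
}
)GLSL";
```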

apatriarca    2365
Since the text changes rarely, I think it's better to calculate those offsets on the CPU. It's simpler to implement and easier to understand than the GPU-based methods above. Moreover, you only have to do it when the text changes, and only once, so it will never be your bottleneck.

EDIT: It's also a lot simpler to implement multi-line layouts or features like kerning on the CPU.
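The CPU-side version is just a running sum over the advances. A hypothetical sketch (names are illustrative; kerning would hook in where the comment indicates):

```cpp
// Compute each character's x offset as a running sum of advances.
#include <string>
#include <vector>

std::vector<float> computeOffsets(const std::string& text,
                                  const float* advances) // per-glyph metrics
{
    std::vector<float> offsets(text.size());
    float pen = 0.0f;
    for (size_t i = 0; i < text.size(); ++i) {
        offsets[i] = pen;                        // where character i starts
        pen += advances[(unsigned char)text[i]]; // advance the 'cursor'
        // a kerning adjustment against text[i + 1] would be applied here
    }
    return offsets;
}
```

The offsets can then go into a uniform block or a 1D texture, keeping the single instanced draw call.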

Yann L    1802
Quote:
Original post by apatriarca
Since the text changes rarely, I think it's better to calculate those offsets on the CPU.

Not only the offsets, but the entire text rendering. Rendering bitmap fonts on the CPU is dirt cheap, and vector fonts aren't slow either. Both are highly multithreadable. And since text usually doesn't change multiple times per frame, it's a rather low frequency operation. It's a waste to use GPU geometry shaders or similar to recomposite something 60 times per second that changes once every ten minutes or so.

Just render all text on the CPU (if possible in parallel to the GPU doing something else) into a cache texture, and upload the new data on-demand to the GPU. The latter would then render entire words or rows using a single quad.
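A bare-bones sketch of that idea, assuming 8-bit glyph bitmaps already rasterized by the font engine; the GlyphBitmap struct and cache layout are invented, and bounds checking is omitted:

```cpp
// Blit glyph bitmaps into a CPU buffer, then upload the result in one
// on-demand glTexSubImage2D call.
#include <GL/glew.h>
#include <cstring>
#include <vector>

struct GlyphBitmap { const unsigned char* pixels; int w, h, advance; };

void rasterizeLine(const char* text, const GlyphBitmap* glyphs,
                   std::vector<unsigned char>& cache, int cacheW, int cacheH,
                   GLuint cacheTex)
{
    int penX = 0;
    for (const char* p = text; *p; ++p) {
        const GlyphBitmap& g = glyphs[(unsigned char)*p];
        for (int y = 0; y < g.h; ++y) // CPU-side blit, row by row
            std::memcpy(&cache[y * cacheW + penX], &g.pixels[y * g.w], g.w);
        penX += g.advance;
    }
    // One upload of the freshly rendered strip to the cache texture.
    glBindTexture(GL_TEXTURE_2D, cacheTex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);
    glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, cacheW, cacheH,
                    GL_RED, GL_UNSIGNED_BYTE, cache.data());
}
```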

BTW, just to clear up a misconception here: the 'old style' one-glyph-per-display-list approach was maybe convenient for the developer, but it was anything but fast. It was (and is) one of the least efficient ways to render text, short of plotting glyphs using large amounts of GL_POINTS... (which, ironically, could even be faster than the display list approach on modern GPUs!)

Cathbadh    100
Thanks guys, I'll just break down and send the string, along with the character positions, to the shader, and do an instanced draw call whenever the text changes.


