OpenGL: Are there any text libraries that rely on a GL 3.3 (or higher) core context?


I am restricting myself to the core context for a project and I need some simple text rendering for stats. ALL of the text libraries that I have found use glVertex calls, even the ones that claim to be based on GL 3.3 or higher. I have yet to see a single library or code sample that uses VBOs and the programmable pipeline.

 

Does such a beast exist? This is something I would really rather not have to write myself, if possible.

 

And please don't say FreeType. FreeType is not a rendering library and is not based on, nor does it use, OpenGL. I am familiar with FreeType. It doesn't do what I want.


And please don't say FreeType... I am familiar with FreeType. It doesn't do what I want.

Sure it does - it renders text into a bitmap, which can be streamed to a texture and rendered on a quad in your scene. I'd even go one step further and suggest you use Pango.
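
For illustration, a minimal sketch of that path using FreeType's actual API (the font path and pixel size are placeholders; error checking is omitted, and a current GL context is assumed):

    #include <ft2build.h>
    #include FT_FREETYPE_H

    FT_Library ft;
    FT_Face face;
    FT_Init_FreeType(&ft);
    FT_New_Face(ft, "font.ttf", 0, &face);    // placeholder font path
    FT_Set_Pixel_Sizes(face, 0, 48);          // request 48px glyphs
    FT_Load_Char(face, 'A', FT_LOAD_RENDER);  // rasterizes into face->glyph->bitmap

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);    // glyph rows are tightly packed
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED,
                 face->glyph->bitmap.width, face->glyph->bitmap.rows,
                 0, GL_RED, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

A single-channel GL_RED texture keeps this core-profile friendly (GL_ALPHA and friends are gone in 3.3 core); swizzle or expand it in your fragment shader.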
 
Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathological worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...
 
And it's not as if the other 2-3 cores in your quad-core CPU are actually doing anything useful most of the time. Letting them get busy rendering beautifully antialiased text is a pretty decent idea.

Well, FreeType doesn't use OpenGL, but it IS a rendering library. It will give you bitmaps of a font that it has rendered; the trick is to use these in OpenGL. Swiftcoder is right that it can be REALLY insane how many draw calls you end up with if you don't do it right.

I create a texture atlas of all the usable characters (or those that you know you'll use) and then keep VBOs for the locations of the rectangles and their tex coords. Then I just keep an array of all of the strings that have been added for that one specific font, and whenever a new one is added, the VBOs get updated (well, not always, but that's neither here nor there). This means that for any one font, you can limit it to one draw call (see the sketch at the end of this post). You can even do color with another VBO. If you have a lot of fonts, that gets pretty untenable, but I haven't had that problem :)

I also don't antialias, but that's because I'm not THAT worried about the text. Anyway, I think I've gotten a bit away from the OP, but really, you can use FreeType with OpenGL; it just isn't a direct text drawer, and I don't know of any of those that are specifically 3.3+.
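
To make that concrete, here's a minimal sketch of the batching described above. The Glyph table is hypothetical (fill it from FreeType metrics or a BMFont export); the point is that a whole string becomes one vertex buffer and one draw call:

    // Per-character atlas data: UV rect, size, bearing, advance (hypothetical fields).
    typedef struct { float u0, v0, u1, v1, w, h, xoff, yoff, xadvance; } Glyph;
    extern Glyph glyphs[128];   // ASCII-only for brevity

    typedef struct { float x, y, u, v; } Vertex;

    // Appends two triangles (six vertices) per character; 'out' must hold 6*strlen(text).
    size_t build_text(Vertex *out, const char *text, float x, float y) {
        size_t n = 0;
        for (; *text; ++text) {
            const Glyph *g = &glyphs[(unsigned char)*text];
            float x0 = x + g->xoff, y0 = y + g->yoff;
            float x1 = x0 + g->w,   y1 = y0 + g->h;
            Vertex quad[6] = {
                {x0,y0,g->u0,g->v0}, {x1,y0,g->u1,g->v0}, {x1,y1,g->u1,g->v1},
                {x0,y0,g->u0,g->v0}, {x1,y1,g->u1,g->v1}, {x0,y1,g->u0,g->v1},
            };
            for (int i = 0; i < 6; ++i) out[n++] = quad[i];
            x += g->xadvance;
        }
        return n;   // upload via glBufferSubData, then glDrawArrays(GL_TRIANGLES, 0, (GLsizei)n)
    }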

Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathological worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...

 

Not necessarily. The only pathological things about that are overdraw and the pixel shader running twice on the pixels that are on the diagonal of each quad. Other than this, there is no real issue. You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing. Of course OpenGL won't "magically" do the kerning for you; you'll have to do the pairing yourself from the information in the TrueType (or whatever font format you use) font. The same goes for ligatures. Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads. There is hardly an observable difference in speed.
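
For reference, the distance-field antialiasing boils down to a few lines of fragment shader; a sketch (GLSL 3.30 core, embedded as a C string; 0.5 is the edge threshold baked into the distance texture):

    const char *df_fragment_src =
        "#version 330 core\n"
        "uniform sampler2D u_distance_field;\n"
        "uniform vec4 u_color;\n"
        "in vec2 v_uv;\n"
        "out vec4 frag_color;\n"
        "void main() {\n"
        "    float d = texture(u_distance_field, v_uv).r;\n"
        "    float w = fwidth(d);                        // ~1 pixel in distance units\n"
        "    float a = smoothstep(0.5 - w, 0.5 + w, d);  // antialiased glyph edge\n"
        "    frag_color = vec4(u_color.rgb, u_color.a * a);\n"
        "}\n";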

 

FreeType is nice insofar as it allows you to both render single glyphs to bitmaps (which you preferably copy to an atlas) and access kerning information with an easy-to-use API, without actually knowing the details of TrueType (or several other convoluted font file formats). Though of course that's something you can normally do in a very convenient manner with an offline tool such as BMFont.
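
The kerning lookup through FreeType is correspondingly short; a sketch, assuming face is an FT_Face already loaded at the desired pixel size:

    FT_UInt left  = FT_Get_Char_Index(face, 'A');
    FT_UInt right = FT_Get_Char_Index(face, 'V');
    FT_Vector delta;
    FT_Get_Kerning(face, left, right, FT_KERNING_DEFAULT, &delta);
    // delta.x is in 26.6 fixed point; divide by 64 for whole pixels.
    float kern_px = delta.x / 64.0f;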

 

Using OpenGL for font rendering is the same as doing it with a "pure software" renderer, except you get to use dedicated hardware for the dirty work. You still need to figure out where to put a glyph to have it properly kerned with the preceding one, etc., as there are no high-level utility functions that render a whole line or a whole paragraph for you. If that is what one wants, Pango sure is a good option (though Pango is the exact opposite of what I'd want to use personally: far too much smartness and internationalisation features that I wouldn't want -- but your mileage may vary).

You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing.

 

That's not my point. Of course you can, but do you actually derive a benefit from doing so?

 

You still have to perform all the Unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

 

 

Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads.

 

If you go this route, you give up on hinting. That's probably a reasonable tradeoff for large (18pt+) font sizes on a high-resolution display, but it's something you need to be aware of at small font sizes.

 

 

though Pango is the exact opposite of what I'd want to use personally: far too much smartness and internationalisation features that I wouldn't want -- but your mileage may vary

 

It's fine to ignore internationalisation in a hobby project, but if you ever plan to release in other (particularly non-Latin) languages, you want that Pango "smartness". Languages like Arabic and Mandarin are pure hell to deal with in a hand-written text engine.

Of course you can, but do you actually derive a benefit from doing so?

What you gain is that the GPU is doing the not-quite-trivial work of sampling texels, antialiasing, blending pixels together, and all that. Sure, you can do all that on the CPU no problem, but why do that when there's a dedicated workhorse for the task?
 

 

You still have to perform all the Unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

 

Yes and no. Kerning and ligatures (or formatting a paragraph) you certainly have to do yourself. Unicode normalisation is not something you do at all. This abomination (which in my opinion is a good reason why Unicode is totally unsuitable for what it's used for) is something you should handle by policy or in the build pipeline. Your renderer should not have to guess how to compose a glyph, and your text system should not have to guess how to compare or sort two strings. There should be one and only one possible way, even if Unicode allows for 2 or 3 ways that are equally "valid".

 

The amount of data you send to the GPU can be as little as a point and a character index, so anywhere from 6 to 12 bytes per character. Quad extents can be read from constant buffers at no performance penalty on present-day GPUs. Compared to that, a "small" character may easily cover 200-300 pixels, which is over 10-15 times as much data for an 8-bit monochrome bitmap (or 30-45 times for RGB). Color and size don't change every 3-4 characters (not normally, at least!), so it's reasonable to just set these as uniforms.
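
As a sketch of what such a compact per-character layout could look like (the exact packing here is an assumption, not a prescription):

    typedef struct {
        short x, y;               // pen position in pixels
        unsigned short glyph;     // index into a UBO table of quad extents and UVs
        unsigned short flags;     // spare/padding; keeps the struct at 8 bytes
    } CharVertex;
    // Expand each CharVertex to a quad via instancing (glVertexAttribDivisor)
    // or a geometry shader; color and size live in uniforms as described above.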

 

Mandarin

Ah yes, but Mandarin is something you would not normally consider anyway, unless some stupefied executive forces you to.

Mandarin means the Chinese market, and while every executive nowadays seems to think of China as El Dorado, the reality is that it means a lot of work and many extra complications for very little revenue. The Chinese pay a lot less money for the same product, if they pay at all.

So unless you're working for a company like Microsoft, Blizzard, or EA (who will want to be in this market despite bad revenues), it's a good business plan to grab every guy pronouncing "Chin..." and arrange an accident involving him falling out of the 8th floor window before he can finish the sentence.

 

Note that this isn't about not liking the Chinese, it's about being reasonable about what you have to invest, what risks you have to cope with, and what you get back.

 

Take Blizzard and WoW as an example: if you research on the internet, you'll find out that WoW costs around 7 cents per hour in China, compared to Europe, where the same game costs €12.99 per month (~16.91 USD). At an average weekly play time of 20 hours, this translates to slightly over 21 cents per hour. In other words, Blizzard puts extra work into localizing and setting up extra servers, and takes on the risk of doing business in a location where the laws are... somewhat special (a very friendly wording), only to sell its product at a third of the rate.

 

Maybe that makes sense from an executive point of view if you assume that you get another 1 billion subscriptions (but do you, really?), and those outweigh the fact that you're selling below your usual price. For every "normal" business, it's just madness to think about such a plan.

I have yet to see a single library or code sample that uses VBOs and the programmable pipeline.

 

It sounds like you want to make a font system using meshes. If this is the case, then fire up Blender (or whatever 3D modeling program you prefer) and use the text tool to type out meshes that are shaped like text. Export them individually, load each model with one of the many open-source model loaders, have the loader spit out vertex arrays, and then load those vertex arrays into VBOs in your rendering program.

 

As has been pointed out, this would not be the most efficient way to render text; however, if this is how you want to do it, then do it this way.

 

There is no easy way to get what you want here. I doubt that there is a text library made of 3D models, and I doubt that, even if you found one, it would also include the code needed to select and place the individual rendered VBOs in the proper positions based on typed or defined sentences. But you can make your own version of this by scrutinizing the code in the following chapter of the OpenGL Red Book.

 

 

 

Look for "Executing Multiple Display Lists" in chapter04.

 

Even though you do not want to use display lists, the text input and selection code can be rewritten to do what you want with VBOs instead of display lists.

 

Or you can simplify, optimize, and streamline this whole process a bit by not exporting your models from Blender, but instead simply rendering each letter as a single image. Save all the images to your hard drive, either individually or as a font set.

 

If you save them individually, you can render a letter very easily: select the texture you want and render a quad with that texture applied, move over a space (with glTranslatef, or in a core context by offsetting a position uniform), and render the same quad with the image for the next letter applied to its texture unit.

 

If you render out a sprite sheet from the modeling program, then you can do the same thing as above, but move the quad's texture coordinates around to select the appropriate letter instead of changing the bound texture each time.
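
A sketch of that texture-coordinate selection, assuming a hypothetical 16x16-cell sheet laid out in ASCII order:

    // Computes the UV rectangle for character 'c'; flip v if your sheet's origin differs.
    void glyph_uvs(unsigned char c, float *u0, float *v0, float *u1, float *v1) {
        const float cell = 1.0f / 16.0f;   // 16 columns x 16 rows
        *u0 = (float)(c % 16) * cell;
        *v0 = (float)(c / 16) * cell;
        *u1 = *u0 + cell;
        *v1 = *v0 + cell;
    }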

 

If you want to do things this way, then post a response and we'll take this a step further. If you've decided to rethink things based on what's been said by others here, then ask them for an elaboration on how to tie those libraries to OpenGL rendering.


*SIGH* I knew I shouldn't have mentioned FreeType.

 

I'm not looking for a fight. I just want to be able to render performance data in the viewport in a 3.3 core context. I cannot use display lists or anything else from GL 1.1, and all of the text libraries that I have found are based on those old techniques.

 

There has to be a way to render text quickly using modern techniques.



There has to be a way to render text quickly using modern techniques.

Sure. Let some other library render it for you, and then slap the texture on a quad.

 

Sorry for sounding like a broken record here, but if you want to do the whole quad-per-character thing in a core context, you're going to have to roll it yourself. There are only a handful of OpenGL-based text rendering libraries around, and to the best of my knowledge, none have been ported to a core context.
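
For what it's worth, the "texture on a quad" part is tiny in a core context; a sketch (shader compilation, uniform setup and texture creation omitted):

    GLuint vao, vbo;
    const float quad[] = {   // x, y, u, v
        -1.0f, -1.0f, 0.0f, 0.0f,
         1.0f, -1.0f, 1.0f, 0.0f,
         1.0f,  1.0f, 1.0f, 1.0f,
        -1.0f, -1.0f, 0.0f, 0.0f,
         1.0f,  1.0f, 1.0f, 1.0f,
        -1.0f,  1.0f, 0.0f, 1.0f,
    };
    glGenVertexArrays(1, &vao);
    glBindVertexArray(vao);
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(quad), quad, GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);   // position, layout(location = 0) in the shader
    glVertexAttribPointer(0, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float), (void*)0);
    glEnableVertexAttribArray(1);   // texcoord, layout(location = 1)
    glVertexAttribPointer(1, 2, GL_FLOAT, GL_FALSE, 4 * sizeof(float),
                          (void*)(2 * sizeof(float)));
    // ...bind your program and the text texture, then:
    glDrawArrays(GL_TRIANGLES, 0, 6);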


 


There has to be a way to render text quickly using modern techniques.

Sure. Let some other library render it for you, and then slap the texture on a quad.

 

Sorry for sounding like a broken record here, but if you want to do the whole quad-per-character thing in a core context, you're going to have to roll it yourself. There are only a handful of OpenGL-based text rendering libraries around, and to the best of my knowledge, none have been ported to a core context.

 

Well, rolling it myself is what I was hoping to avoid. Just out of curiosity, does anyone ever actually use the core context? To be honest, I have found very few tutorials, or much of anything else, based on it. It seems like everyone uses the compatibility profile. Should I just cave in and switch to the compatibility profile?



Just out of curiosity, does anyone ever actually use the core context? To be honest, I have found very few tutorials, or much of anything else, based on it. It seems like everyone uses the compatibility profile. Should I just cave in and switch to the compatibility profile?

As far as I know, nobody actually ships games on the core profile. I use it for development, but that's just to make sure I don't accidentally fall into old habits.

 

NVIDIA actually warns you not to use the core profile, because they add runtime checks for all the deprecated functionality, and that comes at a performance cost.
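
(For anyone following along: the profile is chosen at context creation. With GLFW, for example, requesting 3.3 core looks like the following; other windowing layers expose equivalent flags.)

    glfwWindowHint(GLFW_CONTEXT_VERSION_MAJOR, 3);
    glfwWindowHint(GLFW_CONTEXT_VERSION_MINOR, 3);
    glfwWindowHint(GLFW_OPENGL_PROFILE, GLFW_OPENGL_CORE_PROFILE);
    GLFWwindow *window = glfwCreateWindow(1280, 720, "text demo", NULL, NULL);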
