Are there any text libraries that rely on GL 3.3 (or higher) core context?

8 comments, last by Promit 10 years, 10 months ago

I'm restricting myself to the core context for a project, and I need some simple text rendering for stats. ALL of the text libraries I have found use glVertex calls, even the ones that claim to be based on GL 3.3 or higher. I have yet to see a single library or code sample that uses VBOs and the programmable pipeline.

Does such a beast exist? This is something that I would really like to not have to write myself, if possible.

And please don't say FreeType. FreeType is not a rendering library and is not based on, nor does it use, OpenGL. I am familiar with FreeType. It doesn't do what I want.


And please don't say FreeType... I am familiar with FreeType. It doesn't do what I want.

Sure it does - it renders text into a bitmap, which can be streamed to a texture and rendered on a quad in your scene. I'd even go one further and suggest you use Pango.

Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathologically worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...

And it's not as if the other 2-3 cores in your quad-core CPU are actually doing anything useful most of the time. Letting them get busy rendering beautifully antialiased text is a pretty decent idea.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Well, FreeType doesn't use OpenGL, but it IS a rendering library: it gives you bitmaps of a font that it has rendered. The trick is to use these in OpenGL. Swiftcoder is right that the number of draw calls can get REALLY insane if you don't do it right. I create a texture atlas of all the usable characters (or those you know you'll use) and then keep VBOs for the locations of the rectangles and the tex coords. Then I keep an array of all the strings that have been added for that one specific font, and whenever a new one is added, the VBOs get updated (well, not always, but that's neither here nor there). This means that for any one font, you can limit it to one draw call. You can even do color with another VBO. If you have a lot of fonts, that gets pretty untenable, but I haven't had that problem :)

I also don't antialias, but that's because I'm not THAT worried about the text. Anyway, I think I've gotten a bit away from the OP, but you really can use FreeType with OpenGL; it just isn't a direct text drawer, and I don't know of any direct text drawers that are specifically 3.3+.
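The one-draw-call-per-font scheme described above can be sketched roughly like this. The `Glyph` struct and its metrics are hypothetical stand-ins for whatever your atlas packer produces; the point is that a whole string collapses into one interleaved vertex buffer, uploaded with `glBufferData` and drawn with a single `glDrawArrays(GL_TRIANGLES, ...)`:

```cpp
#include <map>
#include <string>
#include <vector>

// Hypothetical per-character atlas entry (positions and sizes in pixels).
struct Glyph {
    float u0, v0, u1, v1;   // texture coordinates in the atlas
    float width, height;    // quad size in pixels
    float advance;          // horizontal pen advance in pixels
};

struct Vertex { float x, y, u, v; };

// Appends 6 vertices (two triangles) per character; the whole string
// then goes to the GPU as one vertex buffer and one draw call.
std::vector<Vertex> buildStringVertices(const std::string& text,
                                        const std::map<char, Glyph>& glyphs,
                                        float penX, float penY) {
    std::vector<Vertex> out;
    out.reserve(text.size() * 6);
    for (char c : text) {
        auto it = glyphs.find(c);
        if (it == glyphs.end()) continue;       // skip unknown characters
        const Glyph& g = it->second;
        float x0 = penX, y0 = penY;
        float x1 = penX + g.width, y1 = penY + g.height;
        out.push_back({x0, y0, g.u0, g.v0});
        out.push_back({x1, y0, g.u1, g.v0});
        out.push_back({x1, y1, g.u1, g.v1});
        out.push_back({x0, y0, g.u0, g.v0});
        out.push_back({x1, y1, g.u1, g.v1});
        out.push_back({x0, y1, g.u0, g.v1});
        penX += g.advance;
    }
    return out;
}
```

This is only the CPU side; the GL side is an ordinary VBO plus `glVertexAttribPointer` setup, which works unchanged in a 3.3 core context.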


Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathologically worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...

Not necessarily. The only pathological things about that are overdraw and the pixel shader running twice on the pixels that lie on the diagonal of each quad. Other than this, there is no real issue. You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing. Of course OpenGL won't "magically" do the kerning for you; you'll have to do the pairing yourself from the information in the TrueType (or whatever font format you use) font. The same goes for ligatures. Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads. There is hardly an observable difference in speed.
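The distance-field antialiasing mentioned above boils down to a soft threshold around the 0.5 iso-line of the distance texture. Here is the coverage math written as plain C++ so the idea is visible without shader boilerplate; in GLSL it is essentially `smoothstep(0.5 - w, 0.5 + w, dist)` with the filter width `w` derived from `fwidth()`:

```cpp
// Portable reimplementation of GLSL's smoothstep for illustration.
float smoothstepf(float edge0, float edge1, float x) {
    float t = (x - edge0) / (edge1 - edge0);
    t = t < 0.0f ? 0.0f : (t > 1.0f ? 1.0f : t);
    return t * t * (3.0f - 2.0f * t);
}

// dist is the sampled distance-field value (0.5 = glyph edge); width is
// the filter radius in distance-field units (fwidth(dist) in a shader).
float glyphCoverage(float dist, float width) {
    return smoothstepf(0.5f - width, 0.5f + width, dist);
}
```

Pixels well inside the glyph get coverage 1, pixels well outside get 0, and the one-or-two-pixel band around the edge gets a smooth ramp, which is where the antialiasing comes from.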

FreeType is nice insofar as it allows you to both render single glyphs to bitmaps (which you preferably copy to an atlas) and access kerning information with an easy-to-use API and without actually knowing the details of TrueType (or several other convoluted font file formats). Though of course that's something you can normally do with an offline tool such as BMFont in a very convenient manner.

Using OpenGL for font rendering is the same as doing it with a "pure software" renderer, except you get to use dedicated hardware for the dirty work. You still need to figure out where to put a glyph to have it properly kerned with the preceding one, etc., as there are no high-level utility functions that render a whole line or a whole paragraph for you. If that is what one wants, Pango sure is a good option (though Pango is the exact opposite of what I'd want to use personally, far too much smartness and internationalisation features that I wouldn't want, but your mileage may vary).
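The CPU-side kerning work mentioned above amounts to a table lookup per character pair while you advance the pen. A minimal sketch, with made-up advance and kerning numbers (FreeType exposes the real values via `FT_Get_Kerning`):

```cpp
#include <map>
#include <string>
#include <utility>

// Map from a character pair to a horizontal adjustment in pixels.
using KernTable = std::map<std::pair<char, char>, float>;

// Walks the string, applying the pair adjustment before each glyph's
// advance; the same loop would also emit glyph positions when rendering.
float measureLine(const std::string& text,
                  const std::map<char, float>& advances,
                  const KernTable& kerning) {
    float penX = 0.0f;
    for (size_t i = 0; i < text.size(); ++i) {
        if (i > 0) {
            auto k = kerning.find({text[i - 1], text[i]});
            if (k != kerning.end()) penX += k->second;  // e.g. 'A','V' pulls in
        }
        auto a = advances.find(text[i]);
        if (a != advances.end()) penX += a->second;
    }
    return penX;
}
```

Ligatures work the same way conceptually: you detect a pair (or triple) and substitute a different glyph index before this loop ever sees it.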

You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing.

That's not my point. Of course you can, but do you actually derive a benefit from doing so?

You still have to perform all the unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads.

If you go this route, you give up on hinting. That's probably a reasonable tradeoff for large (18pt+) font sizes on a high-resolution display, but it's something you need to be aware of at small font sizes.

though Pango is the exact opposite of what I'd want to use personally, far too much smartness and internationalisation features that I wouldn't want, but your mileage may vary

It's fine to ignore internationalisation in a hobby project, but if you ever plan to release in other (particularly non-latin) languages, you want that Pango "smartness". Languages like Arabic and Mandarin are pure hell to deal with in a hand-written text engine.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

Of course you can, but do you actually derive a benefit from doing so?

What you gain is that the GPU is doing the not-quite-trivial work of sampling texels, antialiasing, blending pixels together, and all that. Sure, you can do all that on the CPU no problem, but why do that when there's a dedicated workhorse for the task?

You still have to perform all the unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

Yes and no. Kerning and ligatures (or formatting a paragraph) you certainly have to do yourself. Unicode normalisation is not something you do at all. This abomination (which in my opinion is a good reason why Unicode is totally unsuitable for what it's used for) is something you should handle by policy or in the build pipeline. Your renderer should not have to guess how to compose a glyph, and your text system should not have to guess how to compare or sort two strings. There should be one and only one possible way, even if Unicode allows for 2 or 3 ways that are equally "valid".

The amount of data you send to the GPU can be as little as a point and a character index, so anywhere from 6 to 12 bytes. Quad extents can be read from constant buffers at no performance penalty on present-day GPUs. Compared to that, a "small" character may easily have 200-300 pixels, which is over 10-15 times as much for an 8-bit monochrome bitmap (or 30-45 times for RGB). Color and size don't change every 3-4 characters (not normally, at least!), so it's reasonable to just set these as uniforms.
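The "point plus character index" layout described above can be made concrete. This is a sketch, not anyone's actual format: one 12-byte record per character, expanded to a quad on the GPU (via instancing or a geometry shader), with the quad extents looked up from a constant buffer indexed by `glyph`:

```cpp
#include <cstddef>
#include <cstdint>

#pragma pack(push, 1)
struct CharInstance {
    float    x, y;     // pen position in screen pixels
    uint32_t glyph;    // index into the glyph-metrics constant buffer
};
#pragma pack(pop)
static_assert(sizeof(CharInstance) == 12, "one character = 12 bytes");

// Rough comparison with streaming the pixels instead: even a small
// 16x16 8-bit glyph bitmap is 256 bytes per character.
constexpr size_t kBitmapBytes = 16 * 16;
constexpr size_t kRatio = kBitmapBytes / sizeof(CharInstance);  // ~21x more
```

Packing the position as two 16-bit values would get you down to the 6-to-8-byte end of the range quoted above, at the cost of a little unpacking in the vertex shader.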

Mandarin

Ah yes, but Mandarin is something you would not normally consider anyway, unless some stupefied executive forces you to.

Mandarin means Chinese market, and while every executive nowadays seems to think of China as El Dorado, reality has it that it means a lot of work and many extra complications for very little revenue. The Chinese pay a lot less money for the same product, if they do.

So unless you're working for a company like Microsoft, Blizzard, or EA (who will want to be in this market despite bad revenues), it's a good business plan to grab every guy pronouncing "Chin..." and arrange an accident involving him falling out of the 8th floor window before he can finish the sentence.

Note that this isn't about not liking the Chinese, it's about being reasonable on what you have to invest, what risks you have to cope with, and what you get back.

Take Blizzard and WoW as an example: if you research on the internet, you'll find out that WoW costs around 7 cents per hour in China, compared to Europe, where the same game costs €12.99 per month (~16.91 USD). At an average play time of 20 hours per week (roughly 80 hours a month), this translates to slightly over 21 cents per hour. In other words, Blizzard puts extra work into localizing and setting up extra servers, and takes up the risk of doing business in a location where laws are... somewhat special (a very friendly wording), only to sell their product at 1/3 the rate.

Maybe that makes sense from an executive point of view if you assume that you get another 1 billion subscriptions (but, do you, really?), and those outweigh the fact that you're selling below price. For every "normal" business, it's just madness to think about such a plan.

I have yet to see a single library of code sample that uses VBOs and the programmable pipeline.

It sounds like you want to make a font system using meshes. If this is the case, then fire up Blender or whatever 3D modeling program you prefer and use the text tool to type out meshes that are shaped like text. Export them individually, load each model with one of the many open-source model loaders, and have that loader spit out vertex arrays. Now load those vertex arrays into VBOs in your rendering program.

As has been pointed out, this would not be the most efficient way to render text; however, if this is how you want to do it, then do it this way.

There is no easy way to get what you want here. I doubt that there is a library of text made of 3D models, and I doubt that, even if you found one, it would also include the necessary code to select and place the individual rendered VBOs in the proper positions based on typed or defined sentences. But you can make your own version of this by scrutinizing the code in the following chapter of the OpenGL Red Book.

Look for "Executing Multiple Display Lists" in Chapter 4.

Even though you do not want to use display lists, the text input and selection code can be rewritten to do what you want with VBOs instead of display lists.

Or you can simplify, optimize and streamline this whole process a bit by not exporting your models from Blender, but instead simply rendering each letter as a single image. Save all the images to your hard drive either individually or as a font set.

If you save them individually, rendering a letter is super easy: select the texture you want and render a quad with that texture applied. Move over a space with glTranslatef and render the same quad with the image for the next letter applied to its texture unit.

If you render out a sprite sheet from the modeling program, then you can do the same thing as above, but move the quad's texture coordinates around to select the appropriate letter instead of changing the selected texture each time.
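Picking a letter out of a regular sprite-sheet grid is a small bit of arithmetic on the texture coordinates. A sketch, assuming ASCII glyphs packed row-major in a `cols` x `rows` grid starting at the space character (adjust to however your sheet is laid out):

```cpp
struct UVRect { float u0, v0, u1, v1; };

// Returns the normalized texture-coordinate rectangle for one glyph cell.
UVRect atlasUV(char c, int cols, int rows) {
    int index = c - ' ';            // cell index of the glyph in the grid
    int col = index % cols;
    int row = index / cols;
    float cw = 1.0f / cols, ch = 1.0f / rows;
    return { col * cw, row * ch, (col + 1) * cw, (row + 1) * ch };
}
```

You then write these four corners into the quad's texcoord attribute instead of rebinding a texture per letter, which is exactly the "move the texture coordinates around" idea above.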

If you want to do things this way, then post a response and we'll take this a step further. If you've decided to rethink things based on what's been said by others here, then ask them for an elaboration on how to tie those libraries to OpenGL rendering.

Consider it pure joy, my brothers and sisters, whenever you face trials of many kinds, because you know that the testing of your faith produces perseverance. Let perseverance finish its work so that you may be mature and complete, not lacking anything.

*SIGH* I knew I shouldn't have mentioned FreeType.

I'm not looking for a fight. I just want to be able to render performance data in the viewport in a 3.3 core context. I cannot use display lists or anything from GL 1.1 and all text libraries that I have found are based upon those old techniques.

There has to be a way to render text quickly using new techniques.


There has to be a way to render text quickly using new techniques.

Sure. Let some other library render it for you, and then slap the texture on a quad.

Sorry for sounding like a broken record here, but if you want to do the whole character-per-quad display list thing in a core context, you're going to have to roll it yourself. There are only a handful of OpenGL-based text rendering libraries around, and to the best of my knowledge, none have been ported to a core context.
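If you do end up rolling it yourself in a core profile, the main thing the fixed pipeline no longer does for you is the 2D projection setup (no glOrtho, no glTranslatef). A sketch of the replacement: a pixel-space orthographic matrix, column-major as `glUniformMatrix4fv` expects by default, so a quad positioned in pixels lands where the old pipeline would have put it:

```cpp
#include <array>
#include <utility>

// Maps x in [0,width] -> [-1,1] and y in [0,height] -> [1,-1] (y-down,
// matching typical screen coordinates). Column-major, translation in the
// last column (indices 12..14).
std::array<float, 16> ortho2D(float width, float height) {
    std::array<float, 16> m{};          // zero-initialized
    m[0]  =  2.0f / width;
    m[5]  = -2.0f / height;
    m[10] = -1.0f;
    m[12] = -1.0f;
    m[13] =  1.0f;
    m[15] =  1.0f;
    return m;
}

// Apply the projection on the CPU to sanity-check a corner (w assumed 1).
std::pair<float, float> project(const std::array<float, 16>& m,
                                float x, float y) {
    return { m[0] * x + m[12], m[5] * y + m[13] };
}
```

Feed this matrix to your text shader as a uniform and the per-letter translation becomes a plain addition to the vertex positions, which is what the batching approaches earlier in the thread do anyway.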

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]


Sorry for sounding like a broken record here, but if you want to do the whole character-per-quad display list thing in a core context, you're going to have to roll it yourself. There are only a handful of OpenGL-based text rendering libraries around, and to the best of my knowledge, none have been ported to a core context.

Well, rolling it myself is what I was hoping to avoid. Just out of curiosity, does anyone ever actually use the core context? To be honest, I have found very few tutorials, examples, or practically anything else based on it. It seems like everyone uses the compatibility profile. Should I just cave in and switch to the compatibility profile?

This topic is closed to new replies.
