# OpenGL [Solved] Rendering text hits fps - hard.

## Recommended Posts

So I'm using OpenGL, WGL and C++ to render text. The code makes a call to glCallLists to render the text, so it shouldn't be too slow, right? Wrong.

The scene has a rotating skybox (6 quads, 6 textures, texture size 1024x1024, linear filter) and 1-3 characters of text drawn in colour at the top right (the fps counter). At this point, fps is between 70 and 90. Uh oh. Draw some untextured quads and lines (I tried the console without text; it has no fps hit) and a whole bunch more text, and fps drops to a persistent 2. Not good.

The calls are made from Lua into C++, but I don't think the problem lies there, as I'm also making tonnes of other calls (like drawing each line and each quad) that are still being made when fps is up at 80. Just to put this in perspective: when I render 3x3 segments of terrain at the same time (each segment is 64x64 quads, textured, with lighting), fps is about 20. Surely rendering some simple text can't be more intensive than rendering a full 3D terrain with lighting? Here is the text code:
// Include header
#include "text.h"

// Define functions
int font_create( char* family, int size, int weight ) {
    HFONT font;
    HFONT oldfont;

    HDC hDC = GetHDC();

    // Reserve 96 display lists, one per printable ASCII char (32..127)
    int base = glGenLists( 96 );
    font = CreateFont( -size, 0, 0, 0, weight, false, false, false,
        ANSI_CHARSET, OUT_TT_PRECIS, CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
        FF_DONTCARE | DEFAULT_PITCH, family );
    oldfont = (HFONT)SelectObject( hDC, font );
    wglUseFontBitmaps( hDC, 32, 96, base );  // build lists for chars 32..127
    SelectObject( hDC, oldfont );
    DeleteObject( font );

    return base;
}

void font_destroy( int id ) {
glDeleteLists( id, 96 );
}

void font_render_noraster( char* text, int id ) {
    glPushAttrib( GL_LIST_BIT );
    glListBase( id - 32 );  // lists were built starting at char 32
    glCallLists( strlen( text ), GL_UNSIGNED_BYTE, text );
    glPopAttrib();
}

void font_render( char* text, int id, int x, int y ) {
    glRasterPos2f( float( x ), float( y ) );
    font_render_noraster( text, id );
}

int font_getwidth( char* text, int id ) {
    // Temporarily disable drawing so only the raster position advances
    GLint oldbuffer;
    glGetIntegerv( GL_DRAW_BUFFER, &oldbuffer );
    glDrawBuffer( GL_NONE );

    GLfloat o_rpos[4];
    glGetFloatv( GL_CURRENT_RASTER_POSITION, o_rpos );

    font_render_noraster( text, id );

    GLfloat n_rpos[4];
    glGetFloatv( GL_CURRENT_RASTER_POSITION, n_rpos );

    glDrawBuffer( (GLenum)oldbuffer );  // glDrawBuffer takes a GLenum

    return int( n_rpos[0] - o_rpos[0] );
}


Here is the Lua binding:
static int lbind_r_rendertext( lua_State* L ) {
    char* text = const_cast<char*>( luaL_checkstring( L, 1 ) );
    int base = luaL_checkint( L, 2 );
    int x = luaL_checkint( L, 3 );
    int y = luaL_checkint( L, 4 );
    font_render( text, base, x, y );
    return 0;
}


Perhaps it doesn't like the const_cast much? Any ideas? [Edited by - thomasfn1 on April 1, 2010 2:03:54 PM]

##### Share on other sites
http://www.gamedev.net/community/forums/faq.asp#tags

##### Share on other sites
Can you try a profiler or something to see if there is something obvious causing that perf hit? I use a similar method to render text and I've never seen any kind of performance hit from it.

I think something else has to be going on, because that shouldn't be that slow, unless you're calling font_create every frame or something.

##### Share on other sites
I'll do some more debugging to see if something silly like font_create being called every frame is happening. I'm not sure of the best way of implementing a profiler; I guess I could make something that records time differences between operations and writes them to the log, but that isn't practical (especially since whatever gets written to the log gets written to that console too :P)
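For a quick-and-dirty alternative to a full profiler, a scoped timer that logs how long a block took is only a few lines. A minimal sketch using std::chrono (the class name and output target are made up for illustration):

```cpp
#include <chrono>
#include <cstdio>

// Times the scope it lives in; prints the elapsed time on destruction.
struct ScopedTimer {
    const char* label;
    std::chrono::steady_clock::time_point start;

    explicit ScopedTimer(const char* l)
        : label(l), start(std::chrono::steady_clock::now()) {}

    // Milliseconds elapsed since construction.
    double elapsed_ms() const {
        auto now = std::chrono::steady_clock::now();
        return std::chrono::duration<double, std::milli>(now - start).count();
    }

    ~ScopedTimer() {
        std::fprintf(stderr, "%s: %.3f ms\n", label, elapsed_ms());
    }
};
```

Dropping a `ScopedTimer t("text pass");` at the top of the suspect function narrows things down fast, and printing to stderr avoids feeding the output back into the in-game console.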

I also tried not casting to char* in the Lua binding, keeping it as const char* and passing that into glCallLists instead; it had no effect.

Edit:
font_create is getting called once, at the beginning of the program, as expected.

And who uses html in forum code anyways -_-

##### Share on other sites
You can use this profiler if you want; it's trivially easy to set up.

Very Sleepy

##### Share on other sites
Thanks - suitable name, methinks. I'll have a go now - but I'll have to go soon, so I might not get back to you until tomorrow.

Edit:

I ran it over a 10 second period, with the console rendering all the text.

Profiler Result

I'm not sure what it all means :/

##### Share on other sites
You could use GetTextExtentExPoint to get the text width.
Just make sure to set the active font first.
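For reference, a sketch of measuring width with GDI instead of walking the raster position. The non-Windows fallback below is purely a placeholder so the sketch stands alone; on Windows the font must be selected into the DC first:

```cpp
#include <cstring>

#ifdef _WIN32
#include <windows.h>
#endif

// Width in pixels of `text`. On Windows this measures via GDI in whatever
// font is currently selected into the DC; elsewhere it falls back to a
// fixed 8px-per-character estimate (illustrative only, not real metrics).
int text_width(void* dc, const char* text) {
#ifdef _WIN32
    SIZE sz = { 0, 0 };
    GetTextExtentPoint32((HDC)dc, text, (int)strlen(text), &sz);
    return (int)sz.cx;
#else
    (void)dc;
    return 8 * (int)strlen(text);
#endif
}
```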

##### Share on other sites
Anyone got any more ideas? I replaced the text size calculation code with GetTextExtentPoint, and it apparently works; nothing's moved off to weird places. But I'm still having problems with the fps levels.

##### Share on other sites
Where is your code that actually creates the display lists (glNewList)? I'd guess something odd may be happening in there...

##### Share on other sites
As soon as I saw "glRasterPos2f" alarm bells started ringing, so I have an idea: don't use wglUseFontBitmaps to generate your text.

A quick look at the MSDN page on it highlights the problem;
Quote:
 Each display list consists of a single call to glBitmap

That is going to be a killer. The function is old and is going to hurt, as it's probably poorly optimised/implemented on modern systems; not to mention it probably sends a bitmap over the bus to the card for every call. Even in a display list it's not going to be fast.

The fastest way to render text is to create a texture with each character on it (or more than one texture in the case of larger fonts), then build a list of tris or quads which sample this texture at the right points to grab the letters and render them to the screen; it will be faster.

• Instead of wglUseFontBitmaps, maybe try wglUseFontOutlines? I use that currently and I've never noticed any slowdown from it. I've tried building my own text from a character-map texture, but I always thought that looked like crap unless I used a huge texture to store the text (either too jagged, or too blurred with AA)

Here's a setup you can quickly drop in to test whether it's any faster for you.

void GfxOpenGL::BuildOutlineFont() {
    HFONT font;
    GLYPHMETRICSFLOAT gmf[256];          // receives per-glyph metrics; must be an array

    base_ = glGenLists(256);
    font = CreateFont( -24,              // Height Of Font
        0,                               // Width Of Font
        0,                               // Angle Of Escapement
        0,                               // Orientation Angle
        400,                             // Font Weight
        FALSE,                           // Italic
        FALSE,                           // Underline
        FALSE,                           // Strikeout
        ANSI_CHARSET,                    // Character Set Identifier
        OUT_TT_PRECIS,                   // Output Precision
        CLIP_DEFAULT_PRECIS,             // Clipping Precision
        0,                               // Output Quality
        FF_DONTCARE | DEFAULT_PITCH,     // Family And Pitch
        "Arial");                        // Font Name

    SelectObject(hDC, font);

    wglUseFontOutlines( hDC,             // Select The Current DC
        0,                               // Starting Character
        255,                             // Number Of Display Lists To Build
        base_,                           // Starting Display List
        0.8f,                            // Deviation From The True Outlines
        0.2f,                            // Font Thickness In The Z Direction
        WGL_FONT_POLYGONS,               // Use Polygons, Not Lines
        gmf);                            // Address Of Buffer To Receive Data
}

##### Share on other sites
I use the wglUseFontBitmaps method on a pretty recent card.

About 1000 glyphs displayed: 100 fps to 80 fps drop.

Nowhere near the 70-to-2 drop you're seeing. Even that old and nasty bitmap/display-list stuff should work much faster.

##### Share on other sites
I'm working on a pretty old laptop: 768 MB RAM, 2.8 GHz single core, ATI Radeon Mobility 7000. But still, the drop is ridiculous. I've been messing with texture-mapped fonts; once I get the damn thing to compile, I'll see how that works out. I'll try the outlined fonts in a sec.

Edit:
Probably something to do with that GLYPHMETRICSFLOAT structure thing.

##### Share on other sites
Ok, so never mind that error - I just allocated the structure on the heap and that stopped it. But then, as soon as the code returns out of that function, it gives me a debug assertion failure (with no information). I'm also giving up on the bitmapped fonts, as they're giving me random debug assertion failures all over the place - mostly when returning from functions.

I'm really at a loss here :(

Edit:
I am an idiot. You pass an array to that function.

Ok, now it compiles and runs with the outlines. But rendering text now glitches everything up; the only things that render are my console window and a few random lines, and there are some inexplicable dots at the top of the screen. I suspect the generated lists contain some translation call that's messing stuff up; I'll investigate. Thanks for your help guys.

Edit2:
Wrapping the list call in PushMatrix and PopMatrix doesn't work.

##### Share on other sites
Just a quick observation..
glListBase( id - 32 );
glCallLists( strlen( text ), GL_UNSIGNED_BYTE, text );

You can subtract 32 from id once after creation and add 32 back before deletion.
That way you save a couple of clocks per call, but the real killer is strlen.
(Besides all the hidden code, of course)
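On the strlen point: since the text arrives from Lua, the length is actually already known at the binding (luaL_checklstring hands it back for free), so it never needs recomputing per frame. The same idea in plain C++, caching the length next to the string (names are illustrative):

```cpp
#include <cstddef>
#include <cstring>

// A string whose length is computed once, not on every draw call.
struct CachedText {
    const char* str;
    size_t len;

    explicit CachedText(const char* s) : str(s), len(strlen(s)) {}
};

// The per-frame path then becomes:
//   glCallLists((GLsizei)t.len, GL_UNSIGNED_BYTE, t.str);
// with no strlen in the hot loop.
```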

I wrote a couple of font rendering method tests in assembler...
http://board.flatassembler.net/topic.php?t=9885

##### Share on other sites
You should use this tutorial from NeHe instead; the fonts look way better, and best of all they can scale if you're willing to tweak the code a bit. I'm using it in my engine and don't see any performance hit like I did with the wglUseFontBitmaps method.

##### Share on other sites
Quote:
 Original post by Vortez
You should use this tutorial from NeHe instead; the fonts look way better, and best of all they can scale if you're willing to tweak the code a bit. I'm using it in my engine and don't see any performance hit like I did with the wglUseFontBitmaps method.

Yea I had a look at that tutorial and got as far as downloading freetype and linking it before I got distracted with something else. I'll have a crack at implementing it if UseFontOutlines fails me.

##### Share on other sites
Ok, I took a stab at implementing FreeType. Good news is, the framerate is now up nice and high. Bad news is:

##### Share on other sites
Good news! I got it to work! Thanks guys for all your help.

In case someone wants to know: I had all the font handling code in a class, and I wasn't allocating the class objects on the heap, so bits of the class memory were getting reused for other things - including the list base.
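For anyone hitting the same thing: the bug was object lifetime, not OpenGL. A stack-allocated object dies when its scope ends, and its memory (including the stored list base) gets reused; keeping it on the heap (or otherwise making it outlive the frame) fixes it. A minimal sketch with a made-up Font class:

```cpp
// Stand-in for the font wrapper: just holds a display-list base.
struct Font {
    int base;
    explicit Font(int b) : base(b) {}
};

// BUG (what was happening): returning a pointer to a stack object.
// The object is destroyed on return, so reading base later gives garbage.
//   Font* broken() { Font f(1000); return &f; }   // dangling pointer!

// FIX: allocate on the heap so the object outlives the function.
Font* make_font(int base) {
    return new Font(base);   // caller must delete (or use std::unique_ptr)
}
```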
