# [Solved] OpenGL: Rendering text hits fps - hard

## Recommended Posts

thomasfn1    111
So I'm using OpenGL, WGL and C++ to render text. The code makes a call to glCallLists to render the text, so it shouldn't be too slow, right? Wrong.

The scene has a rotating skybox (6 quads, 6 textures, texture size 1024x1024, linear filter) and 1-3 characters of text drawn in colour at the top right (the fps counter). At this point, fps is between 70 and 90. Uh oh. Draw some untextured quads and lines (I tried just the console without text; it has no fps hit) and a whole bunch more text, and fps drops to a persistent 2. Not good.

The calls are made from Lua into C++, but I don't think the problem lies there, as I'm also making tonnes of other calls (like drawing each line and each quad) that were still being made back when fps was up at 80. Just to put this in perspective: when I render 3x3 segments of terrain at the same time (each segment is 64x64 quads, textured, with lighting), fps is about 20. Surely rendering some simple text can't be more intensive than rendering a full 3D terrain with lighting? Here is the text code:
```cpp
// Include header
#include "text.h"

// Define functions
int font_create( char* family, int size, int weight ) {
    HFONT font;
    HFONT oldfont;

    HDC hDC = GetHDC();

    int base = glGenLists( 96 );
    font = CreateFont( -size, 0, 0, 0, weight, false, false, false,
        ANSI_CHARSET, OUT_TT_PRECIS, CLIP_DEFAULT_PRECIS, ANTIALIASED_QUALITY,
        FF_DONTCARE | DEFAULT_PITCH, family );
    oldfont = (HFONT)SelectObject( hDC, font );
    wglUseFontBitmaps( hDC, 32, 96, base );
    SelectObject( hDC, oldfont );
    DeleteObject( font );

    return base;
}

void font_destroy( int id ) {
    glDeleteLists( id, 96 );
}

void font_render_noraster( char* text, int id ) {
    glPushAttrib( GL_LIST_BIT );
    glListBase( id - 32 );
    glCallLists( strlen( text ), GL_UNSIGNED_BYTE, text );
    glPopAttrib();
}

void font_render( char* text, int id, int x, int y ) {
    glRasterPos2f( float( x ), float( y ) );
    font_render_noraster( text, id );
}

int font_getwidth( char* text, int id ) {
    GLint oldbuffer;
    glGetIntegerv( GL_DRAW_BUFFER, &oldbuffer );
    glDrawBuffer( GL_NONE );

    GLfloat o_rpos[4];
    glGetFloatv( GL_CURRENT_RASTER_POSITION, o_rpos );

    font_render_noraster( text, id );

    GLfloat n_rpos[4];
    glGetFloatv( GL_CURRENT_RASTER_POSITION, n_rpos );

    glDrawBuffer( oldbuffer );

    return int( n_rpos[0] - o_rpos[0] );
}
```


Here is the Lua binding:

```cpp
static int lbind_r_rendertext( lua_State* L ) {
    char* text = const_cast<char*>( luaL_checkstring( L, 1 ) );
    int base = luaL_checkint( L, 2 );
    int x = luaL_checkint( L, 3 );
    int y = luaL_checkint( L, 4 );
    font_render( text, base, x, y );
    return 0;
}
```


Perhaps it doesn't like the const_cast much? Any ideas? [Edited by - thomasfn1 on April 1, 2010 2:03:54 PM]

##### Share on other sites
http://www.gamedev.net/community/forums/faq.asp#tags

karwosts    840
Can you try a profiler or something to see if there is something obvious causing that perf hit? I use a similar method to render text and I've never seen any kind of performance hit from it.

I think something else has to be going on, because that shouldn't be that slow, unless you're calling font_create every frame or something.

thomasfn1    111
I'll do some more debugging to see if something silly like font_create being called every frame is happening. I'm not sure of the best way of implementing a profiler; I guess I could make something that records time differences between operations and writes them to the log, but that isn't practical (especially since whatever gets written to the log gets written to that console too :P)
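The quick-and-dirty timing idea above can be sketched with std::chrono. Everything here (the ScopedTimer name, the labels, the call site) is hypothetical, just one way to narrow down a hot spot without a full profiler:

```cpp
#include <chrono>
#include <cstdio>
#include <string>
#include <utility>

// Hypothetical helper: prints how long a scope took when it is destroyed.
// Dropping one of these into a suspect function shows where the frame
// time is actually going.
struct ScopedTimer {
    std::string label;
    std::chrono::steady_clock::time_point start;
    explicit ScopedTimer(std::string l)
        : label(std::move(l)), start(std::chrono::steady_clock::now()) {}
    ~ScopedTimer() {
        auto us = std::chrono::duration_cast<std::chrono::microseconds>(
            std::chrono::steady_clock::now() - start).count();
        std::printf("%s: %lld us\n", label.c_str(), (long long)us);
    }
};

// Usage (hypothetical call site):
// void render_console() {
//     ScopedTimer t("render_console");
//     /* ... draw calls ... */
// }
```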

I also tried not casting to char* at the Lua binding, keeping it as const char* and passing that into glCallLists instead; it had no effect.

Edit:
font_create is getting called once, at the beginning of the program, as expected.

And who uses html in forum code anyways -_-

karwosts    840
You can use this profiler if you want, it is trivially easy to set up.

Very Sleepy

thomasfn1    111
Thanks - suitable name, methinks. I'll have a go now - but I'll have to go soon, so I might not get back to you until tomorrow.

Edit:

I ran it over a 10 second period, with the console rendering all the text.

Profiler Result

I'm not sure what it all means :/

szecs    2990
You could use GetTextExtentExPoint to get the text width...
Just make sure to set the active font first.

thomasfn1    111
Anyone got any more ideas? I replaced the text-size calculation code with GetTextExtentPoint; it apparently works, and nothing's moved off to weird places. But I'm still having problems with the fps levels.

mark ds    1786
Where is your code that actually creates the display lists (glNewList)? I'd guess something odd may be happening in there...

_the_phantom_    11250
As soon as I saw "glRasterPos2f" alarm bells started ringing, so I have an idea: don't use wglUseFontBitmaps to generate your text.

A quick look at the MSDN page on it highlights the problem:
Quote:
 Each display list consists of a single call to glBitmap

That is going to be a killer. The function is old and is going to hurt, as it's probably poorly optimised/implemented on modern systems; not to mention it probably sends a bitmap over the bus to the card for every call. Even in a display list it's not going to be fast.

The fastest way to render text is to create a texture with each character on it (or more than one texture in the case of larger fonts), then build a list of tris or quads which sample this texture at the right points to grab the letters and render them to the screen; it will be faster.
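The textured-quad approach above can be sketched as follows. This assumes a hypothetical layout (a 16x16 grid of ASCII glyphs packed into one square atlas texture); in a real renderer the computed values would feed glTexCoord2f/glVertex2f, or a vertex buffer, with one quad per character:

```cpp
// Sketch of the texture-atlas approach: one atlas texture holding a
// 16x16 grid of glyphs in ASCII order (an assumed layout, not from the
// thread). The returned values are what the GL calls would consume.
struct GlyphQuad {
    float u0, v0, u1, v1;   // texture coords of the glyph's atlas cell
    float x0, y0, x1, y1;   // screen-space quad corners
};

GlyphQuad glyph_quad(unsigned char c, float penX, float penY,
                     float cell = 16.0f) {
    const float step = 1.0f / 16.0f;      // one cell in texture space
    float u = (c % 16) * step;            // column in the atlas grid
    float v = (c / 16) * step;            // row in the atlas grid
    return { u, v, u + step, v + step,
             penX, penY, penX + cell, penY + cell };
}

// Building a whole string is one quad per character, advancing the pen:
// for (const char* p = text; *p; ++p) {
//     quads.push_back(glyph_quad(*p, x, y));
//     x += advance;   // per-glyph advance from the font metrics
// }
```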

karwosts    840
Instead of wglUseFontBitmaps maybe try wglUseFontOutlines? I use that currently and I've never noticed any slowdown from it. I've tried building my own text from a character map texture, but I always thought that looked like crap unless I used a huge texture to store the text (either too jagged or too blurred with AA)

Here's a setup test you can quickly drop in to see if it's any faster for you.

```cpp
void GfxOpenGL::BuildOutlineFont() {
    HFONT font;
    base_ = glGenLists(256);
    font = CreateFont( -24,                        // Height Of Font
                       0,                          // Width Of Font
                       0,                          // Angle Of Escapement
                       0,                          // Orientation Angle
                       400,                        // Font Weight
                       FALSE,                      // Italic
                       FALSE,                      // Underline
                       FALSE,                      // Strikeout
                       ANSI_CHARSET,               // Character Set Identifier
                       OUT_TT_PRECIS,              // Output Precision
                       CLIP_DEFAULT_PRECIS,        // Clipping Precision
                       0,                          // Output Quality
                       FF_DONTCARE|DEFAULT_PITCH,  // Family And Pitch
                       "Arial" );                  // Font Name
    SelectObject(hDC, font);
    wglUseFontOutlines( hDC,                // Select The Current DC
                        0,                  // Starting Character
                        255,                // Number Of Display Lists To Build
                        base_,              // Starting Display List
                        0.8f,               // Deviation From The True Outlines
                        0.2f,               // Font Thickness In The Z Direction
                        WGL_FONT_POLYGONS,  // Use Polygons, Not Lines
                        gmf );              // Address Of Buffer To Receive Data
}
```

szecs    2990
I use the wglUseFontBitmaps method on a pretty recent card.

About 1000 glyphs displayed: 100 fps to 80 fps drop.

Nowhere near your 70-to-2 drop. That old and nasty ugly bitmap/display-list stuff should work much faster than that.

thomasfn1    111
I'm working on a pretty old laptop: 768MB RAM, 2.8GHz single core, ATI Radeon Mobility 7000. But still, the drop is ridiculous. I've been messing with using texture-mapped fonts; once I get the damn thing to compile, I'll see how that works out. I'll try the outlined fonts in a sec.

Edit:
Probably something to do with that glyph metrics float structure thing.

thomasfn1    111
Ok, so never mind that error - I just allocated the structure on the heap and that stopped it. But then, as soon as the code returns out of that function, I get a debug assertion failure (with no information). I'm also giving up on the bitmapped fonts, as those are giving me random debug assertion failures all over the place - mostly when returning stuff from functions.

I'm really at a loss here :(

Edit:
I am an idiot. You pass an array to that function.

Ok, now it compiles and runs with the outlines. But rendering text now glitches everything up; the only things that render are my console window and a few random lines, and there are some inexplicable dots at the top of the screen. I suspect the generated lists contain some translation call that's messing stuff up; I'll investigate. Thanks for your help guys.

Edit2:
Wrapping the list call in glPushMatrix and glPopMatrix doesn't work.

bitshifter    113
Just a quick observation..
```cpp
glListBase( id - 32 );
glCallLists( strlen( text ), GL_UNSIGNED_BYTE, text );
```

You can subtract 32 from id after creation and add 32 back before deletion.
That way you save a couple of clocks, but the real killer is strlen.
(Besides all the hidden code, of course.)
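The strlen point above amounts to measuring the string once when it is set, not every frame. A minimal sketch (the CachedText name is made up; strlen on short strings is cheap, but hoisting it out of the render path costs nothing):

```cpp
#include <cstring>
#include <string>

// Hypothetical wrapper: measure the string once when the text changes,
// instead of calling strlen() inside the per-frame render path.
struct CachedText {
    std::string text;
    int length = 0;
    void set(const char* s) {
        text = s;
        length = (int)std::strlen(s);  // paid once, not every frame
    }
};

// The per-frame path then becomes (hypothetical):
// glCallLists(label.length, GL_UNSIGNED_BYTE, label.text.c_str());
```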

I wrote a couple of font rendering method tests in assembler...
http://board.flatassembler.net/topic.php?t=9885

Vortez    2714
You should use this tutorial from NeHe instead; the fonts look way better, and best of all they can scale if you're willing to tweak the code a bit. I'm using it in my engine and I don't see any performance hit like I did with the wglUseFontBitmaps method.

thomasfn1    111
Quote:
 Original post by Vortez
 You should use this tutorial from NeHe instead; the fonts look way better, and best of all they can scale if you're willing to tweak the code a bit. I'm using it in my engine and I don't see any performance hit like I did with the wglUseFontBitmaps method.

Yea, I had a look at that tutorial and got as far as downloading FreeType and linking it before I got distracted by something else. I'll have a crack at implementing it if UseFontOutlines fails me.

thomasfn1    111
Ok I took a stab at implementing freetype. Good news is, framerate is now up nice and high. Bad news is:

thomasfn1    111
Good news! I got it to work! Thanks guys for all your help.

In case someone wants to know: I had all the font handling code in a class, and I wasn't allocating the class objects on the heap, so once they went out of scope bits of their memory (including the list base) were getting reused for other things.
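That bug class can be illustrated in a few lines. This is a minimal sketch with a hypothetical Font type standing in for the thread's font class; the point is that the object holding the display-list base must outlive every frame that renders with it:

```cpp
#include <memory>

// Hypothetical stand-in for the font class described above.
struct Font {
    int listBase = 0;   // e.g. the value returned by glGenLists
};

// Broken version (sketch): the Font lives on the stack, so the pointer
// dangles as soon as init returns and listBase reads back garbage:
//
//   Font* g_font;
//   void init_fonts() {
//       Font local;
//       local.listBase = 1000;
//       g_font = &local;   // BUG: 'local' dies when init_fonts returns
//   }

// Fixed version: allocate on the heap so the object outlives the function.
std::unique_ptr<Font> g_font;

void init_fonts() {
    g_font = std::make_unique<Font>();
    g_font->listBase = 1000;  // stays valid for the whole program
}
```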
