Quick tutorial: Variable width bitmap fonts


[Image: newfont6vl.png -- sample of the rendered font]

The purpose of this thread is simple -- it's a really fast and easy introduction to creating nicely rendered, variable width bitmap fonts.

The basis of our fonts is AngelCode's Bitmap Font Generator (BMFont). This utility can create variable width bitmap fonts, along with ASCII files that describe the characters' properties. I suggest you toy around with this program a little first, and get a feel for how it works. Create a test font or two. The only catch is, make sure that you keep your font to only one texture page, as the code I'm about to provide doesn't cope with multiple pages (mainly because I want to have a single draw call per string rendered).

When you save a font, BMFont generates two files: a targa texture and a FNT file that describes the font. I usually save 32-bit targas (which are exported with an alpha channel) and then convert them to PNG, which compresses very well. If you're using the targa natively, you might be better served exporting in 8 bit and applying a color key when you load the image. You can also compress the image using DXT/S3TC, either on disk or in video memory1.
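
As an aside, "computing a color key" here just means a pass over the pixels at load time. One minimal interpretation, expanding 8-bit source data to RGBA and keying out a chosen value (the key value and pixel layout are my assumptions, not anything BMFont dictates):

#include <cstddef>

// Expand 8-bit source pixels to RGBA, treating any pixel equal to KeyValue as transparent.
// Src is Width*Height bytes; Dst must hold Width*Height*4 bytes.
void ExpandWithColorKey( const unsigned char* Src, unsigned char* Dst,
                         std::size_t Width, std::size_t Height, unsigned char KeyValue = 0 )
{
	for( std::size_t i = 0; i < Width * Height; ++i )
	{
		Dst[i * 4 + 0] = Src[i];
		Dst[i * 4 + 1] = Src[i];
		Dst[i * 4 + 2] = Src[i];
		Dst[i * 4 + 3] = ( Src[i] == KeyValue ) ? 0 : 255;
	}
}
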

Anyway, let's take a look at a segment of the FNT file that BMFont generates.



common lineHeight=32 base=25 scaleW=256 scaleH=256 pages=1
char id=0    x=149   y=113   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 
char id=1    x=159   y=112   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 
char id=2    x=169   y=112   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 
char id=3    x=179   y=112   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 
char id=4    x=189   y=112   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 
char id=5    x=199   y=110   width=9     height=19    xoffset=1     yoffset=7     xadvance=11    page=0 

We need to parse this ASCII file into our app. First off, let's set up some data structures to deal with this information. Notice that the first line has properties common to the entire character set, while the rest of the lines have information specific to a single character. So here are our structures:


struct CharDescriptor
{
	//offsets and advance are floats so they can hold signed values (see the 2013 edit note)
	unsigned short x, y;
	unsigned short Width, Height;
	float XOffset, YOffset;
	float XAdvance;
	unsigned short Page;

	CharDescriptor() : x( 0 ), y( 0 ), Width( 0 ), Height( 0 ), XOffset( 0 ), YOffset( 0 ),
		XAdvance( 0 ), Page( 0 )
	{ }
};

struct Charset
{
	unsigned short LineHeight;
	unsigned short Base;
	unsigned short Width, Height;
	unsigned short Pages;
	CharDescriptor Chars[256];
};




This is pretty straightforward. CharDescriptor holds the information for a single character; Charset holds the descriptions that apply to all characters, as well as the descriptors for every character. You may have noticed that BMFont allows you to generate fewer than a complete set of ASCII characters, but the character array here is still 256. We'll be using the ASCII values of characters to index into that array. The CharDescriptor constructor ensures that if a character with no representation is used, nothing will be drawn.

Writing the actual parser is fairly tedious, and not something I want to discuss here. I'm going to post my C++ code for parsing the FNT file; you are free to use that code, or write your own.


bool Font::ParseFont( std::istream& Stream, Charset& CharsetDesc )
{
	std::string Line;
	std::string Read, Key, Value;
	std::size_t i;
	while( std::getline( Stream, Line ) )
	{
		std::stringstream LineStream( Line );

		//read the line's type
		LineStream >> Read;
		if( Read == "common" )
		{
			//this holds common data
			while( !LineStream.eof() )
			{
				std::stringstream Converter;
				LineStream >> Read;
				i = Read.find( '=' );
				Key = Read.substr( 0, i );
				Value = Read.substr( i + 1 );

				//assign the correct value
				Converter << Value;
				if( Key == "lineHeight" )
					Converter >> CharsetDesc.LineHeight;
				else if( Key == "base" )
					Converter >> CharsetDesc.Base;
				else if( Key == "scaleW" )
					Converter >> CharsetDesc.Width;
				else if( Key == "scaleH" )
					Converter >> CharsetDesc.Height;
				else if( Key == "pages" )
					Converter >> CharsetDesc.Pages;
			}
		}
		else if( Read == "char" )
		{
			//this is data for a specific char
			unsigned short CharID = 0;

			while( !LineStream.eof() )
			{
				std::stringstream Converter;
				LineStream >> Read;
				i = Read.find( '=' );
				Key = Read.substr( 0, i );
				Value = Read.substr( i + 1 );

				//assign the correct value
				Converter << Value;
				if( Key == "id" )
					Converter >> CharID;
				else if( Key == "x" )
					Converter >> CharsetDesc.Chars[CharID].x;
				else if( Key == "y" )
					Converter >> CharsetDesc.Chars[CharID].y;
				else if( Key == "width" )
					Converter >> CharsetDesc.Chars[CharID].Width;
				else if( Key == "height" )
					Converter >> CharsetDesc.Chars[CharID].Height;
				else if( Key == "xoffset" )
					Converter >> CharsetDesc.Chars[CharID].XOffset;
				else if( Key == "yoffset" )
					Converter >> CharsetDesc.Chars[CharID].YOffset;
				else if( Key == "xadvance" )
					Converter >> CharsetDesc.Chars[CharID].XAdvance;
				else if( Key == "page" )
					Converter >> CharsetDesc.Chars[CharID].Page;
			}
		}
	}

	return true;
}




So far, so good. After the parser does its thing, we have a complete, simple representation of the character details in memory. Now comes rendering. In order to render, I define a max length per string drawn (say, MAX_CHARS), and create a dynamic vertex buffer of that size. Every time a string is drawn, we lock the vertex buffer and compute all of the vertices, filling them into the buffer. We then render that buffer in one go. Note: It is important to lock the vertex buffer with the discard flag! If you do not, the pipeline will stall if you use the same font twice in a row. That goes for OpenGL/VBO too.2
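
To make that concrete, here's an OpenGL-flavored sketch of creating the dynamic buffer and "discarding" (orphaning) it before each fill. MAX_CHARS, FontVB, and LockFontBuffer are names I'm inventing for illustration; the D3D9 equivalent of the lock is simply IDirect3DVertexBuffer9::Lock with D3DLOCK_DISCARD (see note 2 for the GL discard trick).

#include <GL/glew.h>   // or any loader that exposes GL 1.5 buffer objects

struct FontVertex { float x, y, tu, tv; };     // same layout as note 4

const unsigned int MAX_CHARS = 256;                                    // arbitrary cap per draw call
const GLsizeiptr   VB_SIZE   = MAX_CHARS * 4 * sizeof( FontVertex );   // 4 vertices per quad

GLuint FontVB = 0;

void CreateFontBuffer()
{
	glGenBuffers( 1, &FontVB );
	glBindBuffer( GL_ARRAY_BUFFER, FontVB );
	glBufferData( GL_ARRAY_BUFFER, VB_SIZE, NULL, GL_DYNAMIC_DRAW );
}

FontVertex* LockFontBuffer()
{
	glBindBuffer( GL_ARRAY_BUFFER, FontVB );
	//orphan the old storage so the driver doesn't stall on a previous draw,
	//then map the fresh storage for writing -- the GL equivalent of D3DLOCK_DISCARD
	glBufferData( GL_ARRAY_BUFFER, VB_SIZE, NULL, GL_DYNAMIC_DRAW );
	return static_cast<FontVertex*>( glMapBuffer( GL_ARRAY_BUFFER, GL_WRITE_ONLY ) );
}
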

How do we compute the vertices? Well, consider the pseudocode on AngelCode's page:


// Compute the source rect
Rect src;
src.left   = ch.x;
src.top    = ch.y;
src.right  = ch.x + ch.width;
src.bottom = ch.y + ch.height;

// Compute the destination rect
Rect dst;
dst.left   = cursor.x + ch.xoffset;
dst.top    = cursor.y + ch.yoffset;
dst.right  = dst.left + ch.width;
dst.bottom = dst.top + ch.height;

// Draw the image from the right texture
DrawRect(ch.page, src, dst);

// Update the position
cursor.x += ch.xadvance;




Their source rect will form our texture coordinates, and their destination rect will form our vertex coordinates3. We'll also have to convert source rect into [0,1] texture coordinate space. One last little detail: Notice that there is a virtual cursor position that is incremented, and that the increment value is not the same as the letter width. If you try to increment by the letter width, your letters will be smashed together. The rest is pure, simple arithmetic. This code generates clockwise OpenGL quads; in D3D it's a simple matter of copying the appropriate vertices to form a triangle list.


for( unsigned int i = 0; i < Str.size(); ++i )
{
	CharX = m_Charset.Chars[Str[i]].x;
	CharY = m_Charset.Chars[Str[i]].y;
	Width = m_Charset.Chars[Str[i]].Width;
	Height = m_Charset.Chars[Str[i]].Height;
	OffsetX = m_Charset.Chars[Str[i]].XOffset;
	OffsetY = m_Charset.Chars[Str[i]].YOffset;

	//upper left
	Verts[i*4].tu = (float) CharX / (float) m_Charset.Width;
	Verts[i*4].tv = (float) CharY / (float) m_Charset.Height;
	Verts[i*4].x = (float) CurX + OffsetX;
	Verts[i*4].y = (float) OffsetY;

	//upper right
	Verts[i*4+1].tu = (float) (CharX+Width) / (float) m_Charset.Width;
	Verts[i*4+1].tv = (float) CharY / (float) m_Charset.Height;
	Verts[i*4+1].x = (float) Width + CurX + OffsetX;
	Verts[i*4+1].y = (float) OffsetY;

	//lower right
	Verts[i*4+2].tu = (float) (CharX+Width) / (float) m_Charset.Width;
	Verts[i*4+2].tv = (float) (CharY+Height) / (float) m_Charset.Height;
	Verts[i*4+2].x = (float) Width + CurX + OffsetX;
	Verts[i*4+2].y = (float) Height + OffsetY;

	//lower left
	Verts[i*4+3].tu = (float) CharX / (float) m_Charset.Width;
	Verts[i*4+3].tv = (float) (CharY+Height) / (float) m_Charset.Height;
	Verts[i*4+3].x = (float) CurX + OffsetX;
	Verts[i*4+3].y = (float) Height + OffsetY;

	CurX += m_Charset.Chars[Str[i]].XAdvance;
}




Keep in mind that Verts is not a system memory array; it's the pointer returned from locking the vertex buffer (glMapBuffer, IDirect3DVertexBuffer9::Lock), which has been cast to a pointer to a custom vertex structure4.
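
In other words, the fill for one string looks roughly like this (a sketch reusing the hypothetical LockFontBuffer helper from the earlier snippet; the D3D9 path would call Lock with D3DLOCK_DISCARD and Unlock instead):

FontVertex* Verts = LockFontBuffer();   //orphan + map, as sketched earlier
if( Verts )
{
	// ... the quad-building loop above fills Verts[i*4 + 0..3] for each character ...
	glUnmapBuffer( GL_ARRAY_BUFFER );   //must unmap before drawing from the buffer
}
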

Lastly, we draw the vertex buffer (glDrawArrays, IDirect3DDevice9::DrawPrimitive). The OGL vertex count will be the length of the string times 4, and the D3D primitive count will be the number of characters times 2. Bind the texture for the font (this code only supports one page, remember) and set up alpha blending (source = source alpha, dest = one minus source alpha). For coloration, set up your texture units to modulate against a constant color. Our bitmap stores the fonts as white, so the text will take on the color of whatever constant you modulate against. And that's it, really. Beautiful, variable width, nicely rendered fonts, without any major hassles.
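
For the GL fixed-function path, that state setup and draw might look like the following sketch (FontVB is the buffer from the earlier snippet; modulating against a constant color is done here via glColor with GL_MODULATE, which is one common way to do it):

void DrawFontBuffer( GLuint FontTexture, unsigned int CharCount, float r, float g, float b )
{
	glEnable( GL_TEXTURE_2D );
	glBindTexture( GL_TEXTURE_2D, FontTexture );
	glTexEnvi( GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE );
	glColor4f( r, g, b, 1.0f );                        //white glyphs pick up this color

	glEnable( GL_BLEND );
	glBlendFunc( GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA );

	glBindBuffer( GL_ARRAY_BUFFER, FontVB );
	glEnableClientState( GL_VERTEX_ARRAY );
	glEnableClientState( GL_TEXTURE_COORD_ARRAY );
	//FontVertex is { x, y, tu, tv }: positions at offset 0, UVs two floats in
	glVertexPointer( 2, GL_FLOAT, sizeof( FontVertex ), (void*)0 );
	glTexCoordPointer( 2, GL_FLOAT, sizeof( FontVertex ), (void*)( 2 * sizeof( float ) ) );

	glDrawArrays( GL_QUADS, 0, CharCount * 4 );        //4 vertices per character

	glDisableClientState( GL_TEXTURE_COORD_ARRAY );
	glDisableClientState( GL_VERTEX_ARRAY );
}
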


Notes:
1) I strongly suggest you use DXT/S3TC on the texture in video memory. However, it is critically important to use DXT3. DXT1 will mangle your fonts, as it does not cope well with sharp changes in alpha.
2) To discard a vertex buffer in OpenGL, call BufferData with the same size, but a NULL pointer. Then call MapBuffer.
3) You'll notice that the vertex coordinates here are in pixels. In order to render this correctly, you'll need to define an orthographic projection such that one unit corresponds to one pixel (see the short sketch after these notes).
4) struct FontVertex { float x, y, tu, tv; };
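
Expanding on note 3, here's a minimal fixed-function GL sketch of a pixel-space projection (the 800x600 in the usage comment is just a placeholder; D3D9 users can build the same matrix with D3DXMatrixOrthoOffCenterLH):

void SetPixelSpaceProjection( int ScreenWidth, int ScreenHeight )
{
	glMatrixMode( GL_PROJECTION );
	glLoadIdentity();
	//top-left origin, one unit per pixel, y increasing downward to match the glyph offsets
	glOrtho( 0.0, ScreenWidth, ScreenHeight, 0.0, -1.0, 1.0 );
	glMatrixMode( GL_MODELVIEW );
	glLoadIdentity();
}

//e.g. SetPixelSpaceProjection( 800, 600 ); before drawing text
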


So, questions/comments?


[EDIT 5/2/2006] It just came to my attention that this tutorial is a lot more well known than I thought. So I revised a few parser bugs that were pointed out to me but I never fixed in the posted code.
[EDIT 1/15/2013] Still trucking! Changed some unsigned shorts to floats to support signed values and especially signed distance field rendering.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Why not submit it for a sweet snippet?
Quote (Original post by Anonymous Poster):
Why not submit it for a sweet snippet?


The turnover time between submission and posting is quite long. Besides, in a week or two it'll be available in a different, as-yet-unannounced venue.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
I love AngelCode's bitmap font format. Your tutorial will open up a few eyes I am sure. Nice work.
Brent Gunning
Maybe I'm missing something, but is there any reason that you're not doing any kerning? Without kerning, certain letter pairs will always look a little odd -- for example, "LI" can look like "L I" without the proper kerning values, or letters can appear too close to each other. Kerning values are something you should be able to extract from the TrueType font on export.

It's something you may want to consider doing, along with a little bit of leading to adjust the space between multiple lines of text so that it's readable.
Joseph Fernald | Software Engineer | Red Storm Entertainment. The opinions expressed are those of the person posting and not those of Red Storm Entertainment.
Kerning information and line height are exported from BMFont and stored in the .fnt file. Promit already handles kerning, and line height would only be an issue with multiple lines, which is fairly simple to implement but not necessary for most non-RPG games.
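
For anyone who wants to wire kerning up: the .fnt file stores it as lines of the form "kerning first=... second=... amount=...", and applying it is just an extra adjustment before each advance. A rough sketch (the map storage and GetKerning helper are my own additions, not part of Promit's posted code):

#include <map>
#include <utility>

//kerning amounts keyed by (first char, second char), filled while parsing any
//"kerning first=.. second=.. amount=.." lines in the .fnt file
typedef std::map< std::pair<unsigned short, unsigned short>, float > KerningMap;
KerningMap Kerning;

float GetKerning( unsigned short First, unsigned short Second )
{
	KerningMap::const_iterator it = Kerning.find( std::make_pair( First, Second ) );
	return ( it != Kerning.end() ) ? it->second : 0.0f;
}

//then, inside the quad-building loop, before positioning character i:
//    if( i > 0 )
//        CurX += GetKerning( Str[i-1], Str[i] );
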
Brent Gunning
Hey uh... I apologize for bringing this 7-year-old post back from the grave, but it's still a top result on Google (and/or linked on AngelCode's download page)...

If anyone tries using Promit's code with the latest version of BMFont, you'll notice crashes. Here's what to look out for:

1. There's an unhandled new key in the if( Read == "char") section called "chnl" -- just add another case for it

2. All the values in the file can be signed, so don't use the unsigned shorts from the CharDescriptor struct as-is (otherwise half your letters won't show up and you'll want to kill yourself). Like Promit, cast everything to float! (3 days of "where the flying **** is the letter P?")

3. The first char is now systematically id -1 (or 65535 unsigned); that's not a valid array index -- it's the data for the Unicode "unknown character" glyph. Either skip it or save it separately, maybe in a "defaultChar" variable somewhere...

That's my quick and dirty debugging advice... Take it or leave it. Sorry again for the gravedigging.
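
Put together, those three tweaks might look roughly like this inside the "char" branch of the parser (a sketch against Promit's posted code, assuming the offset/advance members are floats as in the revised struct):

//this is data for a specific char
int CharID = 0;          //signed, so the id=-1 "missing glyph" entry doesn't wrap to 65535
bool SkipChar = false;

while( !LineStream.eof() )
{
	std::stringstream Converter;
	LineStream >> Read;
	i = Read.find( '=' );
	Key = Read.substr( 0, i );
	Value = Read.substr( i + 1 );
	Converter << Value;

	if( Key == "id" )
	{
		Converter >> CharID;
		//id is always the first key on the line, so this guards everything after it
		if( CharID < 0 || CharID > 255 )
			SkipChar = true;
	}
	else if( SkipChar )
	{
		continue;        //not a valid index into Chars[256]; ignore it (or stash it separately)
	}
	else if( Key == "chnl" )
	{
		//new key in recent BMFont versions; nothing to store
	}
	else if( Key == "x" )
		Converter >> CharsetDesc.Chars[CharID].x;
	// ... y, width, height, xoffset, yoffset, xadvance, page, exactly as before ...
}
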

Just what I was looking for. Thanks Promit from 10 years ago!

I think, therefore I am. I think? - "George Carlin"
My Website: Indie Game Programming

My Twitter: https://twitter.com/indieprogram

My Book: http://amzn.com/1305076532

Don't worry, I just checked and this code is still driving the text rendering for our current gen engine. I think a few minor tweaks were made over the years, and that's it.

SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.

Hey Promit, thanks for the tutorial. It helped me remove the dependency on the font rendering libs I was using.

I was just wondering: what does it take to include the font size in the vertex calculations, to set the size of the printed text? Do I just multiply each line of the vertex calculation by the font size, or do I multiply by a scale matrix?

Also, there are other values in the font format, such as "aa", "stretchH", "lineHeight", "padding", "spacing", "scaleH" and "scaleW". I'm not sure where exactly each of those fits into the picture.

Here's my rendering code; maybe someone will find it useful, or could suggest a better way/improvements. Currently I'm dynamically allocating the vertex/UV buffers; maybe I should just set a max size instead, but anyway...


void FontRender(font_renderer *Renderer, font_set *Font)
{
    u32 NumChars = StringLength(Renderer->Text);
    u32 BufferSize = NumChars * 12 * sizeof(r32);
    
    if (!Renderer->Initialized)
    {
        glGenBuffers(1, &Renderer->VBO);
        glBindBuffer(GL_ARRAY_BUFFER, Renderer->VBO);
        glBufferData(GL_ARRAY_BUFFER, BufferSize * 2, 0, GL_DYNAMIC_DRAW);

        glGenVertexArrays(1, &Renderer->VAO);
        glBindVertexArray(Renderer->VAO);
        glEnableVertexAttribArray(0);
        glVertexAttribPointer(0, 2, GL_FLOAT, 0, 0, 0);
        glEnableVertexAttribArray(1);
        glVertexAttribPointer(1, 2, GL_FLOAT, 0, 0, (void *)BufferSize);

        glBindBuffer(GL_ARRAY_BUFFER, 0);
        glBindVertexArray(0);

        Renderer->Initialized = 1;
    }

    r32 *VertPos = Calloc(NumChars * 12, r32);
    r32 *VertUV = Calloc(NumChars * 12, r32);

    For(u32, i, NumChars)
    {
        font_character Character = Font->Characters[Renderer->Text[i] - 32];
        r32 X = Character.X;
        r32 Y = Character.Y;
        r32 XOffset = Character.XOffset;
        r32 YOffset = Character.YOffset;
        r32 XAdvance = Character.XAdvance;
        r32 Width = Character.Width;
        r32 Height = Character.Height;

        // Triangle 1
        {
            // Top Left
            VertPos[i * 12] = Renderer->CurrentX + XOffset;
            VertPos[i * 12 + 1] = YOffset;

            // Bottom Left
            VertPos[i * 12 + 2] = Renderer->CurrentX + XOffset;
            VertPos[i * 12 + 3] = YOffset + Height;

            // Bottom Right
            VertPos[i * 12 + 4] = Renderer->CurrentX + XOffset + Width;
            VertPos[i * 12 + 5] = YOffset + Height;
        }

        // Triangle 2
        {
            // Bottom Right
            VertPos[i * 12 + 6] = VertPos[i * 12 + 4];
            VertPos[i * 12 + 7] = VertPos[i * 12 + 5];

            // Top Right
            VertPos[i * 12 + 8] = Renderer->CurrentX + XOffset + Width;
            VertPos[i * 12 + 9] = YOffset;

            // Top Left
            VertPos[i * 12 + 10] = VertPos[i * 12];
            VertPos[i * 12 + 11] = VertPos[i * 12 + 1];
        }

        // UV 1
        {
            // Top left
            VertUV[i * 12] = X / Font->Width;
            VertUV[i * 12 + 1] = Y / Font->Height;

            // Bottom left
            VertUV[i * 12 + 2] = X / Font->Width;
            VertUV[i * 12 + 3] = (Y + Height) / Font->Height;

            // Bottom right
            VertUV[i * 12 + 4] = (X + Width) / Font->Width;
            VertUV[i * 12 + 5] = (Y + Height) / Font->Height;
        }

        // UV 2
        {
            // Bottom right
            VertUV[i * 12 + 6] = VertUV[i * 12 + 4];
            VertUV[i * 12 + 7] = VertUV[i * 12 + 5];

            // Top right
            VertUV[i * 12 + 8] = (X + Width) / Font->Width;
            VertUV[i * 12 + 9] = Y / Font->Height;

            // Top left
            VertUV[i * 12 + 10] = VertUV[i * 12 ];
            VertUV[i * 12 + 11] = VertUV[i * 12 + 1];
        }

        Renderer->CurrentX += XAdvance;
    }

    glBindBuffer(GL_ARRAY_BUFFER, Renderer->VBO);
    u32 Offset = 0;
    glBufferSubData(GL_ARRAY_BUFFER, Offset, BufferSize, VertPos);
    Offset += BufferSize;
    glBufferSubData(GL_ARRAY_BUFFER, Offset, BufferSize, VertUV);

    m4 FontProjection = Orthographic(0, 800, 600, 0, -1, +1);

    glDisable(GL_DEPTH_TEST);
    ShaderUse(Renderer->Shader);
    glBindVertexArray(Renderer->VAO);
    TextureBind(Font->Atlas);
    ShaderSetV3(Renderer->Shader, "Color", Renderer->Color);
    ShaderSetM4(Renderer->Shader, "Projection", &FontProjection);
    glDrawArrays(GL_TRIANGLES, 0, NumChars * 6);
    
    free(VertPos);
    free(VertUV);

    glBindBuffer(GL_ARRAY_BUFFER, 0);
    glBindVertexArray(0);
}
