Understanding loading of OpenGL textures from FreeType2

Hi Gamedev,

I am writing a simple OpenGL/FreeType2 font engine for a project I am working on. I have it up and running so far, but there are a couple of things I hope someone can explain to me, since I copied some of the code from the internet and don't fully understand it.

I was unable to color my fonts using the original method (they always came out black), and I don't understand how the new loading technique (taken from NeHe tutorial #43) solves this. Moreover, why didn't the original method work? Could I still make it work by passing some other combination of GL enums to glTexImage2D? I am also not sure I understand the GL_* enums I pass. I would appreciate it if someone could explain it to me :)

Here's my new and current loading code which works:


bool Font::Load(const char* fontFilePath, int pixelSize)
{
    // Error checking
    if (!fontFilePath || fontFilePath[0] == '\0')
    {
        std::cerr << "sl::Font: Empty file path passed as argument" << std::endl;
        return false;
    }

    if (pixelSize == 0)
    {
        std::cerr << "sl::Font: Passed a zero character size" << std::endl;
        return false;
    }

    if (!Font::library)
    {
        std::cerr << "sl::Font: FreeType2 library not initialized properly (" << ErrorString() << ")" << std::endl;
        return false;
    }

    if ((error = FT_New_Face(Font::library, fontFilePath, 0, &face)))
    {
        std::cerr << "sl::Font: Unable to create new face (" << ErrorString() << ")" << std::endl;
        return false;
    }

    source.assign(fontFilePath);
    this->pixelSize = pixelSize;

    // Warn the user if the font is not a scalable outline font or lacks horizontal metrics
    if (!(face->face_flags & FT_FACE_FLAG_SCALABLE) || !(face->face_flags & FT_FACE_FLAG_HORIZONTAL))
        std::cerr << "sl::Font warning: Font is not scalable (" << fontFilePath << ")" << std::endl;

    FT_Set_Pixel_Sizes(face, pixelSize, pixelSize);

    int textureWidth = 0;
    int textureHeight = 0;

    for (int c = 0; c < 256; ++c)
    {
        // Load the character into the face's glyph slot and render it.
        // Pass the code point as an unsigned value; going through (char)
        // would make codes >= 128 negative on most platforms.
        FT_Load_Char(face, (FT_ULong)c, FT_LOAD_RENDER);

        // Save glyph metrics (advance.x is in 1/64 pixel units)
        Glyph glyph;
        glyph.Advance = face->glyph->advance.x >> 6;
        glyph.Width = face->glyph->bitmap.width;
        glyph.Height = face->glyph->bitmap.rows;
        glyph.LeftBearing = face->glyph->bitmap_left;
        glyph.TopBearing = face->glyph->bitmap_top;

        textureWidth = NextPowerOf2(glyph.Width);
        textureHeight = NextPowerOf2(glyph.Height);

        // Texture coordinates of the glyph's extent inside the padded power-of-two texture
        glyph.texCoordX = (float)glyph.Width / (float)textureWidth;
        glyph.texCoordY = (float)glyph.Height / (float)textureHeight;

        // Array to hold pixel data: two bytes per pixel (luminance, alpha)
        GLubyte* data = new GLubyte[2 * textureWidth * textureHeight];

        // Pixel positions out of range (in the padding area) are set to zero
        for (int y = 0; y < textureHeight; ++y)
        {
            for (int x = 0; x < textureWidth; ++x)
            {
                data[2 * (x + y * textureWidth)] = data[2 * (x + y * textureWidth) + 1] =
                    (x >= glyph.Width || y >= glyph.Height) ?
                    0 : face->glyph->bitmap.buffer[x + glyph.Width * y];
            }
        }

        glGenTextures(1, &glyph.TextureID);
        glBindTexture(GL_TEXTURE_2D, glyph.TextureID);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, data);

        glyphList.push_back(glyph);

        delete[] data;
    }

    std::cout << "Size: " << glyphList.size() << std::endl;

    return true;
}


This is part of the original loading code (everything else is identical to the above):


// Same code here as above

// Array to hold pixel data: one byte per pixel
GLubyte* data = new GLubyte[textureWidth * textureHeight];

// Pixel positions out of range (in the padding area) are set to zero
for (int y = 0; y < textureHeight; ++y)
{
    for (int x = 0; x < textureWidth; ++x)
    {
        data[x + y * textureWidth] =
            (x >= glyph.Width || y >= glyph.Height) ?
            0 : face->glyph->bitmap.buffer[x + glyph.Width * y];
    }
}

// Same code here, but a different call to glTexImage2D
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, textureWidth, textureHeight, 0, GL_ALPHA, GL_UNSIGNED_BYTE, data);
From the documentation for glTexImage2D:

format determines the composition of each element in pixels. It can assume one of the following symbolic values:

GL_ALPHA
Each element is a single alpha component. The GL converts it to floating point and assembles it into an RGBA element by attaching 0 for red, green, and blue.

You could try using GL_ALPHA as your internal texture format, though that would rely on hardware support.
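For instance, a minimal sketch of that variant, assuming the same one-byte-per-pixel buffer as your original code (legacy fixed-function GL; these formats are deprecated in core profiles):

// Sketch: GL_ALPHA for both the internal format and the pixel data format,
// so the texture stores a single alpha channel
glTexImage2D(GL_TEXTURE_2D, 0, GL_ALPHA, textureWidth, textureHeight,
             0, GL_ALPHA, GL_UNSIGNED_BYTE, data);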

For the GL_LUMINANCE_ALPHA input, you should store 255 for the luminance value. You might be able to use GL_LUMINANCE_ALPHA for your internal texture format, too. Try it and see. :)
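A sketch of that variant, using the same two-byte buffer as your working code:

// Sketch: GL_LUMINANCE_ALPHA as the internal format as well, so the driver
// can keep the texture at two bytes per pixel instead of expanding to RGBA
glTexImage2D(GL_TEXTURE_2D, 0, GL_LUMINANCE_ALPHA, textureWidth, textureHeight,
             0, GL_LUMINANCE_ALPHA, GL_UNSIGNED_BYTE, data);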

...

As an aside, you should combine all your glyph bitmaps into one texture so you can avoid texture changes while drawing text.

My suggestion is to pick a texture width proportional to the size of your font. For example, you could add up the widths of all your glyphs, divide by eight, and round up to the next power of two. Alternatively, you could pick it based on font height: say, 2048 for 60+, 1024 for 30-59, 512 for 15-29, and 256 for 0-14. Whatever method you choose, the next step is to fit the glyphs into rows in reading order (across, then down), leaving a bit of padding between them. Once you've laid them all out, you can measure how tall your texture needs to be and allocate it. Then just copy the glyph bitmaps to the proper locations.
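To make that concrete, here's a rough sketch of the row-based layout. The AtlasX/AtlasY fields are hypothetical additions to your Glyph struct for recording where each glyph lands, not something from your engine:

// Rough sketch of row-based glyph packing into a shared texture
const int padding = 1;
int penX = padding;
int penY = padding;
int rowHeight = 0;

for (size_t i = 0; i < glyphList.size(); ++i)
{
    Glyph& g = glyphList[i];

    // Wrap to the next row when the glyph won't fit
    if (penX + g.Width + padding > textureWidth)
    {
        penX = padding;
        penY += rowHeight + padding;
        rowHeight = 0;
    }

    g.AtlasX = penX;
    g.AtlasY = penY;

    penX += g.Width + padding;
    if (g.Height > rowHeight)
        rowHeight = g.Height;
}

// Round the final height up and allocate one buffer for the whole atlas
int textureHeight = NextPowerOf2(penY + rowHeight + padding);
GLubyte* atlas = new GLubyte[2 * textureWidth * textureHeight];
// ...zero the buffer, then copy each glyph bitmap row by row to (AtlasX, AtlasY)...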

When drawing text, use texture coordinates based on the generated positions.
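Something along these lines, again using the hypothetical AtlasX/AtlasY fields from the sketch above:

// Texture coordinates of the glyph's rectangle inside the atlas
float u0 = (float)g.AtlasX / (float)textureWidth;
float v0 = (float)g.AtlasY / (float)textureHeight;
float u1 = (float)(g.AtlasX + g.Width) / (float)textureWidth;
float v1 = (float)(g.AtlasY + g.Height) / (float)textureHeight;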

(Incidentally, you should apply a half-pixel shift toward the upper-left when drawing the text instead of the traditional half-texel shift. While both work just fine without antialiasing, the half-pixel shift works with antialiasing in D3D9 while the half-texel shift doesn't. I only found this out recently...)
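In OpenGL terms, with a top-left-origin orthographic projection, that shift might look like this (just a sketch; I haven't verified the antialiasing behavior myself):

// Shift everything half a pixel toward the upper-left before drawing text
glMatrixMode(GL_MODELVIEW);
glPushMatrix();
glTranslatef(-0.5f, -0.5f, 0.0f);
// ...draw the text quads...
glPopMatrix();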
I read the docs, but I wasn't sure I understood it all.

"For the GL_LUMINANCE_ALPHA input, you should store 255 for the luminance value."

Could you clarify that? I am still somewhat new to OpenGL texture loading...at least with FreeType2 :)

"You might be able to use GL_LUMINANCE_ALPHA for your internal texture format, too. Try it and see. :)"

Yep. Works with GL_LUMINANCE_ALPHA as the internal format.

I plan on recoding my texture loading to put all the glyphs into a single texture in the future; I just wanted to get them loading separately first. Thanks for the note, it is very helpful :)

When I load my fonts using the original method, the program crashes with an "Unhandled exception at...Access violation reading location..." message and points to the call to glTexImage2D. Why is that and why do I need to make my array twice as big in the current method as opposed to the original method?

I.e.:
GLubyte* data = new GLubyte[2 * textureWidth * textureHeight];
VS.
GLubyte* data = new GLubyte[textureWidth * textureHeight];
GL_LUMINANCE_ALPHA has two channels, so it needs two bytes per pixel. :) If glTexImage2D expects two bytes per pixel but the buffer only holds one, it reads past the end of the allocation, which would explain the access violation you saw.

You can set the luminance for each pixel to white like this:

// Pixel positions out of range (in the padding area) are set to zero
for (int y = 0; y < textureHeight; ++y)
{
    for (int x = 0; x < textureWidth; ++x)
    {
        // luminance: full brightness everywhere
        data[2 * (x + y * textureWidth)] = 255;

        // alpha: glyph coverage from the FreeType bitmap
        data[2 * (x + y * textureWidth) + 1] =
            (x >= glyph.Width || y >= glyph.Height) ?
            0 : face->glyph->bitmap.buffer[x + glyph.Width * y];
    }
}
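Setting the luminance to full brightness is also what lets you tint the text: with the default GL_MODULATE texture environment, the sampled luminance (1.0) multiplies the current color, while the alpha channel carries the glyph coverage. A sketch of drawing colored text under those assumptions:

// Sketch: the glyph texture modulates the current color, so glColor
// picks the text color and alpha blending applies the coverage
glEnable(GL_TEXTURE_2D);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
glColor3f(1.0f, 0.2f, 0.2f);   // text comes out in this color
glBindTexture(GL_TEXTURE_2D, glyph.TextureID);
// ...draw the glyph quad with texture coordinates...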
I get a much nicer and smoother result if I set the luminance channel to 255, thanks :D AFAIK, I'm simply setting the perceived brightness of the texture to 100% and storing the glyph coverage in the second channel (which OpenGL converts to floating point), right?

Still have a couple of questions though :rolleyes: .
Is this the "only" way of loading FreeType2 image information in OpenGL (possibly because of the way they are loaded)? A vague question, I know...

I also tried the following using 4 channels:


GLubyte* data = new GLubyte[4 * textureWidth * textureHeight];

// Pixel positions out of range (in the padding area) are set to zero
for (int y = 0; y < textureHeight; ++y)
{
    for (int x = 0; x < textureWidth; ++x)
    {
        // Write the same coverage value into the r, g, b, and a components
        GLubyte value = (x >= glyph.Width || y >= glyph.Height) ?
            0 : face->glyph->bitmap.buffer[x + glyph.Width * y];

        data[4 * (x + y * textureWidth)] = value;
        data[4 * (x + y * textureWidth) + 1] = value;
        data[4 * (x + y * textureWidth) + 2] = value;
        data[4 * (x + y * textureWidth) + 3] = value;
    }
}


It worked, but yielded a less satisfying result for some reason...
Thanks for all your help so far! I think I'm beginning to grasp the concepts :D

