OpenGL FreeType

Recommended Posts

Hi all, has anyone used FreeType2 with DirectX 10? I want to use it rather than the ID3DX10Font stuff, as I've heard it's a lot faster. Is it compatible with DX10? All the material I can find about it is for OpenGL. Does it just draw directly onto the target window? Thanks all.

Share on other sites
clb    2147
A part of the reason why FT2 is so portable is because it's well separated from the actual presentation side. This means that FT2 doesn't draw directly onto the target window. What it does for you is it takes in font files and produces raster bitmaps out of the glyphs present in the file. Whatever you do with those bitmaps after that is up to you. You can use FT2 with OGL, D3D9, D3D10 and D3D11 just fine, and it's usually quite straightforward to port the presentation code from one library to another, since OGL and D3D variants are very similar to each other.

There are some additional libraries that can do the drawing for you using different 3D APIs if you look around, but unfortunately I don't know of any that I could recommend.

About the performance bit... well, yes, my implementation of FT2 fonts performs significantly better than what I could get using ID3DXFont. But be aware that getting maximum performance is not that straightforward, and it does require some careful design in how you draw text using the GPU (using texture atlases as glyph caches and minimizing the number of draw calls with efficient use of dynamic vertex buffers). If implemented poorly, you can easily end up with horrible performance compared to ID3DXFont.

If you go through the manual organization route, see if the code I use for caching font glyphs is of any help: RectangleBinPack.h. An example of what the code produces can be found here:
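Not the linked RectangleBinPack code itself, but as a rough illustration of the kind of packing a glyph cache needs, here is a minimal "shelf" packer sketch. All names here are my own, not taken from that file, and a real glyph cache would add padding and eviction:

```cpp
// Minimal shelf rectangle packer: glyph bitmaps are placed left to right on
// the current shelf; when one doesn't fit, a new shelf opens below. This is
// a simplified stand-in for the kind of placement RectangleBinPack.h does.
struct ShelfPacker {
    int atlasW, atlasH;   // atlas texture dimensions
    int shelfY = 0;       // top edge of the current shelf
    int shelfX = 0;       // next free x position on the current shelf
    int shelfH = 0;       // height of the tallest glyph on this shelf

    ShelfPacker(int w, int h) : atlasW(w), atlasH(h) {}

    // Returns true and writes (outX, outY) if a w*h glyph fits in the atlas.
    bool Insert(int w, int h, int &outX, int &outY) {
        if (w > atlasW || h > atlasH) return false;
        if (shelfX + w > atlasW) {      // shelf full: open a new one below
            shelfY += shelfH;
            shelfX = 0;
            shelfH = 0;
        }
        if (shelfY + h > atlasH) return false;  // atlas is full
        outX = shelfX;
        outY = shelfY;
        shelfX += w;
        if (h > shelfH) shelfH = h;
        return true;
    }
};
```

Each cached glyph then maps to a rectangle of texture coordinates in the atlas, so drawing a string becomes one textured quad per glyph from a single texture.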

[Edited by - clb on August 24, 2009 7:44:54 AM]

Share on other sites
So if I used FreeType to generate a bitmap of the text I want to display, could I just fill a texture resource with that data and send it through my shader?

How do you create textures like this in D3D10 without using the CreateFromFile helpers?

Thanks all.

Share on other sites
clb    2147
Yeah, that's pretty much how it would work.

If you look carefully at the documentation reference pages, you'll find ID3D10Device::CreateTexture2D, which works pretty much as advertised.

Share on other sites
Ah okay, cool, I will give it a go. I suppose when you initialise the engine you could cache away common words or sentences and just reuse them with different world and scaling matrices, and then anything that needs to be created on the fly can be added to the cache to save time later.
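That caching idea can be sketched with a plain map from string to a prebuilt texture handle. Here, TextTexture and BuildTextTexture are hypothetical placeholders for whatever your engine's texture type and FreeType-to-texture path end up being:

```cpp
#include <string>
#include <unordered_map>

// Hypothetical handle for a prerendered piece of text; in a real engine this
// would wrap an ID3D10Texture2D (or GL texture) plus its dimensions.
struct TextTexture { int id; };

// Stand-in for the FreeType -> texture path discussed in this thread.
TextTexture BuildTextTexture(const std::string &text) {
    static int nextId = 0;
    (void)text;               // a real version would rasterize the glyphs here
    return TextTexture{nextId++};
}

// Cache of rendered strings: common words/sentences get built once (e.g. at
// engine init) and reused with different world/scaling matrices; anything
// requested on the fly is added so later frames hit the cache.
class TextCache {
public:
    const TextTexture &Get(const std::string &text) {
        auto it = cache_.find(text);
        if (it == cache_.end())
            it = cache_.emplace(text, BuildTextTexture(text)).first;
        return it->second;
    }
    std::size_t Size() const { return cache_.size(); }
private:
    std::unordered_map<std::string, TextTexture> cache_;
};
```

For a HUD with mostly static text this avoids rasterizing anything per frame; only genuinely new strings pay the FreeType cost once.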

Share on other sites
demonkoryu    980
Quote:
 Original post by clb

BTW an awesome piece of art. [smile]

Share on other sites
Okay, I'm still not totally sure where FreeType creates the bitmap data. I have looked at the documentation and there is a face::bitmap variable which seems to be an array of chars?

I don't get how to turn this into the required data for DX10.

Thanks

Share on other sites
clb    2147
Quote:
Original post by Konfusius
Quote:
 Original post by clb

BTW an awesome piece of art. [smile]

Thanks :) The rainbow palette shows the order in which the algorithm puts new characters into the atlas, i.e. characters with similar colors were allocated close to one another in time.

Quote:
Original post by Maddius
Okay, still not totally sure where FreeType creates the bitmap data, I have looked at the documentation and there is a face::bitmap variable which seems to be an array of chars? Don't get how to turn this into the required data for Dx10. Thanks

I put up some code that I use for extracting the glyph characters, see FT2/:

• FTFont.h

• FTSystem.h

• FTSystem.cpp

• RectangleBinPack.h

There are some TODOs around, since that code is a bit old, from the time I was doing some initial tests. Unfortunately I cannot post the version where I've finished the TODOs, because I no longer hold all the rights to that code (I'm sure you can understand). You can still see how the raster data is generated and extracted using FT2. If you look at the FT2 sample you have for OpenGL, you can see it does something very similar to this code. You may use the code above for whatever purpose and modify it however you wish, no need to retain the original acknowledgements; it's in the public domain. Hope it answers your questions!

Share on other sites
Thanks for posting that code, but I just don't think I understand the method FreeType uses.

Does it store the data as RGB values for each pixel in the bitmap, or are they shades of grey, or what? I think I need some background on what it actually does. Sorry to be such a klutz, but once I understand what it actually does I can probably implement something.

I'm just confused as to HOW FreeType stores its bitmap data.

Share on other sites
clb    2147
Quote:
Original post by Maddius
Thanks for posting that code but I just dont think I understand the method FreeType uses. Does it store the data as RGB values for each pixel in the bitmap or are they shades of grey or what? I think I need a background on what it acctually does, sorry to be such a klutz but once I understand what it acctually does I can probably implement something. Im just confused as to HOW FreeType stores it's bitmap data. Thanks for your time mate.

See the FTSystem.cpp, line 100:

clb::u8 intensity = ftBitmap->buffer[y * ftBitmap->pitch + x];

That is exactly what FT2 does - it returns grayscale intensities at each pixel. The line above extracts this intensity at (x,y) and modulates a (r,g,b) triplet by this intensity.
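To make that concrete, here is a hedged sketch of expanding an FT2-style 8-bit grayscale buffer (with its pitch) into RGBA8 pixels ready for a texture upload. The function name and the choice of white text with coverage-driven alpha are mine, not from the code above:

```cpp
#include <cstdint>
#include <vector>

// Expand an 8-bit grayscale glyph buffer (as FT2 produces with
// FT_RENDER_MODE_NORMAL) into RGBA8. The pitch is the number of bytes per
// source row, which may be larger than the glyph width. The text is white
// and the coverage value drives the alpha channel, so zero-intensity pixels
// blend away against the scene behind.
std::vector<uint8_t> GrayToRGBA(const uint8_t *buffer,
                                int width, int height, int pitch) {
    std::vector<uint8_t> rgba(static_cast<std::size_t>(width) * height * 4);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            uint8_t intensity = buffer[y * pitch + x];
            std::size_t o = (static_cast<std::size_t>(y) * width + x) * 4;
            rgba[o + 0] = 255;        // R
            rgba[o + 1] = 255;        // G
            rgba[o + 2] = 255;        // B
            rgba[o + 3] = intensity;  // A = glyph coverage
        }
    }
    return rgba;
}
```

The resulting buffer can be handed to CreateTexture2D as the initial subresource data for an RGBA texture; tinting the text a different color is then just a multiply in the pixel shader.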

FreeType 2 Documentation describes this whole process. The tutorial, step 1 says about FT_Render_Glyph:

Quote:
 The parameter render_mode is a set of bit flags used to specify how to render the glyph image. Set it to FT_RENDER_MODE_NORMAL to render a high-quality anti-aliased (256 gray levels) bitmap, as this is the default. You can alternatively use FT_RENDER_MODE_MONO if you want to generate a 1-bit monochrome bitmap.

The reference for FT_Render_Glyph is available online here. You draw to an FT_GlyphSlot, whose fields are specified on the FT_GlyphSlotRec page. It has an FT_Bitmap member, which holds the actual pixel buffer for the output glyph data.

As you can see from FTSystem.cpp, the code required to do the glyph generation is short. I recommend you read the tutorial closely, since it describes how to place the glyphs properly on the drawing canvas (proper kerning, spacing and leading). They have a nice reference implementation you can use.
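The placement the tutorial describes boils down to walking a pen position forward by each glyph's advance and offsetting the quad by its bearings. As a rough, self-contained sketch: the Glyph struct and its whole-pixel metrics are made up for illustration; real FT2 metrics come from face->glyph (bitmap_left, bitmap_top, and advance.x in 1/64 pixel units, so you shift by 6):

```cpp
#include <vector>

// Illustrative glyph metrics, already converted to whole pixels. With FT2
// these would come from face->glyph->bitmap_left, face->glyph->bitmap_top
// and face->glyph->advance.x >> 6 after FT_Load_Char(..., FT_LOAD_RENDER).
struct Glyph {
    int width, height;    // bitmap size
    int bearingX;         // left side bearing
    int bearingY;         // distance from the baseline up to the bitmap top
    int advance;          // pen advance to the next glyph
};

struct Quad { int x, y, w, h; };   // top-left position + size on the canvas

// Lay out a run of glyphs along a baseline. Kerning is omitted here; with
// FT2 you would add the FT_Get_Kerning() delta between each glyph pair.
std::vector<Quad> LayoutLine(const std::vector<Glyph> &glyphs,
                             int penX, int baselineY) {
    std::vector<Quad> quads;
    for (const Glyph &g : glyphs) {
        quads.push_back({penX + g.bearingX,       // left edge of the quad
                         baselineY - g.bearingY,  // top edge, above baseline
                         g.width, g.height});
        penX += g.advance;                        // move the pen forward
    }
    return quads;
}
```

Each resulting quad pairs with the glyph's rectangle in the atlas texture, which is what ends up in the dynamic vertex buffer.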

Hope that helps,

Share on other sites
Okay, so the bitmap buffer is a 1D array which contains values ranging from 0 to 128? Or 0 to 255? Either way, that is how intense the pixel is. Okay, so then if you wanted to change the color of the text you could just change it in the shader, I guess.

So you would use the glyph interface to create your text (or does it create just one character?) and then drop all the data in the buffer into some format that DirectX understands as a texture.

Okay, I think I'm getting there; I will start implementing this soon.

Thanks a bunch man.

-------
edit:

I guess pixels with zero intensity can then have zero alpha in the texture, so that only the text gets displayed and everything else blends in with the scene behind.

[Edited by - Maddius on August 24, 2009 3:13:29 PM]

Share on other sites
Anon Mike    1098
I can understand the attraction of FreeType for people wanting portable text rendering. But if you're locked into Windows already (and it seems you are), and you are going to go through all the extra effort of manually caching and rendering rasterized glyphs, why not just use GDI? It'll give you bitmaps just fine and you don't have to deal with the additional dependencies of a third-party library.

Share on other sites
Yeah, I'm beginning to think it might not be such a bad idea to just use the ID3DX10Font stuff. I guess it provides some functionality to just produce a texture and then shove it through a shader. My game is an FPS and shouldn't be too heavy on text, and any text there is can pretty much be created and stored at initialisation.

----
edit:

Or does it do it like it did in DX9: straight through GDI and onto the screen? That isn't bad for something simple, I guess, but then you can't get alpha blending. How big would the slowdown be from using the ID3DX10Font stuff, do you think?

[Edited by - Maddius on August 24, 2009 5:30:41 PM]

Share on other sites
Hi guys,

I have started implementing ID3DX10Font and have run into the classic problem of the font API calls changing the render states. Is there a method to save and restore render states?

Thanks.
