MarkS

OpenGL
Are there any text libraries that rely on a GL 3.3 (or higher) core context?

9 posts in this topic

I am restricting myself to the core context for a project and I need some simple text rendering for stats. ALL of the text libraries that I have found use glVertex calls, even the ones that claim to be based on GL 3.3 or higher. I have yet to see a single library or code sample that uses VBOs and the programmable pipeline.

 

Does such a beast exist? This is something that I would really like to not have to write myself, if possible.

 

And please don't say FreeType. FreeType is not a rendering library and is not based on, nor does it use, OpenGL. I am familiar with FreeType. It doesn't do what I want.


And please don't say FreeType... I am familiar with FreeType. It doesn't do what I want.

Sure it does - it renders text into a bitmap, which can be streamed to a texture and rendered on a quad in your scene. I'd even go one further and suggest you use Pango.
 
Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathologically worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...
 
And it's not as if the other 2-3 cores in your quad-core CPU are actually doing anything useful most of the time. Letting them get busy rendering beautifully antialiased text is a pretty decent idea.
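For what it's worth, here is a minimal sketch of that idea: let FreeType rasterize a glyph and upload the bitmap as a texture, ready to be drawn on a quad. This assumes a GL 3.3 core context is already current, a function loader such as glad is initialized, and FreeType is linked; "font.ttf" and the 32 px size are placeholders, and error handling is omitted.

#include <ft2build.h>
#include FT_FREETYPE_H
#include <glad/glad.h>   // or whatever GL loader you use

GLuint makeGlyphTexture(const char* fontPath, FT_ULong ch)
{
    FT_Library ft;  FT_Init_FreeType(&ft);
    FT_Face face;   FT_New_Face(ft, fontPath, 0, &face);
    FT_Set_Pixel_Sizes(face, 0, 32);                 // 32 px tall glyphs
    FT_Load_Char(face, ch, FT_LOAD_RENDER);          // rasterize into face->glyph->bitmap

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glPixelStorei(GL_UNPACK_ALIGNMENT, 1);           // glyph rows are tightly packed bytes
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RED,           // single-channel coverage texture (core-friendly)
                 (GLsizei)face->glyph->bitmap.width,
                 (GLsizei)face->glyph->bitmap.rows, 0,
                 GL_RED, GL_UNSIGNED_BYTE, face->glyph->bitmap.buffer);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    FT_Done_Face(face);
    FT_Done_FreeType(ft);
    return tex;   // draw it on a quad with a shader that reads the .r channel
}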

Well, FreeType doesn't use OpenGL, but it IS a rendering library. It will give you rendered bitmaps of a font's glyphs; the trick is to use these in OpenGL. Swiftcoder is right in that it can be REALLY insane how many draw calls you end up with if you don't do it right. I create a texture atlas of all the usable characters (or those that you know you'll use) and then keep VBOs for the locations of the rectangles and tex coords. Then I just keep an array of all of the strings that have been added to that one specific font, and whenever a new one is added, the VBOs get updated (well, not always, but that's neither here nor there). This means for any one font, you can limit it to one draw call. You can even do color with another VBO. If you have a lot of fonts, that gets pretty untenable, but I haven't had that problem :)

I also don't antialias, but that's because I'm not THAT worried about the text. Anyway, I think I've gotten a bit away from the OP, but really you can use FreeType with OpenGL; it just isn't a direct text drawer, and I don't know of any of those that are specifically 3.3+.
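Not the actual code, but a rough sketch of what that looks like under a 3.3 core context; the Glyph table (filled when the atlas texture was built), the shader, and the VAO setup are assumed to already exist, the names are illustrative only, and it handles plain ASCII with no kerning.

#include <string>
#include <vector>
#include <glad/glad.h>   // or whatever GL loader you use

struct Glyph  { float u0, v0, u1, v1; float w, h, advance; };   // atlas UVs + metrics
struct Vertex { float x, y, u, v; };

void buildTextVBO(const std::string& text, const Glyph glyphs[128],
                  float penX, float penY, GLuint vbo, GLsizei* outVertexCount)
{
    std::vector<Vertex> verts;
    for (unsigned char c : text) {                     // ASCII only in this sketch
        const Glyph& g = glyphs[c];
        float x0 = penX, y0 = penY, x1 = penX + g.w, y1 = penY + g.h;
        Vertex quad[6] = {                             // two triangles per character quad
            {x0, y0, g.u0, g.v0}, {x1, y0, g.u1, g.v0}, {x1, y1, g.u1, g.v1},
            {x0, y0, g.u0, g.v0}, {x1, y1, g.u1, g.v1}, {x0, y1, g.u0, g.v1},
        };
        verts.insert(verts.end(), quad, quad + 6);
        penX += g.advance;                             // no kerning here
    }
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, verts.size() * sizeof(Vertex),
                 verts.data(), GL_DYNAMIC_DRAW);       // re-uploaded when the strings change
    *outVertexCount = static_cast<GLsizei>(verts.size());
}
// Drawing is then a single call per font:
//   glBindVertexArray(textVao); glDrawArrays(GL_TRIANGLES, 0, vertexCount);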


Text rendering is a terrible use of OpenGL. If you go the typical quad-per-character route, you need to regularly queue up hundreds or even thousands of separate draw calls, each for a tiny run of primitives - this is OpenGL's pathologically worst-case performance scenario. You still don't have decent kerning, ligatures or antialiasing either...

 

Not necessarily. The only pathological things about that are overdraw and the pixel shader running twice on the pixels that are on the diagonal of each quad. Other than this, there is no real issue. You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing. Of course OpenGL won't "magically" do the kerning for you; you'll have to do the pairing yourself from the information in the TrueType (or whatever font format you use) font. The same goes for ligatures. Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads. There is hardly an observable difference in speed.

 

FreeType is nice insofar as it allows you to both render single glyphs to bitmaps (which you preferably copy to an atlas) and access kerning information with an easy-to-use API, without actually knowing the details of TrueType (or several other convoluted font file formats). Though of course that's something you can normally do with an offline tool such as BMFont in a very convenient manner.

 

Using OpenGL for font rendering is the same as doing it with a "pure software" renderer, except you get to use dedicated hardware for the dirty work. You still need to figure out where to put a glyph to have it properly kerned with the preceding one, etc., as there are no high-level utility functions that render a whole line or a whole paragraph for you. If that is what one wants, pango sure is a good option (though pango is the exact opposite of what I'd want to use personally, far too much smartness and internationalisation features that I wouldn't want -- but your mileage may vary).
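As a concrete illustration of the kerning part, here is a small sketch (mine, not from the post) of pulling a kerning adjustment out of FreeType; it assumes a face already opened with FT_New_Face and sized with FT_Set_Pixel_Sizes.

#include <ft2build.h>
#include FT_FREETYPE_H

// Horizontal kerning adjustment, in pixels, to apply between two characters.
float kerningAdjust(FT_Face face, FT_ULong left, FT_ULong right)
{
    if (!FT_HAS_KERNING(face))
        return 0.0f;                                   // many fonts simply have no kern table
    FT_UInt l = FT_Get_Char_Index(face, left);
    FT_UInt r = FT_Get_Char_Index(face, right);
    FT_Vector delta;
    FT_Get_Kerning(face, l, r, FT_KERNING_DEFAULT, &delta);
    return delta.x / 64.0f;                            // FT_KERNING_DEFAULT is 26.6 fixed point
}
// During layout: penX += glyph.advance + kerningAdjust(face, prevChar, thisChar);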


You can perfectly batch a few thousand characters into one draw call using one vertex buffer, and you can do perfectly good kerning and antialiasing.

 

That's not my point. Of course you can, but do you actually derive a benefit from doing so?

 

You still have to perform all the unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

 

 

Distance field bitmaps, if produced with a little care, antialias very nicely at practically every reasonable scale, and nearly as fast as the graphics card can render simple textured quads.

 

If you go this route, you give up on hinting. That's probably a reasonable tradeoff for large (18pt+) font sizes on a high-resolution display, but it's something you need to be aware of at small font sizes.

 

 

though pango is the exact opposite of what I'd want to use personally, far too much smartness and internationalisation features that I wouldn't want -- but your mileage may vary

 

It's fine to ignore internationalisation in a hobby project, but if you ever plan to release in other (particularly non-latin) languages, you want that Pango "smartness". Languages like Arabic and Mandarin are pure hell to deal with in a hand-written text engine.

Of course you can, but do you actually derive a benefit from doing so?

What you gain is that the GPU is doing the not-quite-trivial work of sampling texels, antialiasing, blending pixels together, and all that. Sure, you can do all that on the CPU no problem, but why do that when there's a dedicated workhorse for the task?
 

 

You still have to perform all the unicode normalisation, layout, kerning and ligatures on the CPU, and (at least for small font sizes) the amount of data you upload to the GPU is very similar for both the vertex buffer and texture cases.

 

Yes and no. Kerning and ligatures (or formatting a paragraph) you certainly have to do yourself. Unicode normalisation is not something you do at all. This abomination (which in my opinion is a good reason why Unicode is totally unsuitable for what it's used for) is something you should handle by policy or in the build pipeline. Your renderer should not have to guess how to compose a glyph, and your text system should not have to guess how to compare or sort two strings. There should be one and only one possible way, even if Unicode allows for 2 or 3 ways that are equally "valid".

 

The amount of data you send to the GPU can be as little as a point and a character index, so anywhere from 6 to 12 bytes. Quad extents can be read from constant buffers at no performance penalty on present-day GPUs. Compared to that, a "small" character may easily have 200-300 pixels, which is over 10-15 times as much for an 8-bit monochrome bitmap (or 30-45 times for RGB). Color and size don't change every 3-4 characters (not normally, at least!), so it's reasonable to just set these as uniforms.
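To make that concrete, here is one possible (untested, purely illustrative) way to feed only a pen position and a glyph index per character under GL 3.3, using instancing and a uniform block for the per-glyph rectangles. None of this comes from the post; the names, the 256-glyph limit, and the clip-space positions are my own simplifying assumptions.

#include <cstdint>

struct GlyphInstance { float penX, penY; std::uint32_t glyph; };  // 12 bytes per character
// attribute 0: glVertexAttribPointer(0, 2, GL_FLOAT, ...);         glVertexAttribDivisor(0, 1);
// attribute 1: glVertexAttribIPointer(1, 1, GL_UNSIGNED_INT, ...); glVertexAttribDivisor(1, 1);

static const char* kTextVS = R"(
#version 330 core
layout(location = 0) in vec2 a_pen;    // per-instance pen position (already clip space here)
layout(location = 1) in uint a_glyph;  // per-instance glyph index
layout(std140) uniform Glyphs {        // bound once with glUniformBlockBinding at setup
    vec4 rect[256];                    // quad offset (xy) and size (zw) per glyph
    vec4 uv[256];                      // atlas UV origin (xy) and extent (zw) per glyph
};
out vec2 v_uv;
void main() {
    vec2 corner = vec2(gl_VertexID & 1, gl_VertexID >> 1);  // expand vertex 0..3 into quad corners
    vec2 pos    = a_pen + rect[a_glyph].xy + corner * rect[a_glyph].zw;
    v_uv        = uv[a_glyph].xy + corner * uv[a_glyph].zw;
    gl_Position = vec4(pos, 0.0, 1.0);
}
)";
// The fragment shader just samples the atlas with v_uv. The whole string is one call:
//   glDrawArraysInstanced(GL_TRIANGLE_STRIP, 0, 4, characterCount);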

 

Mandarin

Ah yes, but Mandarin is something you would not normally consider anyway, unless some stupefied executive forces you to.

Mandarin means the Chinese market, and while every executive nowadays seems to think of China as El Dorado, the reality is that it means a lot of work and many extra complications for very little revenue. The Chinese pay a lot less money for the same product, if they pay at all.

So unless you're working for a company like Microsoft, Blizzard, or EA (who will want to be in this market despite bad revenues), it's a good business plan to grab every guy pronouncing "Chin..." and arrange an accident involving him falling out of the 8th floor window before he can finish the sentence.

 

Note that this isn't about not liking the Chinese, it's about being reasonable on what you have to invest, what risks you have to cope with, and what you get back.

 

Take Blizzard and WoW as an example: if you research on the internet, you'll find out that WoW costs around 7 cents per hour in China, compared to Europe, where the same game costs €12.99 per month (~16.91 USD). At an average weekly play time of 20 hours, this translates to slightly over 21 cents per hour. In other words, Blizzard puts extra work into localizing and setting up extra servers, and takes on the risk of doing business in a location where laws are... somewhat special (a very friendly wording), only to sell their product at 1/3 the rate.

 

Maybe that makes sense from an executive point of view if you assume that you get another 1 billion subscriptions (but do you, really?), and those outweigh the fact that you're selling below price. For every "normal" business it's just madness to think about such a plan.

I have yet to see a single library of code sample that uses VBOs and the programmable pipeline.

 

It sounds like you want to make a font system using meshes.  If this is the case, fire up Blender or whatever 3D modeling program you prefer and use the text tool to type out meshes that are shaped like text.  Export them individually, load each model with one of the many open source model loaders, and have that loader spit out vertex arrays.  Then load those vertex arrays into VBOs in your rendering program.
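If you do go that route, the last step looks roughly like this (my sketch, assuming a 3.3 core context, a loader such as glad, and a flat array of x,y,z positions produced by whichever model loader you chose):

#include <vector>
#include <glad/glad.h>   // or whatever GL loader you use

// Uploads one letter mesh into a VBO wrapped in a VAO; returns the VAO.
GLuint uploadLetterMesh(const std::vector<float>& positions /* x,y,z triples */, GLuint* outVbo)
{
    GLuint vao, vbo;
    glGenVertexArrays(1, &vao);
    glGenBuffers(1, &vbo);
    glBindVertexArray(vao);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, positions.size() * sizeof(float),
                 positions.data(), GL_STATIC_DRAW);
    glEnableVertexAttribArray(0);                                    // matches layout(location = 0)
    glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 3 * sizeof(float), nullptr);
    glBindVertexArray(0);
    *outVbo = vbo;
    return vao;   // later: glBindVertexArray(vao); glDrawArrays(GL_TRIANGLES, 0, positions.size() / 3);
}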

 

As has been pointed out, this would not be the most efficient way to render text; however, if this is how you want to do it, then do it this way.

 

There is no easy way to get what you want here.  I doubt that there is a library of text made of 3D models, and I doubt that even if you found one it would also include the code needed to select and place the individual rendered VBOs in the proper positions based on typed or defined sentences.  But you can make your own version of this by scrutinizing the code in the following chapter of the OpenGL Red Book.

 

 

 

Look for "Executing Multiple Display Lists" in chapter04.

 

Even though you do not want to use display lists, the text input and selection code can be re-written to do what you want with VBOs instead of display lists.

 

Or you can simplify, optimize and streamline this whole process a bit by not exporting your models from Blender but instead simply rendering each letter as a single image.  Save all the images to your hard-drive either individually or as a font set.

 

If you save them individually you can render a letter very easily: select the texture you want and render a quad with that texture applied. Then move over a space (in a core context, via a translation uniform or by offsetting the quad's vertex positions rather than glTranslatef) and render the same quad with the image for the next letter bound to its texture unit.

 

If you render out a sprite sheet from the modeling program, then you can do the same thing as above but move the quad's texture coordinates around to select the appropriate letter instead of changing the bound texture each time.
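For the sprite-sheet variant, picking the sub-rectangle is just arithmetic. A small sketch (my own illustration, assuming a 16x16-cell sheet laid out in code-point order starting at the space character; adjust to however you exported your sheet):

struct UvRect { float u0, v0, u1, v1; };

UvRect glyphUv(char c, int cols = 16, int rows = 16)
{
    int index = static_cast<unsigned char>(c) - 32;   // first cell in the sheet is ' '
    float cw = 1.0f / cols, ch = 1.0f / rows;         // size of one cell in UV space
    float u = (index % cols) * cw;
    float v = (index / cols) * ch;
    return { u, v, u + cw, v + ch };                  // feed these into the quad's texture coords
}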

 

If you want to do things this way then post a response and we'll take this a step further.  If you've decided to rethink things based on what's been said by others here, then ask them for an elaboration on how to tie those libraries to OpenGL rendering.


*SIGH* I knew I shouldn't have mentioned FreeType.

 

I'm not looking for a fight. I just want to be able to render performance data in the viewport in a 3.3 core context. I cannot use display lists or anything from GL 1.1, and all of the text libraries that I have found are based upon those old techniques.

 

There has to be a way to render text quickly using new techniques.



There has to be a way to render text quickly using new techniques.

Sure. Let some other library render it for you, and then slap the texture on a quad.

 

Sorry for sounding like a broken record here, but if you want to do the whole character-per-quad display list thing in a core context, you're going to have to roll it yourself. There are only a handful of OpenGL-based text rendering libraries around, and to the best of my knowledge, none have been ported to a core context.


 


Sorry for sounding like a broken record here, but if you want to do the whole character-per-quad display list thing in a core context, you're going to have to roll it yourself.

 

Well, rolling it myself is what I was hoping to avoid. Just out of curiosity, does anyone ever actually use the core context? To be honest, I have found very few tutorials or examples, or practically anything else, based on it. It seems like everyone uses the compatibility profile. Should I just cave in and switch to the compatibility profile?



Just out of curiosity, does anyone ever actually use the core context? To be honest, I have found very few tutorials or examples, or practically anything else, based on it. It seems like everyone uses the compatibility profile. Should I just cave in and switch to the compatibility profile?

As far as I know, nobody actually ships games on the core profile. I use it for development, but that's just to make sure I don't accidentally fall into old habits.

 

NVidia actually warns you not to use the core profile, because they add runtime checks for all the deprecated functionality, and that comes at a performance cost.


