C++ Text. Legacy OpenGL 1.5, FreeGLUT

8Observer8


What if you need to draw text with simple graphics? For example, you have a college assignment to draw plots with some text in C++. You can still use deprecated/legacy OpenGL 1.5 and FreeGLUT.

This example shows how to draw text using FreeGLUT and deprecated/legacy OpenGL 1.5, and how to set up FreeGLUT in Visual Studio 2015.

Text_FreeGlutOpenGL15Cpp.zip - Just download and run this solution in your version of Visual Studio, but do not forget to set "Platform Toolset" to your version of VS in the project settings. See the screenshot below:

[Screenshot: the "Platform Toolset" setting in the project properties]

If you want to set up FreeGLUT from scratch, download the "Libs" folder and apply the settings yourself (a quick verification sketch follows the settings list):

Libs: Libs_FreeGlutOpenGL15.zip

Settings:


1.
Configuration: All Configurations
Platforms: All Platforms

C/C++ -> General -> Additional Include Directories:
$(SolutionDir)Libs\freeglut-3.0.0-2\include

Linker -> Input -> Additional Dependencies
freeglut.lib

2.
Configuration: All Configurations
Platforms: Win32

Linker -> General -> Additional Library Directories:
$(SolutionDir)Libs\freeglut-3.0.0-2\lib\Win32

Build Events -> Post-Build Event
xcopy /y /d "$(SolutionDir)Libs\freeglut-3.0.0-2\lib\Win32\freeglut.dll" "$(OutDir)"

3.
Configuration: All Configurations
Platforms: x64

Linker -> General -> Additional Library Directories:
$(SolutionDir)Libs\freeglut-3.0.0-2\lib\Win64

Build Events -> Post-Build Event
xcopy /y /d "$(SolutionDir)Libs\freeglut-3.0.0-2\lib\Win64\freeglut.dll" "$(OutDir)"
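
Before building the full example, it can help to verify that the include path, the import library, and the post-build DLL copy all work. The following minimal program is not part of the original project; it is just a sketch, assuming a console-subsystem project, that links against FreeGLUT and prints its version:

#include <GL/freeglut.h>
#include <cstdio>

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    // GLUT_VERSION is a FreeGLUT extension: the value is reported as
    // major * 10000 + minor * 100 + patch (e.g. 30000 for FreeGLUT 3.0.0)
    std::printf("FreeGLUT version: %d\n", glutGet(GLUT_VERSION));
    return 0;
}

If it builds, runs, and prints a version number, the settings above are in place.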
 

main.cpp


#include <GL/freeglut.h>
#include <string>

// Draws a bitmap string at the given raster position
void drawText(float x, float y, const std::string& text)
{
    glRasterPos2f(x, y);
    glutBitmapString(GLUT_BITMAP_8_BY_13, (const unsigned char*)text.c_str());
}

void draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    // With the default projection, (0, 0) is the center of the window
    drawText(0, 0, "Hello, World!");
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(256, 256);
    glutCreateWindow("Drawing Text");
    glutDisplayFunc(draw);
    glutMainLoop();
    return 0;
}
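
With the default projection, the raster position (0, 0) is the center of the window, so the text above starts at the middle of it. If you would rather position text in window (pixel) coordinates, one common approach is to set an orthographic projection in a reshape callback. The following is not part of the downloadable project, just a minimal sketch of that idea (the font and coordinates are chosen only for illustration):

#include <GL/freeglut.h>
#include <string>

void drawText(float x, float y, const std::string& text)
{
    glRasterPos2f(x, y);
    glutBitmapString(GLUT_BITMAP_9_BY_15, (const unsigned char*)text.c_str());
}

// Map OpenGL coordinates to window pixels: (0, 0) is the bottom-left corner
void reshape(int width, int height)
{
    glViewport(0, 0, width, height);
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(0.0, width, 0.0, height, -1.0, 1.0);
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
}

void draw()
{
    glClear(GL_COLOR_BUFFER_BIT);
    drawText(10.0f, 10.0f, "Bottom-left corner");
    glutSwapBuffers();
}

int main(int argc, char** argv)
{
    glutInit(&argc, argv);
    glutInitDisplayMode(GLUT_DOUBLE | GLUT_RGB);
    glutInitWindowSize(256, 256);
    glutCreateWindow("Text in Window Coordinates");
    glutDisplayFunc(draw);
    glutReshapeFunc(reshape);
    glutMainLoop();
    return 0;
}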

 

[Screenshot: the running example with "Hello, World!" rendered in the window]



Comments

I renamed the blog entry:

  • from "Drawing a Text using FreeGLUT, OpenGL 1.5"
  • to "Text. Legacy OpenGL 1.5, FreeGLUT"

I did this to emphasize that it uses legacy/deprecated OpenGL 1.5. I made this example to show the simplest way to draw text using FreeGLUT and OpenGL 1.5.


@8Observer8 When I see grandfather GLUT I cringe a little, but FreeGLUT is still relevant, especially for quick and dirty demos. Personally I stayed clear. But being aware of your evaluation of GLFW as well (which I liked), I see nothing wrong with this (trying to make sense of your current activity). From what I've observed, you tend to target technology that leans toward browser-based or very low-end/extremely easy entry. I can respect that, especially if the goal is to not exclude anyone. I think I may have wasted some years not sticking with a setup I liked and getting a jump start on the meat and potatoes of what was being presented, such as the sine wave in one of your other blog entries. A lot of amazing things bloom from the use of (trigonometric) sine. :) It's a lot of work to get OpenGL loaded up without utilities such as these. I'm excited to see what you do when you move to gameplay math concepts, spatial management, rendering, or back-end coolness... This is the second time I've looked at this, and both times I thought, "yup, that's a FreeGLUT crisp font."

