
OpenGL Simulating a console program?


Apparently standard C++ doesn't offer a fullscreen console on Windows 7, so I've come to OpenGL with high hopes. I figure there's some way to make a virtual console in C++ with OpenGL, so here's what I want to make:

 

1. I don't want to make an OS, I just want a fullscreen (no window or borders, just a black BG with text on it) application.

2. I want it to look like DOS (refer to 1).

3. I don't want to make a CLI, I want to make a small console program that takes up the whole screen.

 

Does anyone know how to achieve my goal? Do I even need to use OpenGL?


SDL should be enough.  Just get a BMP of all the characters you can print and make an array you can "print" into.
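The "array you can print into" part can be sketched without any SDL at all. A minimal, hypothetical version (all names invented for illustration): a fixed 80x25 grid of characters that a renderer would later walk, blitting the matching glyph from the character-sheet BMP for every cell.

```cpp
#include <array>
#include <string>

// Hypothetical 80x25 text grid -- the "array you can print into".
// A renderer walks this grid each frame and blits the sub-rectangle
// of the character-sheet BMP that matches each cell's character.
constexpr int kCols = 80;
constexpr int kRows = 25;

struct TextGrid {
    std::array<char, kCols * kRows> cells{};

    TextGrid() { cells.fill(' '); }  // start with a blank screen

    char at(int col, int row) const { return cells[row * kCols + col]; }

    // "Print" a string starting at (col, row), clipping at the right edge.
    void print(int col, int row, const std::string& text) {
        for (char c : text) {
            if (col >= kCols || row >= kRows) break;
            cells[row * kCols + col] = c;
            ++col;
        }
    }
};
```

Rendering is then just a double loop over rows and columns, drawing the BMP sub-rectangle for `at(col, row)` at `(col * glyphWidth, row * glyphHeight)`.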


> SDL should be enough.  Just get a BMP of all the characters you can print and make an array you can "print" into.

How am I supposed to get a BMP of every character? And by "array", do you mean a BITMAP array?

Could you tell us what this is for? There are lots of ways to get something that "looks" like a console but functions nothing like one. Is the functionality important?


Make a console program that prints out each character and then take a screenshot?  


 

 

> SDL should be enough.  Just get a BMP of all the characters you can print and make an array you can "print" into.

> How am I supposed to get a BMP of every character? And by array you mean a BITMAP array?

Oh, look what I found on Google:

http://www.dafont.com/perfect-dos-vga-437.font

How am I supposed to use a TTF in SDL? (I know about SDL_ttf and such.) I'm assuming I make a black background that takes up the whole screen, then assign a glyph from the TTF to each keystroke? That should work, correct?

Edited by lmsmi1
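Roughly, yes: with SDL_ttf you'd render the text (e.g. with `TTF_RenderText_Solid`) and blit it over the black background each frame. The keystroke bookkeeping itself needs no SDL, though. A sketch under that assumption, with all names invented: a cursor that appends characters as keys arrive, handles Enter and Backspace, and hard-wraps at the column limit.

```cpp
#include <string>
#include <vector>

// Hypothetical input-side bookkeeping for a fake console: each keystroke
// appends to the current line; Enter pushes the line into scrollback.
// The SDL_ttf side would then render `lines` plus `current` each frame
// onto the black fullscreen surface.
struct ConsoleInput {
    std::vector<std::string> lines;  // finished lines (scrollback)
    std::string current;             // line being typed
    int cols;                        // wrap width in characters

    explicit ConsoleInput(int columns) : cols(columns) {}

    void onKey(char c) {
        if (c == '\n') {             // Enter: commit the line
            lines.push_back(current);
            current.clear();
        } else if (c == '\b') {      // Backspace: delete last char
            if (!current.empty()) current.pop_back();
        } else {
            current.push_back(c);
            if ((int)current.size() == cols) {  // hard wrap at the edge
                lines.push_back(current);
                current.clear();
            }
        }
    }
};
```

In an SDL event loop you'd feed this from `SDL_TEXTINPUT` / `SDL_KEYDOWN` events instead of raw chars.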


 

 

 

> How am I supposed to use a TTF in SDL? (I know about SDL_ttf and such.) I'm assuming I make a black BG that takes up the whole screen, then assign a letter in the TTF to each keystroke? That should work, correct?

If you know about SDL_ttf, then what's the problem?


@zacaj

The problem is getting individual characters from the TTF and assigning each of them to a keystroke.


For a monospaced console font, just use a "spritesheet". It's fairly easy to then build a console by either instancing those glyphs or doing it via clever texture lookups on a fullscreen quad (see how libtcod does it, for example).
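The "clever texture lookup" boils down to simple integer arithmetic: for each screen pixel, find its position inside the cell, look up the cell's character, and index into the glyph sheet. A fragment shader does this per pixel; the same math in plain C++ is shown below as a sketch, assuming the common 16x16 grid of CP437 glyphs (the glyph size and layout are assumptions, not anything from the thread).

```cpp
// Map a screen pixel to the texel it should sample from a 16x16 glyph
// sheet (the usual CP437 layout), given the character stored in that
// cell. This is the arithmetic a fragment shader would do per pixel.
struct Texel { int x, y; };

constexpr int kGlyphW = 8, kGlyphH = 16;   // assumed glyph size in the sheet
constexpr int kSheetCols = 16;             // glyphs per sheet row

Texel glyphTexel(int px, int py, unsigned char ch) {
    int inX = px % kGlyphW;                // position inside the cell
    int inY = py % kGlyphH;
    int gx = (ch % kSheetCols) * kGlyphW;  // glyph's top-left in the sheet
    int gy = (ch / kSheetCols) * kGlyphH;
    return {gx + inX, gy + inY};
}
```

In GLSL the same expression runs on `gl_FragCoord` after fetching `ch` from a small screen-sized character texture.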


Why would you use a 3D API or a game API to create a black borderless window with some text on it?

It can be done in pretty much any language that can talk to Win32: C++/Win32 for sure (as I've said, just create a borderless maximized window and put text on it), or C++/MFC (yuck).

In C# this is done with zero code in 3 minutes.

You can bazinga the planet and do it in a Windows Store app in C++ using DX11, but what's the point?

Maybe I am missing the point.


Actual console APIs (curses?) tend to be very restrictive and not very portable, depending on your requirements (color, for instance). As a result, a lot of "ASCII games" like roguelikes and Dwarf Fortress actually run their own console emulation in OpenGL. It's also most likely faster: I can run a fullscreen "console" in OpenGL on my crappy netbook (GMA 3150) at a cost of less than a millisecond per frame, while doing the same through a console API is significantly slower.

Btw, modern-style DX/OGL are really just general-purpose graphics APIs, and using them for non-3D work isn't overkill at all.
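Part of why the GL route is so cheap is that the whole screen of glyphs can become a single vertex buffer, one quad per cell, drawn in one call. A sketch of that vertex-generation step in plain C++ (function and parameter names are invented; texture coordinates are omitted for brevity):

```cpp
#include <vector>

// Build one quad (two triangles, 6 vertices of x,y pairs) per console
// cell. Uploaded as a single vertex buffer, an 80x25 screen is one
// draw call -- which is why a GL "console" costs well under a
// millisecond per frame even on weak hardware.
std::vector<float> buildCellQuads(int cols, int rows, float cellW, float cellH) {
    std::vector<float> verts;
    verts.reserve(cols * rows * 12);  // 6 vertices * 2 floats per cell
    for (int r = 0; r < rows; ++r) {
        for (int c = 0; c < cols; ++c) {
            float x0 = c * cellW, y0 = r * cellH;
            float x1 = x0 + cellW, y1 = y0 + cellH;
            const float quad[12] = {x0, y0, x1, y0, x1, y1,   // triangle 1
                                    x0, y0, x1, y1, x0, y1};  // triangle 2
            verts.insert(verts.end(), quad, quad + 12);
        }
    }
    return verts;
}
```

The per-cell glyph then comes from per-vertex texture coordinates (or, alternatively, from a texture lookup on a single fullscreen quad, as mentioned earlier in the thread).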


I must have missed the requirement that it needs to be a big black window with characters running at 50000 fps, my bad :-S

So yeah, sure, roll your own DX11/OpenGL font-rendering system to do the same thing you can do in 3 minutes with C#. It's your time, mate.


> Actual console APIs (curses?) tend to be very restrictive and not very portable depending on your requirements (color...). As a result a lot of "ascii games" like roguelikes or Dwarf Fortress actually run their own console emulation in OpenGL. Also its most likely faster.

Are you certain about that? When I looked into it (admittedly, quite a while ago), the standard Dwarf Fortress client used PDCurses. However, PDCurses has several backends: one uses the standard native OS console, and another uses SDL to render with a supplied bitmap font.
