OpenGL What exactly is an "OpenGL Context"? How does it work?


Recommended Posts

Hello, I'm in the process of trying to teach myself OpenGL programming, but so far all I've managed to do is become extremely confused. I have a few questions; they are probably pretty silly and I apologize, but any help would be appreciated!

From my experiments and what I've read, an "OpenGL context" is required before using any OpenGL function calls. I'm a bit puzzled as to what this actually means. My initial thought was that it would just be a window capable of displaying pixel data, but clearly there is something more to it.

What actually defines a window as being an OpenGL context?
How does OpenGL know if a window is a valid OpenGL context?

Let's say on operating system X I've used the native API to create a window. What steps would I then need to take to make it capable of displaying OpenGL output? Do native APIs include special OpenGL settings or something?

Currently I'm using SFML to abstract the creation of an OpenGL context, and it works perfectly, but I'd still like to have at least a vague idea of the inner workings...

For example, SFML 1.6 does not work with OpenGL 3.0+ whereas SFML 2.0 does. I don't understand this: why is displaying the pixels rendered by OpenGL so complicated that compatibility becomes an issue?

Thank you for any help, and I'm sorry that my writing is not very good!

Your GPU draws stuff for everyone: Windows owns part of the screen, your game owns part, and other applications such as web browsers own their parts. When you call glClear() to clear your current drawing frame, it does not clear the Windows desktop or anything else. So each time you open a GL application, it needs its own stack of state. Say you call glColor3f(1,1,1) in one instance of your game, then open a second copy of your game and call glColor3f(0,0,0) in the other: there are two contexts, and the two colors are tracked separately.

Unless you plan on becoming a programmer on a team that writes GL drivers and you want to work at nVidia, it doesn't matter how a GL context is implemented.

You just need to: 1. set up a pixel format, 2. create a GL context, 3. make the GL context current.
SFML, GLUT, freeGLUT, Qt, SDL and many others do that for you.

Then render away, until you need to destroy your resources, destroy the GL context, and close the window.

[quote name='V-man' timestamp='1307489840' post='4820735']Unless you plan on becoming a programmer on a team that writes GL drivers and you want to work at nVidia, it doesn't matter what a GL context is.[/quote]Quite right. ATI driver developers don't need to know what an OpenGL context is, among other things.

The OpenGL graphics driver has a resource management system called the OpenGL context. You can think of it as a sandbox for your application to use: it is where your program's textures, buffers, meshes, etc. are stored. Each application using OpenGL therefore communicates with the driver and requests its own context, so that resources are not mixed up between applications. You can sometimes allocate more than one context per application.

Anyway, when a context is created, it is tied to your application's process, usually to the main window's paint area. Creating a window's paint surface and creating an OpenGL context often go hand in hand; destroying the window may therefore also destroy the context.

1. Create window
2. Create OpenGL context
3. Attach context to window and current process
4. Switch to created context

5. Render stuff

6. Release context
7. Destroy window

This is the general process for setting up OpenGL on most systems. I'm not familiar with X specifically, but I assume the steps are quite similar.

[quote name='V-man' timestamp='1307622550' post='4821285']
LOL. You don't like ATI?
[/quote]

It's fine now, but the past is haunting me!

[i]If you can define something then you don't understand it![/i]

Sadly, nobody before Tachikoma tried to answer the question. Although it is not crucial for programmers to know how a rendering context is implemented, it is important to understand what it is in order to know how to use it.

An OpenGL [i]context[/i] is the data structure that collects all the state the server needs to render an image. It contains references to buffers, textures, shaders, etc. The OpenGL API does not define how a rendering context is implemented; that is up to the native window system. But no OpenGL command can be executed until a rendering context has been created and made current.

A rendering context must be compatible with the window it renders to. There can be several contexts per application, or even per window. On the other hand, the same context can be used for rendering into multiple windows, but in that case all windows using it must have the same pixel format. That brings us to another important aspect of a rendering context: its rendering surface must be adequate for the window. To achieve that, we choose and set an appropriate pixel format on the window before creating the context. The rendering surface is the surface to which primitives are drawn; it defines the types of buffers required for rendering, such as the color buffer, depth buffer and stencil buffer.

Choosing a pixel format is how we verify that the requested format is supported: the system returns the ID of a matching pixel format, or of the closest match. Setting a pixel format that was not previously returned by the system will almost certainly fail, and it is not recommended to set the pixel format of a window more than once.

In order to issue any OpenGL command, we have to make an OpenGL context active (i.e. "current"). Making it active means binding the context to a window's DC, and it is done from a particular thread. Only one context can be current on a single thread at a time, and calling OpenGL commands from a thread other than the one that made the context current fails. If we want multithreaded updates of OpenGL resources, we have to make a share group: all contexts in a share group share resources such as textures and buffers. Although multithreaded updates can be useful in some cases, be aware that most OpenGL drivers serialize access to the GPU. OpenGL guarantees that all commands issued in a single context are executed in the order in which they are issued, but there is no such guarantee across contexts. Since OpenGL 3.2, synchronization is available through sync objects.

Prior to OpenGL 3.0 there was just one type of OpenGL rendering context: the context with full functionality. OpenGL 3.0 introduced forward compatibility, and 3.2 introduced the concept of profiles. If anyone is interested in further discussion we could proceed, but I think it is a bit out of the scope of the OP's question. I was already too comprehensive. :rolleyes:

This isn't a direct answer to your question, but I would recommend the NeHe tutorials for learning OpenGL. They will take you through creating a window, setting up OpenGL, drawing on the window, and much more.

[url="http://nehe.gamedev.net/lesson.asp?index=01"]http://nehe.gamedev.net/lesson.asp?index=01[/url]

[quote name='swilkewitz' timestamp='1307676925' post='4821581']
This isn't a direct answer to your question, but I would recommend the NeHe tutorial for learning openGL. It will take you through creating a Window, setting up openGL, drawing on the Window, and much more.

[url="http://nehe.gamedev.net/lesson.asp?index=01"]http://nehe.gamedev....on.asp?index=01[/url]
[/quote]

Just remember that following a tutorial is no substitute for reading the documentation. A tutorial shows you how to use a hammer to drive in a nail; if someone then gives you a screw, you will think to yourself "this looks a lot like a nail, better use a hammer!". Read the documentation on nails, screws and hammers, is all I'm saying :D

For your review, I just added this section to the Wiki FAQ
http://www.opengl.org/wiki/FAQ#GL_context

I am keeping it short. I did not go into the actual calls needed, since those are OS-specific and you can get that info from other parts of the web.

You can give feedback, modify the Wiki to your liking, etc.

No offense V-man, but you didn't write about any important aspect of the rendering context, except that we need one in order to execute any GL code. You didn't even answer your own question (why do we need a window?). Anyway, it is good that beginners have something to read, although I don't think they really read the wiki. It is easier to ask a question on the forum. :)

You need a window because that is how it was designed; it is as simple as that. I have made a slight modification to that section.
You can of course modify it, or make suggestions right here if you want things worded differently.

[quote]Anyway it is good that beginners can read something, although I don't think they really read wiki.[/quote]
You can say that about any other webpage.
The Wiki is not for stopping people from asking questions.

Before the Wiki was around, people asked questions, received many nice answers, and that was followed by "Why isn't this documented somewhere?". At some point, whoever manages opengl.org decided that a Wiki would be a good idea.

[quote name='V-man' timestamp='1307720994' post='4821743'] You need a window because that is how it is designed. It is as simple as that.[/quote]

No comment. (Or to be more precise, I've already commented on such answers.)


[quote name='V-man' timestamp='1307720994' post='4821743'] You can of course modify it or make suggestions right here if you want things worded differently. [/quote]

I don't like writing wikis, because they are so impersonal. Although I have to admit that Wikipedia is really great.


Concerning the OpenGL rendering context (RC), I think I've said what I had to say. The most important aspects are: what it is, how it can be created, what the scope of an RC is, how resources can be shared, and profiles and forward compatibility. But since the OP hasn't reacted to our posts, we are probably fiddling uselessly. :D
