OpenGL Game thread synchronization (display & game logic)


I am wondering how others handle this situation:

 

You have an application (e.g. a game) that computes a state (the game logic) and also displays it. From what I read, the game logic often runs in a different thread than the display. Which brings me to the question:

 

How is the display synchronized with the game logic? If there is no synchronization, we can have the following situations:

 

- game is stepped forward, displayed, stepped forward twice, displayed, etc. --> the display will appear to shake!

- while the game is stepped forward, a frame is displayed --> the display can appear "strange" (e.g. a bullet can appear to hit the game character, but since its state was not yet updated, it will actually miss)

- A game character can be removed from the scene during the game-logic calculation. If rendering happens at that moment, there might be a crash.

 

We can synchronize the two threads to some extent by locking resources (the last example above can be handled by deferring object destruction). But to avoid all of the problems mentioned above, one would have to run the two threads in strict alternation (or similar, e.g. step the game twice, render, step the game twice, render, etc.). Doing so makes the use of two threads uninteresting, since a single thread would run at (more or less) the same speed, and none of the resource-locking synchronization would be needed: a single thread would be simpler and give the same result. No?

 

I guess the game state must be duplicated in some way (e.g. every position would be stored twice, as "current" and "forDisplay"; at the end of a game step, "current" would be copied to "forDisplay", so that the rendering thread can run concurrently).
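
Something like this minimal sketch, assuming a mutex-guarded copy step and a trivial EntityState (all names are hypothetical):

```cpp
#include <mutex>
#include <vector>

struct EntityState {
    float position[3];
    // ... orientation, scale, animation time, etc.
};

class GameState {
public:
    // Logic thread, end of each step: copy "current" to "forDisplay".
    void publish() {
        std::lock_guard<std::mutex> lock(mutex_);
        forDisplay_ = current_;
    }
    // Render thread: grab a private snapshot and draw from it.
    std::vector<EntityState> snapshot() const {
        std::lock_guard<std::mutex> lock(mutex_);
        return forDisplay_;
    }

    std::vector<EntityState> current_;    // touched only by the logic thread
private:
    std::vector<EntityState> forDisplay_; // shared, guarded by mutex_
    mutable std::mutex mutex_;
};
```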

 

And what happens if the game logic needs to use some OpenGL commands? e.g. to render to an FBO and do some simple image processing on it? Then the two threads again need to be synchronized in order to correctly switch OpenGL contexts!

 

Just curious how things are done usually ;)

 

 

Hodgman replied:
From what I read, the game logic often runs in a different thread than the display.

In my experience, it doesn't make much sense to dedicate a whole thread to one small task, such as communicating with the graphics API.

In the engines I've used, threading is not done this way; their use of threads is rotated 90 degrees from this design ;) There is one thread for each CPU core, and every thread contributes to task #1, then they all contribute to task #2, and so on... Usually this is achieved via a shared queue of "jobs" that need to be executed. The threads simply consume work from this queue and add new batches of work back into it.
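
A minimal sketch of such a job system, assuming a plain mutex/condition-variable queue (a real engine would more likely use a lock-free queue and work stealing; all names here are hypothetical):

```cpp
#include <condition_variable>
#include <functional>
#include <mutex>
#include <queue>

class JobQueue {
public:
    void push(std::function<void()> job) {
        { std::lock_guard<std::mutex> lock(mutex_); jobs_.push(std::move(job)); }
        cv_.notify_one();
    }
    // Worker loop: every thread pulls whatever task is next, so all cores
    // chew through task #1, then task #2, instead of owning one subsystem each.
    void workerLoop() {
        for (;;) {
            std::function<void()> job;
            {
                std::unique_lock<std::mutex> lock(mutex_);
                cv_.wait(lock, [this] { return !jobs_.empty() || done_; });
                if (done_ && jobs_.empty()) return;
                job = std::move(jobs_.front());
                jobs_.pop();
            }
            job();  // a job may push follow-up jobs back into the queue
        }
    }
    void shutdown() {
        { std::lock_guard<std::mutex> lock(mutex_); done_ = true; }
        cv_.notify_all();
    }
private:
    std::queue<std::function<void()>> jobs_;
    std::mutex mutex_;
    std::condition_variable cv_;
    bool done_ = false;
};
```

You would spawn one workerLoop thread per core (e.g. std::thread::hardware_concurrency() of them) and push each frame's tasks into the queue.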

 

- while the game is stepped forward, a frame is displayed --> the display can appear "strange" (e.g. a bullet can appear to hit the game character, but since its state was not yet updated, it will actually miss)
- A game character can be removed from the scene during the game-logic calculation. If rendering happens at that moment, there might be a crash.

These things should obviously not happen -- they're symptoms of two threads using the same data set at the same time, which is a race condition!

 

Since your update and render threads are solving completely different problems, they don't even need to share much state, because the data required by each is different -- render functions don't need "hitpoints" and update functions don't need "triangle counts". There's nothing wrong with having an "NPC instance" with a position member used by the update thread, which owns a "model instance" that also owns a duplicated position member used by the render thread -- two different problems are best solved with two different data layouts. Don't try to represent everything in one big ball of spaghetti and then have two completely different processes weave their way through it.

The update thread should produce a big blob of data containing just the information required for rendering, which is consumed by the render thread. The update thread should not have access to any data that is only used for rendering, and the render thread should not have access to any data that is only used for updating.
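
As a hypothetical sketch of such a blob:

```cpp
#include <vector>

// One self-contained "frame packet" handed from update to render.
// The render thread never reaches back into gameplay data structures.
struct RenderItem {
    unsigned meshId;        // which model to draw
    float    transform[16]; // final world matrix, already computed
};

struct FramePacket {
    double frameTime;              // simulation time this packet represents
    std::vector<RenderItem> items; // everything visible this frame
};
```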

And what happens if the game logic needs to use some OpenGL commands?

Then it should ask the render thread to issue those commands, in the same way that it asks it to issue all the other rendering GL commands. There shouldn't be any real difference between this use case and 'normal' rendering.

- game is stepped forward, displayed, stepped forward twice, displayed, etc. --> the display will appear to shake!

Often your rendering tasks are designed to run at some fixed display rate, e.g. 30Hz, 60Hz, etc. If so, you've got a time-based target for your updates -- a 60Hz game should try to advance the simulation by 16.6ms worth of 'ticks' before each render. If you're using vsync, then you can make a pretty accurate guess as to when each image will be displayed to the user (1/refresh-rate seconds after the last one), so you want to advance the simulation that far into the future, reliably, to avoid jitter.

This is one reason why I see absolutely no point in putting update/render on their own threads and leaving it up to the OS to make sure each one runs for an appropriate amount of time... You can determine how many updates are optimal for each render, and then perform them serially -- N cores performing your updates, and then N cores performing your rendering.
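
A minimal single-threaded sketch of that serial structure, in the spirit of the well-known "fix your timestep" loop (the gameIsRunning/update/render functions are hypothetical placeholders; each of them could internally fan work out to all cores):

```cpp
#include <chrono>

bool gameIsRunning();          // hypothetical
void update(double dtSeconds); // hypothetical: advance the simulation one tick
void render();                 // hypothetical: draw the current state

void runGameLoop() {
    using Clock = std::chrono::steady_clock;
    const std::chrono::duration<double> dt(1.0 / 60.0); // one 60 Hz tick

    auto previous = Clock::now();
    std::chrono::duration<double> accumulator(0.0);

    while (gameIsRunning()) {
        auto now = Clock::now();
        accumulator += now - previous;
        previous = now;

        // Advance the simulation by however many whole ticks have elapsed,
        // serially, before rendering once.
        while (accumulator >= dt) {
            update(dt.count());
            accumulator -= dt;
        }
        render();
    }
}
```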

Matias Goldberg replied:

Your analysis is correct. However, you're exaggerating how bad it is.

game is stepped forward, displayed, stepped forward twice, displayed, etc. --> the display will appear to shake!

If you read the preferred way of updating the simulation and rendering in "Fix your timestep", you'll see that even in single-threaded scenarios, if rendering takes too long, the physics will start updating more often than the rendering.
In other words, this problem appears in single-threaded programs as well. It's not shaking, it's frame skipping.
 
However, I agree that without proper care, the update order can be pretty chaotic, and then it will indeed look like shaking; that part is exclusively a multithreading problem. Let's look at it in more detail:
 
 
First, Rendering only needs 4 elements from Logic; if you need more, you should rethink the design (a sketch of the resulting payload follows the list):

  • Transformation state of every object: position, quaternion, and scale. That's 40 bytes (64 bytes if you choose a 4x4 matrix representation)
  • The playback state of the animation (if animation needs to be synced from Logic). That's anywhere from 0 to 32 bytes
  • A list of Entities created in a frame
  • A list of Entities destroyed in a frame
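
As a hypothetical sketch, the whole per-frame synchronization payload could be as small as:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical per-object snapshot shared from Logic to Graphics.
struct ObjectSnapshot {
    float position[3];   // 12 bytes
    float quaternion[4]; // 16 bytes
    float scale[3];      // 12 bytes -> 40 bytes total
    float animTime;      // playback state, if animation is synced from Logic
};

// Everything Graphics needs from Logic for one frame.
struct FrameSync {
    std::vector<ObjectSnapshot> transforms;   // one per live object
    std::vector<std::uint32_t>  createdIds;   // entities created this frame
    std::vector<std::uint32_t>  destroyedIds; // entities destroyed this frame
};
```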

Second, forget the idea that you need to render exactly what the simulation has. If your game can avoid that restriction (99% chance it can), you can relax the synchronization.
 
Third, locks aren't expensive; lock contention is.
 
Now, creation can be handled without invasive locks: Logic builds a list of created entities and, at the end of the frame, locks a lightweight mutex, appends to Graphics' list, and releases the lock. Chances are the Graphics thread wasn't accessing that list anyway, because it has a lot else to do. At the end of Graphics' update, it locks, clones the list, and releases the lock.
In both cases, almost no time is spent inside the locked section, and it is a tiny fraction of all the work each thread does, so lock contention is extremely low. (Furthermore, you can avoid mutexes entirely by using preallocated space and interlocked instructions, and only lock if the preallocated space fills up, but I won't go there.)
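
A minimal sketch of that handoff, assuming hypothetical entity IDs and names:

```cpp
#include <cstdint>
#include <mutex>
#include <vector>

std::vector<std::uint32_t> pendingCreated; // written by the Logic thread only
std::vector<std::uint32_t> sharedCreated;  // guarded by listMutex
std::mutex listMutex;

// Logic thread, end of frame: publish this frame's new entities.
void logicPublishCreated() {
    std::lock_guard<std::mutex> lock(listMutex);  // held very briefly
    sharedCreated.insert(sharedCreated.end(),
                         pendingCreated.begin(), pendingCreated.end());
    pendingCreated.clear();
}

// Graphics thread, end of its update: take a private copy and move on.
std::vector<std::uint32_t> graphicsConsumeCreated() {
    std::lock_guard<std::mutex> lock(listMutex);  // held very briefly
    std::vector<std::uint32_t> copy;
    copy.swap(sharedCreated);  // clone the list and clear it in one step
    return copy;
}
```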
 
There's a catch here; remember my second piece of advice: you don't care that you're rendering exactly what is in the simulation. Suppose frame A is simulated and creates 3 objects, but Graphics was too fast and already looked at the list. It then loops again and renders frame A without those 3 new objects. Do you really care? Those 3 will get added in the next frame; it's a 16 ms difference. And not a big one, because the user doesn't even know those 3 objects should have been there.
 
The same happens when destroying objects. Note that a pointer must not be deleted until Graphics has marked that object as "I know you killed the foe, I'm done rendering it", so that you're sure neither thread is still using the pointer. Only then can you delete it. In other words, you retire the object from the scene immediately but defer deleting the pointer.
Otherwise, as you say, a crash will happen.
So in this case, an object may be rendered one frame longer than it should be. Big deal (sarcasm).
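
A sketch of that deferred deletion, assuming a hypothetical acknowledgement flag set by the Graphics thread:

```cpp
#include <atomic>
#include <memory>
#include <vector>

struct Entity {
    std::atomic<bool> renderDone{false}; // set by Graphics once it stops using it
    // ... the rest of the entity
};

std::vector<std::unique_ptr<Entity>> graveyard; // owned by the Logic thread

// Logic thread: retire the entity from the scene now, delete the pointer later.
void retireEntity(std::unique_ptr<Entity> e) {
    graveyard.push_back(std::move(e));
}

// Logic thread, once per frame: free only what Graphics has acknowledged.
void sweepGraveyard() {
    for (auto it = graveyard.begin(); it != graveyard.end(); ) {
        if ((*it)->renderDone.load()) it = graveyard.erase(it); // safe to delete
        else ++it;
    }
}
```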
 
Now we're left with updating position & animation data. You have two choices (both sketched after the list):

  • You really don't care about consistency. Read transformations without any locking at all, and don't worry about race conditions. The chance that Logic is updating a transform at the same time Graphics is reading it is minimal (you should be copying the positions from your physics engine into a separate copy, all inside the Logic thread, and then reading that copy from the Graphics thread). If memory is aligned, you won't get NaNs or awkward garbage, but you may get very rare bad states (it's a race condition, after all), for example a position very far from where it actually should be..... but it only lasts for one frame! The chance of this happening often is extremely low, because cloning the transforms is very fast even for thousands of objects. So, at worst, a flickered frame. Mass Effect 3 is a very bad example of this flickering getting really noticeable: they must be reading positions directly from the physics engine's data instead of cloning them into a list, or they use a memory representation other than a std::vector or a plain old array (thus increasing cache misses and time spent iterating), which increases the chance of reading data in an invalid state. (I'm giving you an example of an acclaimed AAA game doing this and royally screwing it up.)
  • You do care about consistency. Use a lightweight mutex when copying the physics transforms to another place inside the Logic thread, and do the same from the Graphics thread. In other words, it's the same as above but with locks. Lock contention is again very low.
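
A sketch of both options; the only difference is whether the (commented-out) mutex lines are enabled (names hypothetical):

```cpp
#include <cstddef>
#include <mutex>
#include <vector>

struct Transform { float pos[3]; float quat[4]; float scale[3]; };

std::vector<Transform> sharedTransforms; // written by Logic, read by Graphics
std::mutex transformMutex;               // only used in option #2

// Logic thread, end of frame: clone transforms out of the physics engine.
void publishTransforms(const std::vector<Transform>& fromPhysics) {
    // std::lock_guard<std::mutex> lock(transformMutex); // option #2
    sharedTransforms = fromPhysics; // option #1 tolerates the race on this copy
}

// Graphics thread: always read the clone, never the physics engine directly.
Transform readTransform(std::size_t i) {
    // std::lock_guard<std::mutex> lock(transformMutex); // option #2
    return sharedTransforms[i]; // option #1: a torn read lasts one frame at most
}
```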

I've tried both, and #1 really works OK if done properly (don't take my word for it, try it yourself! It's easy to switch between the two: just disable/re-enable the mutexes!).
Note that #1 isn't a holy grail of scalability, because it can still slow down your loop a lot due to cache-line sharing forcing flushes too often (which only happens when both threads access the same data at the same time and one of them writes to it).
 
The same applies to animation, but it's a bit more complex, because in some cases you really don't want time going backwards (e.g. a particle effect spawned at a given keyframe could spawn twice); I won't go into detail. Getting that one right and scalable is actually hard (but again, the solutions rely on the assumption that lock contention will be minimal).

Remember: you don't care that you're rendering the exact thing, but 99% of the time you will be anyway, and when it screws up, it usually goes unnoticed and fixes itself in the next frame.
 
And remember, synchronizing points 1 to 4 should only be a tiny fraction of what your threads do. The Logic thread spends most of its time integrating physics, a smaller part updating the logic side, and only then syncing.
The Graphics thread spends most of its time doing culling, updating the derived transforms of complex node setups, sorting the render queue, and sending commands to the GPU; and only then syncing.
 
Note that if you read transform state directly from the physics engine's data, you'll either have terrible cache miss rates or have to use a mutex to protect the physics integration step, and that one does have a lot of lock contention.
 
All of this works if there are at least two cores. If the threads fight for CPU time, the "quirkiness" when rendering becomes awfully noticeable. Personally, I just switch to a single-threaded loop when I detect a single-core machine. If you've designed your system well, providing both loops shouldn't take any effort at all; just a couple of lines.
 
And last but not least, there's the case where you really do care about consistency, and you absolutely must render exactly what is in the simulation.
In that case, you can only resort to a barrier at the end of the frame: both threads wait, the state is cloned from Logic to Graphics, and they continue. If both threads take a similar amount of time to finish a frame, multithreading will really improve your game's performance; otherwise one thread will stall waiting for the other to reach the barrier, and the difference between the single-threaded and multithreaded versions of your game will be minimal.
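
A sketch of that barrier scheme using C++20's std::barrier (before C++20 you would build the equivalent from a mutex and condition variable; the frame functions are hypothetical):

```cpp
#include <atomic>
#include <barrier>

std::atomic<bool> running{true};
void simulateFrame();        // hypothetical
void cloneStateToGraphics(); // hypothetical
void renderFrame();          // hypothetical

std::barrier frameBarrier(2); // one logic thread + one render thread

void logicThreadMain() {
    while (running) {
        simulateFrame();
        frameBarrier.arrive_and_wait(); // both threads finished their frame
        cloneStateToGraphics();         // render thread is idle during the clone
        frameBarrier.arrive_and_wait(); // clone done, both continue
    }
}

void renderThreadMain() {
    while (running) {
        renderFrame();
        frameBarrier.arrive_and_wait();
        // the logic thread clones the state here
        frameBarrier.arrive_and_wait();
    }
}
```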
 
 
You asked how this is usually dealt with, and there is no simple answer: multithreading can be quite complex, and there are numerous ways to deal with it.
One game may use locks for adding & removing objects & updating transforms; another may not lock transforms at all. Another engine may use interlocked functions to add & remove objects without mutexes.
Another game may just use barriers. Another game may not use this update/render split model at all and rely on tasks instead*. There are many ways to address the problem, and it boils down to trading "visual correctness" against scalability.

 

*Edit: And it can be like Hodgman described (all cores contribute to task #1, then to task #2, etc.), or tasks may issue commands using older data and process independently (i.e. physics in one task, AI processed independently in another task using results from a previous frame, etc.)

The original poster replied:

Hodgman and Matias,

 

Thanks a lot for the very clear and exhaustive explanations. The links you mentioned were also helpful.

To give a little more background: I get a lot of inspiration from the gamedev forums; however, I work more in the field of simulation. There, it matters if something is rendered wrongly, or if a frame is skipped for no specific reason. Interpolating between two states could work, but it might also lead to unrealistic renderings and confusion, especially when stepping through generated videos later on (usually there is one frame per simulation step, which helps when debugging certain set-ups). Finally, the simulation (or game logic) itself uses OpenGL functionality to generate virtual images, operate on them (e.g. image processing), and create an output. The time at which this "internal" or FBO rendering occurs depends on the simulation loop and how it is programmed. So there I get another heavy restriction regarding multithreading: the rendering thread and the "game logic" thread would both generate OpenGL commands and would need to switch contexts every time. In that case, locking (or rather blocking) the other thread is the only option.

Given the many constraints and limitations, I concluded that an additional thread in charge of rendering would not give me much of a speed increase, but would drastically complicate the architecture.

My application basically uses one single thread (of course it also uses worker threads for specific tasks) that handles both the "game logic" and the visualization. But I wanted to evaluate the benefits of splitting the work into two different threads, and maybe even offer both alternatives.

 

Again thanks for the insightful replies!

Matias Goldberg replied:

I see you intend to issue OpenGL calls from multiple threads. As you said, this is a very bad idea, and I personally avoid it due to lots of issues in the past, unless you keep a 100% independent GL context for each thread.

Otherwise, switching contexts is so error-prone (and driver bugs may appear, to be honest) that any performance gain you hoped to get from multithreading will be nullified, or worse; just leave it single-threaded.

 

If your logic needs to issue rendering calls, you're not abstracting rendering from logic enough. If you're on a tight schedule, well, OK; but if you've got the time, rethink how the systems relate: whenever logic needs something from OpenGL, it requests it from the render thread and periodically checks whether the result has arrived.

 

From what you describe, your project appears to involve a lot of image processing on what has already been rendered (am I right?). In that case, since your game logic is rendering and cannot be decoupled, you should go single-threaded and, for multithreaded approaches, rely on a method more like what Hodgman described: map the buffer from GPU to CPU, then issue N threads to work on the received image, and wait for all of them to finish.
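
A sketch of that pattern, assuming glReadPixels for the readback (a PBO would allow an asynchronous transfer) and a plain std::thread fan-out for the CPU work; processRow is a hypothetical placeholder:

```cpp
#include <cstddef>
#include <thread>
#include <vector>
#include <GL/gl.h> // or your loader of choice (GLEW, glad, ...)

void processRow(unsigned char* rgba, int width); // hypothetical per-row filter

// Called on the GL thread after rendering to the FBO.
void processRenderedImage(int width, int height) {
    // 1) Read the image back to the CPU; only the GL thread touches OpenGL.
    std::vector<unsigned char> pixels(std::size_t(width) * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, pixels.data());

    // 2) Fan the CPU-side image processing out to all cores.
    unsigned n = std::thread::hardware_concurrency();
    if (n == 0) n = 1;
    std::vector<std::thread> workers;
    for (unsigned t = 0; t < n; ++t) {
        workers.emplace_back([&, t] {
            for (int y = int(t); y < height; y += int(n)) // interleaved rows
                processRow(pixels.data() + std::size_t(y) * width * 4, width);
        });
    }
    // 3) Wait for all of them to finish before using the result.
    for (auto& w : workers) w.join();
}
```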
