OpenGL Yet another Deferred Shading / Anti-aliasing discussion...


Hi. My apologies if this discussion has been played out already. This topic seems to come up a lot, but I did a quick search and did not quite find the information I was looking for. I'm interested in knowing what is considered the best practice these days, with respect to deferred rendering and anti-aliasing. These are the options, as I understand them:

 

Use some post-processed blur like FXAA.

I've tried enabling NVidia's built-in FXAA support, but the results were not nearly acceptable. Maybe there is another technique that can do a better job?

 

Use a multi-sampled MRT, and then handle your own MSAA resolve.

I've never done this before, and I'm anxious to try it for the sake of learning how, but it is difficult for me to understand how this is much better than super-sampling. If I understand MSAA correctly, the memory requirements are the same as for super-sampling. The only difference is that your shader is called fewer times. However, with deferred shading, this really only seems to help save a few material shader fragments, which don't seem very expensive in the first place. Unless I'm missing something, you still have to do your lighting calculations once per sample, even if all of the samples have the same exact data in them. Are the material shader savings (meager, I'm guessing) really worth all of the hassle?
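
(For concreteness, here's roughly what I understand "handling your own resolve" to mean in GLSL -- shadeSample() standing in for whatever per-sample lighting gets done:)

#version 150
// Sketch of a custom MSAA resolve over a multisampled G-buffer target,
// assuming 4x MSAA. shadeSample() is a placeholder for real lighting.
uniform sampler2DMS gBuffer;

out vec4 fragColor;

vec3 shadeSample(vec4 gbufferData)
{
    // ... per-sample lighting calculations would go here ...
    return gbufferData.rgb; // placeholder
}

void main()
{
    ivec2 coord = ivec2(gl_FragCoord.xy);
    vec3 total = vec3(0.0);
    for (int i = 0; i < 4; ++i)
    {
        // texelFetch with a sample index reads one sample of the target;
        // lighting runs once per sample, which is exactly the cost in question.
        total += shadeSample(texelFetch(gBuffer, coord, i));
    }
    fragColor = vec4(total * 0.25, 1.0);
}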

 

Use Deferred Lighting instead of Deferred Shading.

You'll still have aliased lighting, though, and it comes at the expense of an extra pass (albeit depth-only, if I understand the technique correctly). Is anybody taking this option these days?

 

Use TXAA

NVidia is touting some TXAA technique on their website, although details seem slim. It seems to combine 2X MSAA with some sort of post-process technique. Judging from their videos, the results look quite acceptable, unlike FXAA. I'm guessing that the 2X MSAA would be handled using your own custom MSAA resolve, as described above, but I don't know what processing happens after that.

 

These all seem like valid options to try, although none of them seem to be from the proverbial Book. It seems to me, though, that forward rendering is a thing of the past, and I would love to be able to fill my scene with lights. I could try implementing all of these techniques as an experiment, but since they each come with a major time investment and learning curve, I was hoping that someone could help point a lost soul in the right direction.

 

Bonus questions: Is there a generally agreed-upon way to lay out your G-Buffer? I'd like to use this in conjunction with HDR, and so memory/bandwidth could start to become a problem, I would imagine. Is it still best practice to try to reconstruct position from depth? Are half-float textures typically used? Are any of the material properties packed or compressed in a particular way? Are normals stored as x/y coordinates, with the z-coord calculated in the shader?
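
(For context, here's my rough understanding of those last two tricks in GLSL; invProjection and the normal x/y storage convention are just my assumptions:)

#version 150
// Helper functions for a lighting pass (no main() shown): view-space
// position rebuilt from the depth buffer, and a view-space normal stored
// as x/y with z recomputed. invProjection is assumed to be the inverse
// of the camera's projection matrix.
uniform sampler2D depthTex;
uniform mat4 invProjection;

vec3 positionFromDepth(vec2 uv)
{
    float depth = texture(depthTex, uv).r;
    // Back-project from NDC into view space, then undo the perspective divide.
    vec4 ndc = vec4(uv * 2.0 - 1.0, depth * 2.0 - 1.0, 1.0);
    vec4 viewPos = invProjection * ndc;
    return viewPos.xyz / viewPos.w;
}

vec3 normalFromXY(vec2 nxy)
{
    // Assumes a view-space normal facing the camera (z >= 0); this breaks
    // for back-facing normals, which is why many engines use a sphere-map
    // or octahedral encoding instead.
    return vec3(nxy, sqrt(max(0.0, 1.0 - dot(nxy, nxy))));
}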

 

I'm using OpenGL, if it matters.


I use FXAA, but I don't use what's built into the graphics driver. What you can do for better results is download the FXAA 3.9 shader (it used to be on Timothy Lottes' blog, but I can't find it anymore); it has some conditional compilation set up in it which you can use to tweak the settings. This method is far better than using the graphics driver, because you can apply it at a more optimal spot in your rendering pipeline (preventing some unwanted blur). Amazingly, the same shader works for both HLSL and GLSL, and it will work on Intel and AMD GPUs as well as consoles. It is important to note that you must have some method to generate luminance before running the FXAA shader (this is pretty trivial).
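
For reference, the luma step is typically just the tail end of your tone-mapping pass writing a luminance value into the alpha channel, since FXAA 3.x reads its luma from alpha by default. A minimal GLSL sketch (Rec. 709 weights, assuming already tone-mapped input):

#version 150
// Minimal luma pre-pass sketch: write luminance into alpha so the
// FXAA shader can read it from there.
uniform sampler2D sceneColor;

in vec2 uv;
out vec4 fragColor;

void main()
{
    vec3 color = texture(sceneColor, uv).rgb;
    // Rec. 709 luma weights; FXAA only needs a monotonic brightness estimate.
    float luma = dot(color, vec3(0.2126, 0.7152, 0.0722));
    fragColor = vec4(color, luma);
}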

Edited by ic0de


Thanks so much for your replies.

 

If I download the FXAA 3.9 shader and integrate it into my pipeline, will it help beyond allowing me to avoid blurring HUD/text elements? The reason I ask is that I have a lot of objects in my scene with long, straight edges -- particularly buildings and chain-link fences, but some vehicles as well -- with which FXAA seems to work particularly poorly. At a distance, these objects create wild, shimmering jaggies that are very distracting. Will downloading the shader actually improve this? Here are a couple of examples:

 

[Screenshots W81rCSo.png and zCL7hBQ.png: distant fence and building edges breaking up into dashed, shimmering lines.]

 

As you move the viewpoint around, those broken lines crawl and shimmer.

 

I've spent the last couple hours reading through the links that you provided, MJP, and it has given me a lot of food for thought. 

 

I'm particularly intrigued by the Forward+ idea, because the idea of using an MRT with HDR and MSAA is starting to sound prohibitive. Let's say I use the G-Buffer layout that you mentioned in your March 2012 blog post on Light-Indexed Deferred rendering, except the albedo buffers need to be bumped up to 64bpp to accommodate HDR rendering (right?). Then, multiply the whole thing by 4 for 4x MSAA, and I have a seriously fat buffer. And what do I do about reflection textures? If I want to do planar reflections or refractions, for example. That seems like it'd be another big fat g-buffer. Am I thinking about this correctly? Plus, you have the lack of flexibility with material parameters that comes with deferred rendering. 

 

Edit: On the other hand, isn't it somewhat expensive to loop through a runtime-determined number of lights inside a fragment shader? If it isn't, then why did the old forward renderers bother compiling different shaders for different numbers of lights? Why did they not, instead, just allow 8 lights per shader (say), and use a uniform (numLights) to determine how many to actually loop through? Sure, you only get per-object light lists that way, which is imprecise, but is it really slower than having a separate compute shader step that determines the light list on a per-pixel basis?
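
Something like this is what I have in mind (the Light struct and shading model are just placeholders):

#version 150
// Sketch of an old-style forward loop: a fixed-size uniform light array
// with a runtime count. Intended as a helper for a fragment shader.
const int MAX_LIGHTS = 8;

struct Light
{
    vec3 position;
    vec3 color;
};

uniform Light lights[MAX_LIGHTS];
uniform int numLights;

vec3 shadeAllLights(vec3 worldPos, vec3 normal, vec3 albedo)
{
    vec3 result = vec3(0.0);
    // Runtime-bounded loop: cheap on modern GPUs, costly on older ones,
    // which is why renderers used to compile a variant per light count.
    for (int i = 0; i < numLights && i < MAX_LIGHTS; ++i)
    {
        vec3 toLight = normalize(lights[i].position - worldPos);
        result += albedo * lights[i].color * max(dot(normal, toLight), 0.0);
    }
    return result;
}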

 

But if I do end up going the deferred-with-MSAA route (which I'm kind of leaning toward), then using edge detection to find the pixels that actually need per-sample lighting, and doing per-pixel lighting everywhere else, sounds like a huge time-saver, even if I have to eat a huge amount of memory.
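
Roughly the edge test I'm imagining (the thresholds and G-buffer layout are placeholder assumptions):

#version 150
// Sketch of the per-pixel vs. per-sample decision in a custom 4x MSAA
// resolve. gNormalDepth is an assumed multisampled target storing
// normal.xyz and linear depth in w.
uniform sampler2DMS gNormalDepth;

bool isEdgePixel(ivec2 coord)
{
    vec4 first = texelFetch(gNormalDepth, coord, 0);
    for (int i = 1; i < 4; ++i)
    {
        vec4 s = texelFetch(gNormalDepth, coord, i);
        // Thresholds are arbitrary here; tune them for your scene.
        if (abs(s.w - first.w) > 0.1 || dot(s.xyz, first.xyz) < 0.99)
            return true;
    }
    return false;
}

Pixels that fail the test get lit once; the rest take the per-sample path (often via a stencil mask, so the two passes stay coherent).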

 

And in any case, the information you provided helped me discover all sorts of inefficiencies with the way we're doing deferred shading, so it looks like I can probably gain some speed just with a few optimizations.

Edited by CDProp


If I download the FXAA 3.9 shader and integrate it into my pipeline, will it help beyond allowing me to avoid blurring HUD/text elements?

There are a lot of options in the FXAA shader that you can tweak yourself.
Also, if you don't actually integrate it into your pipeline, then it's not actually in your game. It's pretty unkind to your users to say "to make my game look best, go into your nVidia driver panel (sorry ATI/Intel users) and enable these hacks".

I have a lot of objects in my scene with long, straight edges -- particularly buildings and chain-link fences, but some vehicles as well -- with which FXAA seems to work particularly poorly. At a distance, these objects create wild, shimmering jaggies that are very distracting. Will downloading the shader actually improve this? Here are a couple of examples:

In those example pictures, there's a lot of "information" missing -- e.g. lines that have pixels missing from them, so they've become dashed lines rather than solid lines.
Post-processing AA techniques (like FXAA) will not be able to repair these lines.
MSAA / super-sampling techniques will mitigate this issue, lowering the width at which a solid line starts breaking up into a dashed one.
Another solution is to fix the data going into the renderer -- if you've got tiny bits of geometry that are going to end up thinner than a pixel, they should be replaced with some kind of low-LOD version of themselves. E.g. if those fences were drawn as a textured plane with alpha blending, you'd get soft anti-aliased lines by default, even with no anti-aliasing technique at all.

Edit: On the other hand, isn't it somewhat expensive to loop through a runtime-determined number of lights inside a fragment shader?

On D3D9-era GPUs, yes, the branch instructions are quite costly. The shader compiler will also turn the loop into something like:
for( int i=0; i!=255; ++i ) { if( i >= g_lightCount ) break; ..... }
On more modern cards, branching is less costly. The biggest worry is when nearby pixels take different branches (e.g. one pixel wants to loop through 8 lights, but its neighbour wants to loop through 32 lights) -- but most of these Forward+-ish techniques mitigate this issue by clustering/tiling pixels together, so that neighbouring pixels are likely to use the same loop counters and data.

Edited by Hodgman



Also, if you don't actually integrate it into your pipeline, then it's not actually in your game. It's pretty unkind to your users to say "to make my game look best, go into your nVidia driver panel (sorry ATI/intel users) and enable these hacks".

 

Hah, good point. This isn't a product that I plan on releasing into the wild, so that's not even something that I considered.

 


On more modern cards, branching is less costly. The biggest worry is when nearby pixels take different branches (e.g. one pixel wants to loop through 8 lights, but its neighbour wants to loop through 32 lights) -- but most of these Forward+-ish techniques mitigate this issue by clustering/tiling pixels together, so that neighbouring pixels are likely to use the same loop counters and data.

 

Could I trouble you for more information about this? Why is it that nearby pixels want to loop over the same number of lights, and what is meant by "nearby"? Would an 8x8 tile suffice? (That seems to be the common size, from what I've been reading.)

Secondly, a different but related concept is that pixel shaders are generally executed on a "quad" of 2x2 pixels; this is so that mipmapping/gradient instructions (e.g. ddx/ddy) can be implemented. If a triangle only covers a single pixel, the shader will likely still be computed for a whole 2x2 quad, with 3 of the pixels being thrown away.

 

Not relevant to OP, but that just answered a question I've had for a while. I assumed that was how derivative functions worked, but for some reason I'd always wondered how mipmapping worked in shaders. It never occurred to me that that would be all it took.
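
To make that concrete for anyone else who was wondering: GLSL exposes those per-quad differences directly, and the texture unit uses the same gradients to pick a mip level. A tiny sketch:

#version 150
// dFdx/dFdy return the difference of a value between neighbouring pixels
// in the same 2x2 quad -- the same gradients the hardware feeds into
// mip selection.
uniform sampler2D tex;

in vec2 uv;
out vec4 fragColor;

void main()
{
    vec2 ddxUV = dFdx(uv); // change in uv across one pixel horizontally
    vec2 ddyUV = dFdy(uv); // change in uv across one pixel vertically
    // Equivalent to a plain texture(tex, uv) lookup, just with the
    // implicit gradients spelled out.
    fragColor = textureGrad(tex, uv, ddxUV, ddyUV);
}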


There is a document that deals with that issue of jagged lines for fences and power cables and such.

 

Anti-aliasing alone can't help you, since the lines off in the distance end up much less than one pixel wide. Look for the section titled "Phone-wire Anti-Aliasing".

 

http://www.humus.name/Articles/Persson_GraphicsGemsForGames.pdf
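
The gist of that section, as a rough GLSL sketch (the attribute names and the pixel-size computation are my own illustration, not taken verbatim from the slides):

#version 150
// "Phone-wire AA" sketch: clamp the wire's radius so its screen footprint
// never drops below about one pixel, and fade alpha by the expansion
// ratio so thin wires dim out instead of breaking into dashes.
uniform mat4 modelViewProj;
uniform float pixelScale;  // assumed world-space size of one pixel at w == 1,
                           // e.g. 2 * tan(fovY / 2) / viewportHeight

in vec3 wireCenter;  // position on the wire's center line
in vec3 offsetDir;   // unit vector from the center line toward this vertex
in float wireRadius; // authored radius of the wire

out float wireAlpha;

void main()
{
    vec4 clipCenter = modelViewProj * vec4(wireCenter, 1.0);
    // Smallest radius that still covers roughly one pixel at this depth.
    float minRadius = pixelScale * clipCenter.w;
    float radius = max(wireRadius, minRadius);
    // Fade by the expansion ratio so over-thin wires fade rather than alias.
    wireAlpha = wireRadius / radius;
    gl_Position = modelViewProj * vec4(wireCenter + offsetDir * radius, 1.0);
}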


The power cables in our scene are basically just rectangular strips, onto which a texture of a catenary-shaped wire is mapped. This was done by a modeler long before my time, so I don't know why he decided to do it that way, but one fortunate side-effect is that the texture filtering takes care of everything for me. The fence poles and light posts are a different story, but because they never end up being much smaller than a pixel, the 4x MSAA seems to do an okay job on them. Thanks for the PDF, though, it has a lot of useful information.

 

Hodgman, thanks again for your help. I'm much obliged to everyone here. Although I must say, you guys are only encouraging me to ask more questions. =)


If I download the FXAA 3.9 shader and integrate it into my pipeline, will it help beyond allowing me to avoid blurring HUD/text elements? The reason I ask is that I have a lot of objects in my scene with long, straight edges -- particularly buildings and chain-link fences, but some vehicles as well -- with which FXAA seems to work particularly poorly. At a distance, these objects create wild, shimmering jaggies that are very distracting. Will downloading the shader actually improve this? Here are a couple of examples:

 

This is one of the cases that post-processing AA solutions like FXAA have a lot of difficulty with. You really need to rasterize at a higher resolution to make high-frequency geometry look better, and that's exactly what MSAA does. Something like FXAA is fundamentally limited in terms of the information it has available to it, which makes it unable to fix these sorts of situations. Some sort of temporal solution that looks at data from the previous frame can help, but is still usually less effective than MSAA.

 

I'm particularly intrigued by the Forward+ idea, because the idea of using an MRT with HDR and MSAA is starting to sound prohibitive. Let's say I use the G-Buffer layout that you mentioned in your March 2012 blog post on Light-Indexed Deferred rendering, except the albedo buffers need to be bumped up to 64bpp to accommodate HDR rendering (right?). Then, multiply the whole thing by 4 for 4x MSAA, and I have a seriously fat buffer. And what do I do about reflection textures? If I want to do planar reflections or refractions, for example. That seems like it'd be another big fat g-buffer. Am I thinking about this correctly? Plus, you have the lack of flexibility with material parameters that comes with deferred rendering. 

 

Albedo values should always be [0, 1], since they're essentially the ratio of light reflecting off a surface. With HDR, the input lighting values are often > 1, and the same goes for the output lighting value, but albedo is always [0, 1]. Even so, it's true that a G-Buffer with 4xMSAA enabled can use up quite a bit of memory, which is definitely a disadvantage. (As a rough worked example: at 1920x1080 with 4xMSAA, four 32-bit render targets plus a 32-bit depth buffer already come to roughly 160 MB.) Material parameters can also potentially be an issue: if you require a lot of input parameters to your lighting, then you need a lot of G-Buffer textures, which increases memory usage and bandwidth. With forward rendering you don't need to constantly think about which parameters can be packed into your G-Buffer, which can make it easier to experiment with new lighting models.

 

Edit: On the other hand, isn't it somewhat expensive to loop through a runtime-determined number of lights inside a fragment shader? If it isn't, then why did the old forward renderers bother compiling different shaders for different numbers of lights? Why did they not, instead, just allow 8 lights per shader (say), and use a uniform (numLights) to determine how many to actually loop through? Sure, you only get per-object light lists that way, which is imprecise, but is it really slower than having a separate compute shader step that determines the light list on a per-pixel basis?

 

Sure, it can be expensive to loop through lights in a shader, but this is essentially what you do in any deferred renderer if you have multiple lights overlapping any given pixel. However, with traditional deferred rendering you end up sampling your G-Buffer and blending the fragment shader output for each light, which can consume quite a bit of bandwidth. With forward rendering or tiled deferred rendering you only need to sample your material parameters once, and the summing of light contributions happens in registers, which avoids excessive bandwidth usage. The main problem with older forward renderers is that older GPUs and shading languages lacked the flexibility needed to build per-tile lists and dynamically loop over them in a fragment shader: shaders did not have support for reading from generic buffers, and fragment shaders couldn't dynamically read indexed data from shader constants. You also didn't have compute shaders with shared memory, which is currently the best way to build per-tile lists of lights. But it's true that determining a set of lights per-object is fundamentally the same thing; the main difference is the level of granularity. Also, you typically do per-object association on the CPU, while with tiled forward or deferred you do the association on the GPU, using the depth buffer to determine whether a light affects a given tile.
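
As a stripped-down illustration of that shared-memory step, here is a GLSL compute sketch; the sphere/tile test and the buffer layouts are simplified placeholders:

#version 430
// One 16x16 workgroup per screen tile cooperatively builds a per-tile
// light list in shared memory, then thread 0 writes it out.
layout(local_size_x = 16, local_size_y = 16) in;

struct Light { vec4 positionRadius; }; // xyz = view-space position, w = radius

layout(std430, binding = 0) readonly buffer Lights    { Light lights[]; };
layout(std430, binding = 1) writeonly buffer TileLists { uint tileLists[]; };

uniform int lightCount;

const int MAX_LIGHTS_PER_TILE = 64;
shared uint tileLightCount;
shared uint tileLightIndices[MAX_LIGHTS_PER_TILE];

bool lightTouchesTile(Light l)
{
    // Placeholder: a real version tests the light's sphere against the
    // tile's sub-frustum, using min/max depth sampled from the depth buffer.
    return true;
}

void main()
{
    if (gl_LocalInvocationIndex == 0u)
        tileLightCount = 0u;
    barrier();

    // All 256 threads of the workgroup cull a slice of the light list.
    for (int i = int(gl_LocalInvocationIndex); i < lightCount; i += 256)
    {
        if (lightTouchesTile(lights[i]))
        {
            uint slot = atomicAdd(tileLightCount, 1u);
            if (slot < uint(MAX_LIGHTS_PER_TILE))
                tileLightIndices[slot] = uint(i);
        }
    }
    barrier();

    // Compacted list: [count, index0, index1, ...] per tile.
    if (gl_LocalInvocationIndex == 0u)
    {
        uint tile = gl_WorkGroupID.y * gl_NumWorkGroups.x + gl_WorkGroupID.x;
        uint base = tile * uint(MAX_LIGHTS_PER_TILE + 1);
        uint count = min(tileLightCount, uint(MAX_LIGHTS_PER_TILE));
        tileLists[base] = count;
        for (uint j = 0u; j < count; ++j)
            tileLists[base + 1u + j] = tileLightIndices[j];
    }
}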

Edited by MJP
