OpenGL question about deferred rendering

Hi,

I'm trying to implement deferred rendering in a modern way using OGL 4.x core. Toward this goal I've read several articles, PDFs, tutorials, etc. on the topic; however, I still have some questions.
First of all, when I create a G-buffer, what do I really do?
I mean, I surely want to create an FBO and an RBO, attach the RBO to the FBO, and then render to a texture which I also attach to the FBO.
But then how do I attach the other stuff to the FBO?
From what I've read, I will definitely need at least an albedo, a depth, and a normal buffer. I suppose these will all be textures. However, there are many formats to choose from, and which is used depends on the article (or rather, on the game engine).
Which are the most common, or rather the most efficient, and nicest ones?
(I should add that I definitely want to go with a full HDR pipeline.)
Another question is: how do I fill these buffers?
Especially when there are objects in my scene which don't have textures or normals. And there is the cubemap, which should be left completely untouched by lighting, since I only need its colors.
Finally, I ran into an article from Intel in which the whole shading part was done in a compute shader. So how can I connect OpenCL and OpenGL so that I can get the same results with open standards?

Best regards,
Yours3!f

[quote name='Yours3!f' timestamp='1313242976' post='4848629']
Another question is: how do I fill these buffers?
[/quote]

Well, I'm working on the same thing, so I can only attempt to answer one of your questions: "How do I fill these buffers?" I believe that to fill them, you bind the FBO before drawing, and your shader then outputs its final data to those buffers. I'm not 100% sure that's right, but I think it is.

something like this?

[code]
glActiveTexture(GL_TEXTURE0);
texture0.bind();
glActiveTexture(GL_TEXTURE0 + 1);
albedo_texture.bind();
glActiveTexture(GL_TEXTURE0 + 2);
normal_texture.bind();
glActiveTexture(GL_TEXTURE0 + 3);
depth_texture.bind();
//note: sampler uniforms take the texture unit index, not the GL_TEXTUREn enum
glUniform1i(texture0_location, 0);
glUniform1i(albedo_location, 1);
glUniform1i(normal_location, 2);
glUniform1i(depth_location, 3);
//render stuff...

(pixel shader)
#version 410
uniform sampler2D texture0;
uniform sampler2D albedo;
uniform sampler2D normal;
uniform sampler2D depth;

in vec3 normals;
in vec4 position;
smooth in vec2 texture_coordinates;

out vec4 fragment_color;

void main()
{
    depth = position.z;
    normal = vec4(normals, 1.0);
    fragment_color = albedo = texture(texture0, texture_coordinates);
}
[/code]

About the texture formats:

I've looked at the possible formats found here:
[url="http://www.opengl.org/wiki/Image_Format"]http://www.opengl.or...ki/Image_Format[/url]
and I've read about them a lot.

I've found an NVIDIA HDR sample that includes an FPS counter and a timer, which features some of these formats:

From best performance to worst:
[code]
HDR format      RT format       FPS   time (ms)
RGB9_E5         R11F_G11F_B10F  750   1.31
R11F_G11F_B10F  R11F_G11F_B10F  740   1.32
RGBA16F         R11F_G11F_B10F  725   1.40
RGBA32F         R11F_G11F_B10F  610   1.63
RGB9_E5         RGBA16F         523   1.91
R11F_G11F_B10F  RGBA16F         523   1.91
RGBA16F         RGBA16F         512   1.95
RGBA32F         RGBA16F         470   2.12
RGB9_E5         RGBA32F         292   3.43
R11F_G11F_B10F  RGBA32F         291   3.43
RGBA16F         RGBA32F         288   3.46
RGBA32F         RGBA32F         277   3.60
[/code]
These were achieved with a Core i3 540 and an HD 5770 with 1 GB GDDR5, at 1440x900.

I didn't notice any visual difference between the formats, except possibly with the RGBA32F HDR format; however, that might be because of some bug.
I should add that when using high-performance formats like RGB9_E5, some visual glitches might appear when combined with other effects.
Another thing to notice: if we want the game to run at 30 FPS, then in the worst case HDR rendering took about 10% of the frame time, and in the best case only about 4.4%. Using an easily implementable combination like RGBA16F + RGBA16F would be the best choice in my opinion: it takes about 6.5% of the frame time, which is nice, and we still have enough precision.

On the other hand, I've read the deferred rendering tutorial on Catalin Zima's XNA blog, in which several combinations are mentioned for the G-buffer. The best seemed to be RGBA16F for albedo, R16G16F for normal data (GL_RG16F in OpenGL; or maybe RGB10_A2), and R32F for position data. This would use about 33 MB of memory and would leave one 16-bit channel free for other purposes, such as storing a material ID. I think another G-buffer component would be needed for other stuff like specular intensity, specular exponent, material ID, etc.; for that, an RGBA or RGB format could be used.

Did you already figure out how it works? Add multiple out variables to the shader, and call glBindFragDataLocation before linking the shader program. Then, when rendering, call glDrawBuffers to tell OpenGL which buffers you are writing to.

[code]
out vec4 depthFrag;
out vec4 normalFrag;
// ... whatever you want

void main() {
    depthFrag = vec4(...);
    normalFrag = vec4(...);
}
[/code]

[code]
// Create shader program and attach shaders here

// Bind frag data locations
glBindFragDataLocation(shaderProgram, 0, "depthFrag");
glBindFragDataLocation(shaderProgram, 1, "normalFrag");

glLinkProgram(shaderProgram);

// And when rendering, tell what buffers we will be using

GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, buffers);
[/code]

Hope this helps.
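As a side note, since this thread targets GL 4.x core, the fragment outputs can also be given explicit locations with layout qualifiers directly in the shader (available since GLSL 3.30), which removes the need for glBindFragDataLocation on the C++ side. A minimal fragment-shader sketch:

```glsl
#version 410

//explicit locations replace the glBindFragDataLocation calls;
//location N maps to GL_COLOR_ATTACHMENTN of the bound FBO
layout(location = 0) out vec4 depthFrag;
layout(location = 1) out vec4 normalFrag;

void main()
{
    depthFrag = vec4(0.0);
    normalFrag = vec4(0.0);
}
```

Either approach works; the layout qualifier just keeps the attachment mapping visible in the shader source itself.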

No, actually I didn't understand it until this point :) I had only thought about it on a theoretical basis, but yeah, that's what I was looking for. Thank you.

Ok, I tried to implement it, but in the final lighting stage I seem to be getting no normals or depth, and I also think the depth isn't calculated properly. I do get the albedo, which is great, but without the other two components I can't do lighting.

So here's my initialization:
[code]
void deferred::init()
{
//load the lighting shader
objs::get()->shader_loader.load_shader_control_file ( "../shaders/deferred/fs_quad.sc", &fs_quad );

//load the full-screen quad
objs::get()->obj.load_obj_file ( "../resources/fs_quad.obj", &quad, &fs_quad );

//set texture parameters

float w = objs::get()->conf.SCREEN_WIDTH;
float h = objs::get()->conf.SCREEN_HEIGHT;

fbo.create(); //generate a fbo
fbo.bind(); //bind it

GLenum modes[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
//set which draw buffers will it use
glDrawBuffers ( 3, modes );

//create render buffers
rb_albedo.create();
rb_normal.create();
rb_position.create();
rb_depth.create(); //this is not a g-buffer component it is just a depth attachment

//bind render buffers, set the storage format, and attach them to the fbo
rb_albedo.bind();
rb_albedo.set_storage_format ( GL_RGBA16F, w, h );
rb_albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
rb_normal.bind();
rb_normal.set_storage_format ( GL_RG16F, w, h );
rb_normal.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
rb_position.bind();
rb_position.set_storage_format ( GL_R32F, w, h );
rb_position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );
rb_depth.bind();
rb_depth.set_storage_format ( GL_DEPTH_COMPONENT32, w, h );
rb_depth.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, &fbo );

//create textures
albedo.create();
normals.create();
position.create();

//bind the albedo texture to texture unit 4
glActiveTexture ( GL_TEXTURE4 );
albedo.bind(); //bind it
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
//use rgba16f format
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
albedo.set_dimensions ( w, h ); //store the size of the texture for future use

glActiveTexture ( GL_TEXTURE5 );
normals.bind();
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RG16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
normals.set_dimensions ( w, h );

glActiveTexture ( GL_TEXTURE6 );
position.bind();
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
position.set_dimensions ( w, h );

//reset the active texture
glActiveTexture ( GL_TEXTURE0 );

//attach textures to fbo
albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
normals.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );

fbo.unbind();
}
[/code]

here's the rendering code
[code]
void deferred::render()
{
//set orthographic view w, h, near, far
event::get()->get_resize()->set_orthographic ( 1.0f, 1.0f, -1.0f, 1.0f );
//disable depth testing
glDisable(GL_DEPTH_TEST);
//bind the lighting shader
fs_quad.bind();
//pass the matrices and scene information
fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix(), "m4_p" );
fs_quad.pass_m4x4 ( objs::get()->ppl.get_model_view_matrix(), "m4_mv" );
//reset perspective mode
event::get()->get_resize()->set_perspective ();
//inverse projection matrix for depth to position reconstruction
fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix().invert(), "inv_proj" );
//pass camera position
fs_quad.pass_vec4 ( mymath::vec4f ( objs::get()->cam.pos[0], objs::get()->cam.pos[1], objs::get()->cam.pos[2], 1.0f ), "cam_pos" );
//pass the textures
fs_quad.pass_int ( 4, "texture4" );
fs_quad.pass_int ( 5, "texture5" );
fs_quad.pass_int ( 6, "texture6" );
//draw full screen quad
quad.render();
//unbind the lighting shader
fs_quad.unbind();
//enable depth testing
glEnable(GL_DEPTH_TEST);
}
[/code]

the shader that fills the G-buffer
[code]
//vertex shader
#version 410

//projection, modelview matrices
uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;
in vec3 v3_normal;

smooth out vec2 v2_texture_coords;
out vec2 normal;
out float depth;

vec2 encode_normal_x_y_reconstruct_z(vec3 in_normal)
{
    return vec2(in_normal.xy * 0.5 + 0.5);
}

void main()
{
    normal = encode_normal_x_y_reconstruct_z(m3_n * v3_normal);
    v2_texture_coords = v2_texture;

    gl_Position = m4_p * m4_mv * v4_vertex;
    depth = gl_Position.z / gl_Position.w;
}

//////////////////////////////////////////-------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture0;

smooth in vec2 v2_texture_coords;
in vec2 normal;
in float depth;

out vec4 v4_color; //color attachment0
out vec4 v4_normal; //c. a. 1
out vec4 v4_depth; //c. a. 2

void main()
{
    v4_normal.xy = normal;
    v4_depth.x = depth;
    v4_color = texture(texture0, v2_texture_coords);
}
[/code]

Using this shader, the normals seem to be good when I draw them: v4_color = vec4(normal, 0.0, 1.0);
However, when I draw the depth, the mesh gets all black instead of that grayscale depth look: v4_color = vec4(vec3(depth), 1.0);
I also call glBindFragDataLocation before linking as you suggested, although I don't know whether it was actually bound.

the lighting shader:
[code]

//vertex shader
#version 410

//projection, modelview matrices
uniform mat4 m4_p, m4_mv;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;

smooth out vec2 v2_texture_coords;

void main()
{
v2_texture_coords = v2_texture;
gl_Position = m4_p * m4_mv * v4_vertex;
}


///////////////////////////////////---------------------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture4; //albedo RGBA16F
uniform sampler2D texture5; //normal RG16F
uniform sampler2D texture6; //depth R32F

smooth in vec2 v2_texture_coords; //texture coordinates for the G-buffer
uniform mat4 inv_proj; //inverse projection matrix
uniform vec4 cam_pos; //camera position

out vec4 v4_color; //the outgoing color

vec3 decode_normal_x_y_reconstruct_z(vec2 in_normal)
{
    vec3 out_normal;
    out_normal.xy = in_normal * 2.0 - 1.0; //convert from range [0.0, 1.0] to [-1.0, 1.0]
    out_normal.z = sqrt(1.0 - dot(out_normal.xy, out_normal.xy)); //since the normal is unit length, z can be reconstructed from x and y
    return out_normal;
}

void main()
{
    vec4 albedo = texture(texture4, v2_texture_coords);
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
    float depth = texture(texture6, v2_texture_coords).x;

    //get texture coords from [0, 1] to [-1, 1], use them as x and y, use depth as z, and multiply by the inverse projection matrix
    vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
    position /= position.w;

    //Blinn lighting with a light placed at [0, 5, -2]
    vec3 light = vec3(0.0, 5.0, -2.0);
    vec3 light_dir = normalize(light - position.xyz);
    vec3 eye_dir = normalize(cam_pos.xyz - position.xyz);
    vec3 half_vec = normalize(light_dir + eye_dir);
    v4_color = max(dot(normal, light_dir), 0.0) * albedo + pow(max(dot(normal, half_vec), 0.0), 9.0) * 10.0;
}
[/code]

When I try to draw the normals with v4_color = vec4(normal, 1.0); I get a black screen, and the same happens with depth. Please help, I'm struggling to get this working.

Best regards,
Yours3!f

I think you have to call glDrawBuffers every time you want to render something on the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it what to use when rendering.

[quote name='Sponji' timestamp='1313523107' post='4849998']
I think you have to call glDrawBuffers every time you want to render something on the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it what to use when rendering.
[/quote]

Thanks, but you don't have to do this every frame. I know that in the Codinglabs tutorial it is done every frame, but you simply don't have to, because the draw-buffer setting is part of the framebuffer object's state; setting it once when you set up the FBO is enough for it to actually receive the color output.

I updated the code (see the post above), and now I do get the color information, depth and normals as well.

Now I suppose the lighting equation or the decoding function is wrong, since I have all the data.

Ok, so I tried to figure out why my lighting doesn't work, and I tried some things.

First of all, if in the lighting shader I fetch the normal and depth and convert them, the conversion works:
[code]
void main()
{
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
    float depth = texture(texture6, v2_texture_coords).x;

    //get texture coords from [0, 1] to [-1, 1], use them as x and y, use depth as z, and multiply by the inverse projection matrix
    vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
    position /= position.w;

    v4_color = vec4(normal, 1.0); //gives me a normal looking scene
    //v4_color = vec4(vec3(depth), 1.0); //gives me a depth looking scene
    //v4_color = position; //gives me a position scene (well, it is red, green and blue :) )
}
[/code]

However when I do this I get a black screen:
[code]
void main()
{
    v4_color = texture(texture4, v2_texture_coords);
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);

    v4_color = vec4(normal, 1.0); //this should give me a normal looking scene
}
[/code]

I have no idea why that happens, and I guess it is also why my lighting doesn't work...

OMG, finally, after 3 days I seem to have solved it: I just didn't need to set the glTexParameteri(...) stuff before rendering (see the updated code above), but I did need it at texture creation. My lighting still sucks though :) But I guess I can solve it now.

EDIT: don't forget that you have to pass the inverse of the projection matrix that you use when rendering the scene (the perspective one), not the orthographic one that you use when doing the lighting pass.

Hey Yours3!f,

Could you post a picture of what your 'depth' view looks like? Mine is a white screen :( and I want the grayscale kind of scene. I noticed that you display the depth like this:

[code]v4_color = vec4(vec3(depth), 1.0);[/code]

I do this:

[code]depth.rgb = saturate(depth);
depth.a = 1.0f;[/code]

I'm not at the computer at the moment, so I can't try it your way, but do you get a 'grayscale' view, indicating 'white = far away' and 'black = near'? That is what I'm after, instead of plain white :(

@Daniel_RT

see this topic of mine:
http://www.gamedev.net/topic/609271-depth-reconstruction-not-working/

(it includes the source code)

To answer your question: depth can look like a grayscale image with black = near and white = far, or it can look all black except for getting brighter toward the far end of the frustum.

The first one (the grayscale image) indicates that you are storing non-linearized depth, so the values range from 0.0 to the far clip distance, i.e. from black to white.

The second one indicates that you are storing values that range from 0.0 to 1.0, which means it will be black near you and only get brighter very far away from you (in my case, 10000 units). You achieve this by dividing the depth value by the far clip plane's distance. This way of storing depth is called normalized depth.

I recommend this tutorial on the theoretical basis of this topic:
http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

For the GLSL implementation, take a look at my topic (the second-to-last post). I have to add that the source there is not the best, because it doesn't do depth testing, which means your scene will look awkward; but after the initial implementation of depth reconstruction, you can easily implement depth testing by using method 3 from MJP's tutorial (which also adds some optimization).

Oh, and don't resurrect old topics :)
