Yours3!f

OpenGL question about deferred rendering


Hi,

I'm trying to implement deferred rendering in a modern way using OpenGL 4.x core. To that end I've read several articles, PDFs, tutorials, etc. on this topic, but I still have some questions.
First of all, when I create a G-buffer, what do I actually do?
I mean, I surely want to create an FBO, attach an RBO to it for depth, and attach a texture to it so that I can render to that texture.
But then how do I attach the other stuff to the FBO?
From what I've read, I'll definitely need at least an albedo, a depth, and a normal buffer. I suppose these will all be textures. However, there are many formats to choose from, and which one is used depends on the article (or rather, the game engine).
Which are the most common, or rather the most efficient and nicest, ones?
(I should add that I definitely want to go with a full HDR pipeline.)
Another question is: how do I fill these buffers?
Especially when there are objects in my scene that don't have textures or normals. And there is the cube map, which should be left completely untouched, since I only need its colors.
Finally, I ran into an article from Intel in which the whole shading part was done in a compute shader. So how can I connect OpenCL and OpenGL to get the same results with open standards?

Best regards,
Yours3!f

[quote name='Yours3!f' timestamp='1313242976' post='4848629']
Another question is: how do I fill these buffers?
[/quote]

Well, I'm trying to work on the same thing, so I can only attempt to answer one of your questions: "How do I fill these buffers?" I believe you bind the framebuffer before drawing, and your shader then outputs its final data into the attached buffers. I'm not 100% sure that's right, but I think it is.

something like this?

[code]
glActiveTexture(GL_TEXTURE0);
texture0.bind();
glActiveTexture(GL_TEXTURE0 + 1);
albedo_texture.bind();
glActiveTexture(GL_TEXTURE0 + 2);
normal_texture.bind();
glActiveTexture(GL_TEXTURE0 + 3);
depth_texture.bind();
//glUniform1i takes the texture *unit index*, not the GL_TEXTUREi enum
glUniform1i(texture0_location, 0);
glUniform1i(albedo_location, 1);
glUniform1i(normal_location, 2);
glUniform1i(depth_location, 3);
//render stuff...

(pixel shader)
#version 410
uniform sampler2D texture0;
uniform sampler2D albedo;
uniform sampler2D normal;
uniform sampler2D depth;

in vec3 normals;
in vec4 position;
smooth in vec2 texture_coordinates;

out vec4 fragment_color;

void main()
{
depth = position.z;
normal = vec4(normals, 1.0);
fragment_color = albedo = texture(texture0, texture_coordinates);
}
[/code]

About the texture formats:

I've looked at the possible formats found here:
[url="http://www.opengl.org/wiki/Image_Format"]http://www.opengl.org/wiki/Image_Format[/url]
and I've read about them a lot.

I've found an NVIDIA HDR sample that reports FPS and frame time, and it supports several of these formats:

From best performance to worst:
[code]
HDR format       RT format        FPS   time (ms)
RGB9_E5          R11F_G11F_B10F   750   1.31
R11F_G11F_B10F   R11F_G11F_B10F   740   1.32
RGBA16F          R11F_G11F_B10F   725   1.40
RGBA32F          R11F_G11F_B10F   610   1.63
RGB9_E5          RGBA16F          523   1.91
R11F_G11F_B10F   RGBA16F          523   1.91
RGBA16F          RGBA16F          512   1.95
RGBA32F          RGBA16F          470   2.12
RGB9_E5          RGBA32F          292   3.43
R11F_G11F_B10F   RGBA32F          291   3.43
RGBA16F          RGBA32F          288   3.46
RGBA32F          RGBA32F          277   3.60
[/code]
These numbers were measured on a Core i3 540 with an HD 5770 (1 GB GDDR5), at 1440x900.

I didn't notice any visual difference, not even with the RGBA32F HDR format, though that might be due to a bug somewhere.
I should add that with high-performance formats like RGB9_E5, some visual glitches might appear when combined with other effects.
Another thing to note: if we target 30 FPS (a ~33 ms frame), then in the worst case the HDR pass took about 11% of the frame time, and in the best case only about 4%. An easily implementable combination like RGBA16F + RGBA16F would be the best choice in my opinion: it takes about 6% of the frame, which is nice, and we still have enough precision.

On the other hand, I've read the deferred rendering tutorial on Catalin Zima's XNA blog, in which several combinations are mentioned for the G-buffer. The best seemed to be RGBA16F for albedo, R16G16F for normal data (the OpenGL equivalent is GL_RG16F; RGB10_A2 would be another option), and R32F for position data. This would use about 33 MB of memory and would leave one 16-bit channel free for other purposes, such as storing a material ID. I think there would be a need for another G-buffer component for other stuff like specular intensity and exponent, material ID, etc.; an RGBA or RGB format could be used for that.

Did you already figure out how it works? Add multiple out variables to the shader, call glBindFragDataLocation before linking the program, and when rendering call glDrawBuffers to tell GL which color attachments you are writing to.

[code]
out vec4 depthFrag;
out vec4 normalFrag;
// ... whatever you want

void main() {
depthFrag = vec4(...);
normalFrag = vec4(...);
}
[/code]

[code]
// Create shader program and attach shaders here

// Bind frag data locations
glBindFragDataLocation(shaderProgram, 0, "depthFrag");
glBindFragDataLocation(shaderProgram, 1, "normalFrag");

glLinkProgram(shaderProgram);

// And when rendering, tell what buffers we will be using

GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, buffers);
[/code]

Hope this helps.

OK, I tried to implement it, but in the final lighting stage I seem to be getting no normals or depth, and I also think the depth isn't calculated properly. I do get the albedo, which is great, but without the other two components I can't do the lighting.

So here's my initialization:
[code]
void deferred::init()
{
//load the lighting shader
objs::get()->shader_loader.load_shader_control_file ( "../shaders/deferred/fs_quad.sc", &fs_quad );

//load the full-screen quad
objs::get()->obj.load_obj_file ( "../resources/fs_quad.obj", &quad, &fs_quad );

//set texture parameters

float w = objs::get()->conf.SCREEN_WIDTH;
float h = objs::get()->conf.SCREEN_HEIGHT;

fbo.create(); //generate a fbo
fbo.bind(); //bind it

GLenum modes[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
//set which draw buffers will it use
glDrawBuffers ( 3, modes );

//create render buffers
rb_albedo.create();
rb_normal.create();
rb_position.create();
rb_depth.create(); //this is not a g-buffer component it is just a depth attachment

//bind render buffers, set the storage format, and attach them to the fbo
rb_albedo.bind();
rb_albedo.set_storage_format ( GL_RGBA16F, w, h );
rb_albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
rb_normal.bind();
rb_normal.set_storage_format ( GL_RG16F, w, h );
rb_normal.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
rb_position.bind();
rb_position.set_storage_format ( GL_R32F, w, h );
rb_position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );
rb_depth.bind();
rb_depth.set_storage_format ( GL_DEPTH_COMPONENT32, w, h );
rb_depth.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, &fbo );

//create textures
albedo.create();
normals.create();
position.create();

//set the albedo as the 5th (4) texture
glActiveTexture ( GL_TEXTURE4 );
albedo.bind(); //bind it
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
//use rgba16f format
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
albedo.set_dimensions ( w, h ); //store the size of the texture for future use

glActiveTexture ( GL_TEXTURE5 );
normals.bind();
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_RG16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
normals.set_dimensions ( w, h );

glActiveTexture ( GL_TEXTURE6 );
position.bind();
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
glTexImage2D ( GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
position.set_dimensions ( w, h );

//reset the active texture
glActiveTexture ( GL_TEXTURE0 );

//attach textures to the fbo (these replace the color renderbuffer attachments made above)
albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
normals.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );

fbo.unbind();
}
[/code]

here's the rendering code
[code]
void deferred::render()
{
//set orthographic view w, h, near, far
event::get()->get_resize()->set_orthographic ( 1.0f, 1.0f, -1.0f, 1.0f );
//disable depth testing
glDisable(GL_DEPTH_TEST);
//bind the lighting shader
fs_quad.bind();
//pass the matrices and scene information
fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix(), "m4_p" );
fs_quad.pass_m4x4 ( objs::get()->ppl.get_model_view_matrix(), "m4_mv" );
//reset perspective mode
event::get()->get_resize()->set_perspective ();
//inverse projection matrix for depth to position reconstruction
fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix().invert(), "inv_proj" );
//pass camera position
fs_quad.pass_vec4 ( mymath::vec4f ( objs::get()->cam.pos[0], objs::get()->cam.pos[1], objs::get()->cam.pos[2], 1.0f ), "cam_pos" );
//pass the textures
fs_quad.pass_int ( 4, "texture4" );
fs_quad.pass_int ( 5, "texture5" );
fs_quad.pass_int ( 6, "texture6" );
//draw full screen quad
quad.render();
//unbind the lighting shader
fs_quad.unbind();
//enable depth testing
glEnable(GL_DEPTH_TEST);
}
[/code]

the shader that fills the G-buffer
[code]
//vertex shader
#version 410

//projection, modelview matrices
uniform mat4 m4_p, m4_mv;
uniform mat3 m3_n;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;
in vec3 v3_normal;

smooth out vec2 v2_texture_coords;
out vec2 normal;
out float depth;

vec2 encode_normal_x_y_reconstruct_z(vec3 in_normal)
{
return vec2(in_normal.xy * 0.5 + 0.5);
}

void main()
{
normal = encode_normal_x_y_reconstruct_z(m3_n * v3_normal);
v2_texture_coords = v2_texture;

gl_Position = m4_p * m4_mv * v4_vertex;
depth = gl_Position.z / gl_Position.w;
}

//////////////////////////////////////////-------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture0;

smooth in vec2 v2_texture_coords;
in vec2 normal;
in float depth;

out vec4 v4_color; //color attachment0
out vec4 v4_normal; //c. a. 1
out vec4 v4_depth; //c. a. 2

void main()
{
v4_normal.xy = normal;
v4_depth.x = depth;
v4_color = texture(texture0, v2_texture_coords);
}
[/code]

Using this shader, the normals seem fine when I draw them: v4_color = vec4(normal, 0.0, 1.0);
However, when I draw the depth, the mesh turns all black instead of that grayscale depth look: v4_color = vec4(vec3(depth), 1.0);
I also call glBindFragDataLocation before linking as you suggested, although I don't know whether the binding actually took effect.

the lighting shader:
[code]

//vertex shader
#version 410

//projection, modelview matrices
uniform mat4 m4_p, m4_mv;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;

smooth out vec2 v2_texture_coords;

void main()
{
v2_texture_coords = v2_texture;
gl_Position = m4_p * m4_mv * v4_vertex;
}


///////////////////////////////////---------------------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture4; //albedo RGBA16F
uniform sampler2D texture5; //normal RG16F
uniform sampler2D texture6; //depth R32F

smooth in vec2 v2_texture_coords; //texture coordinates for the G-buffer
uniform mat4 inv_proj; //inverse projection matrix
uniform vec4 cam_pos; //camera position

out vec4 v4_color; //the outgoing color

vec3 decode_normal_x_y_reconstruct_z(vec2 in_normal)
{
vec3 out_normal;
out_normal.xy = in_normal * 2 - 1; //convert from range [0.0, 1.0] to [-1.0, 1.0]
out_normal.z = sqrt(1 - dot(out_normal.xy, out_normal.xy)); //since it is perpendicular to x and y we can calculate it
return out_normal;
}

void main()
{
vec4 albedo = texture(texture4, v2_texture_coords);
vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
float depth = texture(texture6, v2_texture_coords).x;

//get texture coords from [0, 1] to [-1, 1], add them as x and y, add depth as z, and multiply by inverse projection matrix
vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
position /= position.w;

//blinn lighting with a light placed at [0, 5, -2]
vec3 light = vec3( 0.0, 5.0, -2.0);
vec3 light_dir = normalize(light - position.xyz);
vec3 eye_dir = normalize(cam_pos.xyz - position.xyz);
vec3 half_vec = normalize(light_dir + eye_dir);
v4_color = max(dot(normal, light_dir), 0.0) * albedo + pow(max(dot(normal, half_vec), 0.0), 9.0) * 10.0;
}
[/code]

When I try to draw the normals with v4_color = vec4(normal, 1.0); I get a black screen, and the same happens with depth. Please help, I'm struggling to get this working.

Best regards,
Yours3!f

I think you have to call glDrawBuffers every time you want to render something on the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it what to use when rendering.

[quote name='Sponji' timestamp='1313523107' post='4849998']
I think you have to call glDrawBuffers every time you want to render something on the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it what to use when rendering.
[/quote]

Thanks, but you don't have to do this every frame. I know the Codinglabs tutorial does it every frame, but you simply don't have to: glDrawBuffers sets state that belongs to the framebuffer object, so once it's set up, the FBO actually receives the color output.

I updated the code (see the post above), and now I do get the color information, depth and normals as well.

Now I suppose the lighting equation or the decoding function is wrong, since I have all the data.

OK, so I tried to figure out why my lighting doesn't work, and I experimented a bit.

First of all, if in the lighting shader I fetch the normal and depth and convert them, the conversion works:
[code]
void main()
{
vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
float depth = texture(texture6, v2_texture_coords).x;

//get texture coords from [0, 1] to [-1, 1], add them as x and y, add depth as z, and multiply by inverse projection matrix
vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
position /= position.w;

v4_color = vec4(normal, 1.0); //gives me a normal looking scene
//v4_color = vec4(vec3(depth), 1.0); //gives me a depth looking scene
//v4_color = position; //gives me a position scene (well it is red green and blue :) )
}
[/code]

However when I do this I get a black screen:
[code]
void main()
{
v4_color = texture(texture4, v2_texture_coords);
vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);

v4_color = vec4(normal, 1.0); //this should give me a normal looking scene
}
[/code]

I have no idea why that happens, and I guess it's also the reason my lighting doesn't work...

OMG, finally, after 3 days I seem to have solved it: I just didn't need the glTexParameteri(...) calls before rendering (see the updated code above), but I did need them at texture creation. My lighting still sucks, though :) But I guess I can solve it now.

EDIT: don't forget that you have to pass the inverse of the projection matrix you use when rendering the scene (the perspective one), not the (orthographic) one you use when doing the lighting pass.

Hey Yours3!f,

Could you post a picture of what your 'depth' view looks like? Mine is a white screen :( and I want the grayscale kind of scene. I noticed that you display the depth like this:

[code]v4_color = vec4(vec3(depth), 1.0);[/code]

I do this:

[code]depth.rgb = saturate(depth);
depth.a = 1.0f;[/code]

I'm not at the computer at the moment, so I can't try it your way, but do you get a grayscale view, indicating white = far away and black = near? That's what I'm after instead of plain white :(

@Daniel_RT

see this topic of mine:
http://www.gamedev.net/topic/609271-depth-reconstruction-not-working/

(it includes the source code)

To answer your question: depth can look like a grayscale image with black = near and white = far, or it can look almost entirely black, only getting brighter toward the far end of the frustum.

The first one (the grayscale image) indicates that you are storing non-normalized depth, so values range from 0.0 to the far clip distance, i.e. from black to white.

The second one indicates that you are storing values in the range 0.0 to 1.0, which means it will be black near you and only get brighter very far away (in my case 10000 units away). You achieve this by dividing the depth value by the far clip plane's distance; this is called storing normalized depth.

I recommend this tutorial on the theoretical basis of the topic:
http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

For the GLSL implementation, take a look at my topic (the second-to-last post). I have to add that the source there is not the best, because it doesn't do depth testing, which means your scene will look awkward; but once the initial depth reconstruction works, you can easily add depth testing using method 3 from MJP's tutorial (which also brings some optimization).

Oh, and don't resurrect old topics :)

