
OpenGL question about deferred rendering


Yours3!f    1532
Hi,

I'm trying to implement deferred rendering in a modern way using OpenGL 4.x core. For this goal I've read several articles, PDFs, tutorials, etc. on the topic; however, I still have some questions.
First of all, when I create a G-buffer, what do I actually do?
I mean, I know I want to create an FBO and an RBO, attach the RBO to the FBO, and render to a texture that I also attach to the FBO.
But then how do I attach the other stuff to the FBO?
From what I've read, I will definitely need at least an albedo, a depth, and a normal buffer. I suppose these will all be textures. However, there are many formats to choose from, and which ones are used depends on the article (or rather, on the game engine).
Which are the most common, or rather the most efficient and nicest ones?
(I should add that I definitely want to go with a full HDR pipeline.)
Another question is: how do I fill these buffers?
Especially when there are objects in my scene which don't have textures or normals. And there is the cube map, which should be left completely untouched, since I only need its colors.
Finally, I ran into an article from Intel in which the whole shading pass was done in a compute shader. So how can I connect OpenCL and OpenGL so that I can get the same results with open APIs?

Best regards,
Yours3!f

TTT_Dutch    240
[quote name='Yours3!f' timestamp='1313242976' post='4848629']
Another question is: how do I fill these buffers?
[/quote]

Well, I'm trying to work on the same thing, and I can only attempt to answer one of your questions: "How do I fill these buffers?" I believe that to fill them you bind the FBO before drawing, and your shader then outputs its final data into the attached buffers. I'm not 100% sure that's right, but I think it is.

Yours3!f    1532
something like this?

[code]
glActiveTexture(GL_TEXTURE0);
texture0.bind();
glActiveTexture(GL_TEXTURE0 + 1);
albedo_texture.bind();
glActiveTexture(GL_TEXTURE0 + 2);
normal_texture.bind();
glActiveTexture(GL_TEXTURE0 + 3);
depth_texture.bind();
glUniform1i(texture0_location, 0); //sampler uniforms take texture unit indices (0, 1, ...), not GL_TEXTUREn
glUniform1i(albedo_location, 1);
glUniform1i(normal_location, 2);
glUniform1i(depth_location, 3);
//render stuff...

(pixel shader)
#version 410
uniform sampler2D texture0;
uniform sampler2D albedo;
uniform sampler2D normal;
uniform sampler2D depth;

in vec3 normals;
in vec4 position;
smooth in vec2 texture_coordinates;

out vec4 fragment_color;

void main()
{
    depth = position.z;
    normal = vec4(normals, 1.0);
    fragment_color = albedo = texture(texture0, texture_coordinates);
}
[/code]

Yours3!f    1532
About the texture formats:

I've looked at the possible formats found here:
[url="http://www.opengl.org/wiki/Image_Format"]http://www.opengl.org/wiki/Image_Format[/url]
and I've read about them a lot.

I've found an NVIDIA HDR sample that includes an FPS counter and a timer, and which features some of these formats:

From best performance to worst:
[code]
HDR format      RT format       FPS  time (ms)
RGB9_E5         R11F_G11F_B10F  750  1.31
R11F_G11F_B10F  R11F_G11F_B10F  740  1.32
RGBA16F         R11F_G11F_B10F  725  1.40
RGBA32F         R11F_G11F_B10F  610  1.63
RGB9_E5         RGBA16F         523  1.91
R11F_G11F_B10F  RGBA16F         523  1.91
RGBA16F         RGBA16F         512  1.95
RGBA32F         RGBA16F         470  2.12
RGB9_E5         RGBA32F         292  3.43
R11F_G11F_B10F  RGBA32F         291  3.43
RGBA16F         RGBA32F         288  3.46
RGBA32F         RGBA32F         277  3.60
[/code]
These results were achieved with a Core i3 540 and an HD 5770 with 1 GB of GDDR5, at 1440x900.

The only case where I noticed any visual difference was when using the RGBA32F HDR format; however, this might be because of some bug.
I should add that when using high-performance formats like RGB9_E5, some visual glitches might appear if we combine this with other effects.
Another thing to notice: if we want the game to run at 30 FPS, then in the worst case HDR rendering took 10% of the rendering time, and in the best case it only took 4.36%. Using an easily implementable format combination like RGBA16F + RGBA16F would be the best way in my opinion, because it takes 6.5% of the rendering time, which is nice, and we still have enough precision.

On the other hand, I've read the deferred rendering tutorial on Catalin Zima's XNA blog, in which several combinations are mentioned for the G-buffer. The best of these seemed to be RGBA16F for albedo, R16G16F for normal data (GL_RG16F in OpenGL; GL_RGB10_A2 could also work), and R32F for position data. This would use about 33 MB of memory, and would leave one 16-bit channel free for other purposes, such as storing a material ID. I think another G-buffer component would also be needed for other stuff like specular intensity, specular exponent, material ID, etc.; for this an RGBA or RGB format could be used.

Sponji    2503
Did you already understand how it works? Add multiple out variables to the shader, and use glBindFragDataLocation before linking the shader program. And when rendering, call glDrawBuffers to tell OpenGL which buffers you are using.

[code]
out vec4 depthFrag;
out vec4 normalFrag;
// ... whatever you want

void main() {
    depthFrag = vec4(...);
    normalFrag = vec4(...);
}
[/code]

[code]
// Create shader program and attach shaders here

// Bind frag data locations
glBindFragDataLocation(shaderProgram, 0, "depthFrag");
glBindFragDataLocation(shaderProgram, 1, "normalFrag");

glLinkProgram(shaderProgram);

// And when rendering, tell what buffers we will be using

GLenum buffers[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1};
glDrawBuffers(2, buffers);
[/code]

Hope this helps.

Yours3!f    1532
No, actually I didn't understand it until this point :) I had only thought about it on a theoretical basis, but yeah, that's what I was looking for, thank you.

Yours3!f    1532
Ok, I tried to implement it, but in the final lighting stage I seem to be getting no normals or depth, and I also think that depth isn't calculated properly. I do get the albedo, which is great, but due to the lack of the other two components I can't do lighting.

So here's my initialization:
[code]
void deferred::init()
{
    //load the lighting shader
    objs::get()->shader_loader.load_shader_control_file ( "../shaders/deferred/fs_quad.sc", &fs_quad );

    //load the full-screen quad
    objs::get()->obj.load_obj_file ( "../resources/fs_quad.obj", &quad, &fs_quad );

    float w = objs::get()->conf.SCREEN_WIDTH;
    float h = objs::get()->conf.SCREEN_HEIGHT;

    fbo.create(); //generate a fbo
    fbo.bind(); //bind it

    //set which draw buffers the fbo will use
    //(draw buffers are per-fbo state, so setting them once here is enough)
    GLenum modes[] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1, GL_COLOR_ATTACHMENT2 };
    glDrawBuffers ( 3, modes );

    //create render buffers
    rb_albedo.create();
    rb_normal.create();
    rb_position.create();
    rb_depth.create(); //this is not a g-buffer component, it is just the depth attachment

    //bind render buffers, set the storage format, and attach them to the fbo
    rb_albedo.bind();
    rb_albedo.set_storage_format ( GL_RGBA16F, w, h );
    rb_albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
    rb_normal.bind();
    rb_normal.set_storage_format ( GL_RG16F, w, h );
    rb_normal.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
    rb_position.bind();
    rb_position.set_storage_format ( GL_R32F, w, h );
    rb_position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );
    rb_depth.bind();
    rb_depth.set_storage_format ( GL_DEPTH_COMPONENT32, w, h );
    rb_depth.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, &fbo );

    //create textures
    albedo.create();
    normals.create();
    position.create();

    //set the albedo as the 5th (index 4) texture
    glActiveTexture ( GL_TEXTURE4 );
    albedo.bind(); //bind it
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    //use the rgba16f format
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RGBA16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
    albedo.set_dimensions ( w, h ); //store the size of the texture for future use

    glActiveTexture ( GL_TEXTURE5 );
    normals.bind();
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_RG16F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
    normals.set_dimensions ( w, h );

    glActiveTexture ( GL_TEXTURE6 );
    position.bind();
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR );
    glTexParameteri ( GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR );
    glTexImage2D ( GL_TEXTURE_2D, 0, GL_R32F, w, h, 0, GL_RGBA, GL_FLOAT, 0 );
    position.set_dimensions ( w, h );

    //reset the active texture
    glActiveTexture ( GL_TEXTURE0 );

    //attach textures to the fbo
    //note: these replace the color renderbuffer attachments made above,
    //so the three color renderbuffers end up unused
    albedo.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, &fbo );
    normals.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT1, &fbo );
    position.attach_to_frame_buffer ( GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT2, &fbo );

    fbo.unbind();
}
[/code]

here's the rendering code
[code]
void deferred::render()
{
    //set orthographic view: w, h, near, far
    event::get()->get_resize()->set_orthographic ( 1.0f, 1.0f, -1.0f, 1.0f );
    //disable depth testing
    glDisable ( GL_DEPTH_TEST );
    //bind the lighting shader
    fs_quad.bind();
    //pass the matrices and scene information
    fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix(), "m4_p" );
    fs_quad.pass_m4x4 ( objs::get()->ppl.get_model_view_matrix(), "m4_mv" );
    //reset perspective mode
    event::get()->get_resize()->set_perspective ();
    //inverse of the perspective projection matrix, for depth-to-position reconstruction
    fs_quad.pass_m4x4 ( objs::get()->ppl.get_projection_matrix().invert(), "inv_proj" );
    //pass the camera position
    fs_quad.pass_vec4 ( mymath::vec4f ( objs::get()->cam.pos[0], objs::get()->cam.pos[1], objs::get()->cam.pos[2], 1.0f ), "cam_pos" );
    //pass the textures
    fs_quad.pass_int ( 4, "texture4" );
    fs_quad.pass_int ( 5, "texture5" );
    fs_quad.pass_int ( 6, "texture6" );
    //draw the full-screen quad
    quad.render();
    //unbind the lighting shader
    fs_quad.unbind();
    //re-enable depth testing
    glEnable ( GL_DEPTH_TEST );
}
[/code]

the shader that fills the G-buffer
[code]
//vertex shader
#version 410

//projection and modelview matrices
uniform mat4 m4_p, m4_mv;
//normal matrix
uniform mat3 m3_n;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;
//the vertex normal
in vec3 v3_normal;

smooth out vec2 v2_texture_coords;
out vec2 normal;
out float depth;

vec2 encode_normal_x_y_reconstruct_z(vec3 in_normal)
{
    return vec2(in_normal.xy * 0.5 + 0.5);
}

void main()
{
    normal = encode_normal_x_y_reconstruct_z(m3_n * v3_normal);
    v2_texture_coords = v2_texture;

    gl_Position = m4_p * m4_mv * v4_vertex;
    depth = gl_Position.z / gl_Position.w;
}

//////////////////////////////////////////-------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture0;

smooth in vec2 v2_texture_coords;
in vec2 normal;
in float depth;

out vec4 v4_color;  //color attachment 0
out vec4 v4_normal; //color attachment 1
out vec4 v4_depth;  //color attachment 2

void main()
{
    v4_normal.xy = normal;
    v4_depth.x = depth;
    v4_color = texture(texture0, v2_texture_coords);
}
[/code]

Using this shader the normals seem to be good when I draw them: v4_color = vec4(normal, 0.0, 1.0);
however, when I draw the depth the mesh turns all black, instead of that grayscale depth look: v4_color = vec4(vec3(depth), 1.0);
I also call glBindFragDataLocation before linking as you suggested, although I don't know whether the outputs actually got bound.

the lighting shader:
[code]

//vertex shader
#version 410

//projection and modelview matrices
uniform mat4 m4_p, m4_mv;

//the vertex position
in vec4 v4_vertex;
//the texture coordinates
in vec2 v2_texture;

smooth out vec2 v2_texture_coords;

void main()
{
    v2_texture_coords = v2_texture;
    gl_Position = m4_p * m4_mv * v4_vertex;
}

///////////////////////////////////---------------------------------------------------------------------------

//pixel shader
#version 410

uniform sampler2D texture4; //albedo RGBA16F
uniform sampler2D texture5; //normal RG16F
uniform sampler2D texture6; //depth R32F

smooth in vec2 v2_texture_coords; //texture coordinates for the G-buffer
uniform mat4 inv_proj; //inverse projection matrix
uniform vec4 cam_pos; //camera position

out vec4 v4_color; //the outgoing color

vec3 decode_normal_x_y_reconstruct_z(vec2 in_normal)
{
    vec3 out_normal;
    out_normal.xy = in_normal * 2.0 - 1.0; //convert from range [0.0, 1.0] to [-1.0, 1.0]
    out_normal.z = sqrt(1.0 - dot(out_normal.xy, out_normal.xy)); //since the normal is unit length, we can reconstruct z
    return out_normal;
}

void main()
{
    vec4 albedo = texture(texture4, v2_texture_coords);
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
    float depth = texture(texture6, v2_texture_coords).x;

    //map texture coords from [0, 1] to [-1, 1], use them as x and y, use depth as z, and multiply by the inverse projection matrix
    vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
    position /= position.w;

    //Blinn-Phong lighting with a light placed at [0, 5, -2]
    vec3 light = vec3(0.0, 5.0, -2.0);
    vec3 light_dir = normalize(light - position.xyz);
    vec3 eye_dir = normalize(cam_pos.xyz - position.xyz);
    vec3 half_vec = normalize(light_dir + eye_dir);
    v4_color = max(dot(normal, light_dir), 0.0) * albedo + pow(max(dot(normal, half_vec), 0.0), 9.0) * 10.0;
}
[/code]

When I try to draw the normals with v4_color = vec4(normal, 1.0); I get a black screen, and the same happens with depth. Please help, I'm struggling to get this working.

Best regards,
Yours3!f

Sponji    2503
I think you have to call glDrawBuffers every time you want to render something into the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it which ones to use when rendering.

Yours3!f    1532
[quote name='Sponji' timestamp='1313523107' post='4849998']
I think you have to call glDrawBuffers every time you want to render something on the buffers. It's probably using only the first buffer now (which is albedo), because you don't tell it what to use when rendering.
[/quote]

Thanks, but you don't have to do this every frame. I know the Codinglabs tutorial does it every frame, but you simply don't have to, because glDrawBuffers only sets up state on the framebuffer object, so that it actually receives the color outputs.

I updated the code (see the post above), and now I do get the color information, depth and normals as well.

Now I suppose the lighting equation or the decoding function is wrong, since I have all the data.

Yours3!f    1532
OK, so I tried to figure out why my lighting doesn't work, and tried some things.

First of all, if in the lighting shader I fetch the normal and depth and convert them, the conversion works:
[code]
void main()
{
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);
    float depth = texture(texture6, v2_texture_coords).x;

    //map texture coords from [0, 1] to [-1, 1], use them as x and y, use depth as z, and multiply by the inverse projection matrix
    vec4 position = vec4(v2_texture_coords.x * 2.0 - 1.0, -(v2_texture_coords.y * 2.0 - 1.0), depth, 1.0) * inv_proj;
    position /= position.w;

    v4_color = vec4(normal, 1.0); //gives me a normal looking scene
    //v4_color = vec4(vec3(depth), 1.0); //gives me a depth looking scene
    //v4_color = position; //gives me a position scene (well, it is red, green and blue :) )
}
[/code]

However when I do this I get a black screen:
[code]
void main()
{
    v4_color = texture(texture4, v2_texture_coords);
    vec3 normal = decode_normal_x_y_reconstruct_z(texture(texture5, v2_texture_coords).xy);

    v4_color = vec4(normal, 1.0); //this should give me a normal looking scene
}
[/code]

I have no idea why that happens, and I guess this is why my lighting doesn't work either...

Yours3!f    1532
OMG, finally, after 3 days I seem to have solved it. I just didn't need to set the glTexParameteri(...) stuff before rendering (see the updated code above), but I did need it at texture creation. My lighting still sucks though :) But I guess I can solve it now.

EDIT: don't forget that you have to pass the inverse of the projection matrix you use when rendering the scene (the perspective one), not the one (the orthographic one) that you use when doing the lighting pass.

dfcollinson    484
Hey Yours3!f,

Could you post a picture of what your 'depth' view looks like? Mine is a white screen :( and I want the grayscale kind of scene. I noticed that you display the depth like this:

[code]v4_color = vec4(vec3(depth), 1.0);[/code]

I do this:

[code]depth.rgb = saturate(depth);
depth.a = 1.0f;[/code]

I'm not at the computer at the moment, so I can't try it your way, but do you get a 'grayscale' view, indicating 'white = far away' and 'black = near'? That's what I'm after, instead of plain white :(

Yours3!f    1532
@Daniel_RT

see this topic of mine:
http://www.gamedev.net/topic/609271-depth-reconstruction-not-working/

(it includes the source code)

To answer your question: depth can look like a grayscale image with black = near and white = far, or it can look all black except for getting brighter near the end of the frustum.

The first one (the grayscale image) indicates that you are storing raw, non-normalized depth, so the values range from 0.0 to the far clip distance, i.e. from black to white.

The second one indicates that you are storing values in the range 0.0 to 1.0, which means the image will be black near you and will only get brighter very far away (in my case 10000 units away). You achieve this by dividing the depth value by the far clip plane's distance. This is called storing normalized depth.

I recommend this tutorial on the theory behind this topic:
http://mynameismjp.wordpress.com/2010/09/05/position-from-depth-3/

For the GLSL implementation take a look at my topic (the second-to-last post). I have to add that the source there is not the best, because it doesn't do depth testing, which means your scene will look awkward; but after the initial implementation of depth reconstruction you can easily add depth testing using method 3 from MJP's tutorial (which also adds some optimization).

Oh, and don't resurrect old topics :)


