OpenGL help: OpenGL/GLSL v2.10/v1.10 -- to -- v3.20/v1.50

maxgpgpu:
I need to update a 3D simulation/graphics/game engine from OpenGL v2.10 plus GLSL v1.10 to OpenGL v3.20 plus GLSL v1.50, and it seems a bit overwhelming. I have the latest OpenGL/GLSL specifications from www.opengl.org/registry, and I installed the latest drivers for my GT285 GPU card. Has anyone written a guide or checklist for performing such a conversion?

I see there is some kind of "compatibility profile" mechanism that seems like it should help, but I don't see how it works in practice (in the code). Does it let me update one feature at a time, and still compile/execute/test code containing a mix of "old and new"? If not, what does it do, and how do I enable it?

Everything in the current version is pretty much state-of-the-art v2.10 technique - nothing "old". For example, all index and vertex data is contained in IBOs and VBOs - no begin/end, no display lists, etc. Every vertex attribute is a generic attribute (no fixed built-in attributes). Has anyone gone through this and have tips, or know where tips are posted? Thanks.

[Edited by - maxgpgpu on January 3, 2010 11:52:25 PM]

The Quick Reference Card might be of help to you. It makes it pretty clear which functions and constants are now deprecated (and thus unavailable in a 3.2 core context). That said, as long as you're already doing everything in shaders (no fixed function) and using generic attributes/uniforms as you say, you shouldn't have too much work ahead of you.

It'll be easy for you, then:
1) Load the core versions of the extension functions (e.g. glGenFramebuffers instead of glGenFramebuffersEXT).
2) In shaders, replace the varying/attribute keywords with in/out; a replacement for gl_FragColor has to be declared manually; all built-in uniforms are gone; gl_Position stays (in almost all cases). (See the sketch at the end of this post.)
3) LATC -> RGTC textures.

The compatibility profile does what you expect it to do, so the transition can be done smoothly.
Wide lines are not supported in a forward-compatible context.
There are lots of simplifications in the API, less strict FBO completeness rules, and lots of extra GLSL features.
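Roughly, for point 2 (all the in/out names below are just placeholders, not required names):

// GLSL 1.50 vertex shader:
#version 150
in vec4 inPosition;        // was: attribute vec4 inPosition;
in vec2 inTexCoord;
out vec2 texCoord;         // was: varying vec2 texCoord;
uniform mat4 mvpMatrix;    // built-in matrices like gl_ModelViewProjectionMatrix are gone
void main(){
    texCoord    = inTexCoord;
    gl_Position = mvpMatrix * inPosition;   // gl_Position stays
}

// GLSL 1.50 fragment shader:
#version 150
in vec2 texCoord;          // was: varying vec2 texCoord;
out vec4 fragColor;        // user-declared replacement for gl_FragColor
uniform sampler2D tex;
void main(){
    fragColor = texture(tex, texCoord);     // texture() replaces texture2D()
}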

shrinkage:
Thanks, I printed the quick reference sheets, and they are helpful.

idinev:
I see where GLSL supports "#version 150 compatibility", but I haven't yet noticed what I need to do in my OpenGL code to request "compatibility" with both new and old features.

-----

I am confused to read that gl_FragDepth, gl_FragColor, gl_FragData[4] are deprecated. Unless --- perhaps all this means is that the "automatic inherent declaration" of these fragment output variables will be removed, so we'll be forced to declare the ones we need explicitly in our fragment shaders. But if they intend to remove those output variables entirely... then how does a fragment shader write a depth value, or 1~4+ color values, to the default framebuffer (or to 1~4+ attached framebuffer objects)? Surely they are not saying we can simply invent our own names for gl_FragDepth, gl_FragColor and gl_FragData[4+]... or are they? That seems silly, since the GPU hardware requires those values to perform the rest of its hardware-implemented operations (like storing the color(s) into 1~4+ framebuffers if the new depth is less than the existing depth).

-----

I've been meaning to put all my matrix, light and material information into uniform variables for quite some time. Now it seems like "uniform blocks" are just perfect for that. So that'll be a fair chunk of alteration, but hopefully not too complex.
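From what I can tell, the setup would look roughly like this (the block name, variable names and binding point 0 are just placeholders I made up):

// GLSL 1.50:
layout(std140) uniform TransformBlock {
    mat4 modelViewMatrix;
    mat4 projectionMatrix;
    vec4 lightPosition;
};

// OpenGL side:
GLfloat blockData[36];                                      // 2 mat4 + 1 vec4 in std140 layout, filled elsewhere
GLuint  blockIndex = glGetUniformBlockIndex(program, "TransformBlock");
glUniformBlockBinding(program, blockIndex, 0);              // tie the block to binding point 0

GLuint ubo;
glGenBuffers(1, &ubo);
glBindBuffer(GL_UNIFORM_BUFFER, ubo);
glBufferData(GL_UNIFORM_BUFFER, sizeof(blockData), blockData, GL_DYNAMIC_DRAW);
glBindBufferBase(GL_UNIFORM_BUFFER, 0, ubo);                // attach the buffer to binding point 0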

-----

I worry most about setting up one or more VAOs to reduce rendering overhead. No matter how many times I read discussions of VAOs, I never fully understand them (though I always "almost" understand). Since all my vertices have identical attributes, perhaps all I need is one VAO, into which I jam the appropriate IBO before I call glDrawElements() or glDrawElementsInstanced(). Oh, I guess I also need to bind the appropriate VBO before calling these functions. OTOH, sometimes I think I need a separate VAO for every VBO... but I just dunno.

-----

I'm planning to send 4 colors (RGBA), 2 tcoords (u,v), 1 texture ID, and a bunch of flag bits into the shader as a single uvec4 vertex attribute. Currently they are all converted from u16 integers to floats automatically, but I figure sending them all in one uvec4 will be more efficient on the CPU side. This means I need to add code to my vertex shader to unpack the u16 elements from the u32 components of the uvec4 variable, then convert them to floats with something like "vf32 = vu16 / 65535". I figured this would be faster overall by taking load off the CPU. On second thought, though, that conversion work isn't done by the CPU anyway, since it happens when the GPU fetches the elements from my vertices, which are inside the VBOs, which are in GPU memory. Hmmm? Strange. Does anyone know how much overhead exists for performing all these integer-to-float conversions when vertex elements are loaded into the vertex shader? If that's efficient, I should leave the setup as is, except for the two variables that I need in their native integer form (texture # and flag bits).
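For reference, the "let the GPU convert them" alternative would look roughly like this, I think (the attribute locations and byte offsets here are placeholders for my real layout):

GLsizei stride = 64;   // my vertex size; last 16 bytes hold the u16 color/tcoord/texid/flags row
glVertexAttribPointer (4, 4, GL_UNSIGNED_SHORT, GL_TRUE, stride, (void*)48);  // color.rgba -> vec4 in 0..1
glVertexAttribPointer (5, 2, GL_UNSIGNED_SHORT, GL_TRUE, stride, (void*)56);  // tcoord.xy  -> vec2 in 0..1
glVertexAttribIPointer(6, 1, GL_UNSIGNED_SHORT,          stride, (void*)60);  // texture #  -> stays integer
glVertexAttribIPointer(7, 1, GL_UNSIGNED_SHORT,          stride, (void*)62);  // flag bits  -> stays integer
glEnableVertexAttribArray(4);
glEnableVertexAttribArray(5);
glEnableVertexAttribArray(6);
glEnableVertexAttribArray(7);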

-----

I guess texture coordinates can be passed from vertex to fragment shader as normal generic output variables (formerly "varying", i.e. interpolated).

-----

As long as I can change and test one item at a time, I should be okay. To change everything at once would surely create havoc. So again, how does my OpenGL code tell OpenGL to allow a mix of new and old features?

-----

Thanks for the tips.

For GL 3.2, you need to request a GL 3.2 context explicitly:

wglMakeCurrent(hDC, hRC);   // hRC is an ordinary 2.1 context created earlier with wglCreateContext;
                            // it must be current so wglGetProcAddress can be used

PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB = (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
if(!wglCreateContextAttribsARB){
    MessageBoxA(0,"OpenGL3.2 not supported or enabled in NVemulate",0,0);
}else{
    int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_FLAGS_ARB, 0,   // WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB, <--------- ADD COMPATIBLE-FLAG HERE
        0,0
    };

    HGLRC gl3Ctx = wglCreateContextAttribsARB(hDC, 0, attribs);
    if(gl3Ctx){
        wglDeleteContext(hRC);
        hRC = gl3Ctx;
        wglMakeCurrent(hDC, hRC);
        //prints("opengl3.2 success");
        //prints((char*)glGetString(GL_VERSION));
    }else{
        MessageBoxA(0,"OpenGL3.2 context did not init, staying in 2.1",0,0);
    }
}



gl_FragDepth stays. You just explicitly declare the fragment outputs yourself:
"out vec4 glFragColor;" or "out vec4 glFragData[3];" - notice the dropped '_' (user-declared names may not start with the reserved "gl_" prefix, but otherwise the name is up to you).




Perhaps this will make the usage of VAOs clear:


static void DrawVBO_Indexed(ILI_VBO* me){
    if(me->isDwordIndex){
        glDrawElements(me->PrimitiveType, me->iboSize/4, GL_UNSIGNED_INT,   NULL);
    }else{
        glDrawElements(me->PrimitiveType, me->iboSize/2, GL_UNSIGNED_SHORT, NULL);
    }
}
static void DrawVBO_NonIndexed(ILI_VBO* me){
    glDrawArrays(me->PrimitiveType, 0, me->numVerts);
}

#define USE_VAO

void ilDrawVBO(ILVBO vbo){
    if(vbo->glVAO){
        glBindVertexArray(vbo->glVAO);
        if(vbo->glIBO){            // we have an index buffer
            DrawVBO_Indexed(vbo);
        }else{
            DrawVBO_NonIndexed(vbo);
        }
        glBindVertexArray(0);
        return;                    // done drawing it
    }

    // fallback path: no VAO, so set up the vertex arrays each time
    if(vbo->numVerts==0)return;
    if(vbo->vtxSize==0)return;
    ClearCurClientArrays();        // clears VAs
    BindVBO(vbo);                  // binds VAs (with their VBO) and IBO; flags them as to-be-enabled
    EnableCurClientArrays();       // enables the flagged VAs

    if(vbo->glIBO){                // we have an index buffer
        DrawVBO_Indexed(vbo);
    }else{
        DrawVBO_NonIndexed(vbo);
    }
}

Quote:
Original post by maxgpgpu
I am confused to read that gl_FragDepth, gl_FragColor, gl_FragData[4] are deprecated...


Read Section 7.2 of the GLSL 1.5 spec.

Quote:
Original post by maxgpgpu
I worry most about setting up one or more VAOs to reduce rendering overhead. No matter how many times I read discussions of VAOs, I never fully understand (though I always "almost" understand).


VAOs just allow you to "collect" all the state associated with VBOs into a single object that can be reused. When you have a VAO bound, it will store all the state changes from any VertexAttribPointer and EnableVertexAttribArray call, and apply that state whenever it is bound again. Further, all rendering calls will draw their state from the bound VAO. A newly created VAO provides the default state defined in Section 6.2 of the spec.
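A rough sketch of the idea (the buffer handles, attribute index, stride and counts are placeholders):

GLuint vao;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);                        // subsequent vertex-array state is captured by this VAO

glBindBuffer(GL_ARRAY_BUFFER, vbo);            // attribute pointers record the currently bound VBO
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, stride, (void*)0);
glEnableVertexAttribArray(0);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, ibo);    // the index-buffer binding is part of VAO state too

glBindVertexArray(0);

// at draw time, all of the above is restored with one call:
glBindVertexArray(vao);
glDrawElements(GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, NULL);
glBindVertexArray(0);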

Quote:
Original post by maxgpgpu
This means I need to add code to my vertex shader to unpack the u16 elements from the u32 elements of the uvec4 variable, then convert them to floats with something like "vf32 = vu16 / 65535".


Whatever format you're going to need them to be when you use them should really be the format you send them to the GPU as, unless there's a really good reason not to. Why would you need to convert flag bits to floats? Not really clear on what's going on here.

Quote:
Original post by maxgpgpu
I guess texture coordinates can be passed from vertex to fragment shader as normal/generic output variables (formerly varying AKA interpolating).


Yes, the varying and attribute storage qualifiers are deprecated. You can specify the type of interpolation using smooth, flat, and noperspective qualifiers.
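For example (the names are arbitrary; the matching fragment-shader inputs use the same qualifiers):

smooth        out vec2  texCoord;       // default: perspective-correct interpolation
flat          out uint  textureIndex;   // integer outputs must be flat (not interpolated)
noperspective out float screenFade;     // interpolated linearly in screen space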

Quote:
Original post by maxgpgpu
As long as I can change and test one item at a time, I should be okay. To change everything at once would surely create havoc. So again, how does my OpenGL code tell OpenGL to allow a mix of new and old features?


See: WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB
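That is, something along these lines in the attribs list passed to wglCreateContextAttribsARB (requires the WGL_ARB_create_context_profile extension):

int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_PROFILE_MASK_ARB,  WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,  // keep deprecated features available
    0
};
HGLRC ctx = wglCreateContextAttribsARB(hDC, 0, attribs);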

Quote:
Original post by Shinkage
VAOs just allow you to "collect" all the state associated with VBOs into a single object that can be reused. When you have a VAO bound, it will store all the state changes from any VertexAttribPointer and EnableVertexAttribArray call, and apply that state whenever it is bound again. Further, all rendering calls will draw their state from the bound VAO. A newly created VAO provides the default state defined in Section 6.2 of the spec.
Since every vertex in my application has the same layout, I take this to mean I can simply create and bind one VAO, and just leave it permanently bound. Then, to draw the primitives in each VBO, I simply bind that VBO, then bind the IBO that contains indices into it - then call glDrawElements(). That would be very nice... no redefining attribute offsets and reenabling attributes before every glDrawElements().

Quote:
Shinkage
Whatever format you're going to need them to be when you use them should really be the format you send them to the GPU as, unless there's a really good reason not to. Why would you need to convert flag bits to floats? Not really clear on what's going on here.
Here is what my vertices look like (each 32-bit element separated by colons):

position.x : position.y : position.z : east.x
zenith.x : zenith.y : zenith.z : east.y
north.x : north.y : north.z : east.z
color.rg : color.ba : tcoord.xy : tcoord.z + flagbits

Note that "zenith", "north", "east" are just descriptive names I give to the surface vectors (more often called "normal", "tangent", "bitangent"). Note how I put the "east" vector into the .w component of the other vectors, so I can tell OpenGL/GLSL to load all four vec3 elements I need in only three vec4 attributes. This leaves the final row of elements. In my OpenGL code, each of the "color" and "tcoord" elements are u16 variables in the CPU version of my vertices, to get the most dynamic range and precision possible in 16-bits. But "color.rgba" and "tcoord.xy" must be converted to f32 variables IN the shader, or ON THE WAY to the shader, because that's most natural for shader code. However, what I call "tcoord.z" needs to stay an integer in the shader, because that value selects the desired texture --- from the one texture array that my program keeps all textures in (and normal-maps, and potentially lookup-tables and more). The flagbit integer also needs to stay an integer, because those bits tell the shader how to combine color.rgba with texture-color.rgba, how to handle alpha, whether to perform bump-mapping, horizon-mapping, etc.

In my OpenGL v2.10 code those tcoord.z and flagbits are floating point values, which I've managed to make work via kludgery, but obviously the code will become clean the moment they become the integers they should be.

Okay, now finally to my point and question.

I could define one attribute to make the GPU load and convert the u16 color.rgba values into a vec4 color.rgba variable, another attribute to load and convert the u16 tcoord.xy values into a vec2 tcoord.xy variable, another attribute to load the u16 tcoord.z value into an unsigned integer variable, and another attribute to load the u16 flagbits value into an unsigned integer variable.

OR

I could define one attribute to make the GPU load the entire final row of values (color.rgba, tcoord.xy, tcoord.z, flagbits) into a single uvec4 variable, then convert them to the desired types with my own vertex shader code. My code would need to isolate the 8 u16 values, convert the first 6 of those values into f32 variables by multiplying them by (1/65535). The last 2 u16 values are perfectly fine as u16 values (in most paths through my shaders).
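Something like this is what I have in mind for the shader-side unpacking (fed with glVertexAttribIPointer(loc, 4, GL_UNSIGNED_INT, 64, (void*)48) on the OpenGL side; the names are placeholders, and which u16 lands in the low vs high half depends on how I pack them on the CPU):

#version 150
in uvec4 packedData;     // .x = color.rg, .y = color.ba, .z = tcoord.xy, .w = texid + flags
out vec4 exColor;
out vec2 exTcoord;
flat out uint exTexid;
flat out uint exFlags;

void main(){
    exColor  = vec4(packedData.x & 0xFFFFu, packedData.x >> 16,
                    packedData.y & 0xFFFFu, packedData.y >> 16) * (1.0 / 65535.0);
    exTcoord = vec2(packedData.z & 0xFFFFu, packedData.z >> 16) * (1.0 / 65535.0);
    exTexid  = packedData.w & 0xFFFFu;
    exFlags  = packedData.w >> 16;
    // ... rest of the vertex shader (position transform, gl_Position) omitted
}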

I have not been able to decide whether my application will be faster if I define those 4 separate attributes, and let the GPU convert and transfer them separately... or whether loading them all into the shader as one attribute, then extracting the elements myself will be faster. That's my question, I guess.

Without the VAO (which is where I am before converting to v3.20/v1.50), the extra code to define and enable those extra attributes before each glDrawElements() call convinced me it was better to send that last row of the vertex structure as one attribute - to reduce overhead on the CPU. Once the VAO is in place, and the CPU need not define and enable those attributes, the choice is not so clear. Any idea which is faster with VAO?

idinev:
I can't get an OpenGL v3.20 context. In fact, I can't successfully compile a program with the wglCreateContextAttribs() function in it (function not recognized), and ditto for the new constants like WGL_CONTEXT_MAJOR_VERSION_ARB and so forth.

Which brings me back to a question I forgot (or never knew)... where are the WGL functions located (in what .lib file), and where are the WGL declarations (in what .h file)? My program includes the glee.h and glee.c files, but they don't seem to contain those symbols... so what do I need to do to create a compilable OpenGL v3.20 program?

NOTE: When I create a normal context with wglCreateContext(), the following line returns a "3.2.0" string in the "version" variable... thus I suppose a v3.20 context is being created.


const u08* version = glGetString (GL_VERSION);


However, some constants and functions I would expect to be defined in a v3.20 context are not defined, including:

GL_MAJOR_VERSION
GL_MINOR_VERSION
WGL_CONTEXT_MAJOR_VERSION
WGL_CONTEXT_MINOR_VERSION
WGL_CONTEXT_MAJOR_VERSION_ARB
WGL_CONTEXT_MINOR_VERSION_ARB
wglCreateContextAttribs()
wglCreateContextAttribsARB()

Also, to make the glGetAttribIPointer() function work in my program, I had to change the name to glGetAttribIPointerEXT(). However, the EXT should not be necessary in v3.20, correct?

So it seems like I need to do something to make the later declarations available. Perhaps GLEE and GLEW have simply fallen behind the times, and this explains my problems... I'm not sure. Perhaps I should try to find the nvidia headers and switch from GLEE to the nvidia stuff (though I had problems doing that last year when I last attempted this, and I finally gave up and continued on with GLEE). Advice?


[Edited by - maxgpgpu on December 28, 2009 6:00:03 AM]

GLee has not yet been upgraded to support GL3.2. AFAIK it supports only 3.1. For 3.2 support you could try the glew source code from its repository.

Or, if you don't want to use glee or glew, then the WGL_... defines are in the wglext.h file (google for it). And the wgl... functions must be loaded dynamically, like the rest of the non-v1.1 gl/wgl functions - through the wglGetProcAddress function.

So, I guess you mean "PFNWGLCREATECONTEXTATTRIBSARBPROC" was an unknown symbol?
Anyway, instead of waiting for Glee, Glew and whatnot to update, I suggest you try my way:

Visit http://www.opengl.org/registry/ and get the (latest versions of) header-files glext.h and wglext.h . They're always at:
http://www.opengl.org/registry/api/glext.h
http://www.opengl.org/registry/api/wglext.h

Download my http://dl.dropbox.com/u/1969613/openglForum/gl_extensions.h

In your main.cpp or wherever, put this code:

#define OPENGL_MACRO(proc,proctype)  PFN##proctype##PROC proc
#define OPTIONAL_EXT(proc,proctype)  PFN##proctype##PROC proc
#include "gl_extensions.h"   // this defines the variables (function-pointers)

static PROC uglGetProcAddress(const char* pName, bool IsOptional){
    PROC res = wglGetProcAddress(pName);
    if(res || IsOptional)return res;
    MessageBoxA(0, pName, "Missing OpenGL extention proc!", 0);
    ExitProcess(0);
}

void InitGLExtentions(){   // this loads all function-pointers
    #define OPENGL_MACRO(proc,proctype)  proc = (PFN##proctype##PROC)uglGetProcAddress(#proc,false)
    #define OPTIONAL_EXT(proc,proctype)  proc = (PFN##proctype##PROC)uglGetProcAddress(#proc,true)
    #include "gl_extensions.h"
}



In the source files where you need GL calls, do:

#include <gl/gl.h>
#include <gl/glext.h>
#include <gl/wglext.h>
#include "gl_extensions.h"



Voila, you're ready to use all GL symbols. And without waiting for glee/glew, you can add more funcs to the gl_extensions.h file as they become available (after a new extension comes out and a driver update supports it).
