maxgpgpu

OpenGL help: OpenGL/GLSL v2.10/v1.10 -- to -- v3.20/v1.50


I need to update a 3D simulation/graphics/game engine from OpenGL v2.10 plus GLSL v1.10 to OpenGL v3.20 plus GLSL v1.50, and it seems a bit overwhelming. I have the latest OpenGL/GLSL specifications from www.opengl.org/registry, and I installed the latest drivers for my GTX 285 GPU. Has anyone written a guide or checklist for performing such a conversion?

I see there is some kind of "compatibility profile" mechanism that seems like it should help, but I don't see how it works in practice (in the code). Does it let me update one feature at a time and still compile/execute/test code that contains a mix of old and new? If not, what does it do, and how do I enable it?

Everything in the current version is pretty much state-of-the-art v2.10 technique - nothing "old". For example, all index and vertex data is contained in IBOs and VBOs - no begin/end, no display lists, etc. Every vertex attribute is a generic attribute (no fixed built-in attributes). Has anyone gone through this and have tips, or know where tips are posted? Thanks.

[Edited by - maxgpgpu on January 3, 2010 11:52:25 PM]

The Quick Reference Card might be of help to you. It makes it pretty clear which functions and constants are now deprecated (and thus unusable in 3.2). That said, as long as you're already doing everything in shaders (no fixed function) and using generic attributes/uniforms as you say, you shouldn't have too much work ahead of you.

It'll be easy for you, then:
1) Load the core entry points (i.e. glGenFramebuffers instead of glGenFramebuffersEXT).
2) In shaders, replace the varying/attribute keywords with in/out; a replacement for gl_FragColor has to be declared manually; all built-in uniforms are gone; gl_Position stays (in almost all cases). See the sketch below.
3) LATC textures become RGTC.

The compatibility profile does what you expect it to do, so the transition can be done smoothly.
Wide lines are not supported in a forward-compatible profile.
There are lots of simplifications in the API, less strict FBOs, and lots of extra GLSL features.
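
To make point 2 concrete, here is a minimal sketch of a GLSL 1.10 shader pair rewritten for 1.50 (names like inPosition, vTexCoord and fragColor are just placeholders, not anything from a real engine):

// vertex shader: "attribute" and "varying" from #version 110 become "in" and "out"
#version 150
in  vec4 inPosition;        // was "attribute"
in  vec2 inTexCoord;
out vec2 vTexCoord;         // was "varying"
uniform mat4 mvpMatrix;     // the built-in gl_ModelViewProjectionMatrix is gone
void main() {
    vTexCoord   = inTexCoord;
    gl_Position = mvpMatrix * inPosition;       // gl_Position still exists
}

// fragment shader: gl_FragColor must now be declared by you
#version 150
in  vec2 vTexCoord;
out vec4 fragColor;         // user-declared replacement for gl_FragColor
uniform sampler2D colorMap;
void main() {
    fragColor = texture(colorMap, vTexCoord);   // texture2D() became texture()
}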

Shinkage:
Thanks, I printed the quick reference sheets, and they are helpful.

idinev:
I see where GLSL supports "#version 150 compatibility", but I haven't yet noticed what I need to do in my OpenGL code to request "compatibility" with both new and old features.

-----

I am confused to read that gl_FragDepth, gl_FragColor, and gl_FragData[4] are deprecated. Unless... perhaps all this means is that the automatic built-in declaration of these fragment output variables goes away, so we'll be forced to declare the outputs we need explicitly in our fragment shaders. But if they intend to remove those output variables entirely, then how does a fragment shader write a depth value, or 1 to 4+ color values, to the default framebuffer (or to 1 to 4+ attached framebuffer objects)? Surely they are not saying we can simply invent our own names for gl_FragDepth, gl_FragColor and gl_FragData[4+]... or are they? That seems silly, since the GPU hardware requires those values to perform the rest of its hardware-implemented operations (like storing the color(s) into 1 to 4+ framebuffers if the new depth is less than the existing depth).

-----

I've been meaning to put all my matrix, light and material information into uniform variables for quite some time. Now it seems like "uniform blocks" are just perfect for that. So that'll be a fair chunk of alteration, but hopefully not too complex.
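
For what it's worth, a minimal sketch of that idea (the block name, field names and binding point are made up for illustration): a named uniform block on the GLSL 1.50 side, plus the GL calls that attach a uniform buffer object to it.

// GLSL 1.50 side:
layout(std140) uniform TransformBlock {
    mat4 modelViewMatrix;
    mat4 projectionMatrix;
};

// C side, once after linking the program:
GLuint blockIndex = glGetUniformBlockIndex (program, "TransformBlock");
glUniformBlockBinding (program, blockIndex, 0);        // block -> binding point 0

GLuint ubo;
glGenBuffers (1, &ubo);
glBindBuffer (GL_UNIFORM_BUFFER, ubo);
glBufferData (GL_UNIFORM_BUFFER, 2 * 16 * sizeof(GLfloat), matrices, GL_DYNAMIC_DRAW);  // matrices = hypothetical CPU-side float[32]
glBindBufferBase (GL_UNIFORM_BUFFER, 0, ubo);          // UBO -> binding point 0

With std140 the member offsets are fixed by the spec, so the CPU-side copy can be laid out to match without querying offsets at run time.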

-----

I worry most about setting up one or more VAOs to reduce rendering overhead. No matter how many times I read discussions of VAOs, I never fully understand (though I always "almost" understand). Since all my vertices have identical attributes, perhaps all I need is one VAO, into which I jam the appropriate IBO before I call glDrawElements() or glDrawElementsInstanced(). Oh, I guess I also need to bind the appropriate VBO before calling these functions. OTOH, sometimes I think I need a separate VAO for every VBO... but I just dunno.

-----

I'm planning to send 4 color components (RGBA), 2 tcoords (u,v), 1 texture ID, and a bunch of flag bits into the shader as a single uvec4 vertex attribute. Currently they are all converted from u16 integers to floats automatically, but I figured packing them all into one uvec4 would be more efficient. This means I need to add code to my vertex shader to unpack the u16 fields from the u32 components of the uvec4 variable, then convert them to floats with something like "vf32 = vu16 / 65535.0". I originally figured this would be faster overall by taking load off the CPU - but on second thought, this work isn't done by the CPU at all, since it happens when the GPU fetches the elements of my vertices, which sit inside VBOs in GPU memory. Hmmm. Does anyone know how much overhead exists for performing all these integer-to-float conversions when vertex elements are loaded into the vertex shader? If that's efficient, I should leave the setup as is, except for the two fields I need in their native integer form (texture number and flag bits).
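
For reference, a sketch of what the unpacking might look like in a GLSL 1.50 vertex shader (all names are placeholders, and which 16 bits count as "low" depends on how the CPU code packs each pair):

#version 150
in  vec4  position;
in  uvec4 packedAttr;           // .x=color.rg  .y=color.ba  .z=tcoord.uv  .w=texindex+flags
uniform mat4 mvpMatrix;
out vec4 color;
out vec2 tcoord;
flat out uint textureIndex;     // integer outputs must be declared "flat"
flat out uint flagBits;
void main() {
    float inv = 1.0 / 65535.0;
    color  = vec4(packedAttr.x & 0xFFFFu, packedAttr.x >> 16,
                  packedAttr.y & 0xFFFFu, packedAttr.y >> 16) * inv;
    tcoord = vec2(packedAttr.z & 0xFFFFu, packedAttr.z >> 16) * inv;
    textureIndex = packedAttr.w & 0xFFFFu;
    flagBits     = packedAttr.w >> 16;
    gl_Position  = mvpMatrix * position;
}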

-----

I guess texture coordinates can be passed from vertex to fragment shader as normal/generic output variables (formerly varying AKA interpolating).

-----

As long as I can change and test one item at a time, I should be okay. To change everything at once would surely create havoc. So again, how does my OpenGL code tell OpenGL to allow a mix of new and old features?

-----

Thanks for the tips.

For GL 3.2, you need to request a GL 3.2 context with wglCreateContextAttribsARB(). An old-style context must already be current so wglGetProcAddress() can fetch that entry point:

wglMakeCurrent(hDC, hRC);    // hRC = old-style context from wglCreateContext()

PFNWGLCREATECONTEXTATTRIBSARBPROC wglCreateContextAttribsARB =
    (PFNWGLCREATECONTEXTATTRIBSARBPROC)wglGetProcAddress("wglCreateContextAttribsARB");
if(!wglCreateContextAttribsARB){
    MessageBoxA(0,"OpenGL3.2 not supported or enabled in NVemulate",0,0);
}else{
    int attribs[] = {
        WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
        WGL_CONTEXT_MINOR_VERSION_ARB, 2,
        WGL_CONTEXT_FLAGS_ARB, 0, //WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB, <--------- ADD COMPATIBLE-FLAG HERE
        0, 0
    };

    HGLRC gl3Ctx = wglCreateContextAttribsARB(hDC, 0, attribs);
    if(gl3Ctx){
        wglDeleteContext(hRC);       // the temporary old-style context is no longer needed
        hRC = gl3Ctx;
        wglMakeCurrent(hDC, hRC);
        //prints("opengl3.2 success");
        //prints((char*)glGetString(GL_VERSION));
    }else{
        MessageBoxA(0,"OpenGL3.2 context did not init, staying in 2.1",0,0);
    }
}



gl_FragDepth stays. You just explicitly declare the fragment outputs yourself:
"out vec4 glFragColor;" or "out vec4 glFragData[3];" - notice the dropped '_' (the gl_ prefix is reserved, so you choose your own names).




Perhaps this will make the usage of VAOs clear:


static void DrawVBO_Indexed(ILI_VBO* me){
    if(me->isDwordIndex){
        glDrawElements(me->PrimitiveType, me->iboSize/4, GL_UNSIGNED_INT,   NULL);
    }else{
        glDrawElements(me->PrimitiveType, me->iboSize/2, GL_UNSIGNED_SHORT, NULL);
    }
}
static void DrawVBO_NonIndexed(ILI_VBO* me){
    glDrawArrays(me->PrimitiveType, 0, me->numVerts);
}

#define USE_VAO

void ilDrawVBO(ILVBO vbo){

    if(vbo->glVAO){                  // fast path: the VAO already captures all vertex-array state
        glBindVertexArray(vbo->glVAO);
        if(vbo->glIBO){              // we have an index buffer
            DrawVBO_Indexed(vbo);
        }else{
            DrawVBO_NonIndexed(vbo);
        }
        glBindVertexArray(0);
        return;                      // done drawing it
    }

    // slow path: no VAO, so set up the vertex arrays by hand on every draw
    if(vbo->numVerts==0)return;
    if(vbo->vtxSize==0)return;
    ClearCurClientArrays();          // clears VAs
    BindVBO(vbo);                    // binds VAs (with their VBO) and IBO; flags them as to-be-enabled
    EnableCurClientArrays();         // enables the flagged VAs

    if(vbo->glIBO){                  // we have an index buffer
        DrawVBO_Indexed(vbo);
    }else{
        DrawVBO_NonIndexed(vbo);
    }
}

Quote:
Original post by maxgpgpu
I am confused to read that gl_FragDepth, gl_FragColor, gl_FragData[4] are deprecated...


Read Section 7.2 of the GLSL 1.5 spec.

Quote:
Original post by maxgpgpu
I worry most about setting up one or more VAOs to reduce rendering overhead. No matter how many times I read discussions of VAOs, I never fully understand (though I always "almost" understand).


VAOs just allow you to "collect" all the state associated with your vertex arrays and VBOs into a single object that can be reused. While a VAO is bound, it stores all the state changes made by glVertexAttribPointer and glEnableVertexAttribArray calls, and it re-applies that state whenever it is bound again. Further, all rendering calls draw their vertex-array state from the bound VAO. A newly created VAO provides the default state defined in Section 6.2 of the spec.

Quote:
Original post by maxgpgpu
This means I need to add code to my vertex shader to unpack the u16 fields from the u32 components of the uvec4 variable, then convert them to floats with something like "vf32 = vu16 / 65535.0".


Whatever format you're going to need them to be when you use them should really be the format you send them to the GPU as, unless there's a really good reason not to. Why would you need to convert flag bits to floats? Not really clear on what's going on here.

Quote:
Original post by maxgpgpu
I guess texture coordinates can be passed from vertex to fragment shader as normal/generic output variables (formerly varying AKA interpolating).


Yes, the varying and attribute storage qualifiers are deprecated. You can specify the type of interpolation using smooth, flat, and noperspective qualifiers.
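
Worth noting for the integer fields mentioned above (texture index, flag bits): integer outputs passed from the vertex shader to the fragment shader must be flat-qualified in GLSL 1.50, for example (names are placeholders):

smooth out vec2 tcoord;         // default: perspective-correct interpolation
flat   out uint textureIndex;   // integers cannot be interpolated, so "flat" is required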

Quote:
Original post by maxgpgpu
As long as I can change and test one item at a time, I should be okay. To change everything at once would surely create havoc. So again, how does my OpenGL code tell OpenGL to allow a mix of new and old features?


See: WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB

Quote:
Original post by Shinkage
VAOs just allow you to "collect" all the state associated with your vertex arrays and VBOs into a single object that can be reused. While a VAO is bound, it stores all the state changes made by glVertexAttribPointer and glEnableVertexAttribArray calls, and it re-applies that state whenever it is bound again. Further, all rendering calls draw their vertex-array state from the bound VAO. A newly created VAO provides the default state defined in Section 6.2 of the spec.
Since every vertex in my application has the same layout, I take this to mean I can simply create and bind one VAO, and just leave it permanently bound. Then, to draw the primitives in each VBO, I simply bind that VBO, then bind the IBO that contains indices into it - then call glDrawElements(). That would be very nice... no redefining attribute offsets and reenabling attributes before every glDrawElements().
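
One caveat worth spelling out (a sketch under the assumption of a single shared 64-byte vertex layout; buffer names and attribute indices are placeholders): the GL_ARRAY_BUFFER binding itself is not VAO state - the VAO records which buffer was bound when each glVertexAttribPointer call was made, while the GL_ELEMENT_ARRAY_BUFFER binding is stored directly in the VAO. So simply binding a different VBO under one permanently-bound VAO does not repoint the attributes; the usual pattern is one VAO per VBO/IBO pair, set up once:

// setup, once per VBO/IBO pair (vbo, ibo = already-created buffer objects):
GLuint vao;
glGenVertexArrays (1, &vao);
glBindVertexArray (vao);
glBindBuffer (GL_ARRAY_BUFFER, vbo);             // not VAO state by itself...
glVertexAttribPointer (0, 4, GL_FLOAT, GL_FALSE, 64, (void*)0);  // ...but this call records vbo in the VAO
glEnableVertexAttribArray (0);
// ... remaining attributes ...
glBindBuffer (GL_ELEMENT_ARRAY_BUFFER, ibo);     // the index buffer binding IS stored in the VAO
glBindVertexArray (0);

// at draw time, one bind replaces all the pointer/enable calls:
glBindVertexArray (vao);
glDrawElements (GL_TRIANGLES, indexCount, GL_UNSIGNED_SHORT, 0);
glBindVertexArray (0);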

Quote:
Shinkage
Whatever format you're going to need them to be when you use them should really be the format you send them to the GPU as, unless there's a really good reason not to. Why would you need to convert flag bits to floats? Not really clear on what's going on here.
Here is what my vertices look like (with each 32-bit element separated by :colons:):

position.x : position.y : position.z : east.x
zenith.x : zenith.y : zenith.z : east.y
north.x : north.y : north.z : east.z
color.rg : color.ba : tcoord.xy : tcoord.z + flagbits

Note that "zenith", "north", "east" are just descriptive names I give to the surface vectors (more often called "normal", "tangent", "bitangent"). Note how I put the "east" vector into the .w component of the other vectors, so I can tell OpenGL/GLSL to load all four vec3 elements I need in only three vec4 attributes. This leaves the final row of elements. In my OpenGL code, each of the "color" and "tcoord" elements are u16 variables in the CPU version of my vertices, to get the most dynamic range and precision possible in 16-bits. But "color.rgba" and "tcoord.xy" must be converted to f32 variables IN the shader, or ON THE WAY to the shader, because that's most natural for shader code. However, what I call "tcoord.z" needs to stay an integer in the shader, because that value selects the desired texture --- from the one texture array that my program keeps all textures in (and normal-maps, and potentially lookup-tables and more). The flagbit integer also needs to stay an integer, because those bits tell the shader how to combine color.rgba with texture-color.rgba, how to handle alpha, whether to perform bump-mapping, horizon-mapping, etc.

In my OpenGL v2.10 code those tcoord.z and flagbits values are floating point, which I've managed to make work via kludgery, but obviously the code will become cleaner the moment they become the integers they should be.

Okay, now finally to my point and question.

I could define one attribute to make the GPU load and convert the u16 color.rgba values into a vec4 color.rgba variable, and another attribute to make the GPU load and convert the u16 tcoord.xy values into a vec2 tcoord.xy variable, and another attribute to make the GPU load the u16 tcoord.z value into an integer variable, and another attribute to make the GPU load the u16 flagbits value into an integer variable.

OR

I could define one attribute to make the GPU load the entire final row of values (color.rgba, tcoord.xy, tcoord.z, flagbits) into a single uvec4 variable, then convert them to the desired types with my own vertex shader code. My code would need to isolate the 8 u16 values, convert the first 6 of those values into f32 variables by multiplying them by (1/65535). The last 2 u16 values are perfectly fine as u16 values (in most paths through my shaders).

I have not been able to decide whether my application will be faster if I define those 4 separate attributes and let the GPU convert and transfer them separately, or whether loading them all into the shader as one attribute and extracting the elements myself will be faster. That's my question, I guess.

Without a VAO (which is where I am before converting to v3.20/v1.50), the extra code to define and enable those extra attributes before each glDrawElements() call convinced me it was better to send that last row of the vertex structure as one attribute - to reduce overhead on the CPU. Once the VAO is in place and the CPU no longer needs to define and enable those attributes on each draw, the choice is not so clear. Any idea which is faster with a VAO?
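
For reference, the two alternatives written out (attribute indices and byte offsets are illustrative, assuming the 64-byte layout above). Either way the conversion itself happens on the GPU; the trade-off is four attribute fetches versus one fetch plus a few shader instructions:

// Option A: four attributes; the vertex fetch normalizes the u16s to floats
glVertexAttribPointer  (3, 4, GL_UNSIGNED_SHORT, GL_TRUE, 64, (void*)48);  // color.rgba -> vec4 in 0..1
glVertexAttribPointer  (4, 2, GL_UNSIGNED_SHORT, GL_TRUE, 64, (void*)56);  // tcoord.xy  -> vec2 in 0..1
glVertexAttribIPointer (5, 1, GL_UNSIGNED_SHORT,          64, (void*)60);  // texture #  stays integer
glVertexAttribIPointer (6, 1, GL_UNSIGNED_SHORT,          64, (void*)62);  // flag bits  stays integer

// Option B: one uvec4 attribute covering the whole 16-byte row, unpacked in the shader
glVertexAttribIPointer (3, 4, GL_UNSIGNED_INT,            64, (void*)48);  // see the earlier GLSL unpack sketch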

idinev:
I can't get an OpenGL v3.20 context. In fact, I can't even compile a program with the wglCreateContextAttribsARB() function in it (the function is not recognized), and ditto for the new constants like WGL_CONTEXT_MAJOR_VERSION_ARB and so forth.

Which brings me back to a question I forgot (or never knew): where do the WGL functions live (in what .lib file), and where are the WGL declarations (in what .h file)? My program includes the glee.h and glee.c files, but they don't seem to contain those symbols... so what do I need to do to create a compilable OpenGL v3.20 program?

NOTE: When I create a normal context with wglCreateContext(), the following line returns a "3.2.0" string in the "version" variable... so I suppose a v3.20 context is being created.


const u08* version = glGetString (GL_VERSION);


However, some constants and functions I would expect to be defined in a v3.20 context are not defined, including:

GL_MAJOR_VERSION
GL_MINOR_VERSION
WGL_CONTEXT_MAJOR_VERSION
WGL_CONTEXT_MINOR_VERSION
WGL_CONTEXT_MAJOR_VERSION_ARB
WGL_CONTEXT_MINOR_VERSION_ARB
wglCreateContextAttribs()
wglCreateContextAttribsARB()

Also, to make the glVertexAttribIPointer() function work in my program, I had to change the name to glVertexAttribIPointerEXT(). However, the EXT should not be necessary in v3.20, correct?

So it seems like I need to do something to make the newer declarations available. Perhaps GLEE and GLEW have simply fallen behind the times, and that explains my problems... I'm not sure. Perhaps I should try to find the nvidia headers and switch from GLEE to the nvidia stuff (though I had problems doing that last year when I last attempted this, and I finally gave up and continued on with GLEE). Advice?


[Edited by - maxgpgpu on December 28, 2009 6:00:03 AM]

GLee has not yet been upgraded to support GL3.2. Afaik it supports only 3.1. For 3.2 support you could try the glew source code from its repository.

Or, if you don't want to use glee or glew, the WGL_... defines are in the wglext.h file (google for it), and the wgl... functions must be loaded dynamically - through wglGetProcAddress, like the rest of the non-v1.1 gl/wgl functions.

So, I guess you mean "PFNWGLCREATECONTEXTATTRIBSARBPROC" was an unknown symbol?
Anyway, instead of waiting for Glee, Glew and whatnot to update, I suggest you try my way:

Visit http://www.opengl.org/registry/ and get the (latest versions of) header-files glext.h and wglext.h . They're always at:
http://www.opengl.org/registry/api/glext.h
http://www.opengl.org/registry/api/wglext.h

Download my http://dl.dropbox.com/u/1969613/openglForum/gl_extensions.h

In your main.cpp or wherever, put this code:

#define OPENGL_MACRO(proc,proctype) PFN##proctype##PROC proc
#define OPTIONAL_EXT(proc,proctype) PFN##proctype##PROC proc
#include "gl_extensions.h" // this defines the variables (function-pointers)

static PROC uglGetProcAddress(const char* pName, bool IsOptional){
    PROC res = wglGetProcAddress(pName);
    if(res || IsOptional)return res;
    MessageBoxA(0,pName,"Missing OpenGL extension proc!",0);
    ExitProcess(0);
}
void InitGLExtentions(){ // this loads all function-pointers
    #define OPENGL_MACRO(proc,proctype) proc = (PFN##proctype##PROC)uglGetProcAddress(#proc,false)
    #define OPTIONAL_EXT(proc,proctype) proc = (PFN##proctype##PROC)uglGetProcAddress(#proc,true)
    #include "gl_extensions.h"
}



In the source files where you need GL calls, do:

#include <gl/gl.h>
#include <gl/glext.h>
#include <gl/wglext.h>
#include "gl_extensions.h"



Voila, you're ready to use all GL symbols. And without waiting for glee/glew, you can add more funcs to the gl_extensions.h file as they become available (after a new extension comes out and a driver update supports it).
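
The body of gl_extensions.h is presumably just one entry per function, using the middle part of the PFN...PROC typedef names from glext.h/wglext.h as the second argument (plus whatever the file does to allow being included twice) - something like:

OPENGL_MACRO(glGenVertexArrays,          GLGENVERTEXARRAYS);
OPENGL_MACRO(glBindVertexArray,          GLBINDVERTEXARRAY);
OPENGL_MACRO(wglCreateContextAttribsARB, WGLCREATECONTEXTATTRIBSARB);
OPTIONAL_EXT(glVertexAttribIPointer,     GLVERTEXATTRIBIPOINTER);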

idinev:
Just out of curiosity, why is the technique you mention easier than adding new code to the glee.h and glee.c files? In fact, I suppose we could just create our own gl_extensions.h and gl_extensions.c files, then add #include statements to glee.h and glee.c to pull in the new functions we need.

Or am I missing something about how this works?

Also, the way you do it, doesn't your whole OpenGL program need to be filled with function-pointer syntax instead of normal C function-call syntax? Or not? The way GLEE works, my application simply calls all OpenGL functions the same way it calls all other functions.

When I searched for and visited the GLEE and GLEW websites, they say they are only updated to OpenGL v3.00. I have a feeling they don't know how to handle the new "compatibility" capabilities, and didn't want to create two separate versions (core and back-compatible). Just a guess. However, I suppose it is possible the GLEW hosted at the www.opengl.org website has been updated to v3.20 by the OpenGL folks, so I'll check (though I doubt it).

-----

STATUS : OpenGL v3.20
I still can't call the new WGL functions, but fortunately the old context-creation function gives me a "v3.20 compatibility" context... sorta, anyway. I say "sorta" because it didn't recognize some v3.20 functions. For example, I had to write glVertexAttribIPointerEXT() instead of glVertexAttribIPointer() in my code. But at least the function is available that way, so I was able to switch the vertex attributes that should be integers over to integers.

STATUS : GLSL v1.50
I changed my GLSL v1.10 shader code to GLSL v1.50 code, and that worked just fine (given "#version 150 compatibility" in both shaders). I haven't replaced the "compatibility" with "core" yet, but I doubt that will give me any hassle.

Mildly said, glee and glew are bloatware when it comes to GL3.2. And no, they also use func-ptrs. Look in the header and src of glee:

#ifndef GLEE_H_DEFINED_glBindFramebuffer
#define GLEE_H_DEFINED_glBindFramebuffer
typedef void (APIENTRYP GLEEPFNGLBINDFRAMEBUFFERPROC) (GLenum target, GLuint framebuffer);
GLEE_EXTERN GLEEPFNGLBINDFRAMEBUFFERPROC GLeeFuncPtr_glBindFramebuffer;
#define glBindFramebuffer GLeeFuncPtr_glBindFramebuffer
#endif



#ifndef GLEE_C_DEFINED_glBindFramebuffer
#define GLEE_C_DEFINED_glBindFramebuffer
void __stdcall GLee_Lazy_glBindFramebuffer(GLenum target, GLuint framebuffer) {if (GLeeInit()) glBindFramebuffer(target, framebuffer);}
GLEEPFNGLBINDFRAMEBUFFERPROC GLeeFuncPtr_glBindFramebuffer=GLee_Lazy_glBindFramebuffer;
#endif


With my approach, when you want another proc, you just paste its name in the tiny .h .

I guess my other question is: to switch to your technique from GLEE or GLEW, don't you need to define dozens if not hundreds of functions and constants in your file (all those functions not in the super-oldie-and-moldie OpenGL v1.1, or whatever macroshaft supports in their ancient opengl32.lib file)?

I mean, declaring functions in a .h file is one thing, but making sure function calls invoke actual (new) functions is something else, right?

I have a feeling I'm missing something obvious and important.

PS: Not trying to be obstinate. I'm just about convinced to attempt your technique, but just a bit gun shy (knowledge short).

Look in glext.h. Symbols of the form PFN%procname%PROC are defined there. They are meant to be loaded via wglGetProcAddress(), right after you create the GL context.

Let's look at the difference in the asm code, by comparing calls to glGenTextures and glGenBuffers: one is defined in the DLL, the other we manually load.


:0040F71A 53 push ebx
:0040F71B 6A01 push 00000001
:0040F71D FF158C114100 call dword ptr [0041118C] ; glGenTextures

:0040F723 53 push ebx
:0040F724 6A01 push 00000001
:0040F726 FF15B8574100 call dword ptr [004157B8] ; glGenBuffers

Identical! They both use the "FF,15" instruction.

Older compilers would generate something worse for the DLL path:

push ebx
push 1
call __imp_glGenTextures
....

__imp_glGenTextures:
jmp [offset overwritten by OS on module load]


Quote:

STATUS : GLSL v1.50
I changed my GLSL v1.10 shader code to GLSL v1.50 code, and that worked just fine (given "#version 150 compatibility" in both shaders). I haven't replaced the "compatibility" with "core" yet, but I doubt that will give me any hassle.


Were there major differences between your 1.10 version and the 1.50 version? Are the two versions essentially identical? I am asking because I have yet to get a GL 3.x-ready graphics card. My card only handles up to GLSL 1.20.

Quote:
Original post by andy_boy
Quote:

STATUS : GLSL v1.50
I changed my GLSL v1.10 shader code to GLSL v1.50 code, and that worked just fine (given "#version 150 compatibility" in both shaders). I haven't replaced the "compatibility" with "core" yet, but I doubt that will give me any hassle.

Were there major differences between your 1.10 version and the 1.50 version? Are the two versions essentially identical? I am asking because I have yet to get a GL 3.x-ready graphics card. My card only handles up to GLSL 1.20.
Yes, the versions are essentially identical - meaning I just changed syntax from v1.10 to v1.50 but didn't add or remove any functionality. I did change a vertex attribute from float to uint, because that attribute is just a flag the shader tests to decide whether to normal-map or not, and/or texture or not, and so forth.

idinev:
You convinced me to change to your way... someday... but as luck would have it, GLEW just released a new version for OpenGL v3.20, so I adopted that instead for the short run. My program compiles and runs now, but has a strange problem that perhaps you will recognize.

The problem is, when I create a new-style context with wglCreateContextAttribsARB(hdc,0,attribs), almost nothing displays. When I uncomment the line "// error = 1;" below - to adopt the old-style context via wglCreateContext() - the program displays all 3D objects correctly (as it did before I switched from GLEE to GLEW and created the new-style context).

Any ideas why this would happen? Actually, I do see a few line objects on some frames once in a while, but no objects built from triangles (of which there are dozens).


//
// if no OpenGL context has been created yet,
// GLEW has not been initialized and we cannot call new WGL functions
// like wglCreateContextAttribsARB(), so we must do it the old way first
//
error = 0;
if (igstate.context_initialized == 0) {
    xrc = wglCreateContext (hdc);                        // temporary old-style OpenGL context
    success = wglMakeCurrent (hdc, xrc);                 // make OpenGL render context active
    if (xrc == 0) { return (CORE_ERROR_INTERNAL); }      // WGL or OpenGL is busted
    if (success == 0) { return (CORE_ERROR_INTERNAL); }  // WGL or OpenGL is busted
    //
    // initialize GLEW or GLEE
    // --- an OpenGL context must exist before GLEW is initialized
    // --- then we can call "new" functions like wglCreateContextAttribsARB() and so on
    //
#ifndef NOGLEE
#ifdef GLEW_STATIC
    error = glewInit();
    if (error != GLEW_OK) {
        fprintf (stderr, "Error: %s\n", glewGetErrorString(error));              // report error
    }
    fprintf (stdout, "status: GLEW initialized %s\n", glewGetString(GLEW_VERSION));
#endif
#endif
}
//
// set desired attributes of new-style OpenGL context
//
int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
//  WGL_CONTEXT_FLAGS_ARB, 0,
    0, 0
};
//
// create new-style OpenGL context
//
const c08* report;
//error = 1;
if (error == GLEW_OK) {  // we initialized GLEW, so we can now create a new-style context
    hrc = wglCreateContextAttribsARB (hdc, 0, attribs);  // create OpenGL v3.20 render context
    if (hrc) {
        wglDeleteContext (xrc);                          // delete temporary old-style OpenGL context
        success = wglMakeCurrent (hdc, hrc);             // make OpenGL render context active
    } else {
        hrc = xrc;   // cannot create new-style OpenGL context, so fall back to the old-style context
        fprintf (stdout, "status: could not create new-style OpenGL context\n"); // report problem
    }
} else {
    hrc = xrc;       // cannot create new-style OpenGL context, so fall back to the old-style context
}
//
// report OpenGL version the first time an OpenGL context is created
//
int major = 0;
int minor = 0;
if (igstate.context_initialized == 0) {
    igstate.context_initialized = 1;                     // an OpenGL context has been initialized
    report = (const c08*) glGetString (GL_VERSION);      // get OpenGL version string (check)
    glGetIntegerv (GL_MAJOR_VERSION, &major);            // get OpenGL version : major
    glGetIntegerv (GL_MINOR_VERSION, &minor);            // get OpenGL version : minor
    fprintf (stdout, "version string == \"%s\" ::: major.minor == %d.%d\n", report, major, minor);
}

idinev:
I found a way to fix the problem I described above, by including one more name/value pair in the attribs[] array (one that you didn't have in your example code). The new line is:

WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,

in the following:

int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,
    0, 0
};

I'm not sure why I/we/they need both of those compatibility bits. Do you?

Quote:
Yes, the versions are essentially identical - meaning I just changed syntax from v1.10 to v1.50 but didn't add or remove any functionality. I did change a vertex attribute from float to uint, because that attribute is just a flag the shader tests to decide whether to normal-map or not, and/or texture or not, and so forth.


Glad to hear. By the way are there any books, tutorials, web sites that you would recommend for a beginner to learn GLSL programming?

I just didn't need to set that bit :) . I only need core-profile (moving my 2.1 stuff to 3.1 was quick and painless, moving to 3.2 took 5 minutes).


There's "forward compatibility" and "backward compatibility". Probably you don't need the forward-compatibility, replace it with 0.

Quote:
Original post by idinev
I just didn't need to set that bit :) . I only need core-profile (moving my 2.1 stuff to 3.1 was quick and painless, moving to 3.2 took 5 minutes).

There's "forward compatibility" and "backward compatibility". Probably you don't need the forward-compatibility, replace it with 0.
Since my code works in compatibility mode but not core mode, is there an easy way to make it list (or generate errors on) the functions I should not be calling? That sure would make it easier to figure out where are the places I need to upgrade.

One thing I see is... I need to remove where I set the MODELVIEW and PROJECTION matrix. Presumably I'm supposed to multiply those on the CPU and put a single matrix into the GPU as a uniform matrix, huh? Or I suppose I could put those two matrices into a uniform block and have the GPU do the multiply. I assume I just need to mat4x4 singlemat = modelviewmat * projectionmat; or something like that.

[Edited by - maxgpgpu on January 4, 2010 10:51:09 PM]

Just check with glGetError.
E.g. when I use core 3.2 in forward-compatible mode, glGetError screams if I try to draw wide lines. Also, keep a Radeon HDxxxx at hand, to test against unforgiving OpenGL implementations.

About the matrices: yes, compute them on the CPU with any maths lib you like. Uploading the matrices can be done in many ways; just get things up and running with simple glUniformMatrixXX calls, optimizations can wait (they need usage-scenario tuning and are not a single path). In most places a premultiplied MVP is better, but not really a must-have imho. (Mat4 MVP = projectionmat * modelviewmat;)
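
A minimal sketch of the glUniformMatrix route (the uniform name and the math helper are placeholders, not a specific library):

GLfloat mvp[16];                                // column-major: projection * modelview
ComputeModelViewProjection (mvp);               // hypothetical: whatever CPU-side math code you use

GLint loc = glGetUniformLocation (program, "mvpMatrix");
glUseProgram (program);
glUniformMatrix4fv (loc, 1, GL_FALSE, mvp);     // GL_FALSE: the array is already column-major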

idinev:
How do I tell whether my application is fully updated to OpenGL/GLSL v3.20/v1.50 ???

After making lots of changes in OpenGL and GLSL, I changed my GLSL shaders from

#version 150 compatibility

to

#version 150 core

and the application still runs and everything looks the same.

This makes me fairly confident my shaders are updated (except for trying named uniform blocks).

But is my main application updated to v3.20 or not? How do I tell? By setting different values in the attribs[] array before calling wglCreateContextAttribsARB() - as follows? The application seems to run correctly with the following settings.


int attribs[] = {
    WGL_CONTEXT_MAJOR_VERSION_ARB, 3,
    WGL_CONTEXT_MINOR_VERSION_ARB, 2,
//  WGL_CONTEXT_FLAGS_ARB, 0,                                                 // support deprecated features
    WGL_CONTEXT_FLAGS_ARB, WGL_CONTEXT_FORWARD_COMPATIBLE_BIT_ARB,            // do NOT support "deprecated" features
    WGL_CONTEXT_PROFILE_MASK_ARB, WGL_CONTEXT_COMPATIBILITY_PROFILE_BIT_ARB,  // what does this mean ??? exactly what is a "profile"
//  WGL_CONTEXT_PROFILE_MASK_ARB, 0,                                          // what does this mean ??? exactly what is a "profile"
    0, 0
};




When I'm sure everything is "core" OpenGL v3.20 and GLSL v1.50, then I'll add a named uniform block (and uniform buffer object) for all uniform variables, and I'll try to make the VAO work. I guess apps with IBOs and VBOs are still valid without VAOs in the current versions --- and maybe all future versions too?

Quote:
what does this mean ??? exactly what is a "profile"

You can read about that in the OpenGL 3.2 specification, Appendix E, titled "Core and Compatibility Profiles".
Basically: 3.2 core profile = new 3.2 features + forward-compatible 3.1 features;
3.2 compatibility profile = new 3.2 features + 3.1 features + the ARB_compatibility extension.

Quote:
Original post by bubu LV
Quote:
what does this mean ??? exactly what is a "profile"

You can read about that in the OpenGL 3.2 specification, Appendix E, titled "Core and Compatibility Profiles".
Basically: 3.2 core profile = new 3.2 features + forward-compatible 3.1 features;
3.2 compatibility profile = new 3.2 features + 3.1 features + the ARB_compatibility extension.

What is a "forward compatible v3.10 feature"?

Did v3.20 drop any features that were in v3.10?

PS: I read those pages twice, and still I'm confused.

