OpenGL updating???


Hi. Now that I'm taking a CG course at university, I'm forced to learn OpenGL. The main difference compared to DX is the releases. I've looked everywhere. Where can I get the newest OpenGL files? The newest I found was the OpenGL SDK at SGI, but the version inside was just 1.2, along with the ext GL files.

For general understanding (I'm sure this topic has been discussed a lot): I have an ATI graphics card, and I found an SDK on the AMD/ATI website with some ATI files inside. What are these files for? What is the difference between the functions in these files and those of the standard OpenGL version (from SGI?)? I mean, if I use a *_*_ATI or ATI_*_* or whatever function, is it also available on other manufacturers' graphics cards? I think so... but come on, then this isn't better than using D3D. Where is the advantage of using those extensions when they are so hardware dependent?

From the GameDev FAQ list I found the article "Beyond ogl 1.1" or so, and it says that it's safe to use functions with an ARB or EXT suffix or prefix. Does this mean they run on every graphics card, even when functions of this kind are defined in the ATI header files?! I'm also asking because everywhere I read about OpenGL 2.0, but I have only found 1.2... I mean, come on, what's wrong? Where can I find these libs and headers? Please, I need suggestions on which files I should use. Is it safe to use the ATI files, for example?

Oh man, D3D was so good to me, and now I have to use OpenGL. :( Now I can program hardware-specific apps instead of OS-specific stuff (DX, you understand). But although all this is happening to me, I want to (and have to) use it, because I'm also interested in it! I need some help please!

Thanks
Alex

Accessing the OpenGL API is a little different from Direct3D. With Direct3D you download the SDK, point your compiler at the headers and your linker at the libs, pretty much like any other SDK out there. Then, as long as the user has the runtime you built against, your game should run.

With OpenGL, your compiler will typically come with the headers and the lib file, but they will likely be out of date (usually OpenGL 1.2, possibly 1.4). You can get up-to-date headers with the latest and greatest extensions from places like SGI and opengl.org, but you will have to obtain a function pointer at runtime using calls such as wglGetProcAddress.

The "runtime" for OpenGL is provided by whoever produces the driver for your video card. This is where the procedures live, and different drivers are going to have different levels of support for extensions. So if you are using an extension, you need to make sure the driver supports it at runtime. To make this easier there are libraries such as GLee.

So the basic procedure is: get GLee so that you have the latest OpenGL headers and don't have to go through contortions to load extensions. Make sure your compiler has a gl.h and an opengl32.lib so that you have "core" OpenGL functionality. Test the availability of extensions at runtime before you use them so your program does not crash. And make sure you have the latest driver from your video card vendor so that you have the latest OpenGL runtime. Couldn't be simpler ;).

Hi CodeMunkie.

This sounds simple... you are right.
You said that I should use the latest graphics drivers.
But how does new functionality coincide with old versions of OpenGL?
In the ATI SDK there is no opengl.dll or opengl32.dll. So how can new
functionality be used when I'm just running with the opengl32.dll
from version 1.1? And what about shader support?
In D3D I just used D3DXCompileShaderFrom*(...) and Create*Shader(), then I could bind it with Set*Shader()...
How is this done in OpenGL?

Maybe this becomes clearer once I have used the extensions...
But right now everything is a bit confusing.

Alex

OK, GLee. I found it quickly, because I knew it is
located in the OpenGL SDK at opengl.org.

Now this sounds much better: "up to 2.1".

Alex

This is actually a good question, man. I had a lot of trouble myself trying to get the latest version of OpenGL.

I’m still slightly confused about the subject, for instance, if a game boasts that it supports “OpenGL 2.1”, exactly how many extensions from the latest spec is it required to implement to make such a claim?

Oh man....

First: OpenGL support is two-ended. 1) You need the proper files to compile against, and 2) you need the proper drivers to run.

1) The proper files have been mentioned. I'd also direct you to GLEW (I don't know if it is the same as GLee)
in order to get all the function pointers for everything ready to go. If you look in the headers, they are broken into sections,
so it should be easy to see which extensions form the basis of each core release (i.e. 2.0 or 1.1).

Any WGL_ extension will be available on ALL Windows machines WITH appropriate hardware support.
Any GLX_ extension will be available on ALL Linux/X-server machines WITH appropriate hardware support.
Any NVIDIA_ or ATI_ extension will ONLY be available on that vendor's cards, of the appropriate hardware level.

Shader support is through the ARB_vertex_program and ARB_fragment_program extensions, or in GL 2.0 through glCreateShader/glCompileShader.


2) Using OpenGL support at the highest version is as easy as getting the latest drivers for your card. Then you are done.

OpenGL has different "levels", I guess you could say, of extensions. You have hardware-specific extensions supported only by the vendor that developed the extension (or possibly by a very few others), and then you have extensions that are supported by multiple vendors. An extension has a prefix that describes where it came from and can also give you a clue about how widely it is supported. Here is a list of some of the prefixes (just to name a few):

ARB – Extensions officially approved by the OpenGL Architecture Review Board
EXT – Extensions agreed upon by multiple OpenGL vendors
HP – Hewlett-Packard
INTEL – Intel
NV – NVIDIA corp
ATI - ATi/AMD corp
SGI – Silicon Graphics
WIN – Microsoft

Once enough vendors agree to support a certain extension, that extension gets promoted from vendor specific to EXT or ARB. As a developer, if you want to support the largest number of different cards and vendors, you should go for EXT and ARB extensions.
You probably know that Direct3D is governed by Microsoft. OpenGL is governed by an architectural review board, which is just a group of folks from different companies who control the direction of OpenGL. Periodically the board meets and decides to release a "new version" of OpenGL. At that time certain ARB and EXT extensions are promoted to the OpenGL core. Almost all functionality added after OpenGL 1.0 started life as an extension.

Once an extension is promoted to core OpenGL, it must be supported by any implementer who claims to support that version of OpenGL. If a game says it supports OpenGL 2.1, it means it uses only what is considered the core functionality of version 2.1; in other words, it does not use extensions that were not part of the 2.1 core. It also means that as long as your driver supports OpenGL 2.1 you will be able to run the game, because a vendor who implements OpenGL 2.1 must implement all of the core functionality. They may also implement additional, non-standard extensions, but the core must be there or it cannot be called OpenGL 2.1 (or whatever version the implementation claims to support).

There is a really cool program called GLview over at www.realtech-vr.com. It will show you a list of core features by OpenGL version and tell you which ones are supported by your current driver. It will also take you to the specification for a particular extension so you can read all about it.

[Edited by - CodeMunkie on April 5, 2007 8:51:24 PM]

Quote:
You said that I should use the latest graphics drivers.
But how does new functionality coincide with old versions of OpenGL?
In the ATI SDK there is no opengl.dll or opengl32.dll. So how can new
functionality be used when I'm just running with the opengl32.dll

With NVIDIA the GL driver is called nvoglnt.dll; I assume ATI's is something like atiogl32.dll. That DLL does all the drawing, not opengl32.dll. Microsoft's opengl32.dll is an old, software-only OpenGL 1.1 implementation from many years ago.

Quote:
Original post by directNoob
OK, GLee. I found it quickly, because I knew it is
located in the OpenGL SDK at opengl.org.

Now this sounds much better: "up to 2.1".

Alex


You don't really need to download an SDK. Your compiler already comes with everything you need to use OpenGL 1.1. You just need to get glext.h to use the latest extensions, and GLee if you want to use those extensions easily!

GLEE is at: http://elf-stone.com/glee.php

Quote:
Original post by KulSeran
Any NVIDIA_ or ATI_ extension will ONLY be available on that vendor's cards, of the appropriate hardware level.


Not quite true. My ATI X1600 mobile card has support for some GL_NV extensions, and my GF7800GTX has support for a bunch of GL_ATI extensions.

Quote:
Original post by SimonForsman
Quote:
Original post by KulSeran
Any NVIDIA_ or ATI_ extension will ONLY be available on that vendor's cards, of the appropriate hardware level.


Not quite true. My ATI X1600 mobile card has support for some GL_NV extensions, and my GF7800GTX has support for a bunch of GL_ATI extensions.


QFT.

Hi, and wow.

I have problems!

I downloaded the GLee SDK and put every file where it is supposed to go.
In "include\gl" I put glee.h and glee.c, and the .lib I put in the lib dir of the Platform SDK. I'm using VS 8 / 2005...

Today I wrote some GL code with buffer objects.
The first thing that happened was this:

1>Linking...
1>CNode.obj : error LNK2001: unresolved external symbol _pglBufferData
1>CNode.obj : error LNK2001: unresolved external symbol _pglBindBuffer
1>CNode.obj : error LNK2001: unresolved external symbol _pglGenBuffers

Then I added:

#pragma comment(lib, "GLee.lib" )

Then the linker said:
1>LINK : fatal error LNK1104: cannot open file 'LIBC.lib'

On the internet I found that I can work around this problem by adding
"LIBC.lib" to the linker's ignore-libraries input field.

This worked, but what does it mean?
The vertices in the buffer objects aren't drawn!
Could this problem be related to the ignored "LIBC.lib"?

With the debugger I can say that glGenBuffers is working, because I get
successive numbers from it.

Maybe I should open a new thread for the buffer object problem?!

But another question regarding the files:
If the glext.h I include is 1.2 or 1.3 and I don't have a newer version,
how can I use the features included in later versions?
For example buffer objects, which were added in 1.5?

?
Alex

I prefer to add extensions in myself as I need them rather than use a 3rd-party lib. Doing things this way also forces you to see all the functionality an extension requires. Here is my handy macro:

#define OGLEXT(x,y) y = (x) wglGetProcAddress(#y); if(y == NULL) return -1;

Use this macro in a function like HRESULT InitOpenGL():

hr = InitOpenGL();
if(FAILED(hr))
    <freak out>

I was unable to get the .LIB from GLee to work myself.

However, you can compile it yourself - just add the GLee.c and GLee.h files to your project and call GLeeInit() after you create your rendering context, but before you do any extension stuff.

Just add the glee.c file to your project, include glee.h before gl.h and the like, and everything should work fine!

And god... directNoob... don't be so pessimistic/prejudiced! OpenGL has its advantages. For example, you can access some features like those in DirectX 10 (if your driver supports them) without a new DirectX version. Just load the extension and get the newest driver...

Share on other sites