Managing OpenGL Versions


There is a fundamental difference when going back from 3.3 to 2.1. It is a whole different way of doing the rendering, where the old way was based on immediate mode. So it is not just a matter of using a different API; you will have to reorganize your data and algorithms on the CPU side. See http://www.opengl.or...i/Legacy_OpenGL for more information.

Actually, immediate mode was more a thing of the 1.x versions. With version 2.0 shaders were introduced into core, and vertex buffer objects were already present in 1.5 if I recall correctly, so you can program in a way somewhat similar to the newer APIs if you stick to shaders and buffers only. You won't get geometry shaders, though.
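
For what it's worth, the kind of setup I mean looks roughly like this (a minimal sketch assuming a 2.1 context and a loader like GLEW is already set up; the names are just illustrative and error checking is omitted):

/* One VBO, one GLSL 1.20 program, one draw call -- no immediate mode. */
#include <GL/glew.h>

static const char *vs_src =
    "#version 120\n"
    "attribute vec2 position;\n"
    "void main() { gl_Position = vec4(position, 0.0, 1.0); }\n";

static const char *fs_src =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.0, 1.0); }\n";

static GLuint compile(GLenum type, const char *src)
{
    GLuint s = glCreateShader(type);
    glShaderSource(s, 1, &src, NULL);
    glCompileShader(s);
    return s;
}

GLuint setup_triangle(GLuint *vbo_out)
{
    static const GLfloat verts[] = { -0.5f, -0.5f,  0.5f, -0.5f,  0.0f, 0.5f };

    glGenBuffers(1, vbo_out);
    glBindBuffer(GL_ARRAY_BUFFER, *vbo_out);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    GLuint prog = glCreateProgram();
    glAttachShader(prog, compile(GL_VERTEX_SHADER, vs_src));
    glAttachShader(prog, compile(GL_FRAGMENT_SHADER, fs_src));
    glLinkProgram(prog);
    return prog;
}

void draw_triangle(GLuint prog, GLuint vbo)
{
    glUseProgram(prog);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    GLint pos = glGetAttribLocation(prog, "position");
    glEnableVertexAttribArray((GLuint)pos);
    glVertexAttribPointer((GLuint)pos, 2, GL_FLOAT, GL_FALSE, 0, (void *)0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray((GLuint)pos);
}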
Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.
The bigger problem with using OpenGL 2.x is that certain things are simply not guaranteed to work at all, and that you have no guarantees on the limits of the things that do work. Sometimes you have to support several vendor-specific extensions that do almost, but not quite, the same thing, with subtle differences.

Under OpenGL 3.x, most normal things just work. You know that you have 4 MRTs. You might have up to 16, but you know you have 4. If you don't need more than that, you never need to worry. You know that you can use 4096×4096 textures without wasting a thought. You also know that you have vertex texture fetch, and dynamic branching. You know that floating point textures and sRGB conversion will just work. There is no "if" or "when". It -- just -- works.

Under OpenGL 2.x, you have to query everything, because almost nothing is guaranteed. Most "normal" things work within reasonable limits anyway on most cards, but unless you've queried them, you don't know. Your card might support textures no larger than 256×256.
Also, you have to pay attention, because the spec was deliberately written in a way that lets vendors mislead you and market cards as something they're not. For example, there existed graphics cards that advertised support for multiple render targets, but when you queried the limit, it turned out to be exactly 1. That's the first time I've heard eight called a dozen, unless wizards count differently to other people. The same can happen to you with vertex texture fetch support (advertised, but with a maximum of 0 fetches).
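
To make that concrete, here is a rough sketch of the kind of querying you end up doing under 2.x (the exact set of limits you care about depends on your renderer; GL_MAX_DRAW_BUFFERS and GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS need a 2.0+ context):

/* Query the limits you would otherwise take for granted on GL 3.x. */
#include <stdio.h>
#include <GL/glew.h>

void print_gl2_limits(void)
{
    GLint max_tex = 0, max_mrt = 0, max_vtf = 0;

    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &max_tex);
    glGetIntegerv(GL_MAX_DRAW_BUFFERS, &max_mrt);               /* MRT count */
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &max_vtf); /* vertex texture fetch */

    printf("max texture size: %d\n", max_tex);
    printf("max draw buffers: %d\n", max_mrt);
    printf("vertex texture image units: %d\n", max_vtf); /* 0 means "supported" but useless */
}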
Alright, I managed to strip down my code to OpenGL 2.1 without any extensions, using #version 120 shaders.

I thought it would solve my problem, but think again... when running my GL viewer on VMware Ubuntu 12.10, I clear the buffer successfully but nothing else is rendered. There is a glitch somewhere, and I've been pulling my hair out for two weeks trying to find it.

Any hint would be welcome!
The black screen of death...

Are you doing glGetError?
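
Something along these lines, checked every frame or after each suspicious call (just a sketch; the macro name is my own):

/* Drain and report all pending GL errors, with the call site for context. */
#include <stdio.h>
#include <GL/glew.h>

#define CHECK_GL() check_gl_errors(__FILE__, __LINE__)

static void check_gl_errors(const char *file, int line)
{
    GLenum err;
    while ((err = glGetError()) != GL_NO_ERROR)
        fprintf(stderr, "GL error 0x%04x at %s:%d\n", err, file, line);
}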
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
Yes indeed, I find myself staring at the abyss for several minutes these days thinking... wtf... (OpenGL therapy!)

glGetError is checked every frame and never reports an error... and it's still pitch black!
I have been in that situation a couple of times; it is very frustrating. I am sorry, but the only advice I have is to reduce your application down to something minimal that works, and then add functionality back step by step.

There are a couple of global states that can make the display go black. I don't have the complete list, maybe someone else has it? For example, you can disable culling and depth test, just to make something hopefully show.
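
Something like this is what I mean, just as a guess at the relevant states, not a complete list:

#include <GL/glew.h>

/* Debug sketch: switch off everything that can silently discard fragments,
 * then see whether anything shows up at all. */
void force_permissive_state(int window_width, int window_height)
{
    glDisable(GL_CULL_FACE);
    glDisable(GL_DEPTH_TEST);
    glDisable(GL_STENCIL_TEST);
    glDisable(GL_SCISSOR_TEST);
    glDisable(GL_BLEND);
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);
    glDepthMask(GL_TRUE);
    glViewport(0, 0, window_width, window_height); /* make sure the viewport covers the window */
}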
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
"Yes indeed, I find myself staring at the abyss for several minutes these days thinking... wtf... (OpenGL therapy!) glGetError is checked every frame and never reports an error... it's pitch black!"

Also, according to another post of yours, can you check which version of GL context you are actually using?
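
For example, something like this (a minimal sketch; just print whatever strings the context actually reports):

#include <stdio.h>
#include <GL/glew.h>

/* Print what the context actually gives you, not what you asked for. */
void print_context_info(void)
{
    printf("Status: OpenGL Version: %s\n",  (const char *)glGetString(GL_VERSION));
    printf("Status: OpenGL Vendor: %s\n",   (const char *)glGetString(GL_VENDOR));
    printf("Status: OpenGL Renderer: %s\n", (const char *)glGetString(GL_RENDERER));
    printf("Status: OpenGL GLSL: %s\n",     (const char *)glGetString(GL_SHADING_LANGUAGE_VERSION));
}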
Hello Aks9,

I do my development on Windows using NVIDIA hardware, but to test on Linux I use VMware, and here is the status:

Status: OpenGL Version: 2.1
Status: OpenGL Vendor: VMware, Inc.
Status: OpenGL Renderer: Gallium 0.4 on SVGA3D; build: RELEASE;
Status: OpenGL GLSL: 1.20

...
Who knows whether VMware implemented GL as it should...
I have no experience with Linux, but try using errno to catch the last system error.
Is there a good reason for using a virtual machine instead of actually installing a Linux distro for testing? You could keep your OpenGL 4 code if you used an actual Linux installation with up-to-date drivers.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

