
Managing OpenGL Versions



#1 Neosettler   Members   -  Reputation: 150


Posted 27 November 2012 - 11:23 AM

Greetings everyone,

I'm currently working with a video card that supports GL 4.2, which is marvelous. My problem is that VMware guest OSes only support OpenGL 2.1.

Now I'm trying to find a way to downgrade my current code to be 2.1 compatible. I tried to do the version handling manually, but that would require going through every extension and checking which version it belongs to. So I switched to GLEW, hoping it would have something like SUPPORT_2_1 and voila, but that doesn't seem to be the case.

I would really appreciate it if anyone could share how they develop for different versions of OpenGL in the same project, if that makes any sense.

Thx


#2 larspensjo   Members   -  Reputation: 1526


Posted 28 November 2012 - 01:42 PM

There is a fundamental difference when going back from 3.3 to 2.1. It is a whole different way of doing the rendering, since the old way was based on immediate mode. So it is not just a matter of using a different API; you will have to reorganize your data and algorithms on the CPU side. See http://www.opengl.org/wiki/Legacy_OpenGL for more information.

However, the situation is more complicated than that. I don't know about the VMware OS, but it is not unusual for a 2.1 graphics card to actually support functionality from 3.3.

There are two ways to check for extensions in GLEW: either use the string name of an extension or version with glewIsSupported(), or use the predefined variables, e.g. "if (GLEW_ARB_vertex_program) ..."
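A minimal sketch of both approaches, assuming a context has been created and glewInit() has already succeeded, might look like this:

```cpp
#include <GL/glew.h>
#include <cstdio>

// Report what the driver offers, using GLEW's two query styles.
void ReportCapabilities()
{
    // Predefined boolean variables: one per version and per extension.
    if (GLEW_VERSION_2_1)
        std::printf("OpenGL 2.1 is available\n");
    if (GLEW_ARB_vertex_program)
        std::printf("GL_ARB_vertex_program is available\n");

    // String query: a space-separated list of versions and extensions.
    if (glewIsSupported("GL_VERSION_2_1 GL_ARB_framebuffer_object"))
        std::printf("2.1 plus ARB framebuffer objects are available\n");
}
```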
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#3 mhagain   Crossbones+   -  Reputation: 7462


Posted 28 November 2012 - 07:04 PM

I pick a baseline version, code to that, and just don't support anything below it. Supporting multiple GL_VERSIONs in the same project can be easy or painful, depending on the functionality used, but the ultimate arbiter is whichever approach leads to the most productive use of my time.

It appears that the gentleman thought C++ was extremely difficult and he was overjoyed that the machine was absorbing it; he understood that good C++ is difficult but the best C++ is well-nigh unintelligible.


#4 Hodgman   Moderators   -  Reputation: 27883


Posted 28 November 2012 - 07:56 PM

Treat them as different renderers, the same as if you wanted to implement Direct3D.
e.g. Separate code-paths for D3D9, GL2.x, D3D11, GL4.x...
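For example, something along these lines (the interface and class names here are purely illustrative):

```cpp
#include <cstdio>

// A hypothetical renderer abstraction: one backend per API/version code-path.
struct IRenderer
{
    virtual ~IRenderer() {}
    virtual void DrawFrame() = 0;
};

// GL 2.x path: GLSL 1.20, gl*ARB entry points where needed, no VAOs.
struct GL2Renderer : IRenderer
{
    void DrawFrame() override { std::printf("drawing via the GL 2.x path\n"); }
};

// GL 4.x path: core profile, VAOs, UBOs, newer GLSL.
struct GL4Renderer : IRenderer
{
    void DrawFrame() override { std::printf("drawing via the GL 4.x path\n"); }
};

// Pick the backend once, after the context has been created and queried.
IRenderer* CreateRenderer(int contextMajorVersion)
{
    if (contextMajorVersion >= 3)
        return new GL4Renderer();
    return new GL2Renderer();
}
```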

#5 TheChubu   Crossbones+   -  Reputation: 3755


Posted 29 November 2012 - 07:42 AM

I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#6 BitMaster   Crossbones+   -  Reputation: 3667


Posted 29 November 2012 - 08:20 AM

At least VirtualBox has (experimental) support for hardware 3D acceleration:

With this feature, if an application inside your virtual machine uses 3D features through the OpenGL or Direct3D 8/9 programming interfaces, instead of emulating them in software (which would be slow), VirtualBox will attempt to use your host's 3D hardware. This works for all supported host platforms (Windows, Mac, Linux, Solaris), provided that your host operating system can make use of your accelerated 3D hardware in the first place.

(source)

#7 Neosettler   Members   -  Reputation: 150


Posted 29 November 2012 - 09:04 AM

Thank you guys for your inputs,

There is a fundamental difference when going back from 3.3 to 2.1.

Yes, of course, I'm very aware of this.

it is not unusual for a 2.1 graphics card to actually support functionality from 3.3


I guess this is where my confusion comes from. If my understanding is correct, a GL 2.1 context could use 3.3 extensions without implementing the 3.1 core?

When going through glcorearb.h here, it seems like there are version-specific blocks and floating extensions that don't belong to any version in particular; is that correct?

I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.


Not usable? Can anyone confirm this? I don't mind about performance, as long as we can tell whether it works or not.

I shall give VirtualBox a try.

#8 SimonForsman   Crossbones+   -  Reputation: 5805


Posted 29 November 2012 - 09:17 AM

I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.


Nope, not anymore; decent hardware acceleration is available in the better virtual machines today. VMware Fusion 5 can reach around 75% of native performance with DX9. (VirtualBox, however, is still quite far behind.)
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!

#9 Karsten_   Members   -  Reputation: 1500


Posted 29 November 2012 - 11:57 AM

Just because you need to use OpenGL 2.x doesn't mean you need to use immediate mode rendering.

Especially with extensions, you should be able to write very similar code, but you will perhaps need to use the gl*ARB versions of many functions instead.
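A minimal sketch, assuming GLEW (or an equivalent loader) exposes the GL_ARB_vertex_buffer_object entry points:

```cpp
#include <GL/glew.h>
#include <cstddef>

// Upload vertex data through the ARB-suffixed buffer functions, falling back
// to the core names (available since GL 1.5) when the extension is absent.
GLuint UploadVertices(const float* data, std::size_t bytes)
{
    GLuint vbo = 0;
    if (GLEW_ARB_vertex_buffer_object)
    {
        glGenBuffersARB(1, &vbo);
        glBindBufferARB(GL_ARRAY_BUFFER_ARB, vbo);
        glBufferDataARB(GL_ARRAY_BUFFER_ARB, bytes, data, GL_STATIC_DRAW_ARB);
    }
    else
    {
        glGenBuffers(1, &vbo);
        glBindBuffer(GL_ARRAY_BUFFER, vbo);
        glBufferData(GL_ARRAY_BUFFER, bytes, data, GL_STATIC_DRAW);
    }
    return vbo;
}
```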

Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.


#10 TheChubu   Crossbones+   -  Reputation: 3755


Posted 29 November 2012 - 06:36 PM


I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.


Nope, not anymore; decent hardware acceleration is available in the better virtual machines today. VMware Fusion 5 can reach around 75% of native performance with DX9. (VirtualBox, however, is still quite far behind.)

Huh, that's pretty nice! Last time I saw it in action, it failed badly at rendering even desktop environments.

Edited by TheChubu, 29 November 2012 - 06:37 PM.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator


#11 Sik_the_hedgehog   Crossbones+   -  Reputation: 1494


Posted 30 November 2012 - 02:15 AM

There is a fundamental difference when going back from 3.3 to 2.1. It is a whole different way of doing the rendering, since the old way was based on immediate mode. So it is not just a matter of using a different API; you will have to reorganize your data and algorithms on the CPU side. See http://www.opengl.org/wiki/Legacy_OpenGL for more information.

Actually, immediate mode was more of a 1.x-era thing. With version 2.0, shaders were introduced into core, and vertex buffer objects were already present in 1.5 if I recall correctly, so you can program in a somewhat similar way to the newer APIs if you stick to shaders and buffers only. You won't get geometry shaders, though.
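A minimal sketch of that style under a 2.1 context (error checking omitted) could look like this:

```cpp
#include <GL/glew.h>

// GLSL 1.20: attribute/varying instead of in/out, no layout qualifiers.
static const char* kVertexSrc =
    "#version 120\n"
    "attribute vec3 position;\n"
    "void main() { gl_Position = vec4(position, 1.0); }\n";

static const char* kFragmentSrc =
    "#version 120\n"
    "void main() { gl_FragColor = vec4(1.0, 0.5, 0.2, 1.0); }\n";

static GLuint Compile(GLenum type, const char* src)
{
    GLuint shader = glCreateShader(type);
    glShaderSource(shader, 1, &src, 0);
    glCompileShader(shader);
    return shader;
}

// Draw one triangle using only shaders and a VBO: no immediate mode,
// and no VAO, since VAOs do not exist in a plain 2.1 context.
void DrawTriangle()
{
    static const float verts[] = { -0.5f, -0.5f, 0.0f,
                                    0.5f, -0.5f, 0.0f,
                                    0.0f,  0.5f, 0.0f };

    GLuint program = glCreateProgram();
    glAttachShader(program, Compile(GL_VERTEX_SHADER, kVertexSrc));
    glAttachShader(program, Compile(GL_FRAGMENT_SHADER, kFragmentSrc));
    glLinkProgram(program);

    GLuint vbo = 0;
    glGenBuffers(1, &vbo);
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);

    GLint position = glGetAttribLocation(program, "position");
    glUseProgram(program);
    glEnableVertexAttribArray(position);
    glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, 0, 0);
    glDrawArrays(GL_TRIANGLES, 0, 3);
    glDisableVertexAttribArray(position);
}
```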
Don't pay much attention to "the hedgehog" in my nick, it's just because "Sik" was already taken =/ By the way, Sik is pronounced like seek, not like sick.

#12 samoth   Crossbones+   -  Reputation: 4523


Posted 30 November 2012 - 03:34 AM

The bigger problem with using OpenGL 2.x is that certain things just won't work for sure, and that you don't have any guarantees on the limits of the things that do work. Sometimes you have to support different vendor-specific extensions that do almost, but not quite exactly, the same thing, with subtle differences.

Under OpenGL 3.x, most normal things just work. You know that you have 4 MRTs. You might have up to 16, but you know you have 4. If you don't need more than that, you never need to worry. You know that you can use 4096×4096 textures without wasting a thought. You also know that you have vertex texture fetch and dynamic branching. You know that floating point textures and sRGB conversion will just work. There is no "if" or "when". It -- just -- works.

Under OpenGL 2.x, you have to query everything, because almost nothing is guaranteed. Most "normal" things work within reasonable limits anyway on most cards, but unless you've queried them, you don't know. Your card might as well support textures no larger than 256×256.
Also, you have to pay attention, because the spec was deliberately written in a deceptive way that allows vendors to cheat on you, marketing cards as something they're not. For example, there existed graphics cards that supported multiple render targets, but when you queried the limit, it turned out to be at most 1. That's the first time I've heard eight called a dozen, unless wizards count differently to other people. The same can happen to you with vertex texture fetch support (with at most 0 fetches).
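A minimal sketch of those queries (using the GLEW header so the GL 2.0 tokens are declared):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Under 2.x almost nothing is guaranteed, so ask the driver for the limits
// you actually depend on before relying on them.
void PrintGL2Limits()
{
    GLint maxTextureSize = 0, maxDrawBuffers = 0, maxVertexTextureUnits = 0;
    glGetIntegerv(GL_MAX_TEXTURE_SIZE, &maxTextureSize);
    glGetIntegerv(GL_MAX_DRAW_BUFFERS, &maxDrawBuffers);                       // MRT count
    glGetIntegerv(GL_MAX_VERTEX_TEXTURE_IMAGE_UNITS, &maxVertexTextureUnits);  // 0 means no vertex texture fetch

    std::printf("max texture size:           %d\n", maxTextureSize);
    std::printf("max draw buffers (MRTs):    %d\n", maxDrawBuffers);
    std::printf("vertex texture image units: %d\n", maxVertexTextureUnits);
}
```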

#13 Neosettler   Members   -  Reputation: 150


Posted 30 November 2012 - 12:48 PM

Alright, I managed to strip down my code to OpenGL 2.1 without any extensions, using #version 120 shaders.

I thought that would solve my problem, but think again... When running my GL viewer on Ubuntu 12.10 under VMware, I do clear the buffer successfully, but nothing else renders. There is a glitch somewhere and I've been pulling my hair out for two weeks trying to find it.

Any hint would be welcome!

#14 larspensjo   Members   -  Reputation: 1526


Posted 30 November 2012 - 01:10 PM

The black screen of death...

Are you doing glGetError?
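Something along these lines catches errors close to where they happen (the macro name is just an example):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Drain every pending GL error and report where the check was made.
inline void CheckGLErrors(const char* file, int line)
{
    for (GLenum err = glGetError(); err != GL_NO_ERROR; err = glGetError())
        std::fprintf(stderr, "GL error 0x%04X at %s:%d\n", err, file, line);
}

#define GL_CHECK() CheckGLErrors(__FILE__, __LINE__)

// Usage: sprinkle GL_CHECK() after suspect calls, e.g.
//   glDrawArrays(GL_TRIANGLES, 0, 3);
//   GL_CHECK();
```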
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#15 Neosettler   Members   -  Reputation: 150


Posted 30 November 2012 - 01:25 PM

Yes indeed, I find myself staring into the abyss for several minutes these days thinking... wtf... (OpenGL therapy!)

glGetError runs constantly, every frame, without reporting errors... it's pitch black!

Edited by Neosettler, 30 November 2012 - 01:25 PM.


#16 larspensjo   Members   -  Reputation: 1526


Posted 30 November 2012 - 05:04 PM

I have been in that situation a couple of times; it is very frustrating. I am sorry, but the only advice I have is to reduce your application down to something minimal that works, and then add functionality back step by step.

There are a couple of global states that can make the display go black. I don't have the complete list, maybe someone else has it? For example, you can disable culling and the depth test, just to hopefully make something show.
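A rough sketch of the usual suspects (not an exhaustive list, just the states that most often hide everything):

```cpp
#include <GL/glew.h>

// Neutralize common global states before drawing, just to get anything visible.
void DisableCommonCulprits()
{
    glDisable(GL_CULL_FACE);     // wrong winding order can cull every triangle
    glDisable(GL_DEPTH_TEST);    // a bad depth range or clear value rejects all fragments
    glDisable(GL_SCISSOR_TEST);  // a stale scissor rectangle clips the whole frame
    glDisable(GL_BLEND);         // fully transparent output looks like nothing was drawn
    glColorMask(GL_TRUE, GL_TRUE, GL_TRUE, GL_TRUE);  // make sure color writes are enabled
}
```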
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/

#17 Aks9   Members   -  Reputation: 772


Posted 30 November 2012 - 06:35 PM

Yes indeed, I find myself staring into the abyss for several minutes these days thinking... wtf... (OpenGL therapy!) glGetError runs constantly, every frame, without reporting errors... it's pitch black!

Following up on another post of yours, can you check the version of the GL context you are actually getting?
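A minimal way to check that from inside the application (the version strings are safe to query on any GL version):

```cpp
#include <GL/glew.h>
#include <cstdio>

// Print what the driver actually handed us for this context.
void PrintContextInfo()
{
    std::printf("GL version:   %s\n", (const char*)glGetString(GL_VERSION));
    std::printf("GL vendor:    %s\n", (const char*)glGetString(GL_VENDOR));
    std::printf("GL renderer:  %s\n", (const char*)glGetString(GL_RENDERER));
    std::printf("GLSL version: %s\n", (const char*)glGetString(GL_SHADING_LANGUAGE_VERSION));
}
```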

#18 Neosettler   Members   -  Reputation: 150


Posted 01 December 2012 - 03:14 PM

Hello Aks9,

I do my development on Windows using NVIDIA hardware, but to test on Linux I use VMware, and here is the status:

Status: OpenGL Version: 2.1
Status: OpenGL Vendor: VMware, Inc.
Status: OpenGL Renderer: Gallium 0.4 on SVGA3D; build: RELEASE;
Status: OpenGL GLSL: 1.20

...

#19 Aks9   Members   -  Reputation: 772


Posted 03 December 2012 - 07:29 AM

Who knows whether VMware implemented GL as it should...
I have no experience with Linux, but try using errno to catch the last system error.

#20 TheChubu   Crossbones+   -  Reputation: 3755


Posted 03 December 2012 - 09:35 AM

Is there a good reason for using a virtual machine besides not actually having to install a Linux distro for testing? You could keep your OpenGL 4 code if you used an actual Linux installation with up-to-date drivers.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

 

My journals: dustArtemis ECS framework and Making a Terrain Generator




