Managing OpenGL Versions

20 comments, last by Aks9 11 years, 4 months ago
Greetings everyone,

I'm currently working with a video card that supports GL 4.2, which is marvelous. My problem is that VMware guest OSes only support OpenGL 2.1.

Now I'm trying to find a way to downgrade my current code to be 2.1-compatible. I tried to do the version handling manually, but that would require going through every extension and looking up which version it belongs to. So I switched to GLEW, hoping it would have something like SUPPORT_2_1 and voilà, but that doesn't seem to be the case.

I would really appreciate it if anyone could share how they develop for different versions of OpenGL in the same project, if that makes any sense.

Thx
There is a fundamental difference when going back from 3.3 to 2.1. It is a whole different way of doing the rendering, where the old way was based on immediate mode. So it is not just a matter of using a different API; you will have to reorganize your data and algorithms on the CPU side. See http://www.opengl.org/wiki/Legacy_OpenGL for more information.

However, the situation is more complicated than that. I don't know about VMware's OS support, but it is not unusual for a 2.1 graphics card to actually support functionality from 3.3.

There are two ways to check for extensions in GLEW: either pass the string name of an extension to glewIsSupported(), or use the predefined variables, e.g. "if (GLEW_ARB_vertex_program) ..."
Current project: Ephenation.
Sharing OpenGL experiences: http://ephenationopengl.blogspot.com/
I pick a baseline version, code to that, and just don't support anything below. Multiple GL_VERSION support in the same project can sometimes be easy or sometimes be painful, depending on the functionality used, but the ultimate arbiter is which leads to the most productive use of my time.

Direct3D has need of instancing, but we do not; we have plenty of glVertexAttrib calls.

Treat them as different renderers, the same as if you wanted to implement Direct3D.
e.g. Separate code-paths for D3D9, GL2.x, D3D11, GL4.x...
I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

At least VirtualBox has (experimental) support for hardware 3D acceleration:
[quote]With this feature, if an application inside your virtual machine uses 3D features through the OpenGL or Direct3D 8/9 programming interfaces, instead of emulating them in software (which would be slow), VirtualBox will attempt to use your host's 3D hardware. This works for all supported host platforms (Windows, Mac, Linux, Solaris), provided that your host operating system can make use of your accelerated 3D hardware in the first place.[/quote] (source)
Thank you guys for your input,

[quote]There is a fundamental difference when going back from 3.3 to 2.1.[/quote]
Yes, of course, I'm very aware of this.

[quote]it is not unusual for a 2.1 graphics card to actually support functionality from 3.3[/quote]

I guess this is where my confusion comes from. If my understanding is correct, a GL 2.1 implementation could expose 3.3 functionality through extensions without implementing the 3.1 core?

When going through glcorearb.h here, it seems like there are extensions tied to specific versions and floating ones that don't belong to any version in particular; is that correct?

[quote]I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.[/quote]

Not usable? Can anyone confirm this? I don't mind about performance as long as we're able to tell whether it works or not.

I shall give VirtualBox a try.

[quote]I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.[/quote]
Nope, not anymore; decent hardware acceleration is available in the better virtual machines today. VMware Fusion 5 can reach around 75% of native performance with DX9. (VirtualBox, however, is still quite far behind.)
I don't suffer from insanity, I'm enjoying every minute of it.
The voices in my head may not be real, but they have some good ideas!
Just because you need to use OpenGL 2.x doesn't mean you need to use immediate-mode rendering.

Especially with extensions, you should be able to achieve very similar code, though you will perhaps need to use the gl*ARB versions of many functions instead.
http://tinyurl.com/shewonyay - Thanks so much for those who voted on my GF's Competition Cosplay Entry for Cosplayzine. She won! I owe you all beers :)

Mutiny - Open-source C++ Unity re-implementation.
Defile of Eden 2 - FreeBSD and OpenBSD binaries of our latest game.

[quote name='TheChubu' timestamp='1354196532' post='5005288']
I'm pretty sure that any VM will fall back to software rendering regardless of the OpenGL spec. So even if you port your code to 2.1, it will be unusable.

Nope, not anymore; decent hardware acceleration is available in the better virtual machines today. VMware Fusion 5 can reach around 75% of native performance with DX9. (VirtualBox, however, is still quite far behind.)
[/quote]
Huh, that's pretty nice! Last time I saw it "in action" it failed badly to render even desktop environments.


This topic is closed to new replies.
