Translate OpenGL 2.1 to 3.3


I have some (a lot of) old Win32 C code that is OpenGL 2.1 compliant. The code still works on today's computers, but it is in great need of shader support, so I need to translate it to at least OpenGL 3.3. In the interest of spending the 1 month that I have instead of the 1 year that I do not have on the conversion, I have these questions.

Is there any library that would emulate the missing functionality of OpenGL 2.1 under OpenGL 3.3?

Is there a way to automate the process of going from 2.1 to 3.3?

Is there a way to at least identify the parts that are incompatible at compile time?


Each GL_VERSION is a superset of all previous versions, so any valid GL 2.1 program is also a valid GL 3.3 program - I am assuming compatibility contexts here. What that means is that there is no translation necessary.

GL 2.1 also has full support for vertex and fragment shaders, so you don't actually need to go to a higher version in order to use shaders.
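To make that concrete, here is a minimal sketch of shader setup that already works on a 2.1 context (my illustration, not the poster's; it assumes an extension loader such as GLEW has been initialised, and error checking is trimmed for brevity):

#include <GL/glew.h>

static GLuint build_program(const char *vs_src, const char *fs_src)
{
    /* Compile the vertex and fragment shaders. */
    GLuint vs = glCreateShader(GL_VERTEX_SHADER);
    glShaderSource(vs, 1, &vs_src, NULL);
    glCompileShader(vs);

    GLuint fs = glCreateShader(GL_FRAGMENT_SHADER);
    glShaderSource(fs, 1, &fs_src, NULL);
    glCompileShader(fs);

    /* Link them into a program object. */
    GLuint prog = glCreateProgram();
    glAttachShader(prog, vs);
    glAttachShader(prog, fs);
    glLinkProgram(prog);

    /* The shader objects can be deleted once linked. */
    glDeleteShader(vs);
    glDeleteShader(fs);
    return prog;
}

In real code you would also query GL_COMPILE_STATUS and GL_LINK_STATUS and read the info logs before trusting the result.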


You could use the compatibility context, but it sounds like you want to get rid of the old way of doing things, so that's not much use.

Do you use a lot of glBegin()/glEnd()?

Do you rely on OpenGL to build your matrices (world transforms, cameras, etc.) using its functions?

Do you use the OpenGL lights?

Off the top of my head, I think those would be where the biggest work would be - the sketch after this list shows roughly what the glBegin()/glEnd() and matrix changes look like.
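For a concrete picture (my sketch, not the poster's - the vertex layout, attribute location 0, the 'program' variable, and the "u_mvp" uniform name are all assumptions), here is what a small immediate-mode draw typically turns into under 3.3 core:

/* Old immediate-mode style (GL 2.1): */
glBegin(GL_TRIANGLES);
glVertex3f( 0.0f,  1.0f, 0.0f);
glVertex3f(-1.0f, -1.0f, 0.0f);
glVertex3f( 1.0f, -1.0f, 0.0f);
glEnd();

/* Rough GL 3.3 core equivalent: upload once at setup time... */
static const GLfloat verts[] = {
     0.0f,  1.0f, 0.0f,
    -1.0f, -1.0f, 0.0f,
     1.0f, -1.0f, 0.0f,
};
GLuint vao, vbo;
glGenVertexArrays(1, &vao);
glBindVertexArray(vao);
glGenBuffers(1, &vbo);
glBindBuffer(GL_ARRAY_BUFFER, vbo);
glBufferData(GL_ARRAY_BUFFER, sizeof(verts), verts, GL_STATIC_DRAW);
glEnableVertexAttribArray(0);
glVertexAttribPointer(0, 3, GL_FLOAT, GL_FALSE, 0, (const void *)0);

/* ...then draw per frame; matrices are built on the CPU and passed
 * as uniforms instead of via glMatrixMode()/glLoadMatrixf(). */
glUseProgram(program);                               /* 'program' assumed */
GLint loc = glGetUniformLocation(program, "u_mvp");  /* name is hypothetical */
glUniformMatrix4fv(loc, 1, GL_FALSE, mvp);           /* mvp: your own float[16] */
glBindVertexArray(vao);
glDrawArrays(GL_TRIANGLES, 0, 3);

The key shift is that geometry is uploaded once and referenced per draw, and all matrix math moves into application code (your own routines or a small math library).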


Is there any library that would emulate the missing functionality of OpenGL 2.1 under OpenGL 3.3?

- There are plenty of libraries that help you deal with OpenGL.

Some of them are low-level helpers (encapsulating behaviour such as VBOs).

Some of them are complete rendering libraries.

However, transitioning to those libraries would be even harder.

Is there a way to automate the process of going from 2.1 to 3.3?

- Not that I know of.

Is there a way to at least identify the parts that are incompatible at compile time?

- Yes.

As you said you want to support shaders, so identify the rendering code: matrices, fixed-pipeline state, and draw calls (glBegin/glEnd/display lists).
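One practical way to surface those at compile time (my suggestion, under the assumption that you can swap headers per translation unit): build against a loader header generated for the 3.3 core profile only - glad is one such generator. Legacy entry points simply are not declared there, so every remaining use becomes a compiler diagnostic you can work through:

/* Sketch: compile against a core-only loader instead of the legacy
 * GL headers. Deprecated entry points are not declared, so the
 * compiler flags each remaining use as an implicit declaration /
 * undeclared identifier. */
#include <glad/glad.h>  /* generated for GL 3.3 core, no compatibility */

/* glBegin(GL_TRIANGLES);      <- would no longer compile */
/* glMatrixMode(GL_MODELVIEW); <- would no longer compile */

Whether these show up as warnings or hard errors depends on your compiler and flags, but either way you get a complete list of the incompatible call sites.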

Is there any library that would emulate the missing functionality of OpenGL 2.1 under OpenGL 3.3?

"Yes, but..." There is GLIM: http://www.artifactgames.de/FILES/START.PHP?where=GLIM.PHP

The thing is, however, that there is a good reason for the paradigm change between GL2 and GL3. GL3 somewhat (not too well, but at least somewhat) maps to how the hardware works and allows for much better CPU-GPU (and client-server) parallelism, which is one of the deciding factors in getting good utilisation and thus performance. GL2 does not map to how the hardware works in any way, and the immediate mode model -- by design -- prevents parallelism to a wide extent.

Which means nothing more and nothing less than this: either your graphics are simple enough that GL2 will "just do fine", in which case you can simply use a compatibility profile and change nothing. Throw in some shaders for eye candy if you like, no big difference. Or, you need to rethink the entire application. Using a library that implements immediate mode on top of retained mode will "work", but you throw most of the benefits overboard right away.

One of the most crucial benefits of retained mode is that you can fill a buffer with some data (vertex or texture, or whatever) and transfer ownership to the GL. The driver can then overlap the time needed to transfer the data to the graphics card with other things that are still running, and it can logically update/replace a buffer while it is really still being read from, etc. But that doesn't work too well if your thinking is still "immediate mode".

Buffers are basically vertex arrays, which you already know from GL2 -- with the seemingly insignificant but very important difference that you don't own the buffer's contents. It takes a moment to get used to the idea, but once you grok it, it becomes obvious.
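To make the ownership-transfer point concrete, here is the common "orphaning" idiom (my illustration, not from the post; the function name and parameters are hypothetical). Re-specifying the buffer's storage with a NULL pointer lets the driver hand you fresh memory while queued draws keep reading the old allocation:

/* Sketch: update a dynamic vertex buffer without stalling the GPU. */
void update_vertices(GLuint vbo, const void *new_vertices, GLsizeiptr size)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glBufferData(GL_ARRAY_BUFFER, size, NULL, GL_STREAM_DRAW); /* orphan old storage */
    glBufferSubData(GL_ARRAY_BUFFER, 0, size, new_vertices);   /* fill the new storage */
}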

Buffers are basically vertex arrays, which you already know from GL2

A correction to this is that buffers are actually available in GL 1.5; it's quite incorrect to think of them as a new, bleeding-edge, fancy, or whatever feature, because they're not.


Buffers are basically vertex arrays, which you already know from GL2

A correction to this is that buffers are actually available in GL 1.5; it's quite incorrect to think of them as a new, bleeding-edge, fancy, or whatever feature, because they're not.
You are of course right (at least for vertex data). But that's not quite what I wanted to imply.

To explain: objecting that "buffers were already present in GL 1.5" is a bit like saying "shaders were part of GL 2.0 already". They certainly were, but like buffers they were merely a very limited fancy addition, or gimmick, not the one exclusive, main paradigm. Everybody using GL 2.0 knew Begin/End inside out, and everybody knew vertex arrays. Most people had probably heard of buffers, and of that thing called a fragment shader. Some may even have used them. But it was not "the" mainstream paradigm, and each of these gimmicks was limited to very specific special cases.

In GL3/4, there simply is nothing else. It is the only paradigm (except in compatibility mode, which is kind of "cheating").

Everything is about "buffers and shaders", and about having client and server run as asynchronously as possible (plus, more features, bigger textures, bigger viewports, more attachments, generally bigger limits, fences and queries, instancing, indirect calls, etc etc...).
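As a small illustration of the fences mentioned above (my example, not the poster's; requires GL 3.2+ / ARB_sync): a sync object lets the client check whether the server has caught up without stalling on glFinish().

/* Sketch: insert a fence after submitting work, then poll it later. */
GLsync fence = glFenceSync(GL_SYNC_GPU_COMMANDS_COMPLETE, 0);
/* ... do other CPU work while the GPU churns through the queue ... */
GLenum status = glClientWaitSync(fence, GL_SYNC_FLUSH_COMMANDS_BIT, 0);
if (status == GL_ALREADY_SIGNALED || status == GL_CONDITION_SATISFIED) {
    /* GPU has finished; safe to reuse or read back the resources. */
}
glDeleteSync(fence);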

