OpenGL 5 - Release?

The reason for a new API (a ground-up, ditch-all-the-shite version) can be summed up quite simply:

Currently OpenGL, OpenGL|ES and D3D11 are the only 3 major APIs in the wild which either don't support 'going wide' on their command buffer building, or see no speed-up from doing so. (That's 3 of the 9 major APIs, it should be added.)

Next year OpenGL and OpenGL|ES will be the only APIs not to support this.

CPU archs are wide.
Graphics setup is naturally wide.

So, even from a pure ease-of-writing and compatibility mindset, OpenGL will require a bunch of hoop-jumping just to use sanely; maintaining that going forward is not helpful.

^What Phantom said -- one of the biggest features is basically that we're finally getting threading support in D3D12/GLNext/Mantle.
D3D11 almost has great multi-core support, but the internal architecture hobbles the performance gains.
GL has always had multi-threading support, but it has never actually been possible to use it to reduce CPU-side overhead by scaling over multiple cores...
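
To make the 'going wide' point concrete, here's roughly what D3D11's version of it looks like: each worker thread records into its own deferred context, then the main thread submits the finished command lists. This is only an illustrative sketch (names made up, error handling omitted), not code from any real engine:

#include <d3d11.h>
#include <thread>
#include <vector>

// One worker: record a chunk of the frame into a deferred context, then bake
// it into a command list for the main thread to submit later.
void RecordChunk(ID3D11Device* device, ID3D11CommandList** outList)
{
    ID3D11DeviceContext* deferred = nullptr;
    device->CreateDeferredContext(0, &deferred);

    // ... IASetVertexBuffers / VSSetShader / DrawIndexed etc. go here ...

    deferred->FinishCommandList(FALSE, outList);
    deferred->Release();
}

void SubmitFrame(ID3D11Device* device, ID3D11DeviceContext* immediate, int workerCount)
{
    std::vector<ID3D11CommandList*> lists(workerCount, nullptr);
    std::vector<std::thread> workers;

    for (int i = 0; i < workerCount; ++i)
        workers.emplace_back(RecordChunk, device, &lists[i]);
    for (auto& w : workers)
        w.join();

    // Submission is still single-threaded on the immediate context, and the
    // driver re-does a lot of the work here -- which is why D3D11 sees so
    // little real benefit from all of the threading above.
    for (ID3D11CommandList* list : lists)
    {
        if (list) { immediate->ExecuteCommandList(list, FALSE); list->Release(); }
    }
}

The whole point of D3D12/GLNext/Mantle is that the expensive work actually happens on the recording threads, so a structure like the one above finally pays off.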

I wonder: if they're actually making a new API, and it's good and all... how much of an impact would it actually have?

It would make a huge difference. Look at the current public releases of OpenGL vs DirectX. DirectX is a nicely-written API compared to OpenGL in its current state, with all of its deprecated features and function parameters that have been re-purposed across versions. Despite DirectX's pleasant interface to work with, it's Windows-only and runs a little slower than OpenGL on Windows.

From the data I've seen, this isn't at all true.

The oft-cited (but not intended as a benchmark) Valve L4D2 comparison was with a D3D9 renderer vs GL... and D3D9 was renowned for having huge per-draw-call overheads.
To illustrate why the L4D2 data is not a good benchmark to look at, the difference between their two datapoints is a mere 0.4ms of CPU time.
The story wasn't "We rewrote our entire renderer and did a huge amount of optimization work, and saved 0.4ms in the process", and even if it was, such a tiny optimization would make it a non-story: 0.4ms is about 2.4% of a 16.7ms frame at 60Hz.
It's very frustrating when people like this take a stupidly small number of data-points that aren't actually from a benchmark, and then write whole articles about them as if it was actual data (btw, their entire "Why do we still use Direct3D?" section is just plain wrong)

Most of the time, ranked by CPU-side performance you have D3D11 > D3D9 > OpenGL.
But in other tests, sometimes you'll get the opposite ordering!
In some tests, you'll get oddities such as: OpenGL > D3D11 > D3D9 on AMD hardware, but D3D9 > D3D11 > OpenGL on nVidia hardware!
In any graphics API benchmark, you're actually benchmarking a specific driver, so you better say which one!

Interestingly, nVidia are the industry leader when it comes to OpenGL gaming - they run the most tests on their drivers, they support the most features, the most backwards compatibility, etc... but the price they pay for correctness is that their implementation is also the slowest. In some tests it's under 50% of the speed of their D3D9 implementation (but in other tests it's 10% faster than D3D9)...


As Vincent said, so many API features don't map to the hardware, so using these old APIs (especially OpenGL's deprecated features) incurs large and unpredictable CPU-side overheads.

Then there's the GPU-side overheads. OpenGL can sting here: if a feature isn't supported in hardware, the spec demands that the driver fall back to software emulation! Sometimes you'll use an instruction in a shader which is fine on 99% of your users' GPUs, but 1% of your users drop to 1FPS whenever that shader is used. The worst part is there's no way to know. There's no way to say "actually I'd rather get an error code returned when binding/creating this shader, rather than use a software rasterizer...".
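
The closest thing to a workaround is GL's debug output: some drivers will at least emit a performance-type message when a shader falls off the fast path, though nothing in the spec requires them to. A minimal sketch, assuming a GL 4.3 / KHR_debug-capable context and a loader like GLEW or GLAD already set up (whether your driver actually reports software fallbacks this way is entirely up to the driver):

#include <cstdio>
// Assumes the GL headers/loader (GLEW, GLAD, ...) are already included.

static void APIENTRY OnGLDebugMessage(GLenum source, GLenum type, GLuint id,
                                      GLenum severity, GLsizei length,
                                      const GLchar* message, const void* user)
{
    // Performance-type messages are where some drivers admit to things like
    // software fallbacks; the spec doesn't require them to report anything.
    if (type == GL_DEBUG_TYPE_PERFORMANCE)
        std::fprintf(stderr, "GL perf warning [%u]: %s\n", id, message);
}

void EnableGLDebugOutput()
{
    glEnable(GL_DEBUG_OUTPUT);
    glEnable(GL_DEBUG_OUTPUT_SYNCHRONOUS);   // deliver on the offending call's thread
    glDebugMessageCallback(OnGLDebugMessage, nullptr);
}

It only helps at development time, of course - you still can't refuse the software path at runtime and get an error instead.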

D3D doesn't suffer from that problem (and generally has faster shader loading, and better shader code-generation across the board) but there's still plenty of GPU-side overheads to worry about, which D3D12/GLNext/Mantle are fixing.

There's a good reason that you can play GTA5 on your PS3's ~"GeForce 7900GT", but it won't even boot up on your PC's GeForce 7900GT.
Console APIs have always had extremely low CPU-side overheads, plus no automatic helper functionality (= no automagic/unexpected GPU-side overheads).
This new breed of PC graphics API is finally bringing (most of) that efficiency back to the PC.

Would developers move from D3D to OpenGL? Like, would someone make OpenGL their only API for targeting OSX, Linux and Windows?

Valve is the obvious answer to these questions. Due to OpenGL's wider platform support and more efficient hardware implementations, Valve ported their Source Engine over to OpenGL, and then updated all of their first-party titles going all the way back to Half-Life 2. They're a huge supporter of Linux development (SteamOS is Linux, lol), and encourage PC developers to follow their lead in using OpenGL.

Valve switched to GL so they could push the SteamOS, and are encouraging other devs to use GL so they'll work on SteamOS.

For most big devs, D3D9 gets you WinXP+ and Xb360, D3D11 gets you WinVista+ and XbOne, GCM gets you PS3, GNM gets you PS4.
GL gets you WinXP+, Linux and Mac... but if you've already done D3D for the Xbox, you may as well use it on Windows as well, which unfortunately leaves GL as the other-PC-OS's-port API.

Hopefully GLNext actually succeeds in overtaking D3D12 in design/tools/stability/performance/support, so that GL is actually popular amongst bigger devs again.

Valve switched to GL so they could push the SteamOS, and are encouraging other devs to use GL so they'll work on SteamOS.

...and people shouldn't fall into the trap of thinking that Valve have suddenly become all cuddly and Linux-friendly too. Is it a coincidence that Valve started making these moves as soon as Microsoft launched a potential competitor to Steam?

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.


From the data I've seen, this isn't at all true.


Well, this is embarrassing. I thought OpenGL was usually just faster.


As Vincent said, so many API features don't map to the hardware, so using these old APIs (especially OpenGL's deprecated features) incurs large and unpredictable CPU-side overheads.

I remember when I was moving over from OpenGL ES 1.1 to ES 2.0 when the 3GS came out. I started learning the features, and had a lot of "ah-hah" moments. Not only did shaders allow for some pretty interesting effects, they also let us do things with fewer state changes, since GL_TEXTURE_2D no longer needed to be enabled.
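
For anyone who never touched ES 1.1, the difference looks roughly like this (a from-memory sketch; the texture, program and uniform names are just illustrative):

// OpenGL ES 1.1 (fixed function, <GLES/gl.h>): texturing is global state you
// have to switch on and off around draws.
void DrawMeshES11(GLuint diffuseTex, GLsizei vertexCount)
{
    glEnable(GL_TEXTURE_2D);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
    glDisable(GL_TEXTURE_2D);
}

// OpenGL ES 2.0 (shaders, <GLES2/gl2.h>): whether a texture gets sampled is a
// property of the fragment shader, so there's nothing to glEnable -- just bind
// the texture to a unit, point the sampler uniform at it, and draw.
void DrawMeshES20(GLuint program, GLuint diffuseTex, GLsizei vertexCount)
{
    glUseProgram(program);
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, diffuseTex);
    glUniform1i(glGetUniformLocation(program, "u_diffuse"), 0);
    glDrawArrays(GL_TRIANGLES, 0, vertexCount);
}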

I'm not sure if Mantle will ever take off as a widely-adopted API, but I think it kicked off the next era of graphics APIs that work in sync with how GPUs actually work nowadays.


Going to that kind of programming model from old-school OpenGL must be quite a shock, but in D3D-land crap like glEnable(GL_TEXTURE_*) never existed to begin with.


Going from OpenGL ES 1.1 to 2.0 wasn't too much of a shock. OpenGL ES 2.0 may have taken away many "luxuries", but I consider it a clean-up. I had a decent amount of 3D math under my belt, and was already using my own matrix implementation instead of pushing and popping matrices on the OpenGL stack in ES 1.1. It also wasn't much of a shock since I'd gotten into XNA in high school and learned shaders from there, having started attempting to learn them a few years earlier.
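
Rolling your own stack for ES 2.0 really is only a few lines; something like this sketch, assuming GLM for the matrix type (the struct and the "u_modelView" uniform name are made up for illustration):

#include <vector>
#include <GLES2/gl2.h>                  // or your desktop GL loader of choice
#include <glm/glm.hpp>
#include <glm/gtc/type_ptr.hpp>         // glm::value_ptr

// A minimal replacement for glPushMatrix/glPopMatrix: keep the stack yourself
// and hand the top matrix to the shader as an ordinary uniform.
struct MatrixStack
{
    std::vector<glm::mat4> stack { glm::mat4(1.0f) };

    void push()                   { stack.push_back(stack.back()); }
    void pop()                    { stack.pop_back(); }
    void mult(const glm::mat4& m) { stack.back() *= m; }
    const glm::mat4& top() const  { return stack.back(); }
};

void UploadModelView(GLuint program, const MatrixStack& mv)
{
    GLint loc = glGetUniformLocation(program, "u_modelView");
    glUniformMatrix4fv(loc, 1, GL_FALSE, glm::value_ptr(mv.top()));
}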

Also, in DirectX 9 there were methods like IDirect3DDevice9::BeginScene() and IDirect3DDevice9::EndScene().


Yeah, legacy from the old days when you also had to use DirectDraw, and D3D really did look like an API designed by a troop of monkeys on LSD.
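
For anyone who never saw it, the frame loop those calls lived in looked roughly like this (a from-memory sketch; device creation and error handling are left out):

#include <d3d9.h>

// 'device' is an already-created IDirect3DDevice9*. The Begin/EndScene bracket
// was mandatory around draw calls, a leftover from the DirectDraw era.
void RenderFrame(IDirect3DDevice9* device)
{
    device->Clear(0, nullptr, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER,
                  D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);

    if (SUCCEEDED(device->BeginScene()))
    {
        // ... SetStreamSource / SetTexture / DrawIndexedPrimitive calls here ...
        device->EndScene();
    }

    device->Present(nullptr, nullptr, nullptr, nullptr);
}

By D3D10/11 the Begin/EndScene pair was gone entirely.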

