OpenGL 4.4 spec is published


Intel will not pull functional 3.0 drivers (let's not even imagine 4.4) out of their magic hat, but Intel integrated GPUs are the main GPU in every El Cheapo computer, and in the major share of non-tablet computers too. And outside the world of Android, they're pretty much omnipresent in tablets as well.

Which will probably mean no more and no less than OpenGL simply not being supported (or supported even worse than it is now) on a considerable share of hardware. Sorry for being pessimistic, but I just can't see Intel producing a quality 4.x driver and undergoing certification any time soon. They'll just show everyone the middle finger, knowing their CPUs are sold anyway.

I would be surprised if they added this certification unless Intel has already said yes to it. What would be the point if it's still just AMD/Nvidia?

I really don't consider Intel to be that big of an issue. Their integrated graphics are in a completely different class from AMD's and Nvidia's dedicated GPUs. I mean, what's the real advantage of being able to enable the latest OpenGL 4 / DX11 level features in a game if it's going to run at 5fps?

That seems rather irrelevant in the context of conformance, where the point is that every feature should behave the same way, which is just as important if one only uses 2.0 functionality. AMD/Nvidia are already close enough, and the real advantage of the conformance tests would come when writing an application that doesn't require the beefiest hardware and being able to rely on it working the intended way on any device.


Certification is only required for GL4.4+, so all that Intel (or AMD for that matter, should they be so inclined) have to do is freeze their implementation at a pre-4.4 level and hey-presto! No need for certification and they can continue to ship driver bugs.

I wouldn't underestimate Intel, by the way. Haswell is looking pretty good, is beating comparable parts from AMD, and in a couple more generations we may well see them emerging as a third serious player in the market.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

I wouldn't underestimate Intel, by the way. Haswell is looking pretty good, is beating comparable parts from AMD, and in a couple more generations we may well see them emerging as a third serious player in the market.


Yeah, this.

Both on desktop and mobile the Intel machine has woken up and started to push serious resources into development; with AMD's CPU division sucking away the profits from graphics, if Intel can keep up the investment they could move into second place.

As for GL4.4, there isn't really a great deal to it.
From the headline features:

- Buffer Storage has mostly provoked arguments as to how useful it'll be (more so when the notes on the extension say that at least one of the bits might be ignored) - I'm pretty sure this also basically mimics D3D11's buffer controls; a minimal sketch follows this list

- Async Queries could be useful if you are doing anything that requires GPU output which would normally bounce through a CPU buffer

- Shader Variable Layout, while interesting from a 'yay!' point of view, is again basically an HLSL parity feature

- Multi-bind is a good addition but nothing earth shattering (and it's bizarre it wasn't around before... see D3D10); a quick sketch appears at the end of this post

- The 10-11-11 vertex format support is just... well, sane. Again, surprising it wasn't there before.
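
To make the buffer storage point concrete, here's roughly what it boils down to in practice: an immutable allocation that can optionally be mapped persistently. A minimal sketch, assuming a GL 4.4 context (or ARB_buffer_storage) and a loader that exposes the entry points; the buffer size and names are made up for illustration.

    /* Sketch only: persistently mapped streaming buffer via glBufferStorage.
       STREAM_BUF_SIZE and the variable names are illustrative. */
    #define STREAM_BUF_SIZE (4 * 1024 * 1024)

    GLuint stream_buf;
    void  *stream_ptr;

    void create_stream_buffer(void)
    {
        const GLbitfield flags = GL_MAP_WRITE_BIT
                               | GL_MAP_PERSISTENT_BIT  /* mapping stays valid while the GPU uses the buffer */
                               | GL_MAP_COHERENT_BIT;   /* writes become visible without explicit flushes */

        glGenBuffers(1, &stream_buf);
        glBindBuffer(GL_ARRAY_BUFFER, stream_buf);

        /* Immutable storage: size and flags are fixed for the buffer's lifetime. */
        glBufferStorage(GL_ARRAY_BUFFER, STREAM_BUF_SIZE, NULL, flags);

        /* Map once and keep the pointer; the application is still responsible
           for fencing so it never overwrites data the GPU is still reading. */
        stream_ptr = glMapBufferRange(GL_ARRAY_BUFFER, 0, STREAM_BUF_SIZE, flags);
    }

Whether drivers actually honour every flag (see the note above about bits possibly being ignored) is exactly what the arguments are about.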

There are some interesting extensions about (sparse, bindless, draw parameters, variable group size(!), indirect parameters) but the core feels like a 'tidying up missing features vs D3D11' really.

Maybe GL4.5 will bring something new to the table in the core; we'll see. With D3D basically stuck at D3D11 due to the Win8.1 requirement for D3D11.2, OGL has a third chance to try and become a viable option again.
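
Regarding the multi-bind item above, a quick sketch of what it replaces, assuming a GL 4.4 context (or ARB_multi_bind); the texture and sampler names are placeholders:

    /* Sketch only: bind four textures and samplers to units 0..3 in one call each,
       instead of four glActiveTexture + glBindTexture / glBindSampler pairs.
       Each texture is bound to its own target on the corresponding unit. */
    GLuint textures[4] = { albedo_tex, normal_tex, roughness_tex, ao_tex };
    GLuint samplers[4] = { trilinear_sampler, trilinear_sampler, trilinear_sampler, trilinear_sampler };

    glBindTextures(0, 4, textures);
    glBindSamplers(0, 4, samplers);

There are sibling calls (glBindBuffersRange, glBindImageTextures, glBindVertexBuffers and so on) doing the same thing for other bind points.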

GPU hardware hasn't supported 3-component texture formats for a long time (aside from packed formats like DXT1).


If you ask GL to give you an RGB texture, on the GPU it will allocate an RGBA texture and pretend that the alpha channel doesn't exist...
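
If you want to check this yourself, GL 4.3's internal format query can ask the driver what it would actually prefer for a given format. A small sketch, assuming ARB_internalformat_query2 support; an implementation that pads RGB out to four channels may well report an RGBA format here:

    /* Sketch only: ask the driver for its preferred internal format for GL_RGB8. */
    GLint preferred = 0;
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGB8, GL_INTERNALFORMAT_PREFERRED, 1, &preferred);
    printf("preferred internal format for GL_RGB8: 0x%04X\n", preferred);  /* needs <stdio.h> */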

Learn something new every day! Good to know this.

So, OpenGL 4.4 specs have been published, and AMD still doesn't have a working implementation of OpenGL 4.3. This is wonderful.

As far as I know, the latest driver release should fully support OpenGL 4.3.

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

So, OpenGL 4.4 specs have been published, and AMD still doesn't have a working implementation of OpenGL 4.3. This is wonderful.

As far as I know, the latest driver release should fully support OpenGL 4.3.

Yes, it fully supports OpenGL 4.3, except when it crashes due to bugs.

Yes, it fully supports OpenGL 4.3, except when it crashes due to bugs.

Don't be like that, it's getting there! :P

"I AM ZE EMPRAH OPENGL 3.3 THE CORE, I DEMAND FROM THEE ZE SHADERZ AND MATRIXEZ"

My journals: dustArtemis ECS framework and Making a Terrain Generator

Don't be like that, it's getting there! :P

"Getting there" isn't good enough. GL4.3 has been specified for the past year, AMD are a member of the standards body that specified it, they should have had a full GL4.3 driver long ago.

Direct3D has need of instancing, but we do not. We have plenty of glVertexAttrib calls.

They should put more effort into streamlining the API again; I'm not sure why we don't have a single state vector yet, nor why samplers are not solely shader-side, for example.

It would also be nice to be able to generate and submit a command buffer sequence easily. (Create it once, resubmit it as many times as you want, sorta like a copy/paste operation.)

How many ways do we need to specify a typed multi-dimensional array?

BufferData, BufferStorage, TextureImage*, TexStorage*... That's way too many...
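
As an illustration of the overlap, here are two ways of creating the same 256x256 RGBA8 texture: the older mutable path and the GL 4.2 immutable-storage path (sketch only, texture names made up):

    /* Older, mutable path: each level can be respecified later with a different size/format. */
    glBindTexture(GL_TEXTURE_2D, tex_a);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 256, 256, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    /* Newer, immutable path (GL 4.2 / ARB_texture_storage): all levels allocated up front,
       size and format fixed for the texture's lifetime. */
    glBindTexture(GL_TEXTURE_2D, tex_b);
    glTexStorage2D(GL_TEXTURE_2D, 1, GL_RGBA8, 256, 256);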

The API is at last giving us access to hardware features that have been available for years now, but it isn't making the jump I'd like to see and changing once and for all into OpenGL Lean & Mean, a promise from 2002 for OpenGL 2.0... (long before 3.0/Longs Peak).

-* So many things to do, so little time to spend. *-

What I don't get about query buffers is that for most query objects (including occlusion and timer, the most interesting ones) the spec says that at most one query can be active at a time.

In other words, you can now read many queries into a buffer object to avoid stalls and to avoid a round-trip to the CPU, but you can still only run one query at a time. Which, frankly, isn't so much different.

The only useful application is really transform feedback and/or geometry shader (where more than one query at a time can be active).
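
For reference, this is roughly what the new path looks like; a sketch assuming a GL 4.4 context (ARB_query_buffer_object), with made-up names. The single-active-query restriction applies to the glBeginQuery/glEndQuery pair, not to the readback:

    /* Sketch only: read a timer query result into a buffer object so the result
       never has to bounce through the CPU. */
    GLuint query, result_buf;

    glGenQueries(1, &query);
    glGenBuffers(1, &result_buf);
    glBindBuffer(GL_QUERY_BUFFER, result_buf);
    glBufferData(GL_QUERY_BUFFER, sizeof(GLuint64), NULL, GL_DYNAMIC_COPY);

    glBeginQuery(GL_TIME_ELAPSED, query);
    /* ... draw calls being timed; only one GL_TIME_ELAPSED query may be active here ... */
    glEndQuery(GL_TIME_ELAPSED);

    /* With a buffer bound to GL_QUERY_BUFFER, the last argument is an offset into
       that buffer rather than a client pointer, so the copy happens GPU-side. */
    glGetQueryObjectui64v(query, GL_QUERY_RESULT, (GLuint64 *)0);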

