Clear up the Vista / OpenGL Aero UI?

Started by
13 comments, last by gold 17 years, 8 months ago
So I see MS is allowing ATI/Nvidia to make their own ICDs for OpenGL on Vista, correct? Will this allow me to code Win32/OpenGL and use the Aero UI with full acceleration in windowed mode, not just fullscreen? I think so, but I just wanted to make sure so I don't spread any rumors if someone asks me. :) Thanks. Also, on a side note about GL3: how much is going to change? I mean, if you know GL fairly well now (GLSL, FBOs, VBOs, etc.), how much recoding is one going to have to do for GL3 vs. GL 2.0? And how is this object model going to work?
OpenGL on Vista will work identically to how OpenGL works on XP now, except that the fallback is now a reasonably fast (compared to before) OpenGL-on-D3D layer implementing OpenGL 1.4. (There might still be a couple of minor catches with the compositor; I don't remember. Those may be resolved by RTM.)

As for OGL 3.0, I don't think that many of the pre-proposals are actually public at the moment. But to make a long story short, it basically sounds like they're transforming OGL to largely mimic D3D in structure, which is a step that has been desperately needed for years now.
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Vista only uses an OpenGL-to-D3D wrapper if no ICD is installed. If an XP ICD is installed, the 3D features of the desktop are deactivated when an OpenGL app starts. If a Vista ICD is installed (and I'm sure nVidia and ATI will have one ready when Vista ships), everything will work as expected (and without a wrapper to D3D).
Quote:Original post by Promit
But to make a long story short, it basically sounds like they're transforming OGL to largely mimic D3D in structure, which is a step that has been desperately needed for years now.

You mean after Microsoft transformed D3D to largely mimic OpenGL structures a couple of years back? Let's be honest for a moment, Microsoft employee or not: there is almost no structural difference between D3D and OGL, except for the semantic model. And this one will not change. On modern hardware, they're both nothing more than more or less empty frameworks for the real workhorses: the shaders.

There is not much known about GL3, except rumours and working group discussions. Khronos will surely bring OpenGL and OpenGL ES more in line with each other. Rumours say that GL3 might drop native support for legacy features such as display lists, feedback mode, and immediate mode. These will still be available, but layered on top of the GL3 core (like a utility library).
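To illustrate the "layered on top of the core" idea, here is a minimal sketch of how a legacy immediate-mode interface could be emulated as a utility layer over a leaner core API. All class and function names here are invented for illustration; they are not real OpenGL entry points, and the real GL3 design was only rumoured at this point.

```python
class CoreRenderer:
    """Stand-in for a hypothetical GL3-style core: draws only from vertex arrays."""
    def __init__(self):
        self.draw_calls = []

    def draw_array(self, vertices):
        self.draw_calls.append(list(vertices))


class ImmediateModeLayer:
    """Utility layer emulating a glBegin/glVertex/glEnd style on top of the core."""
    def __init__(self, core):
        self.core = core
        self.pending = None

    def begin(self):
        self.pending = []

    def vertex(self, x, y, z):
        self.pending.append((x, y, z))

    def end(self):
        # The accumulated legacy calls collapse into a single core draw.
        self.core.draw_array(self.pending)
        self.pending = None


core = CoreRenderer()
im = ImmediateModeLayer(core)
im.begin()
im.vertex(0.0, 0.0, 0.0)
im.vertex(1.0, 0.0, 0.0)
im.vertex(0.0, 1.0, 0.0)
im.end()
print(len(core.draw_calls))  # prints 1: one batched core draw
```

The point is that applications using the legacy style keep working, while the core itself stays small.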

All in all, converting to GL3 will be straightforward if you know GL2 (or any 3D API for that matter). For more info, take a look at the Siggraph 2006 tech notes from Khronos (here).

Quote:
If a Vista-ICD is installed (and I'm sure nVidia and ATI will have one ready when Vista ships), everything will work as expected (and without a wrapper to D3D).

There is a beta from Nvidia you can try out with Vista. It already works pretty well.
Quote:Original post by MARS_999
Also, on a side note about GL3: how much is going to change? I mean, if you know GL fairly well now (GLSL, FBOs, VBOs, etc.), how much recoding is one going to have to do for GL3 vs. GL 2.0? And how is this object model going to work?


The sticky about OGL at SIGGRAPH contains a link to presentations which hold the 'current' information. Gold has also linked to an opengl.org thread in which they discuss things as well.

As for how much it is going to change: if the current ideas hold, then it's going to be a fair amount. Instead of the current "bind-change-unbind" setup, you'll pass "objects" to function calls and work with those "objects" directly (in practice, you'll be moving pointers around).

The example given in the Opengl.org thread by Michael Gold (iirc) shows how they'd like the API to look and function (as do some examples in the presentations, now I think about it). It's a definite improvement, even if the new programming practice might take a short amount of time to get used to.

The idea is to bring how you use the API closer to what the hardware actually does now. OpenGL started off as a thin layer, and while extensions have kept up with new functionality, it has apparently grown from a thin layer into a thicker one as hardware has changed. The idea NV and ATI are pushing for is to bring it back to being a thinner layer over the hardware again, to get better speeds.

It's a shame these things take so long to happen, as I'd really like to shift over now. But I guess I can wait the year for the spec, plus the time it'll take people to get drivers out (this could in fact signal a move back to NV hardware by me, if they get an OGL 3.0 implementation out the door significantly in front of ATI).
Quote:Original post by phantom
(this could in fact signal a move back to NV hardware by me, if they get an OGL 3.0 implementation out the door significantly in front of ATI).


Should do that anyway! :) Nah, I don't have a preference for either one; I just go with whichever one has the most features and the best speed at the time....
Quote:there is almost no structural difference between D3D and OGL, except for the semantic model.

Obviously -- that doesn't change anything. OpenGL's semantic model is still basically just wrong. It's another relic of SGI's completely braindead designs. According to the New Object Model PPT from NV, 3-5% of execution time is just spent doing name translation internally, which is terrible. It's these kinds of mistakes that need to be fixed. (And as you point out, even though it's completely irrelevant, D3D was similarly wrong before 7 or 8. And a number of things are still wrong in 9, hence the major changes for 10.)
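The "name translation" overhead mentioned above can be illustrated with a toy model: under the old model, every call that takes an integer name pays a table lookup to find the driver's internal object, whereas an opaque handle reaches the object directly. This is a simulation of the concept only; the names are invented, and the 3-5% figure comes from the NV presentation, not from this sketch.

```python
class InternalObject:
    """Stand-in for a driver-side object (texture, buffer, etc.)."""
    def __init__(self):
        self.data = 0

# Old model: integer name -> internal object, resolved on every call.
name_table = {}
name_table[42] = InternalObject()

def touch_by_name(name):
    obj = name_table[name]   # the per-call name-translation step
    obj.data += 1

# New model: the opaque handle *is* the object; no lookup needed.
def touch_by_handle(obj):
    obj.data += 1

handle = InternalObject()
for _ in range(1000):
    touch_by_name(42)
    touch_by_handle(handle)
print(name_table[42].data, handle.data)  # prints: 1000 1000
```

Both paths do the same work on the object; the first just pays an extra dictionary probe per call, which is the kind of fixed per-call cost the new object model was meant to eliminate.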
Quote:As for how much it is going to change, if the current ideas hold then its gonna be a fair amount. Instead of the current "bind-change-unbind" setup you'll provide "objects" to function calls and work with "objects" (well, you'll move around pointers).
Which transforms it into the same basic structure as D3D.

OpenGL, to its credit, managed to maintain the same core interface for a long time, in spite of SGI's original idiocies in the design. But its age is showing, and people have known that for a while. OGL 2.0 was supposed to bring about lots of big changes in the API, but of course the ARB stripped back all of them. The resulting "2.0" is just a rebranded OGL 1.6 spec. Nothing has changed, and the same core problems remain. The hope is that with the ARB out of the way and Khronos in charge, and with NV and ATI exercising considerable leverage, OpenGL can be properly revised and fixed. That way, there's a chance OGL 3.0 won't be filed and chipped down into a renamed OGL 2.2.

To the OP: All of the relevant documents are here.

[Edited by - Promit on August 13, 2006 4:03:39 AM]
SlimDX | Ventspace Blog | Twitter | Diverse teams make better games. I am currently hiring capable C++ engine developers in Baltimore, MD.
Quote:Original post by Promit
Obviously -- that doesn't change anything. OpenGL's semantic model is still basically just wrong.

Not really. In fact, there is not much difference between the old and the new object model. Essentially, they replaced the old object IDs with opaque handles that are entirely managed by the driver. It's like going from raw pointers to smart pointers - a logical next step, but nothing amazing, really. Just common sense, if you will. D3D has had a lot more changes done to it over its development.

That doesn't make OpenGL's semantics wrong in any way. They're just different from D3D's philosophy. OpenGL values backwards compatibility over everything else; D3D breaks backwards compatibility at pretty much every new release. Both are extremes, and both are somewhat wrong in their own ways. OpenGL needs a refresh, there's no question about that, and OGL 3.0 will hopefully bring these changes. D3D, on the other hand, really needs a little more consistency and reliability between versions. That's why so few professional applications use D3D - they just can't afford rewriting the entire render core every time MS puts out a new D3D version.

Quote:Original post by Promit
It's another relic of SGI's completely braindead designs.

Well, we wouldn't have NV without SGI, since they basically started as an ex-SGI safe haven. So their engineers can't be that braindead, can they? And if we're talking about braindead designs, let me remind you of that kernel/user mode transition thing in DrawIndexedPrimitive... That was completely absurd, and thank god it is fixed in D3D10.

API semantics evolve over time, and adapt to the way they're being used. It's clear that OpenGL wasn't able to follow these changes as fast as D3D could - but not because of an inherently wrong design, but because of its philosophy.

Quote:Original post by Promit
According to the New Object Model PPT from NV, 3-5% of execution time is just spent doing name translation internally, which is terrible. It's these kinds of mistakes that need to be fixed.

Yep.
Quote:Original post by Yann L
Well, we wouldn't have NV without SGI, since they basically started as an ex-SGI safe haven. So their engineers can't be that braindead, can they? And if we're talking about braindead designs, let me remind you of that kernel/user mode transition thing in DrawIndexedPrimitive... That was completely absurd, and thank god it is fixed in D3D10.

Correct me if I'm wrong, but Promit doesn't seem to be doing the "DirectX is perfect, OpenGL is evil" routine, he was pointing out what he sees as a flaw in OpenGL, so answering with "Yeah, but DirectX has flaws too" just comes across a bit childish... [wink] You're right, of course, DX does have some silly design flaws, but it's pretty irrelevant to a discussion about the flaws of OpenGL.

He also didn't say that SGI's engineers were braindead. Just that SGI had some completely braindead designs.

I think you're getting a bit too defensive here. [grin]
Quote:Original post by Spoonbender
I think you're getting a bit too defensive here. [grin]

Heh, maybe [wink] It's just not the first time I've had this kind of discussion with Promit... And it's not as if we both weren't biased in our very particular ways ;)

Anyway, regardless of D3D, I just disagree that OpenGL's basic design philosophy is flawed. In fact, I think that this basic design and the reliance on backwards compatibility is its main strength. Sure, OpenGL needs a refresh and needs to drop a lot of the old legacy stuff. But it doesn't need a redesign. That's why I'm a little worried about the direction GL3 is taking. Some ideas are very good, the new object model for example. But I'm a little afraid that Khronos is too keen on breaking what is in fact an excellent design philosophy, in order to "catch up" with D3D's market share in the games sector. I'm afraid that GL3 might in fact mimic D3D's direction too much for marketing reasons alone.

So in essence, OpenGL has its flaws, D3D has its flaws. Of course, we all know that. OpenGL needs a revamp, sure - but without breaking its unique style. Well, I guess we have to wait and see. The GL3 specs are far from finished, and lots of things can still change.

