Clear up the Vista / OpenGL Aero UI?

So I see MS is allowing ATI/Nvidia to make their own ICDs for OpenGL on Vista, correct? Will this allow me to code Win32/OpenGL and use the Aero UI with full acceleration, in windowed mode rather than fullscreen? I think so, but I just wanted to make sure so I don't spread any rumors if someone asks me. :) Thanks.

Also, on a side note about GL3: how much is going to change? If you know GL fairly well now (GLSL, FBOs, VBOs, etc.), how much recoding is one going to have to do for GL3 vs. GL 2.0? And how is this object model going to work?

OpenGL on Vista will work identically to how OpenGL works on XP now, except that the fallback is now a fast (compared to before) OpenGL 1.4 implementation layered on D3D. (There might still be a couple of minor catches with the compositor, I don't remember. Those may be resolved by RTM.)

As for OGL 3.0, I don't think much of the pre-proposal material is actually public at the moment. But to make a long story short, it basically sounds like they're transforming OGL to largely mimic D3D in structure, which is a step that has been desperately needed for years now.

Vista only uses an OpenGL-to-D3D wrapper if no ICD is installed. If an XP ICD is installed, the 3D features of the desktop are deactivated when starting an OpenGL app. If a Vista ICD is installed (and I'm sure nVidia and ATI will have one ready when Vista ships), everything will work as expected (and without a wrapper to D3D).

Quote:
Original post by Promit
But to make a long story short, it basically sounds like they're transforming OGL to largely mimic D3D in structure, which is a step that has been desperately needed for years now.

You mean after Microsoft transformed D3D to largely mimic OpenGL structures a couple of years back? Let's be honest for a moment, Microsoft employee or not: there is almost no structural difference between D3D and OGL, except for the semantic model. And that one will not change. On modern hardware, they're both nothing more than more or less empty frameworks for the real workhorses: the shaders.

There is not much known about GL3, except rumours and working group discussions. Khronos will surely bring OpenGL and OpenGL ES more in line with each other. Rumours say that GL3 might drop native support for legacy features such as display lists, feedback mode, and immediate mode. These would still be available, but layered on top of the GL3 core (like a utility library).
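
As a rough illustration of what "layered on top of the GL3 core" could mean (this is purely a sketch I'm making up, not anything from the actual proposal), an immediate-mode utility layer could just batch vertices and feed them into the core's array-drawing path:

#include <GL/gl.h>

/* Sketch of a glBegin/glVertex/glEnd emulation layered over a leaner core:
 * the utility layer batches vertices and submits them in one array draw. */
#define BATCH_MAX 4096

static GLfloat batch[BATCH_MAX * 3];
static int     batch_count;
static GLenum  batch_mode;

void utilBegin(GLenum mode)
{
    batch_mode  = mode;
    batch_count = 0;
}

void utilVertex3f(GLfloat x, GLfloat y, GLfloat z)
{
    if (batch_count < BATCH_MAX) {
        batch[batch_count * 3 + 0] = x;
        batch[batch_count * 3 + 1] = y;
        batch[batch_count * 3 + 2] = z;
        batch_count++;
    }
}

void utilEnd(void)
{
    /* hand the whole batch to the core in a single call */
    glVertexPointer(3, GL_FLOAT, 0, batch);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(batch_mode, 0, batch_count);
    glDisableClientState(GL_VERTEX_ARRAY);
}

That way old code keeps working, but the driver's core path only ever has to deal with arrays.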

All in all, converting to GL3 will be straightforward if you know GL2 (or any 3D API for that matter). For more info, take a look at the Siggraph 2006 tech notes from Khronos (here).

Quote:

If a Vista-ICD is installed (and I'm sure nVidia and ATI will have one ready when Vista ships), everything will work as expected (and without a wrapper to D3D).

There is a beta from Nvidia you can try out with Vista. It already works pretty well.

Quote:
Original post by MARS_999
Also, on a side note about GL3: how much is going to change? If you know GL fairly well now (GLSL, FBOs, VBOs, etc.), how much recoding is one going to have to do for GL3 vs. GL 2.0? And how is this object model going to work?


The sticky about OGL at SIGGRAPH contains a link to presentations which hold the 'current' information. Gold has also linked to an opengl.org thread in which they discuss things as well.

As for how much it is going to change: if the current ideas hold, then it's going to be a fair amount. Instead of the current "bind-change-unbind" setup, you'll provide "objects" to function calls and work with "objects" (well, you'll move around pointers).
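
To put that concretely, here's a rough C sketch of the difference; the second half uses function names I'm inventing purely for illustration, since the real GL3 API isn't public yet:

#include <GL/gl.h>

/* Today (GL 2.x): global bind points, i.e. "bind-change-unbind". */
void upload_texture_gl2(GLuint tex, int w, int h, const void *pixels)
{
    glBindTexture(GL_TEXTURE_2D, tex);               /* bind to a global slot   */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, w, h,   /* change the bound object */
                 0, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    glBindTexture(GL_TEXTURE_2D, 0);                 /* unbind                  */
}

/* Proposed style (names invented for illustration only): the object handle is
 * passed straight to the call, with no global bind point in between.
 *
 *   GLtexture tex = glCreateTexture(GL_TEXTURE_2D);
 *   glTextureImage2D(tex, 0, GL_RGBA8, w, h, GL_RGBA, GL_UNSIGNED_BYTE, pixels);
 */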

The example given in the opengl.org thread by Michael Gold (IIRC) shows how they'd like the API to look and function (as do some examples in the presentations, now I think about it). It's a definite improvement, even if the new programming practice might take a short amount of time to get used to.

The idea is to make how you use the API closer to what the hardware does now. OpenGL started off as a thin layer, and while the extensions covered the functionality, it has apparently gone from a thin layer to a thicker one as hardware has changed. The idea that NV and ATI are pushing is to bring it back to being a thinner layer over the hardware again, to get better speeds.

It's a shame these things take so long to happen, as really I'd like to shift over now, but I guess I can wait the year for the spec and the amount of time it'll take people to get drivers out (this could in fact signal a move back to NV hardware by me, if they get an OGL3.0 implementation out the door significantly ahead of ATI).

Quote:
Original post by phantom
(this could in fact signal a move back to NV hardware by me, if they get an OGL3.0 implementation out the door significantly ahead of ATI).


Should do that anyway! :) Nah, I don't have a preference for either one, only whichever one has the most features and best speed at the time I buy....

Quote:
there is almost no structural difference between D3D and OGL, except for the semantic model.

Obviously -- that doesn't change anything. OpenGL's semantic model is still basically just wrong. It's another relic of SGI's completely braindead designs. According to the New Object Model PPT from NV, 3-5% of execution time is just spent doing name translation internally, which is terrible. It's these kinds of mistakes that need to be fixed. (And as you point out, even though it's completely irrelevant, D3D was similarly wrong before 7 or 8. And a number of things are still wrong in 9, hence the major changes for 10.)
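
For anyone wondering what "name translation" actually refers to: a GL name is just an integer, so on every call the driver has to map it back to its internal object first. A made-up sketch of that lookup (illustrative only, not real driver code) looks roughly like this:

#include <GL/gl.h>

/* Illustrative only -- not real driver code. */
typedef struct TextureObject TextureObject;

typedef struct NameEntry {
    GLuint         name;    /* the integer the application sees          */
    TextureObject *object;  /* the driver-internal object it resolves to */
} NameEntry;

/* hypothetical hash table owned by the driver */
NameEntry *lookup_name(GLuint name);
NameEntry *create_entry(GLuint name);

void driver_BindTexture(GLenum target, GLuint name)
{
    NameEntry *entry = lookup_name(name);   /* the per-call translation cost */
    if (entry == NULL)
        entry = create_entry(name);         /* names may be used before glGen* */
    /* ... attach entry->object to the current context state for 'target' ... */
}

That hash lookup on every entry point is where the quoted 3-5% goes.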
Quote:
As for how much it is going to change: if the current ideas hold, then it's going to be a fair amount. Instead of the current "bind-change-unbind" setup, you'll provide "objects" to function calls and work with "objects" (well, you'll move around pointers).
Which transforms it into the same basic structure as D3D.

OpenGL, to its credit, managed to maintain the same core interface for a long time, in spite of SGI's original idiocies in the design. But its age is showing, and people have known that for a while. OGL 2.0 was supposed to bring about lots of big changes in the API, but of course the ARB stripped back all of them. The resulting "2.0" is just a rebranded OGL 1.6 spec. Nothing has changed, and the same core problems remain. The hope is that with the ARB out of the way and Khronos in charge, and with NV and ATI exercising considerable leverage, OpenGL can be properly revised and fixed. That way, there's a chance OGL 3.0 won't be filed and chipped down into a renamed OGL 2.2.

To the OP: All of the relevant documents are here.

[Edited by - Promit on August 13, 2006 4:03:39 AM]

Quote:
Original post by Promit
Obviously -- that doesn't change anything. OpenGL's semantic model is still basically just wrong.

Not really. In fact, there is not much difference between the old and the new object model. Essentially, they replaced the old object IDs with opaque handles that are entirely managed by the driver. It's like going from raw pointers to smart pointers - a logical next step, but nothing amazing really. Just common sense if you want. D3D had a lot more changes done to it over its development.

That doesn't make OpenGL's semantics wrong in any way. They're just different from D3D's philosophy. OpenGL values backwards compatibility over everything else; D3D breaks backwards compatibility at pretty much every new release. Both are extremes, and both are somewhat wrong in their own ways. OpenGL needs a refresh, there's no question about that, and OGL 3.0 will hopefully bring these changes. D3D, on the other hand, really needs a little more consistency and reliability between versions. That's why so few professional applications use D3D - they just can't afford rewriting the entire render core every time MS puts out a new D3D version.

Quote:
Original post by Promit
It's another relic of SGI's completely braindead designs.

Well, we wouldn't have NV without SGI, since they basically started as an ex-SGI safe haven. So their engineers can't be that braindead, can they? And if we're talking about braindead designs, let me remind you of that kernel/user mode transition thing in DrawIndexedPrimitive... That was completely absurd, and thank god it is fixed in D3D10.

API semantics evolve over time, and adapt to the way they're being used. It's clear that OpenGL wasn't able to follow these changes as fast as D3D could - but not because of an inherently wrong design, but because of its philosophy.

Quote:
Original post by Promit
According to the New Object Model PPT from NV, 3-5% of execution time is just spent doing name translation internally, which is terrible. It's these kinds of mistakes that need to be fixed.

Yep.

Quote:
Original post by Yann L
Well, we wouldn't have NV without SGI, since they basically started as an ex-SGI safe haven. So their engineers can't be that braindead, can they? And if we're talking about braindead designs, let me remind you of that kernel/user mode transition thing in DrawIndexedPrimitive... That was completely absurd, and thank god it is fixed in D3D10.

Correct me if I'm wrong, but Promit doesn't seem to be doing the "DirectX is perfect, OpenGL is evil" routine; he was pointing out what he sees as a flaw in OpenGL, so answering with "Yeah, but DirectX has flaws too" just comes across a bit childish... [wink] You're right, of course, DX does have some silly design flaws, but that's pretty irrelevant to a discussion about the flaws of OpenGL.

He also didn't say that SGI's engineers were braindead. Just that SGI had some completely braindead designs.

I think you're getting a bit too defensive here. [grin]

Quote:
Original post by Spoonbender
I think you're getting a bit too defensive here. [grin]

Heh, maybe [wink] It's just not the first time I've had this kind of discussion with Promit... And it's not as if we both weren't biased in our very particular ways ;)

Anyway, regardless of D3D, I just disagree that OpenGL's basic design philosophy is flawed. In fact, I think that this basic design and the reliance on backwards compatibility is its main strength. Sure, OpenGL needs a refresh and needs to drop a lot of the old legacy stuff. But it doesn't need a redesign. That's why I'm a little worried about the direction GL3 is taking. Some ideas are very good, the new object model for example. But I'm a little afraid that Khronos is too keen on breaking what is in fact an excellent design philosophy, in order to "catch up" with D3D's market share in the games sector. I'm afraid that GL3 might in fact mimic D3D's direction too much, for marketing reasons alone.

So in essence, OpenGL has its flaws, D3D has its flaws. Of course, we all know that. OpenGL needs a revamp, sure - but without breaking its unique style. Well, I guess we have to wait and see. The GL3 specs are far from finished, and lots of things can still change.

Quote:
Original post by Yann L
...
That's why I'm a little worried about the direction GL3 is taking. Some ideas are very good, the new object model for example. But I'm a little afraid that Khronos is too keen on breaking what is in fact an excellent design philosophy, in order to "catch up" with D3D's market share in the games sector. I'm afraid that GL3 might in fact mimic D3D's direction too much, for marketing reasons alone.
...


The current trend is that OpenGL 3.0 will be like the "Pure" OpenGL 2.0 proposal from 3DLabs, with the Lean & Mean profile and a compatibility layer on top so that older apps still work.
I really like that idea, since it wouldn't break compatibility, it would make drivers simpler (and so less bug-prone), and it would let those who need very high performance at the cost of compatibility (game devs) have it.

3DLabs' design was quite different and had various fresh ideas that would have given more power to the programmer. It wasn't strictly aimed at making GL drivers simpler. It was quite the opposite: very complicated.

In the early days, D3D was garbage compared to GL. No one said SGI's design was braindead back then, but of course, as time passes, people forget the past or just weren't there to begin with.

Excuse me, I haven't read everyone's posts just yet, but I still have this question burning in my mind: will they be replacing the indices (for, say, textures) with actual pointers, or something that doesn't require translation by software as the indices do? I had always thought that would be a great idea, and I saw it in the list for the OpenGL LM.

However, I'm quite reluctant to give up the bind-unbind system, because I was under the impression that that was part of why OpenGL draw calls are quicker than D3D ones. When you call the render function, OpenGL doesn't move around any texture objects, shaders, or vertex buffers; it's all been set in place beforehand, which was a big advantage IMHO (but I don't actually know anything, so it's really just an opinion).

cheers
-Dan

Quote:
Original post by Ademan555
However, I'm quite reluctant to give up the bind-unbind system, because I was under the impression that that was part of why OpenGL draw calls are quicker than D3D ones. When you call the render function, OpenGL doesn't move around any texture objects, shaders, or vertex buffers; it's all been set in place beforehand, which was a big advantage IMHO (but I don't actually know anything, so it's really just an opinion).


No, the reason OGL's draw calls were/are quicker than D3D's was down to the user/kernel switch which had to take place when you issued the draw commands in D3D, which is a MAJOR time killer. This is now gone in D3D10 (huzzah!), effectively removing OGL's advantage in that department.

From my understanding of the new object model, instead of doing bind-set-bind-set-etc.-draw, you'll now produce a vertex object which contains the bind points for the various attributes, and when you go to draw you basically pass this in and tell it to draw things. This is a good thing, as it removes the CPU overhead of multiple bind-set pairs at draw time, as well as the overhead the drivers have when setting up data (for example, as I recall NV do a fair amount of validation work on a glVertexPointer() call; this work could be moved to one single place, shifting the overhead from draw time to setup time).
(NB: this could be wrong and it could change, but that's the impression I've got at the moment.)
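
A sketch of what I mean; the GL 2.x part is real (assuming GL 1.5+ headers for glBindBuffer), but the second half uses function names invented purely for illustration, since the actual GL3 calls aren't public:

#include <GL/gl.h>

/* GL 2.x today: bind-set pairs repeated (and re-validated) at every draw. */
void draw_gl2(GLuint vbo, GLsizei vertex_count)
{
    glBindBuffer(GL_ARRAY_BUFFER, vbo);
    glVertexPointer(3, GL_FLOAT, 0, (const GLvoid *)0);
    glEnableClientState(GL_VERTEX_ARRAY);
    glDrawArrays(GL_TRIANGLES, 0, vertex_count);
}

/* Proposed style (invented names): the attribute bindings are captured once
 * in a "vertex object", and the draw call just takes that object, moving the
 * validation out of the hot path.
 *
 *   GLvertexobject vo = glCreateVertexObject();
 *   glVertexObjectAttrib(vo, 0, vbo, 3, GL_FLOAT, 0, 0);
 *   ...
 *   glDrawVertexObject(vo, GL_TRIANGLES, 0, vertex_count);
 */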

As for the design being braindead: when it was conceived, and for some time afterwards, OGL's design wasn't; in fact it pretty closely mirrored the hardware, from my understanding. It's only since the shader revolution that things have started to get icky and the API has drifted further from the hardware. Of course, with backwards compatibility being a major OGL feature, it's hard to 'fix' this problem without breaking things. The compatibility layer on top of the fast layer seems like the best idea: it gives new projects the speed while not breaking the old ones.

I'd like to comment on the "braindead" SGI designs...

The names model was designed at a time when the CPU overhead of a hash table lookup was insignificant relative to the cost of display list execution. Simply sending data across the wire from the host to the graphics adaptor was expensive! The lookup wasn't even close to a bottleneck. Meanwhile, names provided some valuable properties:

1) Push model - object creation does not require a round trip; the new object model as currently proposed requires a round trip per object. Does this matter in practice? It sure did in 1992. Nowadays the majority of implementations are direct renderers, but thin clients threaten to bring back the client-server model.

2) Robustness - an opaque pointer is an invitation for the application to corrupt the driver heap. Again, does it matter? Maybe, and some implementations will likely continue to use names for robustness.

Bear in mind that each step taken through the evolution of the API was considered the best course of action at the time. The binding model was introduced in EXT_texture_object to maintain the OpenGL 1.0 texture APIs. Every extension thereafter followed the precedent, for better or worse - but these were ARB designs, not SGI's, so not all of the mistakes can be blamed on the original design.
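
For anyone who wasn't around for it, this is roughly what that retrofit looks like in practice: glGenTextures/glBindTexture were added, and the existing GL 1.0 entry points (glTexImage2D, glTexParameteri) were simply redirected to whatever object is currently bound. A minimal example:

#include <GL/gl.h>

/* GL 1.0 had no texture objects: glTexImage2D et al. always modified the one
 * implicit texture for the target.  EXT_texture_object (core in GL 1.1) kept
 * those entry points and made glBindTexture select which object they affect. */
GLuint make_checker_texture(void)
{
    static const GLubyte pixels[16] = {           /* 2x2 RGBA checkerboard */
        255, 255, 255, 255,   0,   0,   0, 255,
          0,   0,   0, 255, 255, 255, 255, 255,
    };
    GLuint tex;

    glGenTextures(1, &tex);               /* new with texture objects          */
    glBindTexture(GL_TEXTURE_2D, tex);    /* redirects the 1.0-era calls below */
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 2, 2, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);
    return tex;
}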

Hindsight is 20/20, and given fourteen years of hindsight I feel comfortable changing the model based on what we've learned. Certainly DX benefited from these lessons, but MS had a simpler task, as they serve a narrow market segment and have never prioritized backward compatibility.

Are we making the right decisions in the new design? Time will tell. High latency connections are uncommon but they still exist, and robustness is always a concern. We walk in the footsteps of giants, and these are pretty big shoes to fill.
