OpenGL: When will we get OpenGL 1.3?

Recommended Posts

I'm waiting for OpenGL 1.2 first... it might take some months until it's available on Windows systems. OpenGL 1.3? Maybe a few years?

cya,
Phil

Visit Rarebyte!
and no, there are NO kangaroos in Austria (I got this question a few times over in the States).

What do you mean 1.2 will take months? I've been using it for years.

The software driver that comes with Windows is only 1.1, but the drivers for most (if not all) hardware are up to 1.2.

Hmmm... are you sure? I'll have to look it up.
cya,
Phil

Visit Rarebyte!
and no, there are NO kangaroos in Austria (I got this question a few times over in the States).

I don't know if this will work on non-NVIDIA cards, but if you want to use OpenGL 1.2 you can enable it through an extension. GL_VERSION_1_2 is the name of the extension. I use it and it works great.
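A minimal sketch of what that looks like (my own illustration, not SirKnight's code; plain C on Windows, assuming <windows.h>, <GL/gl.h>, an active rendering context, and a driver that actually advertises the string):

#include <windows.h>
#include <GL/gl.h>
#include <string.h>

/* glDrawRangeElements is a 1.2 entry point, so on Windows it has to be
   fetched at runtime instead of linked from opengl32.lib */
typedef void (APIENTRY *PFNGLDRAWRANGEELEMENTSPROC)
    (GLenum mode, GLuint start, GLuint end, GLsizei count,
     GLenum type, const GLvoid *indices);

static PFNGLDRAWRANGEELEMENTSPROC pglDrawRangeElements = NULL;

int InitGL12(void)
{
    const char *ext = (const char *)glGetString(GL_EXTENSIONS);
    /* some drivers list GL_VERSION_1_2 here; parsing glGetString(GL_VERSION)
       is the more portable test */
    if (ext == NULL || strstr(ext, "GL_VERSION_1_2") == NULL)
        return 0;
    pglDrawRangeElements = (PFNGLDRAWRANGEELEMENTSPROC)
        wglGetProcAddress("glDrawRangeElements");
    return pglDrawRangeElements != NULL;
}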

-SirKnight

It's only OpenGL 1.1 on Windows systems, but drivers are made for 1.2 and you can access 1.2 features through the extension mechanism; however, it's not 1.2.

If it were 1.2, you wouldn't have to access standard 1.2 functions through extensions.

OpenGL 1.3 should be out soon; I hope they'll release new Red and Blue Books very soon after it becomes available.

M$ just doesn't like having to deal with something it has no control over; that's why it doesn't update the OpenGL version in Windows.
(They promote DirectX that way.)


-* So many things to do, so little time to spend. *-

I thought they were releasing 1.3 at SIGGRAPH next weekend.

The fanatic is incorruptible: if he kills for an idea, he can just as well get himself killed for one; in either case, tyrant or martyr, he is a monster.
--EM Cioran

Opere Citato

*sigh* I really wish I could attend this kind of stuff... And, since I can't, I really wish they could tell us all this stuff on some geek-only channel... And since they can't, I really wish I could attend this kind of stuff.

But really, OGL 1.3 is nice in concept, but how long until Windows (the dominant OS) even gives us 1.2? I mean, without Windows... OGL will lose some programmers.

------------------------------
Trent (ShiningKnight)
E-mail me
Shining Darkness- A division of Chromesphere Studios

Anyone know if they are going to make some of the current extensions, like multitexturing and lightmapping, standard with the next update?

IIRC multitexturing is a standard feature in OpenGL 1.2... The reason it's still accessed through the extension mechanism is that OpenGL for Windows is still at 1.1, despite the 1.2 spec having been finalized forever ago.
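A minimal sketch of the extension route (my own illustration; it assumes GL_ARB_multitexture is present, that glActiveTextureARB and glMultiTexCoord2fARB were already fetched with wglGetProcAddress, and that baseTex and lightmapTex are hypothetical texture IDs):

/* unit 0: base texture */
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, baseTex);

/* unit 1: lightmap modulated over the base */
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, lightmapTex);
glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);

glBegin(GL_QUADS);
    /* every vertex needs a coordinate for each unit */
    glMultiTexCoord2fARB(GL_TEXTURE0_ARB, 0.0f, 0.0f);
    glMultiTexCoord2fARB(GL_TEXTURE1_ARB, 0.0f, 0.0f);
    glVertex3f(-1.0f, -1.0f, 0.0f);
    /* ...and the same for the other three corners... */
glEnd();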

Guest Anonymous Poster
It is not a problem if Krosoft does not release an OpenGL 1.2 implementation!

If you have a decent graphics card, its driver supports it.

If you have old hardware, you can use the Mesa software implementation.

And if you're not happy with Billou, simply install Linux or buy a Mac!

Why don't some smart programmers... (I'd volunteer) write an installer and build an implementation of OGL 1.2 for Windows? If the support for 1.2 is in the OpenGL video drivers, can it be that hard?

Ranger

quote:
Original post by Ranger_One
Why don't some smart programmers... (I'd volunteer) write an installer and build an implementation of OGL 1.2 for Windows? If the support for 1.2 is in the OpenGL video drivers, can it be that hard?


SGI already did. Microsoft won't let them release it, and just ignores it. Microsoft wrote one too; it's been going through 'testing' for well over a year now (from what I read in the ARB meeting notes).

[Resist Windows XP's Invasive Product Activation Technology!]


Info on OpenGL 1.3


Then I just have to wait for NVIDIA to leak the drivers.

The fanatic is incorruptible: if he kills for an idea, he can just as well get himself killed for one; in either case, tyrant or martyr, he is a monster.
--EM Cioran

Opere Citato

What? What's this I hear?

Wow...from all the talk in the OpenGL vs DX debates, I was under the impression OpenGL only had one version and that it supports everything that's coming out in the next 300 years...

G'luck,
-Alamar

quote:
Original post by Alamar

What? What's this I hear?

Wow...from all the talk in the OpenGL vs DX debates, I was under the impression OpenGL only had one version and that it supports everything that's coming out in the next 300 years...


OpenGL supports anything that can be made into an extension, in any version. New versions mean that extensions become standard features, and you can always expect a 100% compliant set of drivers to have that feature instead of testing for it first.
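To illustrate the "testing for it first" part, a minimal sketch (my own, plain C): a whole-token check of the extension string, since a naive strstr() can match the prefix of a longer extension name.

#include <string.h>
#include <GL/gl.h>

int HasExtension(const char *name)
{
    const char *start = (const char *)glGetString(GL_EXTENSIONS);
    const char *p = start;
    size_t len = strlen(name);

    if (start == NULL)
        return 0;
    while ((p = strstr(p, name)) != NULL) {
        /* accept only whole space-delimited tokens */
        if ((p == start || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
        p += len;
    }
    return 0;
}

/* usage: if (HasExtension("GL_ARB_multitexture")) { ... } */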

[Resist Windows XP's Invasive Product Activation Technology!]

You see, there is one problem with OpenGL... it's too bloody good!! I mean, look at it: version 1.1 has been around since about 1995 and has been designed in such a way that this version could possibly go on forever (still want 1.3 though). Just goes to show how you should create software!!!

~prevail by daring to fail~

Sarcasm about something you do not understand is not a good idea. I have the impression that Alamar thinks the new version is about support for new features like a new version of DX.

Do you really think that Microsoft will even bother releasing 1.2 when 1.3 is here? (That is, if they ever do release something; I don't think they ever will.)

I stopped caring about Microsoft. If we all develop games or other programs with 1.3 and Windows isn't supported, then we'll have to develop for Linux and other operating systems. Microsoft will learn sooner or later.

-----------------------------------------------------------
"People who usualy use the word pedantic usualy are pedantic!"-me

Yeah,

I know that most people are using Windows, so if you develop for another OS you won't make a lot of sales / get popularity; but if you want Microsoft to change, going along with their schemes isn't going to help. I too have finally given up on caring about MS; I just use their stuff. I hope MS has a change of heart (yeah, right) and decides to make 1.3 available. Where's a good OpenGL chat? I've given up on NeHe's... =[
But who am I to tell you all what to do? I still program for Windows (because I don't have Linux).

IT'S TIME TO START A REVOLUTION!

_Buster_

oh by the way Buster != _BUSTER_


dotspot.cjb.net
______________________________
Check out my for-sale domain name!

http://www.Theatermonkey.com



Edited by - _BUSTER_ on August 16, 2001 3:29:39 AM


  • Similar Content

    • By xhcao
      Is a sync needed to read texture content after accessing the texture image in a compute shader?
      My simple code is as below,
      glUseProgram(program.get());
      glBindImageTexture(0, texture[0], 0, GL_FALSE, 3, GL_READ_ONLY, GL_R32UI);
      glBindImageTexture(1, texture[1], 0, GL_FALSE, 4, GL_WRITE_ONLY, GL_R32UI);
      glDispatchCompute(1, 1, 1);
      // Is a sync needed here?
      glUseProgram(0);
      glBindFramebuffer(GL_READ_FRAMEBUFFER, framebuffer);
      glFramebufferTexture2D(GL_READ_FRAMEBUFFER, GL_COLOR_ATTACHMENT0,
                                     GL_TEXTURE_CUBE_MAP_POSITIVE_X + face, texture[1], 0);
      glReadPixels(0, 0, kWidth, kHeight, GL_RED_INTEGER, GL_UNSIGNED_INT, outputValues);
       
      The compute shader is very simple: it imageLoads content from texture[0] and imageStores content to texture[1]. Is a sync needed after glDispatchCompute?
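      (A hedged note, not from the original post: image stores are incoherent, so a barrier is generally needed; since glReadPixels reads through a framebuffer attachment, GL_FRAMEBUFFER_BARRIER_BIT is the bit that should cover it.)
      glDispatchCompute(1, 1, 1);
      // make the image stores visible to later reads via framebuffer attachments
      glMemoryBarrier(GL_FRAMEBUFFER_BARRIER_BIT);
      // ...then bind the framebuffer and glReadPixels as above...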
    • By Jonathan2006
      My question: is it possible to transform multiple angular velocities so that they can be reinserted as one? My research is below:
      // This works
      quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
      quat quaternion2 = GEMultiplyQuaternions(quaternion1, GEQuaternionFromAngleRadians(angleRadiansVector2));
      quat quaternion3 = GEMultiplyQuaternions(quaternion2, GEQuaternionFromAngleRadians(angleRadiansVector3));
      glMultMatrixf(GEMat4FromQuaternion(quaternion3).array);

      // The first two work fine but not the third. Why?
      quat quaternion1 = GEQuaternionFromAngleRadians(angleRadiansVector1);
      vec3 vector1 = GETransformQuaternionAndVector(quaternion1, angularVelocity1);
      quat quaternion2 = GEQuaternionFromAngleRadians(angleRadiansVector2);
      vec3 vector2 = GETransformQuaternionAndVector(quaternion2, angularVelocity2);
      // This doesn't work
      //quat quaternion3 = GEQuaternionFromAngleRadians(angleRadiansVector3);
      //vec3 vector3 = GETransformQuaternionAndVector(quaternion3, angularVelocity3);
      vec3 angleVelocity = GEAddVectors(vector1, vector2);
      // Does not work: vec3 angleVelocity = GEAddVectors(vector1, GEAddVectors(vector2, vector3));
      static vec3 angleRadiansVector;
      vec3 angularAcceleration = GESetVector(0.0, 0.0, 0.0);

      // Sending it through one angular velocity later in my motion engine
      angleVelocity = GEAddVectors(angleVelocity, GEMultiplyVectorAndScalar(angularAcceleration, timeStep));
      angleRadiansVector = GEAddVectors(angleRadiansVector, GEMultiplyVectorAndScalar(angleVelocity, timeStep));
      glMultMatrixf(GEMat4FromEulerAngle(angleRadiansVector).array);

      Also, how do I combine multiple angularAcceleration variables? Is there an easier way to transform the angular values?
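      (A hedged aside, not from the original post: angular velocities only add componentwise when they are expressed in the same frame, so the third one presumably has to be rotated by the accumulated orientation of the first two rotations rather than by its own angles alone. A sketch using the post's own helpers, under that assumption:)
      // hypothetical fix: transform angularVelocity3 by the composed orientation
      quat accumulated = GEMultiplyQuaternions(quaternion1, quaternion2);
      vec3 vector3 = GETransformQuaternionAndVector(accumulated, angularVelocity3);
      vec3 angleVelocity = GEAddVectors(vector1, GEAddVectors(vector2, vector3));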
    • By dpadam450
      I have this code below in both my vertex and fragment shaders; however, when I request glGetUniformLocation("Lights[0].diffuse") or "Lights[0].attenuation", it returns -1. It will only give me a valid uniform location if I actually use the diffuse/attenuation variables in the VERTEX shader. Because I use position in the vertex shader, it always returns a valid uniform location. I've read that I can share uniforms across both vertex and fragment stages, but I'm confused about what this is even compiling to if that is the case.
       
      #define NUM_LIGHTS 2
      struct Light
      {
          vec3 position;
          vec3 diffuse;
          float attenuation;
      };
      uniform Light Lights[NUM_LIGHTS];
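      (A hedged note, not from the original post: GLSL linkers may optimize away uniforms that no stage actually uses, and glGetUniformLocation returns -1 for those, which matches the behavior described above. A defensive lookup, with program and diffuseColor as assumed names:)
      GLint loc = glGetUniformLocation(program, "Lights[0].diffuse");
      if (loc != -1)  // -1 just means "inactive / optimized out"
          glUniform3fv(loc, 1, diffuseColor);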
       
       
    • By pr033r
      Hello,
      I have a Bachelor's project on the topic "Implement a 3D boids algorithm in OpenGL". All the OpenGL parts work fine for me, all the rendering etc. But when I started implementing the boids algorithm it got worse and worse. I read an article (http://natureofcode.com/book/chapter-6-autonomous-agents/) and took inspiration from other code (here: https://github.com/jyanar/Boids/tree/master/src), but it still doesn't work like in the tutorials and videos. For example, the main problem: when I apply cohesion (one of the three main laws of boids) it makes some "cycling knot". Second, when one flock touches another, the coordinates change alarmingly or the boids respawn at the origin (x: 0, y: 0, z: 0). Just some strange things.
      I followed many tutorials and tried changing everything, but it isn't as smooth and lag-free as in the other videos. I really need your help.
      My code (optimalizing branch): https://github.com/pr033r/BachelorProject/tree/Optimalizing
      Exe file (if you want to look) and models folder (for those who will download the sources):
      http://leteckaposta.cz/367190436
      Thanks for any help...
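      (A hedged sketch, not the poster's code: the standard cohesion rule steers each boid toward the average position of its neighbors with a small gain, and real implementations clamp the resulting force; an unclamped or oversized gain is one common cause of the "cycling knot" behavior.)
      typedef struct { float x, y, z; } vec3f;

      /* steer boid i toward the average position of the other boids */
      vec3f cohesion(const vec3f *pos, int count, int i, float gain)
      {
          vec3f center = {0.0f, 0.0f, 0.0f}, steer = {0.0f, 0.0f, 0.0f};
          int j, n = 0;
          for (j = 0; j < count; j++) {
              if (j == i) continue;
              center.x += pos[j].x; center.y += pos[j].y; center.z += pos[j].z;
              n++;
          }
          if (n > 0) {
              steer.x = (center.x / n - pos[i].x) * gain;
              steer.y = (center.y / n - pos[i].y) * gain;
              steer.z = (center.z / n - pos[i].z) * gain;
          }
          return steer;
      }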

    • By Andrija
      I am currently trying to implement shadow mapping in my project, but although I can render my depth map to the screen and it looks okay, when I sample it with shadowCoords there is no shadow.
      Here is my light-space matrix calculation:
      mat4x4 lightViewMatrix;
      vec3 sun_pos = {SUN_OFFSET * the_sun->direction[0],
                      SUN_OFFSET * the_sun->direction[1],
                      SUN_OFFSET * the_sun->direction[2]};
      mat4x4_look_at(lightViewMatrix, sun_pos, player->pos, up);
      mat4x4_mul(lightSpaceMatrix, lightProjMatrix, lightViewMatrix);

      I will tweak the values for the size and frustum of the shadow map, but for now I just want to draw shadows around the player position.
      the_sun->direction is a normalized vector, so I multiply it by a constant to get the position.
      player->pos is the camera position in world space.
      The light projection matrix is calculated like this:
      mat4x4_ortho(lightProjMatrix, -SHADOW_FAR, SHADOW_FAR, -SHADOW_FAR, SHADOW_FAR, NEAR, SHADOW_FAR);

      Shadow vertex shader:
      uniform mat4 light_space_matrix;

      void main()
      {
          gl_Position = light_space_matrix * transfMatrix * vec4(position, 1.0f);
      }

      Shadow fragment shader:
      out float fragDepth;

      void main()
      {
          fragDepth = gl_FragCoord.z;
      }

      I am using deferred rendering, so I have all my world positions in the g_positions buffer.
      My shadow calculation in the deferred fragment shader:
      float get_shadow_fac(vec4 light_space_pos)
      {
          vec3 shadow_coords = light_space_pos.xyz / light_space_pos.w;
          shadow_coords = shadow_coords * 0.5 + 0.5;
          float closest_depth = texture(shadow_map, shadow_coords.xy).r;
          float current_depth = shadow_coords.z;
          float shadow_fac = 1.0;
          if (closest_depth < current_depth)
              shadow_fac = 0.5;
          return shadow_fac;
      }

      I call the function like this:
      get_shadow_fac(light_space_matrix * vec4(position, 1.0));

      Where position is the value I got from sampling the g_position buffer.
      Here is my depth texture (I know it will produce low-quality shadows, but I just want to get it working for now):
      Sorry about the compression; the black smudges are trees... https://i.stack.imgur.com/T43aK.jpg
      EDIT: Depth texture attachment:
      glTexImage2D(GL_TEXTURE_2D, 0, GL_DEPTH_COMPONENT24, fbo->width, fbo->height, 0, GL_DEPTH_COMPONENT, GL_FLOAT, NULL);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
      glFramebufferTexture2D(GL_FRAMEBUFFER, GL_DEPTH_ATTACHMENT, GL_TEXTURE_2D, fbo->depthTexture, 0);
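      (A hedged aside, not from the original post: one common pitfall with GL_CLAMP_TO_EDGE on a shadow map is that lookups outside the map smear edge depths across the scene; clamping to a white border instead makes out-of-range lookups compare as "not in shadow".)
      const GLfloat border[] = { 1.0f, 1.0f, 1.0f, 1.0f };
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_BORDER);
      glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_BORDER);
      glTexParameterfv(GL_TEXTURE_2D, GL_TEXTURE_BORDER_COLOR, border);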