This topic is now archived and is closed to further replies.

OpenGL: Vertex and Pixel Shaders


Recommended Posts

I was checking the NVIDIA site to get some info on how to run vertex and pixel shaders with the OpenGL SDK (great stuff! a must-download for any developer!) BUT... I currently have an NVIDIA Riva TNT2 (don't worry, I already ordered my GeForce 2; it should arrive in a month or two), and it doesn't say anything about how to use vertex shaders on a card without hardware support for them. Is there a way to use these shaders on my Riva, at least for testing?

I think there is something similar, but the reason you can't get it in software is that it is not standard OpenGL; rather, it is an extension that will only work if your current video card driver supports it. You will need to test for hardware support: if it is there, use the shader; if not, use another method.

Anything that is standard OpenGL will automatically fall back to software rendering if the card does not support it.
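That runtime test boils down to scanning the extension string the driver reports. A minimal sketch in C; `token_in_list` is my own helper name, and in a real program you would pass it the result of `glGetString(GL_EXTENSIONS)` once a GL context is current:

```c
#include <string.h>

/* Returns 1 if `name` appears as a whole space-separated token in
 * `list`. Pass the string returned by glGetString(GL_EXTENSIONS)
 * (with a current GL context) as `list`. */
int token_in_list(const char *list, const char *name)
{
    const char *p;
    size_t len = strlen(name);

    if (list == NULL || len == 0)
        return 0;
    for (p = list; (p = strstr(p, name)) != NULL; p += len) {
        /* Whole-token match: bounded by start/end or spaces, so
         * "GL_NV_vertex_program" doesn't match inside
         * "GL_NV_vertex_program3". */
        if ((p == list || p[-1] == ' ') &&
            (p[len] == ' ' || p[len] == '\0'))
            return 1;
    }
    return 0;
}
```

Then branch on the result: if the extension is there, use the shader path; otherwise fall back to multitexture or plain lighting.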


Edited by - krippy2k on June 28, 2001 1:36:35 AM

OpenGL does have vertex and pixel shaders; they are accessible via extensions. Pixel shaders aren't accessible unless your hardware has support for them, but vertex programs (the OpenGL equivalent of vertex shaders) are emulated by the drivers for older GeForce cards (not sure about TNT cards).

DX8 has some software support for this. Vertex shaders are pretty OK in software, but I think you can see big slowdowns compared to the fixed-function pipeline. Pixel shaders are hopeless without the proper hardware.

In OpenGL, vertex programs are the same thing as vertex shaders on NVIDIA cards. I think "texture shaders" is the term used instead of "pixel shaders". ATI and others are working on an extension that uses functions instead of the assembly language used in the original vertex shaders.

You need a GF3 for full hardware support. Any GeForce can also be used with OpenGL, but some features will be emulated.

Correct me if I'm wrong, but can't anything be software emulated? Pixel shading (I'm assuming this means bump maps) would be very easy in software. You would have a little equation for the angle the pixel should be shaded at, based on its surrounding pixels' heights. I can't describe it all right here, not without some diagrams, but I'm going to write an article about it now so I can work out all the details. This will be a nice project. If you want to read it, email me and I'll tell you when it's done. I'll do it tomorrow.

Ok, here is what I got:

If the values of the map at the pixels around the current pixel are A through H, starting from the top-left and going clockwise, the angles on the X axis and Y axis can be found with these equations:

Y = (H - D) + (C - G)/2 + (A - E)/2
X = (B - F) + (C - G)/2 + (A - E)/2

Either I am horribly wrong or a genius, cause I just scribbled a few things down and got that in about a minute.

You can use the light angles on this data to get the bumps.

Yes, well, you can emulate just about anything you want in software if you write the code. Doesn't necessarily mean it will be fast enough to be useful, though.


Pixel shaders != pixel shading.
I suggest you go read up on pixel shaders at nVidia's developer site for a proper explanation of what they are. They are a damn lot better than bump mapping alone, that's for sure.

