rewolfer

OpenGL glsl shaders on old cards


I just tried my program on my friend's PC, which has a GeForce 6800, and it crashed with the Windows "send/don't send" error dialog. I'm using GLEW to handle extensions, and for shaders I'm just using functions such as glUseProgram, glUniform, glShaderSource, etc. Am I supposed to use other functions to support older architectures? According to GLEW his card only supports OpenGL 1.5, and I'm not sure when GLSL programs started being supported. I googled a bit, and it seems people do use shaders on 6800s, so is it a programming fault, or should he update his drivers? (He's getting back to me on the version number.)

Thanks,
rewolfer

Edit: He apparently has driver version 6.6.7.4 (September 2004), so I told him to get 190.38.

AFAIK GLSL was introduced around 1.5, but only as an extension at first, so you might need the ...ARB versions of those functions in that case.

Edit: according to this Wikipedia article it was introduced as an extension to 1.4 and promoted to core in 2.0, which means that pre-2.0 the functions would probably have the ...ARB suffix.

Thanks, the ARB extension works. :)
If I change my app to use the old extension-provided functions to support older cards, will it affect performance at all on newer cards? Or should I rather bind the appropriate functions at run-time after checking the OpenGL version?

I just tested the new version on my laptop (which is also OpenGL 1.5) and it is really slow. As soon as I turn off the shaders it's fine, though. Does it revert to software emulation, or is the Radeon X700 just too weak?

Not complicated at all: it just does the simple lighting models (Blinn, Phong, diffuse) per-pixel for a directional light, and there's only one light, so one pass. No reason it should be slow, in my opinion.

Well, I'm not entirely sure, but if you use instructions that aren't compatible with the GPU's supported shader model (which is 2.0 for the X700), it might fall back to software emulation, at least for vertex shaders.

Also, if you read many textures in the shaders, the data bus might be the bottleneck.

You could try really simple shaders (e.g. transformation only in the vertex shader and a constant color output in the fragment shader) to check whether your shaders are too complex for the card.

The Radeon X700 should have OpenGL 2.1 drivers.
Even my old Radeon 9500 has GL 2.1 support.

Well, it's a Mobility Radeon, though that shouldn't really make a difference. I went to the ATI site, but none of the drivers they supply recognise my card. So I went to the Acer (laptop brand) site for drivers, but apparently their latest video driver is the one I've got. Strange. Maybe GLEW is just bad at identifying the OpenGL version, because it says my GTX 295 is only OpenGL 2.0 (GLEW_VERSION_2_1 is 0).
The most complicated thing in the shader is the if-statement, so maybe that's the problem... nope, just tried removing the branch, still slow.

AND it draws all the mesh's edges for some reason.

Anyway, I'm not going to waste time fiddling.
Thanks for the help,

rewolfer
