Gamer Gamester

OpenGL GLSL... will it limit the number of users?


Recommended Posts

Until recently I had been using the older, fixed-function pipeline of GPUs through OpenGL. Recently I've become interested in shaders. I have 2 questions that I haven't found satisfying answers to in my searches:

1) Is GLSL a good choice for programming shaders? I want them to be cross-platform, so that pretty much eliminates HLSL, unless I do porting. So basically, is Cg better, or does it even matter?

2) More important to me: if I program graphics predominantly using GLSL, the GPU must support this, correct? Anybody know where to find info on what percentage of current potential users this would eliminate (because they're running older hardware)? I'm not curious about gamers exclusively, but computer users in general, as I want to use GLSL in some more general applications.

Thank you.

OK, I just got into shaders myself a few days ago, so bear that in mind as I could be talking a load of crap. Here goes:

For ages I put shaders off because my card didn't support GLSL. I kinda knew my GPU had some programming functionality (it supported OGL 1.5) but I had no idea how to use it. I read lots and lots of info and everyone kept referring to 'Cg', and I thought it was some proprietary nVidia approach to shading (goodbye ATI etc.).

Turns out that Cg is a shading language developed by nVidia and Microsoft, and nVidia created a compiler for it that compiles (statically or dynamically) to whatever shading language you desire. The different assembly commands for each vendor are handled by profiles. Different profiles offer more advanced/basic functionality depending on what features that profile uses, and in theory writing for one profile should work on all other profiles that support that (sub)feature set. E.g. writing for the ARB profile is a pretty safe bet, as any video card that supports OGL and shaders will run it (to a greater or lesser degree... AFAIK it can be driver dependent as to how successfully/transparently the required features are implemented).

Best bet is to go to the nVidia site and check out the Cg manual and tutorials. The manual is actually fairly comprehensive and the supplied docs & examples will definitely get the ball rolling. If you want some basic OGL code to get shaders up and running I can post some code if you wish.

^^^Please note that the above info is as I understand it and could well be incorrect. I have an integrated graphics card on my laptop (ropey at best) and use the ARB profile, which should run on every man and his dog's GPU, so long as it supports shaders (old or new). Hope that helps.
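To make the profile idea concrete, here's a toy sketch in C of falling back from a newer fragment profile to the ARB one. The profile names match real Cg fragment profiles (fp40/fp30/fp20/arbfp1), but the selection logic is purely illustrative - the actual Cg runtime does this for you via functions like cgGLGetLatestProfile:

```c
#include <string.h>

/* Illustrative only: pick the most capable fragment profile the hardware
   reports, falling back toward the lowest common denominator (the ARB
   assembly profile). Not the real Cg API - just a model of the idea. */
static const char *pick_profile(const char *supported[], int n)
{
    /* Preference order: newest first, arbfp1 last as the safe fallback. */
    const char *prefer[] = { "fp40", "fp30", "fp20", "arbfp1" };
    for (size_t i = 0; i < sizeof prefer / sizeof prefer[0]; i++)
        for (int j = 0; j < n; j++)
            if (strcmp(prefer[i], supported[j]) == 0)
                return prefer[i];
    return NULL; /* no fragment shader support at all */
}
```

So a card reporting only arbfp1 still gets a working (if basic) profile, while a newer card gets fp30/fp40 automatically.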

For GLSL you'll need OpenGL 1.5 support at least, which excludes many of the integrated graphics chips like those from Intel which often support OpenGL up to version 1.4.

In OpenGL 1.4 you have ARB shader support (i.e. Shader Model 1.0 or 1.1), so you'd have to either completely rely on assembly shaders or write several shader versions.

Cg will handle this for you since it compiles - as JackTheRapper said - the Cg source into the shader language of your choice (provided it supports the features you used). Thus you can write your shaders in a high-level language (Cg) and have it compiled into a low-level language (assembly, i.e. ARB shaders) or another high-level language (GLSL or HLSL).

Cg even introduces some features that a specific shader model doesn't have. For example, I used static branching in my ARB profile shaders and the Cg compiler handled the branch resolution for me.

The downside of Cg is that you can't use some of GLSL's features like accessing OpenGL state variables in your shaders directly. With Cg shaders you'd have to pass them in via uniform parameters.

Hope that helps.

Thomas
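A rough C sketch of what that runtime version check might look like. In a real program the string would come from glGetString(GL_VERSION); here it's just passed in by the caller, and (as noted elsewhere in the thread) a 1.4 context can still expose GLSL via extensions, so treat this check as conservative:

```c
#include <stdio.h>

/* Sketch: decide whether a reported OpenGL version string (e.g.
   "1.5.0 - Build 7.14.10.4926") is at least want_major.want_minor.
   In real code the string comes from glGetString(GL_VERSION). */
static int version_at_least(const char *version, int want_major, int want_minor)
{
    int major = 0, minor = 0;
    if (sscanf(version, "%d.%d", &major, &minor) != 2)
        return 0; /* unparseable: assume not supported */
    return major > want_major ||
           (major == want_major && minor >= want_minor);
}
```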

Quote:
Original post by Lord_Evil
For GLSL you'll need OpenGL 1.5 support at least, which excludes many of the integrated graphics chips like those from Intel which often support OpenGL up to version 1.4. In OpenGL 1.4 you have ARB shader support (i.e. Shader Model 1.0 or 1.1), so you'd have to either completely rely on assembly shaders or write several shader versions.
Any recent integrated GPU supports GLSL, certainly from the Intel X3100 onwards. Note that you don't actually require OpenGL 1.5, as long as your card supports the correct extensions.
Quote:
The downside of CG is that you can't use some of GLSL's features like accessing OpenGL state variable in your shaders directly. With CG shaders you'd have to pass them in via uniform parameters.
To be honest, most of that OpenGL state is set to disappear in the future - it's largely left over from the fixed-function pipeline anyway.
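For the extension route mentioned above, a hedged C sketch of checking the extension string. In a real program the string comes from glGetString(GL_EXTENSIONS); GL_ARB_shading_language_100 is the extension that advertises GLSL support on pre-1.5 contexts. A plain strstr() isn't quite enough because one extension name can be a prefix of another, so word boundaries are checked:

```c
#include <string.h>

/* Sketch: look for a named extension in a space-separated extension
   string, the way glGetString(GL_EXTENSIONS) reports them. */
static int has_extension(const char *extensions, const char *name)
{
    size_t len = strlen(name);
    const char *p = extensions;
    while ((p = strstr(p, name)) != NULL) {
        int start_ok = (p == extensions) || (p[-1] == ' ');
        int end_ok   = (p[len] == '\0') || (p[len] == ' ');
        if (start_ok && end_ok)
            return 1;
        p += len; /* prefix match only; keep scanning */
    }
    return 0;
}
```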

Well, as for porting: the OpenGL library only compiles the shader and the GPU gets compiled assembly, and the assembly is the same whether it comes out of GLSL, HLSL or Cg. But the GPU must support the pixel shader target - for example, if a card supports only pixel shader target 2.0 you will be limited in how many samplers and attribute variables you can use, hope you get me. Always check your shaders for the minimum pixel shader target they need. If a card supports pixel shader 2.0 it should run most shaders.
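As an illustration of checking a shader's needs against a target's limits, a small C sketch. The sampler counts follow the D3D-style shader model limits (e.g. 16 samplers for ps_2_0), but treat the table as approximate: real code should query the driver at runtime (e.g. glGetIntegerv with GL_MAX_TEXTURE_IMAGE_UNITS) rather than hard-code numbers:

```c
#include <string.h>

/* Illustrative only: rough per-target sampler limits, D3D-style names.
   Query the real limits at runtime instead of relying on a table. */
struct ps_limits { const char *target; int samplers; };

static int max_samplers(const char *target)
{
    static const struct ps_limits table[] = {
        { "ps_1_1", 4 }, { "ps_2_0", 16 }, { "ps_3_0", 16 },
    };
    for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
        if (strcmp(table[i].target, target) == 0)
            return table[i].samplers;
    return -1; /* unknown target */
}
```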

Quote:
Original post by swiftcoder
Any recent integrated GPU supports GLSL, certainly from the Intel X3100 onwards. Note that you don't actually require OpenGL 1.5, as long as your card supports the correct extensions.

Ah, you're right about the extensions. I'm not sure how recent my integrated GPU is (GM945), but it doesn't support GLSL on my Linux-driven netbook [smile].

Quote:
Original post by Lord_Evil
Quote:
Original post by swiftcoder
Any recent integrated GPU supports GLSL, certainly from the Intel X3100 onwards. Note that you don't actually require OpenGL 1.5, as long as your card supports the correct extensions.
Ah, you're right about the extensions. I'm not sure of how recent my integrated GPU is (GM945) but it doesn't support GLSL on my linux driven netbook [smile] .
Pretty old. I believe the 945 chipset gives you the GMA 950, which has pixel-shader 1.0 in hardware, and no hardware vertex processing at all.

Quote:
Original post by swiftcoder
Quote:
Original post by Lord_Evil
Quote:
Original post by swiftcoder
Any recent integrated GPU supports GLSL, certainly from the Intel X3100 onwards. Note that you don't actually require OpenGL 1.5, as long as your card supports the correct extensions.
Ah, you're right about the extensions. I'm not sure of how recent my integrated GPU is (GM945) but it doesn't support GLSL on my linux driven netbook [smile] .
Pretty old. I believe the 945 chipset gives you the GMA 950, which has pixel-shader 1.0 in hardware, and no hardware vertex processing at all.

Unfortunately there are lots of computers out there that have those Intel GPUs integrated. IIRC the Intel Q35 is even less capable than my 945, but it seems to be the standard GPU of the computers in the office I work in. And since he's targeting not only gamers but "computer users in general", I'd suggest favoring Cg over GLSL for the moment.

Quote:
Original post by Lord_Evil
Unfortunately there are lots of computers out there that have those Intel GPUs integrated. IIRC the Intel Q35 is even less capable than my 945 but it seems to be the standard GPU of the computers in the office I work in. And since he's targeting not only gamers but "computer users in general" I'd suggest to favor CG over GLSL for the moment.
Unfortunately, the reality is that shaders are next to useless on those cards. Even if you do use CG, you will be limited to shader model 1.0 features, and they will be slow enough that you will tend to be better off using fixed function. Even by the time you reach the X3100, where shaders become faster to execute than fixed function, you still have to keep the complexity down much too far to be very beneficial.

Quote:
Original post by swiftcoder
Quote:
Original post by Lord_Evil
Unfortunately there are lots of computers out there that have those Intel GPUs integrated. IIRC the Intel Q35 is even less capable than my 945 but it seems to be the standard GPU of the computers in the office I work in. And since he's targeting not only gamers but "computer users in general" I'd suggest to favor CG over GLSL for the moment.
Unfortunately, the reality is that shaders are next to useless on those cards. Even if you do use CG, you will be limited to shader model 1.0 features, and they will be slow enough that you will tend to be better off using fixed function. Even by the time you reach the X3100, where shaders become faster to execute than fixed function, you still have to keep the complexity down much too far to be very beneficial.


No, the 945 is an SM 2.0 part.
Those Intel chips are slow, as you said; there are benchmarks here and there.
Also, on Windows, don't use OpenGL if you are targeting a wide audience - Intel's GL driver is quite buggy. Use D3D.
