A few questions on GLSlang and videocards...


I am in the market for a new PC. I am looking at a 9800 Pro 256MB card, a 9600XT 128MB, and the 5700 Ultra 128MB (not sure if there is a 256MB 5700?). Anyway, I would like to know: do all of these cards, as of right now, have ARB_fragment_program and GLSlang support? If I get one of these cards next week, I want to code GLSlang that day and/or use fragment programs. Can anyone give any other thoughts on which of the listed cards would be the best to pick up? Thanks
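
To check at runtime whether a given card/driver combination actually exposes these, you can scan the extension string. Below is a minimal sketch in C; it assumes a GL context is already current (created with GLUT, SDL, or whatever), and it treats GL_ARB_shader_objects plus GL_ARB_shading_language_100 as the GLSlang indicator alongside GL_ARB_fragment_program. The helper names are made up for illustration.

#include <GL/gl.h>
#include <stdio.h>
#include <string.h>

/* Sketch: returns 1 if 'name' appears as a whole token in the
   extension string reported by the current GL context. */
static int has_extension(const char *name)
{
    const char *all = (const char *)glGetString(GL_EXTENSIONS);
    const char *p   = all;
    size_t len = strlen(name);

    while (p && (p = strstr(p, name)) != NULL) {
        int starts = (p == all) || (p[-1] == ' ');
        int ends   = (p[len] == ' ') || (p[len] == '\0');
        if (starts && ends)
            return 1;
        p += len;
    }
    return 0;
}

void report_shader_support(void)
{
    printf("ARB_fragment_program: %s\n",
           has_extension("GL_ARB_fragment_program") ? "yes" : "no");

    /* GLSlang support is advertised through these ARB extensions. */
    printf("GLSlang:              %s\n",
           has_extension("GL_ARB_shader_objects") &&
           has_extension("GL_ARB_shading_language_100") ? "yes" : "no");
}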

These are the most recent cards; they probably support ALL the possible extensions, except maybe the one for 8-bit palette indexing, lol.

Guest Anonymous Poster
Get a 9800 Pro 128MB; it's a great card and very reasonably priced now. 256MB is a waste at present.

GF3 & 4 (AFAIK, I don't have one) don't support GLSL. The GeForce FX supports it only with a special (experimental?) driver (56.68, I guess).

GLSL rocks the place - where available :/

I hope they will increase support for this soon.

Regards,
Thomas

GLSL only appears on DX9-class parts, thus the 9500+ and the GeForce FX+ series of cards. Cards before those can't do enough in hardware to make it worthwhile to implement the extensions.

GLSL doesn't compile down to assembly; although the driver may take an intermediate step, for all intents and purposes it compiles practically directly to GPU micro-code.
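
For what it's worth, that's also why the whole compile happens inside the driver: you hand it GLSL source and get back a handle and an info log, never an assembly listing. A minimal sketch with the ARB_shader_objects entry points (assuming something like GLEW has already loaded them; the function name and error handling are mine, not anything from the spec):

#include <GL/glew.h>
#include <stdio.h>

/* Sketch: the driver compiles GLSL source straight to its internal
   representation; no assembly step is ever exposed to the app. */
GLhandleARB build_fragment_program(const char *source)
{
    GLhandleARB shader  = glCreateShaderObjectARB(GL_FRAGMENT_SHADER_ARB);
    GLhandleARB program = glCreateProgramObjectARB();
    GLint compiled = 0;

    glShaderSourceARB(shader, 1, &source, NULL);
    glCompileShaderARB(shader);

    glGetObjectParameterivARB(shader, GL_OBJECT_COMPILE_STATUS_ARB, &compiled);
    if (!compiled) {
        char log[1024];
        glGetInfoLogARB(shader, sizeof(log), NULL, log);
        fprintf(stderr, "GLSL compile failed:\n%s\n", log);
        return 0;
    }

    glAttachObjectARB(program, shader);
    glLinkProgramARB(program);
    return program;   /* activate with glUseProgramObjectARB(program) */
}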

quote:
Original post by GamerSg
I think anything above GF3 should support GLSLANG since it compiles it down to assembly.


That's not true. GF3 and up, and Radeon 8500 and up, support VERTEX program assembly, but GLSlang requires FRAGMENT program assembly, which you get with any GeForce FX card, and with any Radeon 9500 and up.

Of the cards suggested, the 5700 is definitely going to be the slowest. The 9800 Pro 256 seems like the best value for the money -- the 256 MB will absolutely help on titles coming out over the next year, and when you do your own fragment programs with floating-point textures and render targets.

An alternative would be the new GeForce 6800 non-Ultra, which is a bit cheaper (and smaller!) than the Ultra, but which still needs a new power supply in your computer... That gives you the most programmable power (the longest shaders, etc.) on the market today.

quote:
Original post by GamerSg
Just wondering then, how is it that CG and HLSL can work on GF3 cards?


Cg and HLSL separate the "pixel shader" from the "vertex shader", and they let you compile vs_1_1-level vertex shaders while using fixed-function functionality for the fragment processing. GLSlang doesn't let you decouple them like that; it requires a higher base level of functionality.
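
To illustrate that decoupling, here is roughly what it looks like on a GF3-class card with the Cg runtime: only the vertex stage is replaced, and the fragment stage stays fixed-function. (The profile choice and the setup order here are my own sketch, not anyone's posted code.)

#include <Cg/cg.h>
#include <Cg/cgGL.h>

/* Sketch: Cg vertex program on GF3-class hardware, fragment stage left
   fixed-function. Assumes the Cg toolkit and a current GL context. */
void bind_cg_vertex_program(const char *source)
{
    CGcontext ctx = cgCreateContext();

    /* arbvp1 is within reach of GF3 / Radeon 8500 class hardware;
       note that no fragment profile is enabled at all. */
    CGprogram vp = cgCreateProgram(ctx, CG_SOURCE, source,
                                   CG_PROFILE_ARBVP1, "main", NULL);
    cgGLLoadProgram(vp);
    cgGLEnableProfile(CG_PROFILE_ARBVP1);
    cgGLBindProgram(vp);

    /* Subsequent draw calls run the Cg vertex program while the
       fixed-function texture environment handles the fragments. */
}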
