muhkuh

OpenGL Help needed with designing programmable fragment processing support


Hi there. First I have to say that I come from DirectX 9 with .fx effect files. Now I have to develop and improve a graphics engine that uses OpenGL (fixed function only at the moment, and it looks really crappy). The hardware platform is quite a moving target; the only sure thing is that it will always be low end (an FX5200 at the moment). So I need:

- a program that is as generic as possible
- the ability to easily optimize for specific hardware

In DirectX I would have used .fx effect files, because they let me mix HLSL and DirectX shader assembly easily. I don't think this works with GLSL, so I would have to handle it in the engine code instead. Furthermore, when programming for DirectX 8 class hardware like a GeForce4 I feel a bit lost, as there is no equivalent to ps1.1 and I would have to use vendor-specific extensions. This might not be a problem, since modern low-cost cards do support GLSL and ARB_fragment_program, but I can never be sure what the guys from the purchasing department will order.

I'd really appreciate suggestions or corrections if I overlooked something. Thanks in advance.

Markus
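
(For illustration: a minimal sketch of how such a fallback chain could be detected at startup, assuming a GL 1.x/2.x context already exists. ShaderPath, hasExtension and chooseShaderPath are made-up names, not part of any engine mentioned in the thread.)

// Pick a fragment-shading code path at startup by checking the
// extension string. Assumes a current GL context.
#include <GL/gl.h>
#include <cstring>

enum ShaderPath { PATH_FIXED_FUNCTION, PATH_ARB_FP, PATH_GLSL };

// True if 'name' appears as a whole, space-delimited token in the
// GL_EXTENSIONS string (plain strstr would match prefixes too).
static bool hasExtension(const char* name)
{
    const char* exts = reinterpret_cast<const char*>(glGetString(GL_EXTENSIONS));
    if (!exts) return false;
    const std::size_t len = std::strlen(name);
    for (const char* p = exts; (p = std::strstr(p, name)) != 0; p += len)
        if ((p == exts || p[-1] == ' ') && (p[len] == ' ' || p[len] == '\0'))
            return true;
    return false;
}

ShaderPath chooseShaderPath()
{
    // Prefer GLSL, fall back to ARB assembly, then to fixed function.
    if (hasExtension("GL_ARB_shading_language_100") &&
        hasExtension("GL_ARB_fragment_shader"))
        return PATH_GLSL;
    if (hasExtension("GL_ARB_fragment_program"))
        return PATH_ARB_FP;
    return PATH_FIXED_FUNCTION;
}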

Well, from what I've heard, Cg only produces good code for nVidia cards. So I would eventually have to optimize code for all the ATI cards, which would eat up the advantage of being able to handle older nVidia cards.

I wouldn't say it only produces good code for nVidia cards. There might be *some* difference, but I imagine it's negligible...

Otherwise, you'll have to write a new code path and use NVParse or something for NV20 cards.
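
(For concreteness, a rough sketch of that nvparse route. nvparse ships with the NVIDIA SDK and parses register combiner / texture shader scripts into GL state; the function signatures and the rc1.0 grammar below are from memory, so check them against the SDK's nvparse headers and examples before relying on them.)

// Hypothetical sketch: driving NV_register_combiners on an NV20-class
// card through nvparse. The rc1.0 script modulates texture 0 with the
// primary color; treat the exact grammar as illustrative.
#include <nvparse.h>
#include <GL/gl.h>
#include <GL/glext.h>
#include <cstdio>

void setupModulateCombiners()
{
    nvparse(
        "!!RC1.0\n"
        "{\n"
        "  rgb { spare0 = tex0 * col0; }\n"
        "}\n"
        "out.rgb = spare0;\n"
        "out.a = unsigned_invert(zero);\n");

    // nvparse reports problems as a NULL-terminated list of strings.
    for (char* const* err = nvparse_get_errors(); err && *err; ++err)
        std::fprintf(stderr, "nvparse: %s\n", *err);

    // nvparse only sets the combiner state; enabling is still our job.
    glEnable(GL_REGISTER_COMBINERS_NV);
}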

Quote:
Original post by muhkuh
Well, from what I've heard, Cg only produces good code for nVidia cards. So I would eventually have to optimize code for all the ATI cards, which would eat up the advantage of being able to handle older nVidia cards.


When you use the ARB code path, Cg emits generic shader code that runs on any hardware exposing ARB_fragment_program. Shaders compiled to NVIDIA's own fragment program extensions (NV_fragment_program and its successors) are shorter and more efficient, and can take advantage of newer chips. Cg, however, doesn't support ATI's shader extensions (hardly a surprise), which isn't really an issue if you don't develop for PS1.x-class cards.
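
(Illustration: the Cg runtime can resolve this choice at run time by querying the best supported profile, so NVIDIA chips get their native fragment profile, e.g. fp30, and everything else falls back to arbfp1. A sketch; "shade.cg" and the entry point "mainFrag" are made-up names.)

// Pick the best fragment profile the driver exposes and compile the
// (hypothetical) effect file "shade.cg" against it.
#include <Cg/cg.h>
#include <Cg/cgGL.h>

CGprogram loadFragmentProgram(CGcontext ctx)
{
    // Best profile available at run time: fp30, fp20, arbfp1, ...
    CGprofile prof = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(prof);

    CGprogram prog = cgCreateProgramFromFile(
        ctx, CG_SOURCE, "shade.cg", prof, "mainFrag", 0);
    cgGLLoadProgram(prog);
    return prog;
}

// At draw time:
//   cgGLEnableProfile(prof);  cgGLBindProgram(prog);
//   ...draw...
//   cgGLDisableProfile(prof);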
