
Archived

This topic is now archived and is closed to further replies.

shaft

GL and Vertex Shader Extension Question


Quick question... If I write a program using NVidia's vertex shader extension, does that mean the program will only run on NVidia cards, or does the extension provide a CPU/software implementation so anyone can run it, regardless of their hardware? I've never used extensions, so I don't know much about 'em. Thanks.

NVidia cards only. Use glGetString to determine the type of card the user has and plan for the worst.
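A minimal sketch of such a check, assuming a current GL context. The helper below is hypothetical (not part of any GL header); it does whole-token matching on the space-separated list returned by `glGetString(GL_EXTENSIONS)`, since a plain `strstr` would wrongly match one extension name as a prefix of another:

```c
#include <string.h>

/* Returns 1 if `ext` appears as a whole space-delimited token in
 * `extList`, 0 otherwise. A bare strstr() is not enough: it would
 * match "GL_NV_vertex_program" inside "GL_NV_vertex_program2". */
int hasExtension(const char *extList, const char *ext)
{
    size_t len = strlen(ext);
    const char *p = extList;
    while ((p = strstr(p, ext)) != NULL) {
        int startOk = (p == extList) || (p[-1] == ' ');
        int endOk   = (p[len] == ' ') || (p[len] == '\0');
        if (startOk && endOk)
            return 1;
        p += len;
    }
    return 0;
}
```

In a real program you would call it with `(const char *)glGetString(GL_EXTENSIONS)` after creating the context, and also look at `glGetString(GL_VENDOR)` and `GL_RENDERER` to plan the fallback path.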

Then how do you use vertex shaders in OpenGL? I want a solution that will work on any system, even if the card doesn't support vertex shaders and it runs on the CPU.

Do I have to switch to DirectX, and further support the monolith that is Microsoft?

quote:

Then how do you use vertex shaders in OpenGL? I want a solution that will work on any system, even if the card doesn't support vertex shaders and it runs on the CPU.

OpenGL doesn't work that way. OpenGL is not designed to fall back to CPU emulation when the hardware is missing a feature; it is designed for fully 100% hardware implementations. Some manufacturers give you a software fallback in their drivers, but only for their own extensions. If you want to take advantage of a specific card's special features, then you have to write special-case code for every major manufacturer (in practice nVidia, ATi, and a generic fallback). OpenGL won't do it for you; you can't just say "hey, I want vertex shaders on any card!" You'll have to invest that time and do it yourself.
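The per-vendor branching described above can be sketched as picking a codepath once at startup from the extension string. This is a sketch, not anyone's actual engine code: the enum and function names are hypothetical, and `strstr` is used for brevity where a whole-token match would be safer.

```c
#include <string.h>

typedef enum {
    PATH_NV_VERTEX_PROGRAM,  /* nVidia NV_vertex_program codepath   */
    PATH_EXT_VERTEX_SHADER,  /* ATi-style EXT_vertex_shader codepath */
    PATH_FIXED_FUNCTION      /* generic fixed-function fallback      */
} ShaderPath;

/* Choose a rendering codepath from the space-separated extension
 * list (as returned by glGetString(GL_EXTENSIONS)). */
ShaderPath pickShaderPath(const char *extList)
{
    if (strstr(extList, "GL_NV_vertex_program") != NULL)
        return PATH_NV_VERTEX_PROGRAM;
    if (strstr(extList, "GL_EXT_vertex_shader") != NULL)
        return PATH_EXT_VERTEX_SHADER;
    /* No shader extension: do the per-vertex math on the CPU
     * and feed the fixed-function pipeline instead. */
    return PATH_FIXED_FUNCTION;
}
```

Each enum value then selects an entirely separate rendering routine; there is no mixing paths per-feature, exactly as the post says.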

Or you could wait for OpenGL 2...

/ Yann

Well, your apps will never run on every card if you want any newer features, since the i81x chips can't run much of anything.

You can code for the nVidia and ATi versions for now and update it when the OpenGL 2.0 spec (or maybe 1.3 or 1.4) comes out. Most games in development are scheduled to ship in 9-18 months, and by that time OpenGL should have an implementation for this in the spec.

>>OpenGL doesn't work that way. OpenGL is not designed to run CPU emulations, if HW is not present<<

actually this is one of the things that differentiates d3d from opengl: correct opengl is meant to emulate a feature in software if the hardware can't do it. d3d ignores it.
apart from my usual rants on shaders, i get the feeling a lot of beginner programmers think they have to use them (i'm not knocking anyone personally).
why do they think they have to use them?
i believe they say to themselves "my terrain engine (or whatever) looks crap compared to such and such game, give me shaders and my engine will look so much better", but folks, it doesn't work that way.
give jeff beck a beat-up acoustic guitar and he'll make good music; give a dude off the street the top-range guitar and all the effects and he won't make good music.
i don't wanna blow my own horn, but the shots below in gotterdammerung were all made by me on a vanta. i think the most advanced thing i used was multitexture.

http://uk.geocities.com/sloppyturds/gotterdammerung.html

quote:

actually this is one of the things that differentiates d3d from opengl: correct opengl is meant to emulate a feature in software if the hardware can't do it

Not really. I have yet to see one single per-fragment feature that is correctly emulated in software while the rest stays in HW. It simply doesn't work; the hardware does not support that. With 99% of advanced features, it's one or the other: full software or full hardware.

Esp. with extensions, there is no general software fallback. If the HW doesn't support a feature, it simply does not define the extension. Period.

Vertex programs could be an exception, since they are software-emulatable. But as I mentioned, this is not the way it works. With some luck, you'll get one manufacturer that supports a software codepath for his *own* shader extensions (eg. vertex program on GF2), but those are incomplete most of the time and still manufacturer-dependent. And there is no, and never will be, intercompatibility between multi-manufacturer extensions. You'll *never* get a software emulation of NV_vertex_program on your Radeon card (at least, not until nVidia merges with ATi).

Oh well, we're all waiting for OGL 2.0...

/ Yann

[edited by - Yann L on May 5, 2002 7:07:01 AM]

>>Not really. I have yet to see one single per-fragment feature that is correctly emulated in software, while the rest is HW. It simply doesn't work, hardware does not support that. With 99% of advanced features, it's one or the other: full software or full hardware.<<

from what i've seen:
dot3 bumpmapping on my vanta
some of the arb_imaging on my gf2mx




http://uk.geocities.com/sloppyturds/gotterdammerung.html

quote:

some of the arb_imaging on my gf2mx


Yes, granted, the imaging extensions are sometimes simulated in software, by reading back the framebuffer and performing the operations on the CPU. But that's because the imaging subset is not an integral part of the fragment pipeline; it can also operate on the FB alone. The accumulation buffer is another such example: it's perfectly software-emulatable (better not talk about performance, though...)

quote:

dot3 bumpmapping on my vanta


Impossible. If a card does not support HW DOT3 in its combiner pipeline (I don't know about the Vanta, but I guess not), then there is *no* way to emulate it in software, other than doing the whole rendering through a total software fallback.

You can't extract pixels from the fragment pipeline, do software stuff on them, and put them back onto the GPU. The hardware does not allow that; there is no GPU-external access point. This is why software emulation of 'advanced' features is very problematic: it's not an API question (D3D has exactly the same problem), it's a hardware limitation. Just as with e.g. the stencil buffer: if your 3D card doesn't support it in HW, then *everything* falls back to software.

/ Yann

[edited by - Yann L on May 5, 2002 4:06:18 PM]

