HLSL or Cg?


I have read on NVIDIA's site (http://www.nvidia.com/view.asp?IO=cg) that they are compatible, and therefore should be somewhat interchangeable. Can anyone with experience recommend one over the other, or is one just as good as the other?

I have experience with neither (well, very little with Cg), so what I'll say might not apply.

I suppose Cg and HLSL are essentially the same thing with some minor stylistic differences. Some will tell you to use one, while others will tell you to use the other. Chances are they both do their job equally well, and whether you pick one or the other depends entirely on your preference. The best thing is probably to take a look at both: learning one will definitely make the other easier to pick up. Then you'll be able to decide which one you like better.
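For instance, a trivial vertex shader like the one below (just a sketch I typed up, the struct and parameter names are mine) should, as far as I understand, compile unchanged with both the Cg compiler and the D3DX HLSL compiler, which gives you an idea of how close the two languages are:

    struct VsOutput
    {
        float4 pos   : POSITION;
        float4 color : COLOR0;
    };

    // Transform the vertex and pass a constant colour through.
    VsOutput main(float4 pos : POSITION,
                  uniform float4x4 worldViewProj,
                  uniform float4 diffuse)
    {
        VsOutput o;
        o.pos   = mul(pos, worldViewProj);   // mul() and float4x4 exist in both languages
        o.color = diffuse;
        return o;
    }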

You'd want to check things like:

- VS and PS versions supported. A couple of months ago Cg didn't have PS 1.4, as far as I recall. I don't know if that's been remedied (with the release of PS 2.0 hardware and all, they should have updated it).

- Optimizations: check the asm output for a few shaders here and there.

- Do you need GL support? Cg supports GL vertex/fragment programs, as far as I recall; HLSL doesn't.

Cheers,
Muhammad Haggag

ATI and Microsoft are pushing HLSL, and NVidia is pushing their Cg. Personally I would go for HLSL, since it seems to be made for all platforms (there are no specific ATI optimizations that I know of) and it produces some pretty good code.

While I do not have much experience with HLSL, I have done a fair amount with Cg.


- Cg supports vertex and pixel shaders up to and including 2.0.
- Cg also works quite well on ATI cards, as well as the nVidia cards.
- Cg supports both OpenGL (nVidia-specific and ARB extensions) and Direct3D 8 and 9.
- The Cg toolkit contains tools and documentation for working on Linux as well as Windows.
- The Cg manual that the toolkit contains is nice.

So, if you want to develop with OpenGL, Cg is definitely the way to go. If you are working with Direct3D, it seems to be a toss-up between the two. Try both and see which you prefer.
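To give a flavour of that, here's a rough sketch (the names are mine) of a single Cg fragment shader that can be compiled to arbfp1 for OpenGL or to ps_2_0 for Direct3D just by picking a different profile; the source itself doesn't change:

    // Modulate a decal texture with the interpolated vertex colour.
    float4 main(float2 uv      : TEXCOORD0,
                float4 vertCol : COLOR0,
                uniform sampler2D decal) : COLOR
    {
        return tex2D(decal, uv) * vertCol;
    }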

-Evan

---------------------------
Those who dance are considered insane by those who cannot hear the music.

Focus On: 3-D Models
Evan Pipho (evan@codershq.com)

quote:
Original post by blue_knight
ATI and Microsoft are pushing HLSL, and NVidia is pushing their Cg. Personally I would go for HLSL, since it seems to be made for all platforms (there are no specific ATI optimizations that I know of) and it produces some pretty good code.



I'm not sure what you mean by all platforms, but the HLSL that Microsoft and ATI are pushing is only available in DirectX 9 (not OpenGL) and only on Windows. Microsoft recently withdrew from the OpenGL Architecture Review Board, once again signalling their commitment to their own platform to the exclusion of all others.

On Cg/HLSL compatibility (http://www.nvidia.com/view.asp?IO=cg):
"Cg and Microsoft High Level Shader Language (HLSL): Cg was developed by NVIDIA in close collaboration with Microsoft, ensuring compatibility with DirectX 9.0 and HLSL. In addition, Cg will maintain compatibility with future versions of HLSL as they are made available."

The current Cg compiler (1.1) can compile vertex and fragment programs for these DirectX/OpenGL targets:
arbfp1, ps_2_x, ps_2_0, dx9ps2, fp30, vs_2_x, vs_2_0, dxvs2, arbvp1, vs_1_1, dx8vs, vp20, vp30, ps_1_3, ps_1_2, ps_1_1, dx8ps, fp20

In addition, the source code for the Cg compiler front-end is freely available from NVidia to encourage vendors to write their own back-end code generators optimized for their hardware.

A chief source of complaints about Cg is ATI. ATI's claims that Cg is 'proprietary' and optimized for NVidia hardware are mostly political -- Cg is essentially equivalent to HLSL, which ATI embraces, and the Cg front-end is freely available for ATI to attach their own HLSL code-generator back end to. One must also bear in mind that ATI is pushing their own 'proprietary' (DX9-only so far, but working on GL2SL) shader development tool called RenderMonkey.

So in conclusion, if you want to do any cross-platform shader development, Cg is the only game in town right now -- HLSL is DX9/Windows only, RenderMonkey is DX9/Windows only, and the OpenGL 2.0 shading language (GLslang) isn't official yet and won't be compatible with anything else.

Thanks for the replies! I was originally going to use Cg, but my development machines are all using ATI 9700 cards, and as was mentioned, ATI and Microsoft are really pushing HLSL. I am glad to hear that someone has heard of Cg running on ATI cards, which of course does make sense, seeing as Cg is still compiled to shader assembly. I will try to start with Cg, since I have found a lot more resources covering it.

quote:
Original post by terminate
- Cg also works quite well on ATI cards, as well as the nVidia cards.


No it doesn't. I've used Cg with a Radeon 9700. Cg generates instructions with unnecessary write and read masks and swizzles, which expand to multiple instructions on the Radeon 9700. In many cases the program wouldn't fit because of it.

quote:
Original post by kronq
ATI's claims that Cg is 'proprietary' and optimized for NVidia hardware are mostly political



No, they're not. Cg IS optimized for nvidia, and the asm it generates is incredibly poor on radeons.

There's no really good choice. If you're doing DX development, write in HLSL. If you're doing OpenGL development, write in asm.

Even NVidia admits that Cg was geared towards their own chipsets and not ATI's. Stick with HLSL. Also, check out RenderMonkey - it helps you develop your shaders.

HLSL it is then! I can flip-flop with the best of them! If Cg doesn't work well with ATI then that is a problem I would rather avoid. Also, my app is using DX9, so I don't have any platform issues. Now to just go dig up as much HLSL documentation as I can find! Again, thanks for the input.

quote:
Original post by sjelkjd
Original post by kronq
ATI's claims that Cg is 'proprietary' and optimized for NVidia hardware are mostly political

No, they're not. Cg IS optimized for nvidia, and the asm it generates is incredibly poor on radeons.



Cg, like HLSL, is a language specification -- if ATI wanted to play along, they could probably hook their HLSL back end up to the freely available Cg compiler front-end in a weekend. For this reason I still maintain that ATI's reluctance to work with Cg is mostly political, because it is fundamentally the same language as HLSL (so claims that the language is optimized for NVidia are silly).


[edited by - kronq on March 25, 2003 9:38:22 PM]

Would nvidia distribute ATI's backend on their webpage, with Cg? Would nvidia notify ATI of developments in Cg? Would nvidia incorporate features that are ATI-specific in Cg?

It's not as simple as ATI "playing along."

I'd rather see a standard come out of the ARB than any individual company's product becoming the standard.

quote:

so claims that the language is optimized for NVidia are silly

The "language" is controlled by nvidia. The only implementation is optimized for nvidia hardware. And I couldn't care less about the distinction between the compiler and the language, since currently there is none.

quote:
Original post by sjelkjd
Would nvidia distribute ATI's backend on their webpage, with Cg? Would nvidia notify ATI of developments in Cg? Would nvidia incorporate features that are ATI-specific in Cg?

It's not as simple as ATI "playing along."

The issues you bring up are what I was referring to when I said their reasons were mostly 'political' -- i.e. not technical. What level of support is NVidia willing to go to in order to make Cg truly vendor-neutral? I don't know. NVidia wants market dominance as much as anyone else does, but I think that inviting other vendors to make their own Cg back-end was an honest effort on their part -- I think they know that developers won't tolerate a language that favors one vendor (as evidenced by this whole discussion). They did submit Cg to become the OpenGL 2.0 shading language (which would have given control of the language to the ARB) but were turned down.

quote:

I'd rather see a standard come out of the ARB than any individual company's product becoming the standard.

So would I -- but even more than that I'd like to see an API-independent shading language. Microsoft won't participate in it (I don't think many would argue that), nobody trusts NVidia to do it, and the ARB doesn't want to do it. All these languages are getting to the point where they're functionally equivalent (and nearly syntactically equivalent) to the RenderMan Shading Language, which has been around since the late '80s. They're all heading to the same place, and I wish they could step up and figure out how to make a common language.

quote:

The "language" is controlled by nvidia. The only implementation is optimized for nvidia hardware. And I could care less about the distinction between the compiler and the lanugage, since currently there is none.



HLSL is controlled by Microsoft. The only implementations run under DirectX9 in Windows. I will go for a cross-platform, cross-API language and hope for better hardware-specific optimizations in the future before relying on a DX9/Windows-only solution.

Cg has optimizations for NVidia, but it also has standard profiles to compile to. With one shader you can compile a version optimized for NVidia and a standard version as a fallback. NVidia also said that they would like other vendors to make their own compiler optimizations for their own cards. Just because NVidia made Cg doesn't mean it only works with their stuff.
As far as I can tell, Cg is the best attempt by anyone so far to create a language that works with everything. I don't know whether HLSL even supports optimized compilation; I know Cg does. I doubt that the GL HLSL will support that either. I don't see any problems with Cg at all. It allows you to compile standard shaders, which will work as well as HLSL shaders, and it lets you do optimized ones for people who happen to own NVidia cards. I fully support Cg.

When I said platforms I really meant to say 3D video cards, my mistake. HLSL is NOT multi-platform, just Windows. However, given the choice between the two, obviously DirectX is being used. Given that, and the desire to have the best performance on the widest range of hardware (that runs on Windows and DirectX), I think HLSL is the best choice. If using OpenGL I would much rather use a standard OpenGL shading language (OpenGL 2.0, eventually) than a single chipset manufacturer's shading language. I know that NVidia has been trying hard to make sure it's compatible, but in the end it's still going to be biased in their favor.

Cg is definitely nVidia-biased. That isn't too bad, especially if you have an nVidia card. From personal experience, Cg works a lot better with my FX than with my 9700. That's why I wouldn't use Cg in production-level code intended for multiple platforms. But it's a perfect tool for shader prototyping, where performance is still secondary. Once the shader is completed, I usually rewrite it in ASM anyway.

For DirectX, I would suggest using HLSL. For OpenGL, Cg is OK until GLslang in GL 2.0 comes out.

People on the DirectX mailing list keep asking "but why can't I do XXXX in HLSL?" (e.g. loops), and somebody answers "well, coming soon".
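For example (this is just a made-up shader of mine), as far as I can tell a loop like the one below only gets through the ps_2_0 target because the compiler unrolls it; the iteration count has to be a compile-time constant, and truly data-dependent loops aren't possible yet:

    // Tiny box blur: the HLSL compiler unrolls the loop for ps_2_0,
    // so the bound must be a literal constant.
    float4 blur(float2 uv : TEXCOORD0,
                uniform sampler2D tex,
                uniform float2 offset) : COLOR
    {
        float4 sum = 0;
        for (int i = 0; i < 4; i++)            // unrolled at compile time
            sum += tex2D(tex, uv + i * offset);
        return sum * 0.25;
    }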

As for Cg generating assembler that is bad for Radeons: I remember a recent Carmack interview where he said that the programmable pipeline of Radeons basically runs everything you throw at it OK, while with NVidia's GPUs you have to be much more careful about instruction sequences and so on; I trust him enough on that.

So it's Cg for me, at least for now.

And, of course, I _plan_ to rewrite some of the longer shaders in assembler, but knowing how we're nowhere near vertex-shader bound, it's possible I won't find time for that (in my experience, once you move past the initial "let's see how many triangles we can drive" phase, it's always fill rate that's limiting).

[edited by - assen on March 27, 2003 5:19:56 AM]

If the OpenGL 2 spec includes standard X, then any GPU manufacturer with enough sense will support it. If DirectX also supports it, then the onus is on the programmer. At this point, since all the primitive operations are common, the languages are functionally equivalent. One standard will be favoured over another because of ease of use, etc. If hardware reaches a performance ceiling, faster, lower-level languages will take over, but then there is no point in having a superficially different way of writing the code.

CONCLUSION: I hope that in the near future, hardware advancement will temporarily get stuck in the mud.

My greatest worry is that necessary backwards compatibility will bloat drivers to the point where they can't be efficient. Then the specs will have to be rewritten from scratch at the expense of older software. It happened to DOS, but for all the wrong reasons.

********


A Problem Worthy of Attack
Proves Its Worth by Fighting Back

[edited by - walkingcarcass on March 27, 2003 8:44:22 AM]

quote:
Original post by PaleRaider
Thanks for the replies! I was originally going to use Cg, but my development machines are all using ATI 9700 cards, and as was mentioned, ATI and Microsoft are really pushing HLSL. I am glad to hear that someone has heard of Cg running on ATI cards, which of course does make sense, seeing as Cg is still compiled to shader assembly. I will try to start with Cg, since I have found a lot more resources covering it.


Cg does not understand the ps_2_0 (a.k.a. R9700) model very well, and you will have a good deal of difficulty producing code that will pass the validator. Use HLSL for ATI (and D3D).
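With HLSL you also state the target explicitly, so the compiler validates against ps_2_0 up front. For instance, through the D3DX effect framework a minimal effect file looks something like this (a rough sketch, the names are mine):

    // Made-up example -- just illustrating how the ps_2_0 target is
    // specified when compiling HLSL in a DX9 effect file.
    sampler2D baseMap;

    float4 TintPS(float2 uv : TEXCOORD0) : COLOR
    {
        return tex2D(baseMap, uv) * float4(1.0f, 0.9f, 0.9f, 1.0f);
    }

    technique Tint
    {
        pass P0
        {
            PixelShader = compile ps_2_0 TintPS();
        }
    }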


quote:
Original post by sjelkjd
Would nvidia distribute ATI's backend on their webpage, with Cg?


Why would they need to? Furthermore, why would ATI want them to? A link from the nVidia site to the ATI site would probably be enough; besides, if Cg does become vendor-independent, it would probably move to a site more separate from nVidia's main one (still run by nVidia, but less 'branded', in an effort to promote equal use by other manufacturers).
quote:

Would nvidia notify ATI of developments in Cg?


My guess would be yes. Because it's an open standard.
quote:

Would nvidia incorporate features that are ATI-specific in Cg?


You're damn right they would. Think about it - an ATI-specific feature is something that nVidia lacks. So they'll add it to the language - and then add support for it to their cards, bringing the feature sets closer to parity.

In any case, it's probably better to write separate versions of shaders for ATI and nVidia cards, and just load the right one in at runtime.
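For what it's worth, the difference between the two versions often boils down to precision. A rough sketch (made-up shader, file and parameter names are mine): half types for the FX path, since the FX series runs half precision much faster, and plain float for the Radeon path, where the hardware runs 24-bit floats no matter what you write:

    // specular_nv.cg -- half precision for GeForce FX
    half4 main(half3 n : TEXCOORD0, half3 l : TEXCOORD1,
               uniform half4 lightColor) : COLOR
    {
        return lightColor * pow(saturate(dot(n, l)), 16.0);
    }

    // specular_ati.cg -- plain float; the 9700 runs 24-bit floats regardless
    float4 main(float3 n : TEXCOORD0, float3 l : TEXCOORD1,
                uniform float4 lightColor) : COLOR
    {
        return lightColor * pow(saturate(dot(n, l)), 16.0);
    }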

Superpig
- saving pigs from untimely fates, and when he's not doing that, runs The Binary Refinery.

quote:

My guess would be yes. Because it's an open standard.

It's an "open" standard? What ISO committee controls the standard? Oh, I forgot, nvidia controls it. That's not an open standard. An open standard is something like OpenGL.

Nvidia has no interest in implementing ATI-specific features, otherwise there would be a 24-bit data type in Cg.
