which one should I use: Cg vs HLSL



Hi. Would someone here know what the state of Cg is? I am just starting to program shaders and would like to hear from someone which language I should use: Cg or HLSL. I know they are internally much the same language. My actual question is this: I would like to use Cg (because of its C linkage), but I fear it is not being supported by the rest of the industry and may not live long.

I thought nVidia had kinda given up on Cg (the stream of Cg press releases dried up, and more of nVidia's material started covering HLSL), since it has the same syntax as HLSL anyway... then last week I saw that the PS3 is going to use Cg. So it looks like Cg might stick around for a bit.

They aren't quite the same internally: Cg is API-independent, while (I believe) HLSL only works with DirectX.
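For instance, with the Cg runtime the same shader source can be loaded through an OpenGL back end (there is a cgD3D9 back end as well). A rough sketch of the GL path; the file name "simple.cg" and entry point "main" are just placeholders and error checks are omitted:

#include <Cg/cg.h>
#include <Cg/cgGL.h>

// Minimal sketch: load a Cg vertex program through the OpenGL runtime.
void LoadCgVertexProgram()
{
    CGcontext context = cgCreateContext();
    CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);       // best vertex profile for this GPU
    CGprogram program = cgCreateProgramFromFile(context, CG_SOURCE, "simple.cg",
                                                profile, "main", NULL);
    cgGLLoadProgram(program);        // hand the compiled program to the driver
    cgGLEnableProfile(profile);      // enable the vertex profile
    cgGLBindProgram(program);        // bind it before drawing
}

Swapping the cgGL calls for their cgD3D9 equivalents is how the same Cg source ends up API-independent.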

Well, if you're making a pure Direct3D application, I'd go with HLSL, since it's tied in with Direct3D much more closely and, of course, vendors other than NVIDIA (e.g. ATI) support it better than Cg -_^.

If you want to do some kind of API abstraction layer, then Cg is better. Also, use effect files as well as Cg/HLSL. I still can't get over how ****ing cool they are :D.
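For what it's worth, here is roughly what the render loop looks like with the D3DX effect framework. This is only a sketch: "Lit" is a placeholder technique name, the ID3DXEffect is assumed to have been created already, and error checks are left out.

#include <d3dx9.h>

// Rough sketch of rendering with an ID3DXEffect.
void DrawWithEffect(ID3DXEffect* effect)
{
    effect->SetTechnique("Lit");               // "Lit" is a placeholder technique name
    UINT passes = 0;
    effect->Begin(&passes, 0);                 // returns how many passes the technique has
    for (UINT p = 0; p < passes; ++p)
    {
        effect->BeginPass(p);
        // issue draw calls here; call CommitChanges() if parameters change mid-pass
        effect->EndPass();
    }
    effect->End();
}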

Either way, the actual shader syntax is pretty much the same so if you want to switch API later, your shaders should still work right off the bat.

Actually, the very reason that HLSL is tied in very closely to D3D is the reason I would want to avoid it. I use D3D myself, but there is still no denying (for me at least) that it is overly, and quite unnecessarily, complicated. (MS does a good job of bringing things quickly to market, but often because of this approach their programming model is not the most refined one.)

Maybe most of you would agree with me that the shader setup code in D3D is extremely complicated. You have to write the shader, then assemble it, then create it, and then set it. It should have been just one function call. And before that, there is all the overhead of specifying the vertex declarations using unwieldy macros.
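Just to illustrate what I mean, here is a rough sketch of the D3D9/D3DX path as I understand it; "basic.vsh" and the entry point "main" are placeholder names and error handling is left out.

#include <d3d9.h>
#include <d3dx9.h>

// Sketch of the separate compile / create / set steps being described.
void SetupVertexShader(IDirect3DDevice9* device)
{
    ID3DXBuffer* code = NULL;
    D3DXCompileShaderFromFile("basic.vsh", NULL, NULL, "main", "vs_2_0",
                              0, &code, NULL, NULL);                        // compile
    IDirect3DVertexShader9* shader = NULL;
    device->CreateVertexShader((DWORD*)code->GetBufferPointer(), &shader);  // create
    device->SetVertexShader(shader);                                        // set before drawing
    code->Release();
}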


I think this is because they did not do it afresh, and (still) are trying to merge it with the existing API. Instead, if they were to define the vertices as C structs and let the vertex shader use the vertex declaration directly from the code, I think there would be no need for all of the above (unnecessary) complexity.

In my opinion, this whole section needs a revision. Maybe one of these days (soon) I will do a proposal write-up and post it here (somewhere) for comments about how to merge shader setup and the API.

But as a very general outline, here is how I think it should look.

There should be no need to define vertex formats using complicated enums and structure formats. Instead, if we change our viewpoint, a vertex should be defined in the shader, and the GPU should be represented by an object model.

Then a standard C-like high-level language should be used to access this object model. (That is in fact why I was more interested in using Cg in the first place.) In fact, we should be able to access the GPU object model right from within our C/C++ app. Just instantiate the GPU object, pass it the vertex (which has been declared as a simple struct in your code using the built-in vector types), and let the shader code handle it. There is no need for semantics now. The shader has access to an OUTPUT vertex format. This can be declared simply as a const struct in the 3D API, and will depend on the current GPU generation and shader version being used.

The pixel shader then can use these values directly and so on and so forth.
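To make that concrete, here is a purely hypothetical sketch. None of these classes exist in D3D, OpenGL, or any other current API; they only illustrate the shape of what I am proposing.

#include <cstddef>

// Purely hypothetical -- illustrative only.
struct float3 { float x, y, z; };                        // the assumed built-in vector type
struct Vertex { float3 position; float3 color; };        // a plain C struct, no FVF/declaration

struct VertexShaderUnit
{
    // the "shader" is just a function the VS unit is told to run for each vertex
    template <typename ShaderFunc>
    void Execute(ShaderFunc shader, const Vertex* verts, size_t count)
    {
        for (size_t i = 0; i < count; ++i)
            shader(verts[i]);
    }
};

struct GPU
{
    VertexShaderUnit vertexShader;   // each pipeline unit exposed as its own object
    // ... pixelShader, rasterizer, etc. as more units become programmable
};

// usage: GPU gpu; gpu.vertexShader.Execute(MyVertexShader, vertices, vertexCount);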

Eventually more and more GPU areas will be made programmable, and we cannot keep inventing new instructions. Right now shader programming is being treated as a separate thing from the 3D API. They should be merged, and the real easy way to do that is to expose the GPU as an object model.

I would hope that, if this makes any sense, the powers that be take it up and refine it. In short, from a programming point of view, this whole operation (shader programming vs. app programming) should not be two separate areas. And it is perhaps not too difficult to do that, either.

This way we would also get rid of the ever-changing shader instruction situation (instructions being dropped, new ones being added, only to be dropped again in the next version, and so on).

Thanks

Quote:
Original post by Namethatnobodyelsetook
I thought nVidia had kinda given up on Cg (the stream of Cg press releases dried up, and more of nVidia's material started covering HLSL), since it has the same syntax as HLSL anyway... then last week I saw that the PS3 is going to use Cg. So it looks like Cg might stick around for a bit.

This is undoubtedly because the PS3 is going to have an NV core, and NVIDIA GPUs are good at Cg [smile]. I'm pumped to get these new consoles, they sound amazing.

Quote:

Maybe most of you would agree with me that the shader setup code in D3D is extremely complicated. You have to write the shader, then assemble it, then create it, and then set it. It should have been just one function call. And before that, there is all the overhead of specifying the vertex declarations using unwieldy macros.

I don't think so. It's no more complicated than creating other D3D interfaces, like textures, surfaces, buffers, or meshes. You obviously can't have the shader creation call set it on the device, because most applications use more than one shader.

And if there are no vertex declarations, how is D3D going to know what the structure of your vertex buffer is, in order to pass that data to the vertex shader? You certainly can't assume that there is going to be the same vertex input for every vertex shader.

I see what you mean when you say the vertex structure should be defined only in the shader, but the application itself needs to know that information, too. Your vertex buffer goes through several types of validations before rendering, to prevent any GPU crashes. Previously, you had to restart your entire system upon a GPU crash, but some new cards have a nice recovery feature.

And isn't the only macro you need for vertex declarations D3DDECL_END()?
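For reference, a declaration is only a few lines anyway; something roughly like this (position plus diffuse color, a sketch that hasn't been compiled, with error checks omitted):

#include <d3d9.h>

// Sketch of a D3D9 vertex declaration for a position + diffuse-color vertex.
void CreateDeclaration(IDirect3DDevice9* device, IDirect3DVertexDeclaration9** decl)
{
    const D3DVERTEXELEMENT9 elements[] =
    {
        { 0,  0, D3DDECLTYPE_FLOAT3,   D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0 },
        { 0, 12, D3DDECLTYPE_D3DCOLOR, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_COLOR,    0 },
        D3DDECL_END()                    // the one macro in question
    };
    device->CreateVertexDeclaration(elements, decl);
}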

Quote:

the GPU should be represented by an object model.

Isn't that what IDirect3DDevice is?

Quote:
Original post by AQ
Actually, the very reason that HLSL is tied in very closely to D3D is the reason I would want to avoid it.

Quote:
Original post by AQ
Right now shader programming is being treated as a separate thing from the 3D API. They should be merged, and the real easy way to do that is to expose the GPU as an object model.


I think these statements are contradictory. In any case, Cg and HLSL were designed to have nearly the same syntax, so you'll not see the improvements you're looking for in Cg over HLSL.

Quote:
Original post by AQ
Maybe most of you would agree with me that the shader setup code in D3D is extremely complicated. You have to write the shader, then assemble it, then create it, and then set it. It should have been just one function call.


The process of compiling HLSL shaders is similar to compiling C++ code. It takes a while to compile shaders so that they come out very optimized. It has been publicly stated that if our HLSL compiler does not match or exceed hand-optimized code then it's a bug with our compiler that we want to know about. With new DX9 games coming out that have thousands of shaders, it's not realistic to compile the shaders to this level of performance while the game is running.

After the offline process of compiling your shaders, while your game is running you have to tell the graphics card to load the compiled shader code (i.e. create the shader interface using pre-compiled shader data). This puts the shader code in the graphics card's memory cache so it's super fast to switch to it. Then, once you create all the shaders your engine will need for the current level, you render all objects and materials by switching between your pre-loaded shaders. This "set shader" call is fast for the graphics card because all the shader object code is already loaded. But if you want to keep it simple and make the compile and create happen in one API call, then you do what the D3D samples do and use D3DXCreateEffectFromFile.
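Roughly like this; the file name "material.fx" is a placeholder, the flags are left at defaults, and error handling is omitted:

#include <d3dx9.h>

// Sketch: compile + create in a single call via the effect framework.
void LoadEffect(IDirect3DDevice9* device, ID3DXEffect** effect)
{
    D3DXCreateEffectFromFile(device, "material.fx", NULL, NULL,
                             0, NULL, effect, NULL);
}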

Quote:
Original post by AQ
Instead, if they were to define the vertices as C structs and let the vertex shader use the vertex declaration directly from the code, I think there would be no need for all of the above (unnecessary) complexity.


The process of defining a vertex buffer and filling it is necessary to put data directly in the graphics card's memory so it doesn't have to travel over the AGP bus every frame.
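In other words, you fill the buffer once and the card reuses it every frame, something along these lines (a sketch only; the Vertex layout is just an example and error checks are omitted):

#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z; DWORD color; };   // example layout only

// Sketch: create a write-only vertex buffer and fill it once, up front.
void FillVertexBuffer(IDirect3DDevice9* device, const Vertex* src, UINT count,
                      IDirect3DVertexBuffer9** vb)
{
    device->CreateVertexBuffer(count * sizeof(Vertex), D3DUSAGE_WRITEONLY, 0,
                               D3DPOOL_DEFAULT, vb, NULL);
    void* dst = NULL;
    (*vb)->Lock(0, 0, &dst, 0);
    memcpy(dst, src, count * sizeof(Vertex));    // one copy; reused every frame afterwards
    (*vb)->Unlock();
}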

[Edited by - jasonsa on March 18, 2005 4:37:30 AM]

Now that I have got two replies, I thought I would write something to clear up what I am trying to convey.

I do eventually intend to do a complete write-up on my proposal. But here, in a nutshell, is what I am trying to say.

Just because we are treating shader programming as a "new" thing, we are coming up with all sorts of fixes.

Instead, if we were to take a step back and try to visualize the whole 3D pipeline, one approach that comes to MY mind is as follows.

Treat the pipeline as an object model. Expose GPU units as individual objects. Let the programmer work directly with them as if they were writing a C++ program.

Shaders: They should be no more than regular method calls. There is no need to treat them any differently. When you compile your code, all shaders (all 1000 of them, if you will) get compiled and stored in 'an object' form in your executable (come up with a new executable format, if you must). When you invoke them, if they are on the GPU, excellent, they run immediately; if they are not, they are loaded. There is no need to ask the programmer to 'switch' them. The programmer should just tell the VertexShader object (which represents the shader unit) to execute 'this function', and for the VS unit, 'this function' is in fact a shader!

Setting up a vertex buffer in the fixed function pipeline: why do you have to specify the FVF first in the vertex format declaration and then again later when actually executing the drawing? It is this kind of redundancy that needs to be removed.

Semantics: Totally useless, even in today's API. Use a fixed struct. When you are writing a shader, just as you assume the availability of certain built-in functions, why can you not assume that a const VS_OUT struct has already been pre-defined and that it is to this struct that your shader must write its results? For example, a simple VS_OUT struct can be defined as


struct VS_OUT
{
    float3 position;   // written by the shader, mapped to the position output register
    float3 color;      // written by the shader, mapped to the color output register
};

const VS_OUT vsOut;    // pre-defined by the API/compiler; the shader fills in its members


and the shader writer needs to fill in the values of this struct at the end of the shader. During compilation, the compiler can map this output to the corresponding VS output registers.

Yes, I do think semantics are EXTREMELY bad because 1) they are a new thing that does not fit in nicely with existing languages, and 2) they are restrictive, conveying a sense that you can only have vertices with these particular meanings.


Circlesoft has mentioned that we already have the D3DDevice. But that is really not an object model. It is a simple encapsulation where you are still treating the device as a whole as one big state machine. What I am proposing is to expose each pipeline unit to the programmer. If nothing else, it will break down the complexity of the device as a whole and make the API more understandable. At the same time, as more and more pipeline units become programmable, we can retain the same notion of accessing them via functions and changing their behavior.

In fact, it is with this goal of making the whole pipeline programmable eventually that we must approach the idea and design the interfaces with exactly that in mind.

I am already working on just such a thing. I intend to get it to the point where I can do a proof-of-concept and then I would want the industry to look at it. I strongly believe that as the whole GPU becomes programmable (eventually), we need to change our view of how our 3D APIs work with the chip in general, and maybe literally redesign them with that aspect in mind.
