
meeshoo

Cg and OpenGL Shading Language


Hi. I have a half-finished engine that uses OpenGL for rendering, and I'm learning shaders now. The next question might be a stupid one, because I'm not well documented, but this is the fastest way to get an answer: which is better to use, Cg or the OpenGL shading language? I like Cg, from NVIDIA: it has a lot of tools. But I don't have the money to buy a high-performance NVIDIA card, only an ATI one. So my first question is: does Cg generate code only for NVIDIA cards, or is it universal, so programs can run on any GPU? The second one: what are the main differences between these two languages? Sorry for my bad English; waiting for a response.

Mihai

Cg will compile code for both NVIDIA and ATI GPUs. GLSlang (the OpenGL shading language) isn't really that different from Cg; AFAIK the only differences are things like function and keyword names. Currently I would use Cg, as GLSlang isn't supported on NVIDIA cards yet, and Cg can compile shader code for GPUs that can't support GLSlang (such as the GeForce 3 and GeForce 4).

Cg nominally will generate code for ATI cards, but it won't be nearly as fast as it is on NVIDIA cards, because Cg was written with NVIDIA in mind.


"Sneftel is correct, if rather vulgar." --Flarelocke

I believe ATI could supply a backend for the Cg compiler so that ATI cards would get the same advantage. However, I very much doubt they will do this, partly because they haven't already, and mainly because they seem to have thrown in their lot with GLSlang instead.

The Cg compiler can output shader code for whatever profile you ask for (or you can ask it to autodetect). Among those profiles are a handful for various NVIDIA chipsets, plus the OpenGL ARB extensions.
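For illustration, here is a minimal sketch of letting the Cg runtime pick a profile at run time (not from this thread; it assumes the standard cg.h/cgGL.h runtime, and the file name "diffuse.cg" is a placeholder):

```c
#include <stdio.h>
#include <Cg/cg.h>
#include <Cg/cgGL.h>

int main(void)
{
    /* Assumes an OpenGL context is already current. */
    CGcontext ctx = cgCreateContext();

    /* Ask for the best vertex profile the installed card supports:
       an NV profile on NVIDIA hardware, CG_PROFILE_ARBVP1 elsewhere. */
    CGprofile profile = cgGLGetLatestProfile(CG_GL_VERTEX);
    cgGLSetOptimalOptions(profile);

    CGprogram prog = cgCreateProgramFromFile(
        ctx, CG_SOURCE, "diffuse.cg", profile, "main", NULL);
    if (prog == NULL) {
        fprintf(stderr, "Cg error: %s\n", cgGetErrorString(cgGetError()));
        return 1;
    }

    cgGLLoadProgram(prog);  /* hand the compiled output to the driver */
    /* ... enable the profile, bind the program, render ... */

    cgDestroyContext(ctx);
    return 0;
}
```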

It is my understanding that ATI is dropping their own shader extensions in favour of the ARB extensions, so I would assume that things would run pretty snappily on ATI hardware.

The main drawback to Cg is that (as far as I know) it won't work on the Radeon 8xxx cards.


"Without deviation, progress itself is impossible." -- Frank Zappa

Cg will run slowly on anything but NVIDIA cards. The way it compiles is optimized directly for NVIDIA GPUs, so it inserts extra instructions and rearranges code to squeeze out that extra frame. Since the ATI architecture is quite different (it prefers instructions in a different order, etc.), Cg output will run very slowly on ATI cards.

I would suggest GLSlang (unlike the other posters here); NVIDIA will have to implement it in their drivers at some point, and it seems that Cg is kind of dying.

The GL Shading Language (GLSL) is an open standard developed by many companies, including ATI, 3DLabs, and NVIDIA. Because of this, it's a much better thought-out language, and, going forward, GLSL has a much better chance of surviving. ATI and 3DLabs already have GLSL implementations, and NVIDIA has stated that it will implement GLSL (the only question is when).

Although there aren't any books on GLSL yet, the GLSL spec is pretty informative, and a number of books are due to be published in the spring.

Thanks for your advice. I visited ATI.com and there is another "thing" that drew my attention: RenderMonkey, so I downloaded that too. It says GLSlang will soon be supported in RenderMonkey, so maybe I should use that. What do you say? Or should I use GLSlang and wait for NVIDIA to make it work on their cards? Or should I just implement shaders the old-fashioned way (in assembly)? After Cg compiles my code, is there any method to decompile it back into assembly (shader language, to be more explicit) so I can optimize it to run on both cards?

Since we are speaking about shaders, I want to ask you another question; maybe this isn't the right place, but I will go on. I export whole scenes from 3D Studio Max and load them into my engine. I plan not to use the OpenGL lighting system; instead I'll use my own shaders to illuminate the scene and to make realistic volume shadows. For fast rendering I want to use triangle strips for meshes and make as few draw calls as possible.

Now, in 3ds Max there is a face property called the smoothing group, which is used to make meshes look smooth or edged (I can't explain it all here because it's a long story, but you know that a vertex can have several vertex normals, three in the case of a triangle, with each normal coming from a different smoothing group). If I split every mesh into submeshes I can render this correctly, but that means a lot of calls to the driver (e.g. six for a single box), which might be slow. Finally, my question: if I use shaders instead of the OpenGL lighting system, can I render a mesh correctly (smooth areas and edged areas) with a single function call (e.g. glDrawElements or similar)?



Mihai

Guest Anonymous Poster
Hi,

I think you can't use three normals per vertex, as you can't tell the rasterizer which normal to use for which triangle. And even if you pass the pixel shader three colors (each one calculated from one of the normals), you still can't decide which color to use in the pixel shader.
I think the easiest approach is to duplicate the vertices on edges. The costly calculations (the lighting, which depends on the normal) would have to be done three times anyway, and the memory requirements are not much higher either, especially since most vertices won't require duplication anyway (in tessellated planes, for example).
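As a rough illustration of that duplication (a hypothetical sketch, not code from the thread), an exporter can emit one output vertex per (position index, smoothing group) pair, so each triangle references a vertex whose normal belongs to its own group:

```c
/* Rewrites a triangle index list so vertices shared across different
   smoothing groups are duplicated. "Key" records which (position,
   group) pair an output vertex came from; "keys" must have room for
   numFaces * 3 entries in the worst case. */
typedef struct {
    int      pos;    /* index into the original position array     */
    unsigned group;  /* smoothing-group bitmask of the source face */
} Key;

static int get_or_add(Key *keys, int *count, int pos, unsigned group)
{
    for (int i = 0; i < *count; ++i)
        if (keys[i].pos == pos && keys[i].group == group)
            return i;              /* same position, same group: share  */
    keys[*count].pos = pos;        /* new combination: duplicate vertex */
    keys[*count].group = group;
    return (*count)++;
}

/* faces: numFaces * 3 position indices; groups: one bitmask per face.
   Returns the vertex count after duplication. */
int split_by_smoothing_group(int *faces, const unsigned *groups,
                             int numFaces, Key *keys)
{
    int count = 0;
    for (int f = 0; f < numFaces; ++f)
        for (int c = 0; c < 3; ++c)
            faces[f * 3 + c] =
                get_or_add(keys, &count, faces[f * 3 + c], groups[f]);
    return count;
}
```

Normals can then be averaged per output vertex over just the faces that map to it.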

Jan

Thanks a lot. Now all I have to do is modify my 3ds Max exporter to duplicate the vertices that are "in trouble". That's great. I'm done with questions for now, but you can keep posting your ideas. I've just got the first OGLSlang compiler, following its specification; I took it from www.3dlabs.com.

Mihai

The OGLSlang compiler is probably handy for making sure things compile properly; however, you can just give the [ATI] drivers the raw text file and they will compile it for the card being used, which allows the driver to optimise the output for the card in use.
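For example, here is a minimal sketch of handing raw GLSL source straight to the driver through GL_ARB_shader_objects (not from the thread; it assumes the ARB entry points have already been loaded, e.g. via wglGetProcAddress, that a context is current, and that the shader itself is a trivial placeholder):

```c
#include <GL/gl.h>
#include <GL/glext.h>

static const char *src =
    "void main() {\n"
    "    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;\n"
    "}\n";

GLhandleARB load_vertex_shader(void)
{
    GLhandleARB sh   = glCreateShaderObjectARB(GL_VERTEX_SHADER_ARB);
    GLhandleARB prog = glCreateProgramObjectARB();
    GLint ok = 0;

    /* The driver compiles the plain text itself, so it is free to
       optimise the result for whatever card is installed. */
    glShaderSourceARB(sh, 1, &src, NULL);
    glCompileShaderARB(sh);
    glGetObjectParameterivARB(sh, GL_OBJECT_COMPILE_STATUS_ARB, &ok);
    if (!ok)
        return 0;

    glAttachObjectARB(prog, sh);
    glLinkProgramARB(prog);
    glUseProgramObjectARB(prog);  /* activate for rendering */
    return prog;
}
```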

Hi. Thanks to phantom for the advice, but it looks like the future belongs to the high-level languages (HLSL from Microsoft, and GLSlang). The thing is, I've just installed RenderMonkey from ATI, and I can say it's a wonderful tool, because artists can work with it too and then pass the code to a programmer for optimization. It has HLSL integrated, and it promises to include GLSlang 2.0 as well. I'm thinking of developing my engine using what OpenGL 2.0 is now and improving it over time, because by the time the engine is finished, OpenGL 2.0 will be final and fully functional.

Speaking of my old problem (which was solved by adding extra vertices at edges and corners): I want my physics engine to process only the original data, without the new vertices (even though there aren't many of them, the equations would multiply and could hurt performance). So now I'm looking for a way to use only the original vertex data in the simulation and, through some sort of system of pointers to vertices (I don't want two sets of vertices, one for physics and one for the renderer, because that would almost double the memory needed), compute the new position once per edge vertex and then copy the result into its duplicates. If you have any solution in mind, please write it down here. I don't have a solution yet, only ideas. Thanks. Bye.
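One hypothetical way to arrange that (a sketch of the pointer idea above, with made-up names): physics owns one array of unique positions, and each render vertex stores an index into it, so a duplicate costs one integer rather than a second copy of the data:

```c
#include <stddef.h>

typedef struct { float x, y, z; } Vec3;

typedef struct {
    Vec3   *physPos;    /* one entry per ORIGINAL vertex; physics updates these */
    int    *remap;      /* per render vertex: index into physPos                */
    Vec3   *renderPos;  /* per render vertex: positions handed to the GPU       */
    size_t  numRender;
} Mesh;

/* After each physics step, fan the unique results out to all
   duplicates so the renderer sees consistent positions. */
void sync_render_positions(Mesh *m)
{
    for (size_t i = 0; i < m->numRender; ++i)
        m->renderPos[i] = m->physPos[m->remap[i]];
}
```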

Mihai
