d h k

OpenGL CG: cgGLSetParameter refuses to work PLUS 2 other general questions [SOLVED]



I hope this fits this forum area; I'm not entirely sure. It's not a graphics problem as such, but it has to do with the Cg language. For some reason, I can't pass values from my application to my shader (a pixel shader in my case). Example: bumpmap.cg
float4 main (	float2 texture_coordinates : TEXCOORD0,
				float2 bumpmap_coordinates : TEXCOORD1,
				float3 light_vector : COLOR0,
				uniform float3 ambient_color,
				uniform sampler2D texture : TEXUNIT0,
				uniform sampler2D bumpmap : TEXUNIT1 ) : COLOR
{
	// look the color of our pixel up from the texture
	float3 texture_color = tex2D ( texture, texture_coordinates ).rgb;

	// look the normal of our pixel up from the bumpmap
	float3 normal_vector = tex2D ( bumpmap, bumpmap_coordinates ).rgb;

	// expand the light and normal vectors from [0,1] range back into [-1,1]
	light_vector = 2.0 * ( light_vector.rgb - 0.5 );
	normal_vector = 2.0 * ( normal_vector.rgb - 0.5 );

	// calculate diffuse-factor
	float diffuse = dot ( normal_vector, light_vector );

	// return calculated rgb and full alpha
	return float4 ( diffuse * texture_color + ambient_color, 1.0 );
}






main.cpp
// in draw ( ) - called every frame

// Set the (fixed) ambient color value
CGparameter ambientColorParameter = cgGetNamedParameter(program, "ambient_color");
cgGLSetParameter3f(ambientColorParameter, 0.4f, 0.4f, 0.4f);

// The first texture unit contains the detail texture
glActiveTextureARB(GL_TEXTURE0_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[0].id);

// The second texture unit contains the normalmap texture
glActiveTextureARB(GL_TEXTURE1_ARB);
glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, texture[1].id);

glRotatef ( rotate, 0.0f, 1.0f, 0.0f );

glBegin ( GL_TRIANGLES );

for ( int i = 0; i < 2; i++ )
{
    for ( int j = 0; j < 3; j++ )
    {
        light_vector.set ( triangle.vertex[j], light_position );
        normalize ( light_vector );

        // Bind the light vector to COLOR0 so it gets
        // interpolated across the triangle
        glColor3f(light_vector.x, light_vector.y, light_vector.z);

        // Bind the texture coordinates to TEXTURE0 and
        // interpolate them across the triangle
        glMultiTexCoord2fARB(GL_TEXTURE0_ARB,
            triangle.vertex[j].u, triangle.vertex[j].v);

        // Bind the normalmap coordinates to TEXTURE1 and
        // interpolate them across the triangle
        glMultiTexCoord2fARB(GL_TEXTURE1_ARB,
            triangle.vertex[j].u, triangle.vertex[j].v);

        // Specify the vertex coordinates
        glVertex3f(triangle.vertex[j].x, triangle.vertex[j].y, triangle.vertex[j].z);
    }
}

glEnd ( );






I'm trying to pass a float3 value from my OpenGL application to the shader, but it won't accept it: ambient_color stays at 0.0f, 0.0f, 0.0f in the shader. If I hard-code other values in the shader itself, it works fine. I tried it with copy-and-paste tutorials for other shaders as well, and that part never seems to work. Or do I have to add a vertex shader first, so the data can go app -> vertex -> pixel? None of the articles, tutorials and papers mention anything about that, and neither does the Cg documentation. By the way, I'm using Cg 1.1 for now.

EDIT: And another small question: why do people send normal float values through COLOR0 (and "misuse" OpenGL's glColor3f command)? Couldn't you just add a uniform float3 variable and set it using the cgGLSetParameter functions (if they worked for me, I would try it)? That'd seem a lot cleaner and better to me...

[Edited by - d h k on January 9, 2008 12:39:20 PM]

You can periodically call the cgGetError() and cgGetErrorString() functions to check for errors. Even better, Cg lets you set an error-handling callback (via cgSetErrorCallback()) that is invoked whenever an internal error occurs. All of these functions are supported in Cg 1.1.
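Something along these lines should work (a minimal sketch; the callback name and the printf-based reporting are just placeholders):

#include <Cg/cg.h>
#include <cstdio>

// Called by the Cg runtime whenever an internal error occurs
void cgErrorCallback ( void )
{
    CGerror error = cgGetError ( );
    printf ( "CG ERROR : %s\n", cgGetErrorString ( error ) );
}

// during initialization, before any other Cg calls:
// cgSetErrorCallback ( cgErrorCallback );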

Okay, I have that set up and... what a surprise. I get four nasty errors whenever I try to use my cgGLSetParameter calls (zero Cg errors when I comment them out).

These are the errors:


CG ERROR : Invalid program handle.
CG ERROR : Invalid parameter handle.
CG ERROR : Invalid program handle.
CG ERROR : The profile is not supported.


The code is still the same from above.

This is how I load the shader, just for reference. This is called after a window-context is created.


void init_shader ( CGprogram program )
{
    cgSetErrorCallback(cgErrorCallback);

    context = cgCreateContext();

    profile = cgGLGetLatestProfile(CG_GL_FRAGMENT);
    cgGLSetOptimalOptions(profile);

    program = cgCreateProgramFromFile(
        context,
        CG_SOURCE,
        "bumpmap.cg",
        profile,
        NULL,  // entry point
        NULL); // arguments

    cgGLLoadProgram(program);

    cgGLBindProgram(program);
    cgGLEnableProfile(profile);
}

It's been 24 hours, so I'm kinda bumping this thread. But I'm not only bumping, I'm also updating and summarizing.

Question #1: Why does my shader produce CG ERRORs as soon as I use the cgGetNamedParameter ( ) and cgGLSetParameter3f ( ) lines anywhere in my main.cpp?
Question #2: Why do people "misuse", for example, the COLORx bindings when, to pass a vector from the application to the shader, you could just use a uniform or varying variable and the two functions mentioned above? I'd try this if my shader didn't fail with those lines in.
Question #3: This one's new. As soon as I use either step ( ) or stepsmooth ( ) in my shader, Cg produces a CG ERROR telling me the compile didn't work. I tried all kinds of parameters; for stepsmooth ( ) I tried ( 0.0, 1.0, 0.5 ) and a bunch of other possible values (I even tried all kinds of types: ints, floats, doubles, halves; nothing worked).

Please, this really holds my progress up A LOT. I imagine these questions have to be pretty quick to answer for any Cg or shader expert, don't they? Any help is very much appreciated!

If I need to show more code or clarify anything, feel free to ask me. I'll be glad to do so. Right now, I just want to MOVE ON. :)

Some general guidelines to follow:

First of all, you may have better luck posting your questions at nVidia's Cg forum.

Secondly, why are you still using Cg 1.1? What's preventing you from updating to Cg 2.0? The developers of the Cg Toolkit have put a lot of time and effort into enhancing the feature set, fixing bugs and optimizing code paths, so by using the latest version, or at least the one before it, you lower the chances of running into an internal bug that's already been fixed in a later release, plus you get more debugging functions to play with (such as cgGLSetDebugMode(), which was introduced in Cg 1.5). Not to mention that the later releases of the toolkit ship with more examples for you to draw on, so unless you've got a very good reason to avoid an update, I'd strongly suggest that you move to the latest version.
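(Just to illustrate, a one-line sketch; presumably you'd call this once during initialization, after creating your Cg context:)

// enable the Cg GL runtime's debug checking (available since Cg 1.5)
cgGLSetDebugMode(CG_TRUE);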

Besides, I saw you had a post in this forum about running shaders on older cards, so I'm wondering which GPU you're trying to run this code on. I've got a hunch that you're sticking to Cg 1.1 because you have an older card and somehow want to keep the two in sync. Well, the fact that you might have an older card has no implications whatsoever on the version of the toolkit you use. As a matter of fact, I'm successfully running Cg 2.0 on an integrated Intel chipset, one of the worst graphics cards (if you can call it that) you might ever run into. Of course, many of the features are disabled, but the rest works just fine.

I think the best approach is for you to update to the latest version of the Cg Toolkit. Then try to run one of the examples that ships with the toolkit. If you manage to run an example successfully, you might as well use its code as a base. If, even after using the exact code (and the EXACT shaders from the example), you're still facing difficulties, then you definitely have a problem somewhere in your app.

Also, try to compile your shaders using cgc.exe (Cg's command-line compiler) to make sure they actually compile OK. This is a very important step. The error codes you posted indicate that the program handle is not valid, which might be because your shaders aren't compiling. Make no assumptions; double-check everything. In my experience, there's usually something very simple, in fact so simple and silly that it's often overlooked, causing these time-consuming trips to the unknown land of filthy bugs.

And regarding the step() and smoothstep() functions, I think your target profile doesn't support them; maybe you've got an old card that doesn't. As I said, compile your shaders with the command-line compiler, and don't forget to pass in your target profile so it doesn't default to a higher one.

You are absolutely right. I'm trying to run it on a GeForce 4 Ti. Because I'm just starting out with shader programming, I tried NeHe's Cg lesson, and he used 1.1 back in the day; that's why I picked and stuck with that version. Then I didn't update because I wasn't sure whether my graphics card could handle the newer versions. But if it can, as you say, I'll gladly update and report back.

Thanks!

EDIT: Okay, I'm using Cg 2.0 now. Nothing has changed so far. The main problem remains: as soon as I use either of the two problematic functions (see the posts above), it throws:


CG ERROR : Invalid program handle.
CG ERROR : Invalid parameter handle.


I checked that I have a uniform float test as a parameter in the shader's main ( ) function, and that I'm fetching it with cgGetNamedParameter ( program, "test" )...

[Edited by - d h k on January 9, 2008 6:32:03 AM]

Quote:

EDIT: Okay, I'm using Cg 2.0 now. Nothing has changed so far. The main problem remains: as soon as I use either of the two problematic functions (see the posts above), it throws:

CG ERROR : Invalid program handle.

CG ERROR : Invalid parameter handle.


That's because the GeForce 4 Ti only supports Pixel Shader 1.3, while the step() and smoothstep() functions weren't available until Pixel Shader 2.0. Here's a clue.

So the shader fails to compile because Cg is trying to compile it against an older profile where these functions aren't available. That's why you get an invalid program handle error: only a successful compilation produces a valid program handle that can be passed to the functions that require one.

To avoid such problems, you should always check whether the shader compiled successfully. Compilation can fail for a lot of reasons, so error handling is definitely not a step you want to skip. Good error handling is what separates good programmers from lousy ones, so never make assumptions. This bug could have been avoided easily had that simple error-checking code been in place from the start. Check some of the Cg examples (default location: C:\Program Files\NVIDIA Corporation\Cg\examples) to see how it's done.
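For example, something along these lines right after creating the program (just a sketch, reusing the variable names from the init_shader code you posted; assumes <Cg/cg.h> and <cstdio> are included):

program = cgCreateProgramFromFile(context, CG_SOURCE, "bumpmap.cg",
                                  profile, NULL, NULL);

// check whether the compile actually succeeded
CGerror error = cgGetError();
if ( error != CG_NO_ERROR || program == NULL )
{
    printf ( "Cg compile error: %s\n", cgGetErrorString ( error ) );

    // the compiler listing usually contains the real error message
    const char* listing = cgGetLastListing ( context );
    if ( listing != NULL )
        printf ( "%s\n", listing );
}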

Good luck

That's not the problem, though. I don't call step ( ) or smoothstep ( ) in the shader anymore; sorry for being unclear. It's really cgGetNamedParameter ( ) that's causing the CG ERROR, and that function should work on a GeForce 4 Ti with Cg 2.0, right?

Thanks for your effort, though. Any more ideas?

Quote:

That's not the problem, though. I don't call step ( ) or smoothstep ( ) in the shader anymore; sorry for being unclear. It's really cgGetNamedParameter ( ) that's causing the CG ERROR, and that function should work on a GeForce 4 Ti with Cg 2.0, right?

Yeah, cgGetNamedParameter() should work just fine. I compiled your shader with the command-line compiler and it's fine. Here's the syntax for future reference:

> cgc -profile arbfp1 filename.cg
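If you want to test against the profile your card would actually use at runtime (a GeForce 4 Ti would presumably get the fp20 fragment profile from cgGLGetLatestProfile()), pass that profile instead, e.g.:

> cgc -profile fp20 bumpmap.cg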

Did you compare your code with the samples shipped with the toolkit?
