NVIDIA and ATI shader incompatibility

3 comments, last by Woodchuck 16 years, 10 months ago
Hello, I have an annoying problem. I made a really simple sprite and static mesh render system. When I run my framework on my GeForce 6600 everything is OK, so I passed it to a few people. It turns out it works well on all GeForce cards but fails on almost all ATI Radeons: they get good FPS, but the scene looks as if the vertex shader weren't running at all. When I turn off shader initialization in my framework, I get the same scene they see, so the problem probably lies somewhere in the shader code or its initialization. This is how it starts:

void initShaders()
{
    // Create the Cg context that owns both programs.
    CgContext = cgCreateContext();
    if (CgContext == NULL)
        MessageBox(0, "cgCreateContext", "ERROR", 0);

    // Compile both vertex programs from source against the ARB vertex program profile.
    CgProgram_Surface = cgCreateProgramFromFile(CgContext, CG_SOURCE, "surface.cg", CG_PROFILE_ARBVP1, "main", NULL);
    CgProgram_Mesh = cgCreateProgramFromFile(CgContext, CG_SOURCE, "mesh.cg", CG_PROFILE_ARBVP1, "main", NULL);

    if (CgProgram_Surface == NULL || CgProgram_Mesh == NULL)
        MessageBox(0, "cgCreateProgramFromFile", "ERROR", 0);

    // Hand the compiled programs to the GL runtime.
    cgGLLoadProgram(CgProgram_Surface);
    cgGLLoadProgram(CgProgram_Mesh);

    cgGLEnableProfile(CG_PROFILE_ARBVP1);

    // Cache the uniform handles so they can be updated every frame.
    CgParameter_Surface_ModelViewProj = cgGetNamedParameter(CgProgram_Surface, "matrixModelViewProj");
    CgParameter_Surface_CameraPosition = cgGetNamedParameter(CgProgram_Surface, "cameraHeight");
    CgParameter_Mesh_ModelViewProj = cgGetNamedParameter(CgProgram_Mesh, "matrixModelViewProj");
    CgParameter_Mesh_CameraPosition = cgGetNamedParameter(CgProgram_Mesh, "cameraHeight");
}
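For context, the per-frame use looks roughly like the sketch below (function names like renderSurface and drawSurfaceGeometry are placeholders, not my actual framework code):

void renderSurface(float cameraHeight)
{
    cgGLEnableProfile(CG_PROFILE_ARBVP1);          // profile must be active while drawing
    cgGLBindProgram(CgProgram_Surface);            // select the vertex program for this batch

    // matrixModelViewProj is pulled straight from the current GL matrices
    cgGLSetStateMatrixParameter(CgParameter_Surface_ModelViewProj,
                                CG_GL_MODELVIEW_PROJECTION_MATRIX,
                                CG_GL_MATRIX_IDENTITY);
    cgGLSetParameter1f(CgParameter_Surface_CameraPosition, cameraHeight);

    drawSurfaceGeometry();                         // placeholder for the actual draw calls

    cgGLDisableProfile(CG_PROFILE_ARBVP1);         // back to fixed function afterwards
}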
Shader code of surface.cg (mesh.cg looks very similar):

void main(in float4 position: POSITION,
          in float3 translationVector: NORMAL,
          in float4 color: COLOR,
          in float4 texCoord: TEXCOORD0,

          out float4 oPosition: POSITION,
          out float4 oColor: COLOR,
          out float2 oTexCoord: TEXCOORD0,
          out float oFog: FOG,

          uniform float4x4 matrixModelViewProj,
          uniform float cameraHeight)
{
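    // Rotation (from the cos/sin packed in texCoord.zw, see the note after
    // the listing) and translation combined into one per-vertex matrix.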
    float4x4 matrixTransformation = { texCoord.w, -texCoord.z, 0.0f, translationVector.x,
                                      texCoord.z,  texCoord.w, 0.0f, translationVector.y,
                                      0.0f,        0.0f,       1.0f, translationVector.z,
                                      0.0f,        0.0f,       0.0f, 1.0f };

    float4 transformedPosition = mul(matrixTransformation, position);

    oPosition = mul(matrixModelViewProj, transformedPosition);
    oColor = color;
    oTexCoord = texCoord.xy;
    oFog = distance(transformedPosition.z, cameraHeight);
}
// texCoord.w contains some sine value, texCoord.z contains some cosine value

Maybe the problem is NVIDIA's Cg, but AFAIK Far Cry, for example, uses Cg shaders and it doesn't have problems with Radeons. Has anyone ever met such a problem and/or have any idea how to solve it? I'd be really grateful.
Silly question, but are they running it on cards with vertex shaders? :p
A lot is unsaid about what you are doing. Do you have fragment shaders? Are you enabling the fragment shader profile? Are you binding the programs? Are you using OpenGL extensions supported by the NVidia cards but not the ATI cards? etc.

One thing I do with my OpenGL code and Cg code is to provide wrappers (in Debug configurations) that call glGetError and cgGetError, respectively. For example, I have

CGcontext CgCreateContext()
{
    CGcontext context = cgCreateContext();
    OnCgError("CgCreateContext");
    return context;
}


where OnCgError is an error handler. (You can supply error callback functions in Cg, but that mechanism does not currently work on PS3, so I rolled my own.) This approach also allows you to include support for call traces, profiling, and so on.
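A minimal sketch of what such a handler can look like (the name and the logging target are just placeholders; adapt the reporting to whatever logging you already have):

#include <stdio.h>
#include <Cg/cg.h>

extern CGcontext CgContext;   // the context created in your initShaders()

void OnCgError(const char* callName)
{
    CGerror error = cgGetError();                          // fetch and clear the last error
    if (error == CG_NO_ERROR)
        return;

    fprintf(stderr, "%s failed: %s\n", callName, cgGetErrorString(error));

    // For compile failures, the compiler listing usually names the offending line.
    if (error == CG_COMPILER_ERROR)
    {
        const char* listing = cgGetLastListing(CgContext);
        if (listing != NULL)
            fprintf(stderr, "%s\n", listing);
    }
}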

If you check for errors after all your Cg calls, you might find some call that fails in a way NVIDIA's drivers don't mind but ATI's drivers do.
If I were using any fragment shaders, extensions etc. I would have mentioned it :).
I also bind the programs, and the whole initialization process completes correctly, but I will make a new version of my framework with all those error handlers. Thanks for the advice.

Quote:
Silly question, but are they running it on cards with vertex shaders? :p

AFAIK simple vertex shaders (CG_PROFILE_ARBVP1) can be emulated in software on cards that don't have hardware vertex shaders, so that isn't the problem.
If your pixel shader is an SM3 shader, you need to use a vertex shader 3 too, if I remember correctly. Hope that helps.

This topic is closed to new replies.
