m_power_hax

Debugging shaders/sphere


My code works correctly on my work machine (Quadro FX580) but fails on my laptop (ATI MOBILITY HD 4300 series): it runs, but doesn't draw correctly. The code uses point sprites and renders them as spheres. Below you will find two screenshots, the shader code, and the relevant part of the main file.

Screenshot16.jpg from the Quadro

Screenshot15.jpg from the Radeon


Vertex shader

uniform float pointRadius;
uniform float pointScale;
varying vec3 posEye;
void main()
{
    posEye = vec3(gl_ModelViewMatrix * vec4(gl_Vertex.xyz, 1.0));
    float dist = length(posEye);
    gl_PointSize = pointRadius * (pointScale / dist);
    gl_TexCoord[0] = gl_TextureMatrix[0] * gl_MultiTexCoord0;
    gl_Position = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_FrontColor = gl_Color;
}


Fragment shader

uniform float pointRadius;
uniform vec3 lightDir;
varying vec3 posEye;
void main()
{
    lightDir = vec3(0.0, -0.3, 1.3);
    const float shininess = 65;
    vec2 pos = gl_PointCoord.xy - vec2(0.5, 0.5);
    // Discard fragments outside of the sphere
    if (length(pos) > 0.5)
        discard;
    // r^2 = (x - x0)^2 + (y - y0)^2 + (z - z0)^2
    float r = 0.5;
    float x = pos.x;
    float y = pos.y;
    float z = sqrt(r*r - x*x - y*y);
    vec3 normal = normalize(vec3(x, y, z));
    vec3 spherePosEye = posEye + normal*pointRadius;
    vec3 v = normalize(-spherePosEye);
    vec3 h = normalize(lightDir + v);
    float specular = pow(max(0.0, dot(normal, h)), shininess);
    float diffuse = max(0.0, dot(lightDir, normal));
    gl_FragColor = gl_Color*diffuse + specular;
}


In main.cpp

void InitPointSprites()
{
    if (vboUsed)
    {
        glBindBuffer(GL_ARRAY_BUFFER, vboId);
        glEnableClientState(GL_COLOR_ARRAY);
        glEnableClientState(GL_VERTEX_ARRAY);
        glColorPointer(3, GL_FLOAT, 0, (void*)(sizeVertices*sizeof(GLfloat)));
        glVertexPointer(3, GL_FLOAT, 0, 0);

        glEnable(GL_POINT_SPRITE);
        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_TRUE);
        glEnable(GL_VERTEX_PROGRAM_POINT_SIZE);
        glDepthMask(GL_TRUE);
        glEnable(GL_DEPTH_TEST);

        glUseProgram(m_shaderProgram);
        glUniform1f(glGetUniformLocation(m_shaderProgram, "pointScale"), g_windowHeight / tan(60.0f*0.35f*M_PI/180.0f));
        glUniform1f(glGetUniformLocation(m_shaderProgram, "pointRadius"), 0.5f);
        glColor4f(1, 1, 1, 1);
        glDrawArrays(GL_POINTS, 0, numPoints);

        glTexEnvi(GL_POINT_SPRITE, GL_COORD_REPLACE, GL_FALSE);
        glDisable(GL_POINT_SPRITE);
        glDisableClientState(GL_VERTEX_ARRAY);
        glDisableClientState(GL_COLOR_ARRAY);
        glBindBuffer(GL_ARRAY_BUFFER, 0);
    }
    glUseProgram(0);
}

I have no way to test this, as I don't own a Radeon anymore and I'm at work, but at one time I was running into a similar problem. Try making your dist in the vertex shader a whole number. At one point Radeons wouldn't do floating-point division, whereas the Nvidia cards would simply round when they saw floating-point division.
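
As an illustration of that suggestion only (not tested on either card), the division in the vertex shader could be fed a rounded distance:

// Sketch: round dist to a whole number before the division, per the
// suggestion above; max() guards against a zero divisor.
float dist = length(posEye);
float distWhole = max(floor(dist + 0.5), 1.0);
gl_PointSize = pointRadius * (pointScale / distWhole);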


[quote]
I have no way to test this, as I don't own a Radeon anymore and I'm at work, but at one time I was running into a similar problem. Try making your dist in the vertex shader a whole number. At one point Radeons wouldn't do floating-point division, whereas the Nvidia cards would simply round when they saw floating-point division.
[/quote]

Ok, I'll try this right away!


[quote name='karwosts']
Are you properly querying your shaders for compile/linking errors? How about general glGetErrors()?
[/quote]


Here is the code where I compile the shaders:

GLuint _compileProgram(const char *vsource, const char *fsource)
{
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

    glShaderSource(vertexShader, 1, &vsource, NULL);
    glShaderSource(fragmentShader, 1, &fsource, NULL);

    glCompileShader(vertexShader);
    glCompileShader(fragmentShader);

    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);

    glLinkProgram(program);
    return program;
}





I tried something: on the Radeon computer I removed everything in the vertex shader and everything in the fragment shader, then added the line

gl_FrontColor = vec3(0,0,0);

to make the point sprite look black. The point sprite didn't draw black. So is it right to assume my shaders are not compiling correctly on the Radeon computer?
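For reference, a minimal pass-through pair for that kind of test could look like the following (a sketch only; note that gl_FrontColor is a vec4, so assigning a vec3 to it is itself a compile error on a strict compiler):

// Minimal test vertex shader (sketch)
void main()
{
    gl_Position   = gl_ModelViewProjectionMatrix * gl_Vertex;
    gl_PointSize  = 10.0;                        // fixed size, just to see something
    gl_FrontColor = vec4(0.0, 0.0, 0.0, 1.0);    // gl_FrontColor is a vec4
}

// Minimal test fragment shader (sketch)
void main()
{
    gl_FragColor = gl_Color;
}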


[quote name='karwosts' timestamp='1294856675' post='4757861']
Are you properly querying your shaders for compile/linking errors? How about general glGetErrors()?

Here is the code where I compile the shaders:
[/quote]

So no, you're not. You need to check for compiler and linker errors and warnings when you compile shaders, or who knows what could be happening.

Read this and then check your own shaders and programs for errors:

http://www.lighthouse3d.com/opengl/glsl/index.php?oglinfo

Also use glGetError.
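
For example, _compileProgram could be extended with explicit status checks along these lines (a sketch; the printShaderInfoLog/printProgramInfoLog helpers shown later in the thread print the actual compiler messages):

// Sketch: _compileProgram with compile/link status checks added.
GLuint _compileProgram(const char *vsource, const char *fsource)
{
    GLuint vertexShader = glCreateShader(GL_VERTEX_SHADER);
    GLuint fragmentShader = glCreateShader(GL_FRAGMENT_SHADER);

    glShaderSource(vertexShader, 1, &vsource, NULL);
    glShaderSource(fragmentShader, 1, &fsource, NULL);
    glCompileShader(vertexShader);
    glCompileShader(fragmentShader);

    // Did each shader actually compile?
    GLint status = GL_FALSE;
    glGetShaderiv(vertexShader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
        printf("vertex shader failed to compile\n");
    glGetShaderiv(fragmentShader, GL_COMPILE_STATUS, &status);
    if (status != GL_TRUE)
        printf("fragment shader failed to compile\n");

    GLuint program = glCreateProgram();
    glAttachShader(program, vertexShader);
    glAttachShader(program, fragmentShader);
    glLinkProgram(program);

    // Did the program link?
    glGetProgramiv(program, GL_LINK_STATUS, &status);
    if (status != GL_TRUE)
        printf("program failed to link\n");

    return program;
}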

I added the

void printProgramInfoLog(GLuint obj)

and

void printShaderInfoLog(GLuint obj)

functions to my code. I execute them right after calling

m_shaderProgram = _compileProgram(vertexShader, spherePixelShader);
printProgramInfoLog(m_shaderProgram);
printShaderInfoLog(m_shaderProgram);

InfoLog in printProgramInfoLog gives me a value of 228. But something is weird in debug mode when stepping line by line through printShaderInfoLog: it jumps right from the if condition to free(infoLog). It doesn't do this in printProgramInfoLog.


void printShaderInfoLog(GLuint obj)
{
    int infologLength = 0;
    int charsWritten = 0;
    char *infoLog;

    glGetShaderiv(obj, GL_INFO_LOG_LENGTH, &infologLength);

    if (infologLength > 0)
    {
        infoLog = (char *)malloc(infologLength);
        glGetShaderInfoLog(obj, infologLength, &charsWritten, infoLog);
        printf("%s\n", infoLog);
        free(infoLog);
    }
}

void printProgramInfoLog(GLuint obj)
{
    int infologLength = 0;
    int charsWritten = 0;
    char *infoLog;

    glGetProgramiv(obj, GL_INFO_LOG_LENGTH, &infologLength);

    if (infologLength > 0)
    {
        infoLog = (char *)malloc(infologLength);
        glGetProgramInfoLog(obj, infologLength, &charsWritten, infoLog);
        printf("%s\n", infoLog);
        free(infoLog);
    }
}
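
One observation on the snippet above: printShaderInfoLog is being called with the program handle, but glGetShaderiv only accepts shader objects, so the call raises GL_INVALID_OPERATION and leaves infologLength at 0, which would explain the if body never running. A sketch of querying the logs on the objects they expect (for example inside _compileProgram, where the shader handles are still in scope):

// Sketch: call the log helpers on the objects they expect.
printShaderInfoLog(vertexShader);      // compile log of the vertex shader
printShaderInfoLog(fragmentShader);    // compile log of the fragment shader
printProgramInfoLog(program);          // link log of the program object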


Ok, I just realized that was the code for use with GLUT. I'm now sending the InfoLog to a message box, and I don't see any errors on the Quadro FX580 computer.
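
Since the Quadro compiles the shaders without complaint, the log that matters is the one from the Radeon machine. As a guess only (nothing in this thread confirms it), two constructs in the posted fragment shader that a stricter GLSL compiler typically rejects are the assignment to the uniform lightDir (uniforms are read-only inside a shader) and the integer literal used to initialise shininess. A sketch of the stricter-compiler-friendly form, with later references pointing at the local constant instead:

// Sketch only: local constant instead of writing to the uniform,
// and a float literal instead of an int.
const vec3 lightDirLocal = vec3(0.0, -0.3, 1.3);
const float shininess = 65.0;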
