andeandr100

OpenGL VAO render problem on ATI graphics card


Recommended Posts

This code runs on OpenGL 3.2.

The code works when I use an Nvidia graphics card, on both Windows 7 and Ubuntu.

But when I tested it on my laptop with a Radeon HD 5650 on Windows 7, only 50% of my VAO buffer was rendered to the screen; the other 50% did not render at all.

I have tried different shaders and changed how I create my VAO buffer, but I have had no luck finding out what the problem is.

Can anyone see what the problem could be? I have been staring at this for a few days.

 

//Vao buffer

//I assume that my vertex and index creation steps work, since the same data has
//been rendered successfully on a different machine with a different OS
//int vertexSize = 8
//float* _pData; /*float x,y,z; float r,g,b; float ux, uy;*/
//unsigned int* _pIndex;
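//Note: BUFFER_OFFSET is assumed here to be the usual byte-offset macro, e.g.
//#define BUFFER_OFFSET(i) ((char*)NULL + (i))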


//vao creation
glGenVertexArrays(1, &_vao);
glBindVertexArray(_vao);

//Bind the VBO and setup pointers for the VAO
glGenBuffers(1, &_vbo);
glBindBuffer(GL_ARRAY_BUFFER, _vbo);
glBufferData(GL_ARRAY_BUFFER, _numVertex * vertexSize * sizeof(float), _pData, GL_DYNAMIC_DRAW);

glVertexAttribPointer( vertexId, 3, GL_FLOAT, GL_FALSE, sizeof(float) * vertexSize, BUFFER_OFFSET(0));//vertex
glVertexAttribPointer( vertexColorId, 3, GL_FLOAT, GL_FALSE, sizeof(float) * vertexSize, BUFFER_OFFSET(sizeof(float)*3));//Color
glVertexAttribPointer( UvCordId, 2, GL_FLOAT, GL_FALSE, sizeof(float) * vertexSize, BUFFER_OFFSET(sizeof(float)*6));//Uv

glEnableVertexAttribArray( vertexId );
glEnableVertexAttribArray( vertexColorId );
glEnableVertexAttribArray( UvCordId );

//Bind the IBO for the VAO
glGenBuffers(1, &_ibo);
glBindBuffer(GL_ELEMENT_ARRAY_BUFFER, _ibo);
glBufferData(GL_ELEMENT_ARRAY_BUFFER, _numIndex * sizeof(GLuint), _pIndex, GL_DYNAMIC_DRAW);

glBindVertexArray(0);



///////////////////////////////////////////////////////////
//render code

//Bind shader
glUseProgramObjectARB( _program );
glUniformMatrix4fv( _projViewLoc,  1, false, _projectionCameraMatrix );
glUniformMatrix4fv( _modelViewLoc,  1, false, _modelMatrix );
glUniform1i( _textureLoc, 0 );

//Bind texture
glActiveTexture( GL_TEXTURE0);
glBindTexture(GL_TEXTURE_2D, _textureId );

//Bind and render vao
glBindVertexArray( _vao );
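//glDrawRangeElements: draw _numIndex indices from the VAO's element buffer;
//[0, _numVertex] is the promised range of index values in that buffer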
glDrawRangeElements(GL_TRIANGLES, 0, _numVertex, _numIndex, GL_UNSIGNED_INT, NULL);
glBindVertexArray(0);

//shader

//vertex shader
#version 140
uniform mat4 projCamView, modelMatrix;
in vec3 position;
in vec3 inColor;
in vec2 inTextCord;
out vec2 TextCord;
out vec3 vertexColor;
void main()
{
	TextCord = inTextCord;
	vertexColor = inColor;
	gl_Position = projCamView * modelMatrix * vec4(position,1.0f);
}

//fragment shader
#version 140
uniform sampler2D myTexture;
in vec2 TextCord;
in vec3 vertexColor;
out vec4 Frag_Color;
void main()
{
	vec4 textureColor = texture2D(myTexture, TextCord);
	Frag_Color = vec4( textureColor.rgb * vertexColor, textureColor.a );
}


From a quick look, I can see that you're mixing the old GL_ARB_shader_objects extension with more modern code, in particular expecting that old extension to support GLSL version 140. I'd suggest that you stop doing that and use the core GL functions instead: GLSL 140 didn't exist when GL_ARB_shader_objects was specified, so you shouldn't expect it to be supported.
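Something like this is what the core equivalents of your bind-and-set-uniforms step might look like (an untested sketch; it assumes _program was created and linked with glCreateProgram/glCreateShader/glLinkProgram rather than the ARB object functions):

//Core GL 3.x program binding instead of GL_ARB_shader_objects
glUseProgram( _program );                 //replaces glUseProgramObjectARB
glUniformMatrix4fv( _projViewLoc, 1, GL_FALSE, _projectionCameraMatrix );
glUniformMatrix4fv( _modelViewLoc, 1, GL_FALSE, _modelMatrix );
glUniform1i( _textureLoc, 0 );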


I have taken your advice and removed all the ARB extensions from the project.

Sadly, it did not fix the problem.

 

How the program looks on my computer with the Nvidia graphics card:

Test-PC-2013-08-24.jpg

How it looks on my laptop with the ATI graphics card:

Test-Laptop-2013-08-21.jpg


Hi,

 

It might be interesting for you to download cgc (NVIDIA's Cg compiler) to find potential issues with your shaders: https://developer.nvidia.com/cg-toolkit-download

I actually had the reverse problem: it was working on my Radeon card, but it wasn't working for other people on their Nvidia cards.

An example of how to use cgc from the command line:

C:\Users\Kevin>cgc -oglsl -strict -glslWerror -nocode -profile gp5fp "C:\Users\Kevin\Documents\Test\Shaders\test.frag"
^for fragment shaders
C:\Users\Kevin>cgc -oglsl -strict -glslWerror -nocode -profile gp5vp "C:\Users\Kevin\Documents\Test\Shaders\test.vert"
^for vertex shaders

This method has helped me eliminate any issues I've had with "incompatibility".

 

Good luck and cheers!


Judging by your VAO setup and rendering snippets, it looks like you're assuming that the calls to glEnableVertexAttribArray are bound to the particular VAO. They are not. In fact, that state is not even bound to the active shader program.

 

I had a bug once that revolved around this assumption. It turned out that calls to glEnableVertexAttribArray while one shader was active were causing some driver state trampling when I was rendering with another shader.

 

This might be your issue if you have multiple shaders. If so, then perhaps you can try disabling the currently enabled attributes when switching shaders.
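For example (just a sketch, using the attribute locations from your setup code; otherProgram is a placeholder for whatever other program you bind next):

//Disable this shader's attribute arrays before switching to another shader
glDisableVertexAttribArray( vertexId );
glDisableVertexAttribArray( vertexColorId );
glDisableVertexAttribArray( UvCordId );
glUseProgram( otherProgram );             //otherProgram is hypothetical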
