
Rendering differences in OpenGL versions?


Old topic!
Guest, the last post of this topic is over 60 days old and at this point you may not reply in this topic. If you wish to continue this conversation start a new topic.

4 replies to this topic

#1 Oammar   Members   -  Reputation: 186


Posted 29 July 2014 - 07:16 PM

I've been working on an engine that uses both SDL and OpenGL, and I've been testing how my textures render on the various machines I have around here.

 

This is how I'm attempting to render my textures currently:

glEnable(GL_TEXTURE_2D);
glBindTexture(GL_TEXTURE_2D, this->textureID);
glEnable(GL_BLEND);
glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA);
//glTexEnvi(GL_TEXTURE_ENV, GL_TEXTURE_ENV_MODE, GL_MODULATE);
  
//std::cout << "Color Tint: " << this->colorTint << '\n';
float* colors = Utilities::ColorUtilities::CalculateRGBAColors(this->colorTint);
//std::cout << "R: " << red << " G: " << green << " B: " << blue << " A: " << alpha << '\n';

//glPushMatrix();
glBegin(GL_TRIANGLES);
  glColor4f(colors[0], colors[1], colors[2], colors[3]);
  glTexCoord2f(0, 0); glVertex2f(0, 0);
  glTexCoord2f(1, 0); glVertex2f(1, 0);
  glTexCoord2f(0, 1); glVertex2f(0, 1);

  glTexCoord2f(1, 1); glVertex2f(1, 1);
  glTexCoord2f(0, 1); glVertex2f(0, 1);
  glTexCoord2f(1, 0); glVertex2f(1, 0);
glEnd();
//glPopMatrix();

glDisable(GL_BLEND);
glDisable(GL_TEXTURE_2D);

The machines where this worked were running OpenGL 4.3 or 4.4 and had dedicated graphics cards. The other machine I tried it on, where the texture wasn't being rendered to the screen, was running OpenGL 4.0 on an integrated graphics card. I don't think the graphics card type should make the difference, but I'm definitely leaning toward the OpenGL version being the cause of the issue.

I do have a shader pipeline that I've been setting up, and I'm thinking maybe I need to convert my texture rendering to use it in order to display textures properly on all OpenGL 2.0+ implementations.

 

I'm not entirely sure, though, and would appreciate clarification on whether this is what I need to do, whether there is a better way, and/or what differs between these OpenGL versions that causes this not to work on the <= 4.0 machine.
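For reference, a shader-based version of the textured, tinted quad above could use a GLSL pair along these lines. This is only a sketch: every name here (aPos, aUV, uTint, uTexture) is a placeholder, not something from the engine in question.

```cpp
#include <string>

// Hypothetical GLSL 3.30 shader pair for the textured, tinted quad above.
// The C++ side would compile these with glCreateShader / glShaderSource /
// glCompileShader, link them with glCreateProgram / glLinkProgram, upload
// the six vertices (position + UV) into a VBO with glBufferData, describe
// them with glVertexAttribPointer, and draw with glDrawArrays.

const std::string kVertexSrc = R"(#version 330 core
layout(location = 0) in vec2 aPos;   // quad corner position
layout(location = 1) in vec2 aUV;    // texture coordinate
out vec2 vUV;
void main() {
    vUV = aUV;
    gl_Position = vec4(aPos, 0.0, 1.0);
})";

const std::string kFragmentSrc = R"(#version 330 core
in vec2 vUV;
uniform sampler2D uTexture;  // the bound texture unit
uniform vec4 uTint;          // replaces the glColor4f tint
out vec4 fragColor;
void main() {
    // Same effect as GL_MODULATE + glColor4f in the fixed pipeline.
    fragColor = texture(uTexture, vUV) * uTint;
})";
```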


Edited by Oammar, 29 July 2014 - 07:18 PM.



#2 Ashaman73   Crossbones+   -  Reputation: 7525


Posted 29 July 2014 - 10:56 PM


I don't think the graphics card type should make the difference

This is most likely the reason. Different hardware and drivers will often produce different behavior, ranging from unsupported features and simple driver bugs to different interpretations of shader code (AMD = more strict, NVIDIA = more relaxed).

 

The version is unlikely to be the cause, because you as the coder define which version's features you use. E.g. you are using immediate-mode calls (glTexCoord2f etc.), which are deprecated and, in the core profile of higher versions, removed. The driver's reported version just tells you the maximum OpenGL version you could use.
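As a rough rule of thumb (a simplified sketch, not a driver-accurate check): immediate mode was deprecated in OpenGL 3.0, removed in 3.1 unless the driver exposes GL_ARB_compatibility, and from 3.2 onward survives only in the compatibility profile.

```cpp
// Rough availability check for immediate-mode calls (glBegin/glEnd,
// glTexCoord2f, ...). Simplification: OpenGL 3.0 deprecated them,
// 3.1 removed them (unless GL_ARB_compatibility is exposed), and from
// 3.2 onward they survive only in the compatibility profile.
bool immediateModeAvailable(int major, int minor, bool coreProfile) {
    if (major < 3) return true;                 // 1.x / 2.x: always there
    if (major == 3 && minor == 0) return true;  // 3.0: deprecated but present
    return !coreProfile;                        // 3.1+: compatibility contexts only
}
```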



#3 Oammar   Members   -  Reputation: 186


Posted 29 July 2014 - 11:34 PM

Thanks for that information. Is the preferred method for rendering anything in modern OpenGL through the shader pipeline? I have some other direct calls I'm using for rendering lines as well, which may not be the best method either. Those lines, however, did render on OpenGL versions [1.0, 2.0) U [4.0, 4.4]. I definitely think I need to change my rendering process to use shaders. Going to have to look over some more tutorials on that.

 

[Edit]
I've been browsing around and found a good article on drawing OpenGL primitives and how it has changed across the various versions of OpenGL: http://www.falloutsoftware.com/tutorials/gl/gl3.htm - It appears I was still following the _old school_ method for drawing primitives; interesting and funny stuff. Hope this can help anyone else who needs further insight.


Edited by Oammar, 30 July 2014 - 12:41 AM.


#4 Ohforf sake   Members   -  Reputation: 1831


Posted 30 July 2014 - 02:43 AM

While immediate mode is probably the slowest way of drawing things, it should still work, at least in the compatibility profile.

My suggestion is to include an option to start the game with a debug OpenGL context and log all debug messages to a file. Then, when you test the program on different machines, don't just check whether the texture is visible; also check the log to see if the driver is complaining about anything. If the texture isn't visible because of something you did, there is usually something in the log that will tell you what went wrong.
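One way to wire this up (a sketch; the enum decoding is abbreviated): request a debug context, register a callback with glDebugMessageCallback where available (core in GL 4.3, or via KHR_debug/ARB_debug_output), and have the callback append a formatted line to a log file. The formatting helper below is plain C++ with no GL header dependency; the function name and log format are made up for illustration.

```cpp
#include <string>

// Formats one driver debug message into a log line. In a real engine this
// would be called from the GLDEBUGPROC callback registered with
// glDebugMessageCallback, after requesting a debug context, e.g. with
//   SDL_GL_SetAttribute(SDL_GL_CONTEXT_FLAGS, SDL_GL_CONTEXT_DEBUG_FLAG);
// Severity is taken as a plain unsigned int so this helper compiles without
// GL headers; 0x9146 is the value of GL_DEBUG_SEVERITY_HIGH.
std::string formatDebugMessage(unsigned int id, unsigned int severity,
                               const std::string& message) {
    std::string level = (severity == 0x9146) ? "HIGH" : "INFO";
    return "[GL " + level + "] id=" + std::to_string(id) + ": " + message;
}
```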

#5 PunCrathod   Members   -  Reputation: 277


Posted 30 July 2014 - 05:44 AM

 


I don't think the graphics card type should make the difference

This is most likely the reason. Different hardware and drivers will often produce different behavior, ranging from unsupported features and simple driver bugs to different interpretations of shader code (AMD = more strict, NVIDIA = more relaxed).


 

 

From my experience it's more about the driver than the hardware. Also, the AMD = strict, NVIDIA = relaxed rule hasn't been true for a long time. It varies more with driver versions than with which hardware you are on. With the most up-to-date drivers, AMD is actually more relaxed, allowing some incomplete textures and buffers as well as implicit casts that truncate values in shader code, while NVIDIA just gives GL_INVALID_OPERATION errors. Neither AMD nor NVIDIA goes strictly by the standard, and the strictness varies with driver versions and with what you are trying to do, so you always have to test everything on both.

 

Back to the OP.

 

Yes, you should use shaders for everything possible. They will almost always be faster and more reliable. And once you get used to rendering everything with shaders, it actually becomes easier than immediate mode. And don't be afraid of using multiple shaders. A lot of people will say to write generic code so you can reuse it as much as possible, but with shaders you actually want the opposite: make your shaders as specific as possible. Generic all-purpose shaders are usually painfully slow, and it is often much faster to use multiple slightly different shaders, even if you have to split some draw calls into two or three.

Although, that being said, I still sometimes use immediate mode when I'm debugging non-rendering code before I have established a proper shader-based rendering system, because it is super simple to get a few triangles onto the screen with it.

 

Also be careful about the version numbers. The GLSL version numbers don't match the OpenGL version numbers until OpenGL 3.3. See http://en.wikipedia.org/wiki/OpenGL_Shading_Language#Versions for a complete list of GLSL versioning.
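That mapping can be written down explicitly. A sketch based on the version table linked above (the function name is made up; GLSL did not exist before OpenGL 2.0):

```cpp
#include <string>

// Maps an OpenGL version to the matching GLSL #version directive.
// Before OpenGL 3.3 the two numbering schemes diverge; from 3.3 on
// they line up (GL 3.3 -> GLSL 330, GL 4.0 -> 400, GL 4.4 -> 440, ...).
std::string glslVersionFor(int major, int minor) {
    if (major < 2) return "";                           // no GLSL before GL 2.0
    if (major == 2) return minor == 0 ? "110" : "120";  // GL 2.0 / 2.1
    if (major == 3 && minor < 3) {                      // GL 3.0 / 3.1 / 3.2
        static const char* v[] = {"130", "140", "150"};
        return v[minor];
    }
    return std::to_string(major * 100 + minor * 10);    // GL 3.3 and later
}
```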


Edited by PunCrathod, 30 July 2014 - 05:51 AM.







