Issue with OpenGL demo using Catalyst Drivers (Linux and Windows)

Started by theWatchmen
4 comments, last by theWatchmen 9 years, 8 months ago

Hi everyone,
I'm developing a small demo for behaviour trees and I'm having some issues when running it on AMD hardware.
I've tested my code with four configurations:

  1. MacBook Pro (Nvidia GT 650M) - fine
  2. Desktop with CentOS 6.5 (Nvidia Quadro FX) - fine
  3. Desktop with Windows 7 64-bit (AMD HD 7950 with Catalyst 14.4) - slow
  4. Desktop with Fedora 19 (AMD HD 7950 with Catalyst 14.4) - slow

3 and 4 are actually the same machine. The code is not highly optimized, but it's not doing anything too complex either: I have a grid (which I render using GL_POINTS), a line that represents the path found by A*, and a moving agent. The grid has about 10k elements; if I remove it, the demo runs better, but still not perfectly.
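
To give an idea of the scale involved, the grid draw boils down to something like this (a rough sketch; the names and setup here are illustrative, not lifted from the repo):

    // Rough sketch: the grid is drawn as one GL_POINT per cell.
    // gridProgram, gridVAO and cellCount are hypothetical names.
    #include <GL/glew.h>

    void drawGrid(GLuint gridProgram, GLuint gridVAO, GLsizei cellCount)
    {
        glUseProgram(gridProgram);
        glBindVertexArray(gridVAO);
        glDrawArrays(GL_POINTS, 0, cellCount);  // ~10k points per frame
    }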

I guess it's a driver issue, as on 3 and 4 it seems to be falling back to software rendering. I profiled the code on Windows with CodeXL: a frame takes ~400 ms and appears to use mostly the CPU rather than the GPU.
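
For reference, the frame time can also be confirmed without a profiler by timing the loop directly (a sketch assuming a standard GLFW loop; render() and window stand in for the demo's actual code):

    // Rough sketch: per-frame timing with GLFW's clock.
    #include <GLFW/glfw3.h>
    #include <cstdio>

    double last = glfwGetTime();
    while (!glfwWindowShouldClose(window))
    {
        render();                // placeholder for the demo's draw calls
        glfwSwapBuffers(window);
        glfwPollEvents();

        double now = glfwGetTime();
        std::printf("frame: %.1f ms\n", (now - last) * 1000.0);  // ~400 ms here
        last = now;
    }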

As a final note, I'm using GLEW and GLFW for cross-platform development. The full code is available here:
https://bitbucket.org/theWatchmen/behaviour-trees

Thanks in advance for the help and let me know if you need any further information!

Marco


I'm not going to browse the full code to find this, but can you confirm what point size you're using?

I'm currently using 7 as the point size. The idea is to have points render at 1 pixel less than the grid cell (currently 8) to render a proper grid.

OK; the reason I'm asking is that there are certain things in OpenGL (the OS is totally irrelevant here) where a GL_VERSION mandates support for a feature, but the hardware may not actually be able to support it. In order to claim compatibility with that GL_VERSION, the driver must emulate the feature in software.

I suspect you're hitting one of those cases here, so the first thing I'd suggest is that you drop your point size to 1 and re-test.
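
A quick sanity check is to ask the driver what point sizes it claims to support (a standard glGet query; note that a driver can report a wide range and still take a slow path for large points):

    // Standard query: smallest and largest point sizes the implementation supports.
    GLfloat range[2] = { 0.0f, 0.0f };
    glGetFloatv(GL_POINT_SIZE_RANGE, range);
    std::printf("point size range: %.1f .. %.1f\n", range[0], range[1]);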

Thanks, I will try this tonight and let you know how it goes!!

Apologies for the late reply, I went on holiday and didn't get a chance to test it until now. If I change the point size to 1 it works normally, so it seems that for this card GL_POINTS are implemented in software.

I will need to move to normal triangles to make sure it works everywhere without issues.
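
For anyone hitting the same issue, the replacement is straightforward: a quad (two triangles) per cell, roughly like this (the Cell struct, cellSize and the layout are made up for illustration):

    // Rough sketch: two triangles (one quad) per grid cell instead of a large point.
    #include <GL/glew.h>
    #include <vector>

    struct Cell { int x, y; };  // hypothetical; stands in for the demo's grid data

    std::vector<float> buildGridVertices(const std::vector<Cell>& cells, float cellSize)
    {
        std::vector<float> verts;
        verts.reserve(cells.size() * 12);  // 6 vertices * 2 floats per cell
        for (const Cell& c : cells)
        {
            float x0 = c.x * cellSize,       y0 = c.y * cellSize;
            float x1 = x0 + cellSize - 1.0f, y1 = y0 + cellSize - 1.0f;  // keep the 1px gap
            verts.insert(verts.end(), { x0, y0,  x1, y0,  x1, y1,    // triangle 1
                                        x0, y0,  x1, y1,  x0, y1 }); // triangle 2
        }
        return verts;
    }

    // Upload with glBufferData, then:
    //   glDrawArrays(GL_TRIANGLES, 0, (GLsizei)(verts.size() / 2));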

Thanks a lot for your help!
