OpenGL compute shaders


Hi,

I'm trying out the OpenGL compute shader functionality using the new Catalyst 13.4 driver from AMD.

However, when I execute the compute shader, the driver restarts.

I used this example as a basis: http://wili.cc/blog/opengl-cs.html
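
Roughly, the compute shader I'm dispatching looks like this (simplified from that example, not my exact code); it just writes a gradient into an rgba32f image2D with imageStore:

#version 430
layout (local_size_x = 16, local_size_y = 16) in;
layout (rgba32f, binding = 0) uniform image2D destTex;

void main()
{
    // one invocation per texel of a 512x512 texture
    ivec2 storePos = ivec2(gl_GlobalInvocationID.xy);
    imageStore(destTex, storePos, vec4(vec2(storePos) / 512.0, 0.0, 1.0));
}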

here's the project:
https://docs.google.com/file/d/0B33Sh832pOdOaWFlVS00N040bFE/edit?usp=sharing

any idea what may be wrong?

Best regards,

Yours3!f


I'm having the same problem. I downloaded the source code from the same website and tried running it (with a few changes to make it run on Windows), but the driver just restarts. If I don't run the compute shader, it runs without crashing. I'm using an AMD graphics card as well.

Well, we should ask AMD for a working example app as proof that the driver can really run compute shaders :D

Just to be clear, you can't create a GL 4.3 context, right? (I can't)

No, I can't create a 4.3 context :(

I'm too lazy to actually boot my laptop into Windows and install that driver and the dev tools at the moment. But if anyone wants to try another example, here is one: https://github.com/progschj/OpenGL-Examples/blob/master/13compute_shader_nbody.cpp It works fine on my Nvidia desktop. I guess it should also work on AMD after changing it to use the ARB extension instead of a 4.3 context.
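
If someone wants to try that, checking for the extension instead of the context version would be something like this (just a sketch, untested on AMD; it assumes a current context and loaded GL function pointers):

#include <cstring>

bool has_compute_shaders()
{
    // on a 4.2 context the driver may still expose GL_ARB_compute_shader,
    // so check the extension list instead of requiring GL_VERSION >= 4.3
    GLint num_extensions = 0;
    glGetIntegerv(GL_NUM_EXTENSIONS, &num_extensions);
    for (GLint i = 0; i < num_extensions; ++i)
    {
        const char* ext = reinterpret_cast<const char*>(glGetStringi(GL_EXTENSIONS, i));
        if (std::strcmp(ext, "GL_ARB_compute_shader") == 0)
            return true;
    }
    return false;
}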

Your example seems to work fine, japro :) I had to hard-code the size of the shared 'tmp' variable though, because the compiler expects a constant value as the array size. What sort of frame times do you get? I'm getting around 600ms/frame on an AMD 6870. It also makes me wonder what the other example is doing to cause it to crash...

EDIT: Okay, turned on the tiling and it renders at 30ms/frame :P

Your example seems to work fine, japro :) I had to hard-code the size of the shared 'tmp' variable though, because the compiler expects a constant value as the array size.

Interesting, that was also the case for the early Nvidia beta driver. The 4.3 spec does say that gl_WorkGroupSize is a constant, precisely because you probably want to use it as an array size. On my Nvidia GTX 560 Ti this runs at 18ms/frame non-tiled and 12ms/frame with the tiled shader.
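
For reference, the difference looks like this (local size of 256 just for illustration):

#version 430
layout (local_size_x = 256) in;

// per the 4.3 spec, gl_WorkGroupSize is a constant expression, so this
// should be a legal array size:
//     shared vec4 tmp[gl_WorkGroupSize.x];
// the workaround for compilers that reject it is to repeat the size literally:
shared vec4 tmp[256];

void main()
{
    tmp[gl_LocalInvocationID.x] = vec4(0.0);
}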

hey there,

sorry for the late reply.

Japro, what you are doing is a bit different. You are reading/writing to a Texture Buffer Object.
What I'm trying to do is write to a plain old texture. I suspect AMD has problems with this, as I had similar problems with reading/writing textures when I did this through OpenGL/OpenCL interop. While they have since fixed those problems, I suspect they still exist in the OpenGL compute shader implementation.

Anyway, putting a glFinish() after the glDispatchCompute(...) seems to stop the crashing, but I still don't get anything on screen, so I suspect this may be a synchronization problem on either my side or AMD's.
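
To be explicit about what I mean by writing to the texture and synchronizing afterwards, it's roughly this (numbers and names are just illustrative, assuming a 512x512 rgba32f texture and 16x16 work groups):

// bind level 0 of the texture to image unit 0; this is the
// "plain old texture" path, as opposed to a buffer texture
glBindImageTexture(0, tex, 0, GL_FALSE, 0, GL_WRITE_ONLY, GL_RGBA32F);

glUseProgram(compute_program);
glDispatchCompute(512 / 16, 512 / 16, 1);

// instead of glFinish(), the barrier that should make the imageStore
// writes visible: GL_TEXTURE_FETCH_BARRIER_BIT if the texture is sampled
// afterwards, GL_SHADER_IMAGE_ACCESS_BARRIER_BIT if it is read with imageLoad
glMemoryBarrier(GL_TEXTURE_FETCH_BARRIER_BIT);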

best regards,

Yours3lf

