OpenCL ports and wrappers

I'm starting a project in the next few months for an independent study and I know that I want to use OpenCL. I at least know my way around OpenGL, and I've got years of experience in C/C++, but I'm wondering what approach I should take to learning / using OpenCL. I know there are libraries and wrappers such as PyOpenCL and CLython that let you write OpenCL applications in Python, giving you an OOP environment during development. These libraries all seem to bill themselves as good for "fast prototyping of OpenCL applications"... I'm wondering what the downside is. If I'm going to learn OpenCL, should I just stick with the C API and go straight to the hardware, or is it really that much easier with Python? Also, is there any real downside to using Python from a performance standpoint? I don't see how there could be, considering the kernels are compiled for the GPU and/or CPU anyway, and at that stage the fact that you used Python to generate the OpenCL code (or however it works) shouldn't really matter much.

So in essence, what am I losing if I take one of these routes? Is it worth my time to use a wrapper to learn it quickly and get things done considering I'll have 10 weeks to finish my studies? Any information someone who's versed in the material or has used one of these libraries can give me would be awesome.
Did you know that there's already an official C++ wrapper for object-oriented use of OpenCL?
It's in here: http://www.khronos.org/registry/cl/
specifically: http://www.khronos.org/registry/cl/api/1.2/cl.hpp
docs here: http://www.khronos.org/registry/cl/specs/opencl-cplusplus-1.1.pdf

It's relatively simple and painless to use. First, include cl.hpp.
Then you create a cl::Context, some cl::Kernel objects, and some cl::Buffer or cl::Image objects, and finally you load your kernels into a cl::CommandQueue for execution.
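For example, a minimal vector-add along those lines might look something like this. This is just a rough sketch of my own (kernel name, variable names, and the "pick the first platform/device" shortcut are all illustrative, and error handling is omitted):

```cpp
#include <CL/cl.hpp>
#include <cstring>
#include <iostream>
#include <vector>

// Kernel source inlined as a string; OpenCL compiles it at runtime.
const char* vecAddSrc =
    "__kernel void vec_add(__global const float* a,\n"
    "                      __global const float* b,\n"
    "                      __global float* c) {\n"
    "    size_t i = get_global_id(0);\n"
    "    c[i] = a[i] + b[i];\n"
    "}\n";

int main() {
    const size_t n = 1024;
    std::vector<float> a(n, 1.0f), b(n, 2.0f), c(n);

    // Grab the first platform and its devices, and build a context around them.
    std::vector<cl::Platform> platforms;
    cl::Platform::get(&platforms);
    std::vector<cl::Device> devices;
    platforms[0].getDevices(CL_DEVICE_TYPE_ALL, &devices);
    cl::Context context(devices);

    // Compile the kernel source and wrap the entry point in a cl::Kernel.
    cl::Program::Sources sources(1, std::make_pair(vecAddSrc, std::strlen(vecAddSrc)));
    cl::Program program(context, sources);
    program.build(devices);
    cl::Kernel kernel(program, "vec_add");

    // Device buffers; the inputs are copied from host memory at creation.
    cl::Buffer bufA(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), &a[0]);
    cl::Buffer bufB(context, CL_MEM_READ_ONLY | CL_MEM_COPY_HOST_PTR, n * sizeof(float), &b[0]);
    cl::Buffer bufC(context, CL_MEM_WRITE_ONLY, n * sizeof(float));

    kernel.setArg(0, bufA);
    kernel.setArg(1, bufB);
    kernel.setArg(2, bufC);

    // Queue the kernel over n work-items and read the result back (blocking).
    cl::CommandQueue queue(context, devices[0]);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(n));
    queue.enqueueReadBuffer(bufC, CL_TRUE, 0, n * sizeof(float), &c[0]);

    std::cout << "c[0] = " << c[0] << std::endl;  // expect 3
    return 0;
}
```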

There are OpenGL-interop versions of the buffer & image classes in cl.hpp (cl::BufferGL, cl::Image2DGL) that wrap existing VBOs or textures for speedy display, too, so if you already have an OpenGL project that you can use for "fast prototyping", you should be able to incorporate a CL context with some texture-processing kernels and see results straight away.
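The interop path looks roughly like the sketch below. Note the assumptions: the cl::Context has to be created with GL-sharing properties (platform-specific and omitted here), a GL context must be current, and the function name and "vbo" handle are made up for illustration.

```cpp
#include <CL/cl.hpp>
#include <vector>

// Run an already-built kernel so it writes directly into an existing GL buffer object.
void runKernelOnGLBuffer(cl::Context& context, cl::CommandQueue& queue,
                         cl::Kernel& kernel, unsigned int vbo, size_t count) {
    // Wrap the GL buffer object as an OpenCL memory object.
    cl::BufferGL clVbo(context, CL_MEM_WRITE_ONLY, vbo);
    kernel.setArg(0, clVbo);

    // GL has to hand the buffer over to CL before the kernel touches it.
    std::vector<cl::Memory> glObjects(1, clVbo);
    queue.enqueueAcquireGLObjects(&glObjects);
    queue.enqueueNDRangeKernel(kernel, cl::NullRange, cl::NDRange(count));
    queue.enqueueReleaseGLObjects(&glObjects);
    queue.finish();  // make sure CL is done before GL draws from the VBO
}
```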


Last I checked, though, NVidia weren't using cl.hpp in their GPU computing SDK, and their OpenCL examples were pretty crudely ported from their CUDA examples, accessing the OpenCL API through some rather impenetrable C code.

ATI's Stream SDK (aka AMD APP SDK) has some decent examples of proper use of the C++ wrapper, though. ATI seem to be a lot more interested in OpenCL than NVidia are, and this is reflected in their sample code. So even if you're on or targeting NVidia hardware specifically, you may as well install the AMD APP SDK just to have better sample code to learn from. Most of it should still compile & run with few modifications, regardless of your hardware setup - but bear in mind that there are OpenCL extensions that may be unsupported on one platform or the other.
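One way to guard against missing extensions is to query the device before relying on them; a quick sketch (here "device" is whichever cl::Device you picked, and cl_khr_gl_sharing is just the example relevant to the GL interop above):

```cpp
// Check the device's extension string before using an optional feature.
std::string extensions = device.getInfo<CL_DEVICE_EXTENSIONS>();
bool hasGLSharing = extensions.find("cl_khr_gl_sharing") != std::string::npos;
```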
Yeah, after I posted this I did a bit more research and quickly discovered the awesomeness of the C++ bindings for OpenCL. Maybe that's my answer then. Given the amount of experience I have with C++ versus Python, it'd probably be faster for me to go that route than to learn what's practically a new language.

