What do you mean by wasteful? The enqueueFillBuffer functions are really self-documenting: they are meant to be used to fill buffers. Mapping and unmapping is another option, but the problem is that your memory usage spikes if your buffer is large, unless you map it in small increments, in which case you end up calling the API a lot, which can also hurt performance (and isn't very elegant).
Of course, it's only available as of OpenCL 1.2, so if that's the only 1.2 feature you're using it might make more sense to stick with a less modern approach and just implement the fill function yourself. It's actually not too hard: just enqueueWriteBuffer the given pattern over the entire buffer, and remember to pass CL_FALSE so the write is non-blocking, for the love of god.
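A minimal sketch of that do-it-yourself fill, under the assumption that you tile the pattern into a host staging buffer first and then push it with a single non-blocking clEnqueueWriteBuffer. The helper name `tile_pattern` is mine, not an OpenCL API:

```c
#include <stddef.h>
#include <string.h>

/* Tile `pattern` (pattern_size bytes) across `dst` (dst_size bytes).
   dst_size is assumed to be a multiple of pattern_size, matching the
   contract of the real fill-buffer functions. */
void tile_pattern(void *dst, size_t dst_size,
                  const void *pattern, size_t pattern_size)
{
    unsigned char *out = (unsigned char *)dst;
    size_t off;
    for (off = 0; off < dst_size; off += pattern_size)
        memcpy(out + off, pattern, pattern_size);
}
```

You would then hand the staging buffer off with something like `clEnqueueWriteBuffer(queue, buf, CL_FALSE, 0, dst_size, staging, 0, NULL, NULL)` — the CL_FALSE is what keeps the write non-blocking, though you then have to keep the staging memory alive until the write completes.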
Another option is to write a kernel designed to set buffers to zero, which might actually win out over every other approach (bar zero-copy memory) for large buffers on hardware devices like GPUs, but this is rather inelegant and tedious to implement. I'm not exactly sure how enqueueFillBuffer is implemented, though; perhaps it uses such a kernel under the hood when appropriate.
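Such a kernel is short, at least; a minimal sketch in OpenCL C (the kernel and argument names are mine, and you'd still need the usual host-side boilerplate to build and launch it):

```c
/* One work-item clears one float; launch with a global size >= n
   and pass n explicitly since the global size may be rounded up. */
__kernel void zero_buffer(__global float *buf, const uint n)
{
    size_t i = get_global_id(0);
    if (i < n)
        buf[i] = 0.0f;
}
```

The tedium is all in the host code (program creation, build, kernel arguments, work-group sizing), not in the kernel itself.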
Also, don't make the mistake I did: don't foolishly memset an array of floats to zero. When I did that, they all came out as NaN in the OpenCL kernel for some reason.
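For what it's worth, memset works on bytes, not element values; an all-zero byte pattern happens to be 0.0f under IEEE 754, so zeroing is the one fill value that is safe, while any nonzero fill byte produces a garbage bit pattern rather than the float you might expect. A small demonstration (the helper is just for illustration):

```c
#include <string.h>

/* Fill a float's bytes with `byte_value` and return the result,
   to show that memset operates on bytes, not on float values. */
float memset_float(int byte_value)
{
    float f;
    memset(&f, byte_value, sizeof f);
    return f;
}
```

So if the floats came out as NaN, something other than a plain memset-to-zero was likely going on (a wrong size argument, or a nonzero fill byte, for example).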
Edited by Bacterius, 20 March 2013 - 04:50 AM.
The slowsort algorithm is a perfect illustration of the multiply and surrender paradigm, which is perhaps the single most important paradigm in the development of reluctant algorithms. The basic multiply and surrender strategy consists in replacing the problem at hand by two or more subproblems, each slightly simpler than the original, and continue multiplying subproblems and subsubproblems recursively in this fashion as long as possible. At some point the subproblems will all become so simple that their solution can no longer be postponed, and we will have to surrender. Experience shows that, in most cases, by the time this point is reached the total work will be substantially higher than what could have been wasted by a more direct approach.
- Pessimal Algorithms and Simplexity Analysis