[.net] gpgpu-type application

Started by
5 comments, last by cellgen 15 years, 9 months ago
Hi guys, forum noob here. Greetings, everyone. I was hoping someone could point me in the right direction, please. I would like to convert a long-running, number-crunching simulator to run off the CPU. So far I've been looking at NVIDIA's CUDA and Microsoft's Accelerator. They look pretty good, but this evening I came across the Tao Framework and several other interfaces/libraries/wrappers. Can anyone direct me toward a library that can harness the GPU's power? Ideally something beyond version 1.0, with widespread adoption, etc. I'm not at all interested in actually displaying graphics, merely in utilising the GPU to iterate over various intertwined compound arrays. Thanks, everyone :)
CUDA is the most robust and mature library out there right now, though it's far from perfect.
Quote:Original post by Sneftel
CUDA is the most robust and mature library out there right now, though it's far from perfect.


Thanks for the prompt response. It does look very good indeed; likely a steeper learning curve than, say, Accelerator, but the returns do seem worthwhile.



BTW, is there any info about the upgrade cycle from ATI/NVIDIA? My recent searches show a lot of interest in .NET interfaces to GPGPU. Is either side working on a decent .NET interface likely to be released any time soon?

Thanks again.

Cheers.

<edit> Sorry, I should point out (if only for politeness' sake) that I'm a little C#-bound, due to the reporting interfaces and genetic interfaces already established.
</edit>
Quote:My recent searches show a lot of interest in .NET interfaces to GPGPU. Is either side working on a decent .NET interface likely to be released any time soon?

Heh! Don't hold your breath. CUDA is deeply tied to C/C++, both technologically and politically; reworking it for the CLR would be nearly impossible. Individual CUDA applications, though, could probably be exposed to .NET without too much trouble. I know of instances where CUDA code has interoperated with Haskell.
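Roughly, the native side just needs to expose plain C entry points over flat arrays, and the managed host calls across via P/Invoke. Here's a sketch of that shape (the function name and signature are made up, and a CPU loop stands in for the actual kernel launch so it compiles on its own):

```c
#include <stddef.h>

/* Hypothetical C-ABI entry point for a native compute library. In a
 * real build this function would copy the arrays to the device and
 * launch a CUDA kernel; the plain loop below is a CPU stand-in so
 * the sketch is self-contained. */
void sim_saxpy(float a, const float *x, float *y, size_t n)
{
    for (size_t i = 0; i < n; ++i)
        y[i] = a * x[i] + y[i];  /* y = a*x + y, element-wise */
}
```

The C# side would then declare the same signature under [DllImport] and let the marshaller pass the arrays across; the managed code never needs to know a GPU is involved.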
Thanks again Sneftel, you've been a great help - much appreciated.
BTW, I've been somewhat NVIDIA-centric here, mostly because NVIDIA currently has the only practical low-level GPGPU solution. ATI's CTM is DOA, though I understand they're working on a replacement. I don't think cross-vendor GPGPU solutions are going to be as practical as CUDA in the near term; it's just not possible to maintain performance with that much hardware abstraction. The two major vendors have very different hardware designs, and designing algorithms to be efficient on both would be impossible. Once more is known about The Right Way To Do GPU Computation, perhaps a good high-level solution will come along... but the hardware simply presents too many different (and potentially useful) approaches to abstract away.
I've seen nothing at all of ATI's approach (other than marketing-speak relating to Fusion vapourware). I agree CUDA is the likeliest to succeed, but I'd say non-graphics applications can afford to run slower (relative to our rendering peers) and still see a massive performance boost, GPU versus CPU.


For example: I took the Life example from MS Accelerator in VB.NET and wrote the same thing in native VB.NET, and found the poorly-optimised Accelerator version TWICE as fast as the (equally poorly-optimised) VB.NET version.
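To make the comparison concrete, the inner update in Life is essentially this per-cell loop: each output cell depends only on its eight neighbours, which is exactly the data-parallel shape Accelerator (or CUDA) maps onto the GPU. This is a plain C sketch of the standard rules; the flat-array layout and wrap-around edges are my own choices, not Accelerator's API:

```c
/* One Game of Life generation over a w-by-h grid stored as a flat
 * array of 0/1 cells. Every output cell can be computed independently,
 * which is what makes the loop a good fit for GPU execution. */
void life_step(const unsigned char *cur, unsigned char *next, int w, int h)
{
    for (int y = 0; y < h; ++y) {
        for (int x = 0; x < w; ++x) {
            int n = 0;  /* live-neighbour count */
            for (int dy = -1; dy <= 1; ++dy)
                for (int dx = -1; dx <= 1; ++dx) {
                    if (dx == 0 && dy == 0)
                        continue;
                    int nx = (x + dx + w) % w;  /* toroidal wrap */
                    int ny = (y + dy + h) % h;
                    n += cur[ny * w + nx];
                }
            /* Standard rules: birth on 3 neighbours, survival on 2 or 3. */
            next[y * w + x] = (n == 3) || (cur[y * w + x] && n == 2);
        }
    }
}
```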

I run genetic algorithm simulations that take many days of machine time. If I could double my throughput in 2008, I'd be more than happy.

More than that would be a luxury, I suppose. But that wouldn't stop me from looking to double it again.

I'm still new to this GPU stuff, and it's hard to find non-graphics-related information, so thanks again for your help.

I'll post back results when I get them.


Cheers.

