How Do I Access the GPU from a Service?

Started by tuphdc
7 comments, last by Kylotan 15 years, 10 months ago
Hi all... I have an application that runs as a Windows service. The application takes 2 images as input, blends their colors, and then saves the new image to disk. The problem is that my application uses the CPU to do the blending and the performance is awful. Is there any way to get the GPU to do the blending for me? I'm not sure how to access the GPU without using a Windows Form... and as I said, my application runs as a service. Thanks all!
Quote:Original post by tuphdc
Is there any way to get the GPU to do the blending for me? I'm not sure how to access the GPU without using a Windows Form... and as I said, my application runs as a service.
You'll need a rendering context for that, which means you need *some* kind of window. I doubt the GPU will help anyway, since even if you blend in hardware then you'll still have to read back the data from the GPU, which isn't exactly a "fast" operation.
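That window doesn't have to be visible, though. A rough, untested sketch (error handling omitted, and the function name is just mine) of getting a context from a 1x1 window that is never shown:

#include <windows.h>
#include <GL/gl.h>
#pragma comment(lib, "opengl32.lib")

// Create a GL context from a tiny window that is never ShowWindow()'d.
HGLRC CreateHiddenContext(HDC* outDc)
{
    HWND wnd = CreateWindowA("STATIC", "", WS_POPUP, 0, 0, 1, 1,
                             NULL, NULL, GetModuleHandleA(NULL), NULL);
    HDC dc = GetDC(wnd);

    PIXELFORMATDESCRIPTOR pfd = { sizeof(pfd) };
    pfd.nVersion   = 1;
    pfd.dwFlags    = PFD_DRAW_TO_WINDOW | PFD_SUPPORT_OPENGL;
    pfd.iPixelType = PFD_TYPE_RGBA;
    pfd.cColorBits = 32;
    SetPixelFormat(dc, ChoosePixelFormat(dc, &pfd), &pfd);

    HGLRC rc = wglCreateContext(dc);
    wglMakeCurrent(dc, rc);   // GL calls are now legal on this thread
    *outDc = dc;
    return rc;
}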
Quote:Original post by agi_shi
You'll need a rendering context for that, which means you need *some* kind of window. I doubt the GPU will help anyway, since even if you blend in hardware then you'll still have to read back the data from the GPU, which isn't exactly a "fast" operation.


You don't necessarily need a window with OpenGL. I haven't done this on Windows, but on other operating systems you can attach the OpenGL context to an off-screen drawable object.
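For example, once any context is current you can render into a texture through a framebuffer object and never touch a visible surface. This is a sketch only; it assumes the FBO entry points have been loaded (via GLEW or wglGetProcAddress) and that width and height describe your images:

GLuint fbo, tex;
glGenTextures(1, &tex);
glBindTexture(GL_TEXTURE_2D, tex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, width, height, 0,
             GL_RGBA, GL_UNSIGNED_BYTE, NULL);

glGenFramebuffersEXT(1, &fbo);
glBindFramebufferEXT(GL_FRAMEBUFFER_EXT, fbo);
glFramebufferTexture2DEXT(GL_FRAMEBUFFER_EXT, GL_COLOR_ATTACHMENT0_EXT,
                          GL_TEXTURE_2D, tex, 0);
// Draw the blended quad here; the window itself is never rendered to.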

Secondly, read-backs from the GPU are fast as long as you use a modern OpenGL feature that uses DMA to do GPU downloads. Pixel buffer objects do this, for example.
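Roughly like this (again just a sketch: it assumes an extension loader, a current context, and width/height for the image):

GLuint pbo;
glGenBuffers(1, &pbo);
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo);
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ);

// With a PACK buffer bound, glReadPixels returns immediately and the
// driver DMAs the pixels across in the background.
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0);

// ... overlap other CPU work here ...

// Mapping only blocks if the transfer hasn't finished yet.
void* pixels = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY);
// write 'pixels' to disk
glUnmapBuffer(GL_PIXEL_PACK_BUFFER);
glBindBuffer(GL_PIXEL_PACK_BUFFER, 0);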


http://www.nvidia.com/object/cuda_home.html

The CUDA stuff might contain info on this...


Ratings are Opinion, not Fact
Quote:Original post by wodinoneeye
http://www.nvidia.com/object/cuda_home.html
The CUDA stuff might contain info on this...
What he wants to do is graphics-based; CUDA is aimed at general-purpose computing.
I'd be inclined to think that if you're merely blending 2 images together, the CPU should be more than adequate for the task if the code is optimised, and that optimising such code would be quicker and easier than going down the GPU route. The bottleneck should be the writing to disk, not the blending.
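Even a completely naive 50/50 blend, something like the sketch below (hypothetical interleaved 8-bit buffers a and b), tends to be limited by memory and disk bandwidth rather than by the arithmetic:

#include <cstddef>

// Average two 8-bit image buffers of equal size, channel by channel.
void Blend(const unsigned char* a, const unsigned char* b,
           unsigned char* out, std::size_t bytes)
{
    for (std::size_t i = 0; i < bytes; ++i)
        out[i] = (unsigned char)(((unsigned)a[i] + (unsigned)b[i]) >> 1);
}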
Are you using your own blending routine or GDI+? If you're not already using GDI+, you may want to look into it, as the performance there seems quite adequate to me.
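In case it helps, a 50% blend there is essentially one DrawImage call with an alpha-scaling ColorMatrix. Untested sketch in C++ (placeholder file names; GdiplusStartup and the encoder lookup for saving are omitted):

#include <windows.h>
#include <gdiplus.h>
#pragma comment(lib, "gdiplus.lib")
using namespace Gdiplus;

void BlendImages()
{
    Bitmap bottom(L"input1.png");
    Bitmap top(L"input2.png");
    Graphics g(&bottom);

    // Identity matrix except for the alpha row, scaled to 0.5.
    ColorMatrix cm = {{
        {1,0,0,0,0}, {0,1,0,0,0}, {0,0,1,0,0},
        {0,0,0,0.5f,0}, {0,0,0,0,1}
    }};
    ImageAttributes attrs;
    attrs.SetColorMatrix(&cm);

    g.DrawImage(&top,
                Rect(0, 0, bottom.GetWidth(), bottom.GetHeight()),
                0, 0, top.GetWidth(), top.GetHeight(),
                UnitPixel, &attrs);
    // 'bottom' now holds the blended result; save it with Bitmap::Save().
}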
Kippesoep
Thanks all... the application is currently using GDI+.

I know GDI+ isn't a high-performance mechanism, but it was quick to get working.

Right now I am blending very large HD images, and I am going to need to expand this functionality in the future (tinting, hue adjustments, etc.).

No matter how optimized CPU code is, generally speaking it can only work on one pixel at a time (on a single-processor system).

A GPU can easily outperform a CPU in this type of application... hence the need.
But the problem is that your bottleneck almost certainly is not the blending of each pixel. Your problem is almost certainly going to be either obtaining the pixels to blend in the first place, writing the blended pixels back to disk, or both. The GPU does not help there at all; in fact it hinders, as there is time spent pushing the data out to the GPU and pulling it back again. A GPU is optimised for a certain type of task, i.e. taking several textures which tend to remain static, blending them repeatedly, and throwing away the results each time. This is not your usage pattern.

This topic is closed to new replies.
