How Do I Access the GPU from a Service?

This topic is 3458 days old which is more than the 365 day threshold we allow for new replies. Please post a new topic.

If you intended to correct an error in the post then please contact us.

Hi all. I have an application that runs as a Windows service. It takes two images as input, blends their colours, and saves the new image to disk. The problem is that my application does the blending on the CPU and the performance is awful. Is there any way to get the GPU to do the blending for me? I'm not sure how to access the GPU without using a Windows Form, and as I said, my application runs as a service. Thanks all!
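For reference, the operation being described (combining two images' colour channels) can be sketched in plain Python. The equal 50/50 weighting is an assumption; the post doesn't say which blend mode is used:

```python
# Sketch of a per-pixel colour blend: out = alpha * a + (1 - alpha) * b.
# Images are represented here as lists of rows of (R, G, B) tuples.

def blend_pixels(a, b, alpha=0.5):
    """Blend two (R, G, B) tuples channel by channel."""
    return tuple(int(alpha * ca + (1 - alpha) * cb) for ca, cb in zip(a, b))

def blend_images(img_a, img_b, alpha=0.5):
    """Blend two equally sized images, pixel by pixel."""
    return [[blend_pixels(pa, pb, alpha) for pa, pb in zip(ra, rb)]
            for ra, rb in zip(img_a, img_b)]

red  = [[(255, 0, 0)] * 2] * 2   # 2x2 solid red "image"
blue = [[(0, 0, 255)] * 2] * 2   # 2x2 solid blue "image"
print(blend_images(red, blue))   # every pixel becomes (127, 0, 127)
```

The nested Python loop is exactly the slow path the poster is complaining about; the later replies discuss faster alternatives.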

You'll need a rendering context for that, which means you need *some* kind of window. I doubt the GPU will help anyway: even if you blend in hardware, you'll still have to read the data back from the GPU, which isn't exactly a "fast" operation.

Quote:
Original post by agi_shi
You'll need a rendering context for that, which means you need *some* kind of window. I doubt the GPU will help anyway: even if you blend in hardware, you'll still have to read the data back from the GPU, which isn't exactly a "fast" operation.


You don't necessarily need a window with OpenGL. I haven't done this on Windows, but on other operating systems you can attach the OpenGL context to an off-screen drawable.

Secondly, read-backs from the GPU are fast as long as you use a modern OpenGL feature that performs the download with DMA; pixel buffer objects do this, for example.
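The pixel-buffer-object read-back mentioned above typically follows this sequence (pseudocode using the OpenGL C API call names; the buffer size and pixel format are assumptions):

```
# Asynchronous read-back via a pixel buffer object (PBO).
glGenBuffers(1, &pbo)
glBindBuffer(GL_PIXEL_PACK_BUFFER, pbo)
glBufferData(GL_PIXEL_PACK_BUFFER, width * height * 4, NULL, GL_STREAM_READ)

# ... render / blend into the framebuffer ...

# With a PBO bound to GL_PIXEL_PACK_BUFFER, glReadPixels takes a buffer
# offset instead of a pointer and returns immediately; the driver DMAs
# the pixels into the buffer in the background.
glReadPixels(0, 0, width, height, GL_BGRA, GL_UNSIGNED_BYTE, 0)

# Do other CPU work here, then map the buffer to reach the data.
ptr = glMapBuffer(GL_PIXEL_PACK_BUFFER, GL_READ_ONLY)
# ... copy or save the pixels pointed to by ptr ...
glUnmapBuffer(GL_PIXEL_PACK_BUFFER)
```

The point of the pattern is that the `glReadPixels` call no longer stalls the CPU waiting for the GPU; the cost is hidden behind whatever work happens before the map.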

Quote:
Original post by wodinoneeye


http://www.nvidia.com/object/cuda_home.html

The CUDA stuff might contain info on this.


What he wants to do is graphics-based; CUDA is aimed at general-purpose computation.

I'd be inclined to think that if you're merely blending two images together, the CPU should be more than adequate for the task if the code is optimised, and that optimising such code would be quicker and easier than going down the GPU route. The bottleneck should be the write to disk, not the blending.

Are you using your own blending routine or GDI+? If it's not the latter, you may want to look into GDI+, as its performance seems quite adequate to me.

Thanks all. The application is currently using GDI+.

I know GDI+ isn't a high-performance mechanism, but it was quick to get working.

Right now I am blending very large HD images, but I am going to need to expand this functionality in the future (tinting, hue adjustments, etc.).

No matter how optimized CPU code is, generally speaking it can only work on one pixel at a time (on a single-processor system).

A GPU can easily outperform a CPU in this type of application, hence the need.
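For what it's worth, optimised CPU code is not limited to one pixel at a time: vectorised array code (and the SIMD instructions beneath it) touches many pixels per operation. A hypothetical tint, assuming NumPy and made-up per-channel factors, might look like:

```python
import numpy as np

def tint(img: np.ndarray, factors=(1.2, 1.0, 0.8)) -> np.ndarray:
    """Scale each colour channel by a factor, clamped to the 8-bit range.

    The whole image is processed at once; no per-pixel Python loop.
    """
    scaled = img.astype(np.float32) * np.asarray(factors, dtype=np.float32)
    return np.clip(scaled, 0, 255).astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)  # tiny grey stand-in image
print(tint(img)[0, 0].tolist())  # [120, 100, 80]
```

Hue adjustments would follow the same shape: one array expression over the whole image, rather than a pixel-at-a-time loop.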


But the problem is that your bottleneck almost certainly is not the blending of each pixel. It is almost certainly going to be either obtaining the pixels to blend in the first place, writing the blended pixels back to disk, or both. The GPU does not help there at all; in fact it hinders, as time is spent pushing the data out to the GPU and pulling it back again. A GPU is optimised for a certain type of task, i.e. taking several textures which tend to remain static, blending them repeatedly, and throwing away the results each time. That is not your usage pattern.
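One way to check this claim on your own data is to time the blend separately from the disk write. A rough sketch, assuming NumPy; the image size, random data, and raw byte dump are placeholders for the real pipeline:

```python
import os
import tempfile
import time

import numpy as np

# Stand-in "HD image" pair; real code would load these from files.
h, w = 2160, 3840
a = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)
b = np.random.randint(0, 256, (h, w, 3), dtype=np.uint8)

t0 = time.perf_counter()
blended = ((a.astype(np.uint16) + b) // 2).astype(np.uint8)  # 50/50 blend
t1 = time.perf_counter()

path = os.path.join(tempfile.gettempdir(), "blended.raw")
with open(path, "wb") as f:
    f.write(blended.tobytes())  # raw dump; real code would encode PNG/JPEG
t2 = time.perf_counter()

print(f"blend: {t1 - t0:.3f}s  write: {t2 - t1:.3f}s")
```

Whatever the exact numbers on a given machine, splitting the measurement this way shows which stage actually deserves the optimisation effort.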

