
About simple Pixel Shaders and .Net


Hi! First, sorry for my English, and apologies if I'm posting my question in the wrong forum.
I've been developing a small app in VB.Net that creates textures for use in 3D applications. The program works, but my methods are very inefficient, because I apply a lot of effects to each image pixel by pixel on the CPU. I'd like to use 2D pixel shaders written in HLSL instead, but I couldn't find any specific information on this topic (I just want to know how to apply effects, not build a full DirectX application or become a DirectX expert).
I found WPF unsuitable for combining many images and applying several effects and modifications.
I've read about Managed DirectX and APIs like SlimDX and SharpDX, and I think some of those options could work. I just want to apply effects from a .fx file to an image, in the simplest and most efficient way possible. Any suggestions and/or guides?

OK, thanks in advance, and thanks for reading and trying to decode my message!

Managed DirectX is an old API that is no longer supported by Microsoft; your best bet would be SlimDX or SharpDX to tap into DirectX via C#.

Understanding pixel shaders (and shaders in general) is only the first step, as your goal is not to render an image to the screen, but rather to render it to a texture that you'll read back — so "render targets" would be another topic to look up. To put it all together, tutorials on post-processing techniques may be helpful: blur filters or glow mapping in games use a similar setup, where you render the scene to a render target and then apply effects in exactly the manner you would be doing.
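To make the post-processing idea concrete, here is a minimal sketch of what such a pixel shader could look like in an .fx/.hlsl file. The texture, sampler, and entry-point names are illustrative, and I'm assuming a trivial grayscale effect and Shader Model 4 syntax:

```hlsl
// Sketch of a post-process pixel shader: sample the source image at the
// interpolated texture coordinate and output a modified color.
Texture2D SourceImage : register(t0);
SamplerState LinearSampler : register(s0);

float4 PSMain(float4 pos : SV_Position, float2 uv : TEXCOORD0) : SV_Target
{
    float4 color = SourceImage.Sample(LinearSampler, uv);
    // Example effect: convert to grayscale using standard luma weights.
    float luma = dot(color.rgb, float3(0.299, 0.587, 0.114));
    return float4(luma, luma, luma, color.a);
}
```

Every pixel of the render target runs this function in parallel on the GPU, which is where the speedup over a CPU-side pixel-by-pixel loop comes from.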

An alternative route would be to look into GPGPU for this, e.g. CUDA or OpenCL. That way you still leverage the GPU, but don't have to dabble as much in setting up graphics/rendering. There are probably a C# binding or two for CUDA around; I'm not sure about OpenCL. Of course, you can always write the necessary interop yourself.

Sorry for the delay in my reply. Thanks for your answer, Starnick — I now have a better idea of what to look for.
I'll start learning SlimDX/SharpDX, but I'm also interested in CUDA and OpenCL... The bad news is that there seems to be no easy solution for what I want to do. My problem is very simple, but I guess I'll have to study for a while and write a lot of code to solve it. That doesn't surprise me, but I had hoped there was a simpler way.

OK, thanks again! I'll start investigating!

It's pretty simple to code 2D filters in D3D, especially with SlimDX. With D3D10+ you don't even have to touch the screen at all: you can render completely "in the background" into an off-screen texture and stream that back to your application. (Of course, if you ever want to actually display the result, you benefit from presenting the texture on the monitor using D3D, which is vastly faster than blitting it CPU-side, since the texture is already in graphics memory.)

You could do it in CUDA or OpenCL, but that would be fairly overkill: those APIs are geared more toward scientific/floating-point computation and have less built-in support for images. (It is possible, though, and you can also have OpenCL interoperate with OpenGL, but it's probably not the easiest way.) Running a pixel shader over your texture is pretty much as fast as it gets.

If you want to use D3D, what you need to do is:
- initialize D3D
- create a texture object
- load the shaders
- create a render target as large as the texture
- bind ("attach") the texture object to the pixel shader
- render a quad (2 triangles) covering the whole render target (you can actually simplify this part but that's the traditional way of doing it)
- copy the render target into a memory buffer CPU-side once the rendering is finished
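In C# with SlimDX's Direct3D 11 wrapper, the steps above could be sketched roughly as follows. Treat this as pseudocode for the shape of the pipeline rather than copy-paste code: file names, sizes, and the omitted vertex-buffer/vertex-shader setup for the quad are all placeholders.

```csharp
using SlimDX.Direct3D11;
using SlimDX.D3DCompiler;
using SlimDX.DXGI;

// 1. Initialize D3D (no swap chain needed for off-screen rendering).
var device = new Device(DriverType.Hardware, DeviceCreationFlags.None);
var context = device.ImmediateContext;

// 2. Load the source image as a texture the shader can sample.
var source = Texture2D.FromFile(device, "input.png");
var sourceView = new ShaderResourceView(device, source);

// 3. Compile the pixel shader from the .fx file.
var bytecode = ShaderBytecode.CompileFromFile(
    "filter.fx", "PSMain", "ps_4_0", ShaderFlags.None, EffectFlags.None);
var pixelShader = new PixelShader(device, bytecode);

// 4. Create a render target the same size as the source image.
var target = new Texture2D(device, new Texture2DDescription
{
    Width = 1024, Height = 1024,   // match the source image
    Format = Format.R8G8B8A8_UNorm,
    Usage = ResourceUsage.Default,
    BindFlags = BindFlags.RenderTarget,
    ArraySize = 1, MipLevels = 1,
    SampleDescription = new SampleDescription(1, 0),
});
var targetView = new RenderTargetView(device, target);

// 5./6. Bind everything and draw a quad covering the render target
// (vertex buffer, input layout, and a trivial vertex shader omitted).
context.OutputMerger.SetTargets(targetView);
context.PixelShader.Set(pixelShader);
context.PixelShader.SetShaderResource(sourceView, 0);
context.Draw(6, 0);   // two triangles covering the whole target

// 7. To read the result back, copy the render target into a texture
// created with ResourceUsage.Staging and CpuAccessFlags.Read, then
// map it to access the filtered pixels from system memory.
```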

You can also chain the process to apply multiple filters (if you can't do them in a single shader pass): render the quad, then immediately switch to the next pixel shader, copy the render target's contents into the old texture object, and render again.

[quote name='Bacterius' timestamp='1335054568' post='4933650']
- copy the render target into a memory buffer CPU-side once the rendering is finished
[/quote]
All good, but you really don't want to do that last step. Unless you have a specific need for the data CPU-side, it will slow your program down by introducing pipeline stalls. Even if you do need the data CPU-side, you should evaluate that need and ask yourself whether moving your CPU calculations to the GPU would be a better approach.

Share this post


Link to post
Share on other sites
[quote]All good, but you really don't want to do this part of it. Unless you have a specific need for having the data CPU-side, this step will slow your program down by introducing pipeline stalls. Even if you do need to have the data CPU-side you should evaluate that need and ask yourself if moving your CPU calculations to the GPU would be a better approach.[/quote]
I was under the impression the OP actually needed the rendered data elsewhere (to save it to disk or something), which requires the texture to move to system memory somehow. Of course, if the texture is only used later as a resource for some D3D shaders, it should stay in graphics memory.
