Virtual DirectCompute

5 replies to this topic

#1 Such1   Members   

Posted 08 November 2012 - 09:12 AM

Hey guys, I'm developing a game and one of the features I use is compute shaders.
I would like to know if there is a way to run compute shaders in software instead of on hardware, so the game can still run on older hardware that doesn't support compute shaders.

#2 mrheisenberg   Members   

Posted 08 November 2012 - 11:54 AM

What are you doing with the compute shader? Absolutely everything you do in a shader can be done on the CPU (in software, as you say), but it will be a lot slower.

#3 MikeBMcL   Members   

Posted 08 November 2012 - 12:01 PM

If you use a WARP device then you can run compute shaders. It's a lot faster than a reference device, but it may still fall flat on its face in terms of real performance, so I'd be leery of doing it without some heavy perf testing on low-end and older hardware.

It could end up being a lot faster to do those calculations CPU-side using DirectXMath (or other vector math functionality) plus parallelization (e.g. PPL's parallel_for and parallel_for_each), without that being a huge time-sink in terms of writing and maintaining a separate code path for non-compute-class hardware.
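A rough sketch of that CPU-side route, assuming a hypothetical per-particle update (the Particle struct and the math are placeholders, not anything from this thread; parallel_for_each and the DirectXMath calls are the standard library pieces being referred to):

#include <DirectXMath.h>
#include <ppl.h>
#include <vector>

using namespace DirectX;

// Hypothetical data that would otherwise be processed by a compute shader.
struct Particle
{
    XMFLOAT4 position;
    XMFLOAT4 velocity;
};

void UpdateParticlesCpu(std::vector<Particle>& particles, float dt)
{
    // PPL distributes the iterations across the available CPU cores.
    concurrency::parallel_for_each(particles.begin(), particles.end(),
        [dt](Particle& p)
    {
        // DirectXMath uses SIMD types, which keeps the per-element math cheap.
        XMVECTOR pos = XMLoadFloat4(&p.position);
        XMVECTOR vel = XMLoadFloat4(&p.velocity);
        pos = XMVectorMultiplyAdd(vel, XMVectorReplicate(dt), pos); // pos += vel * dt
        XMStoreFloat4(&p.position, pos);
    });
}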

#4 Such1   Members   

Posted 08 November 2012 - 01:24 PM

I know I could do it on the CPU, but I would like to know if I can virtualize it so I don't have to recode the whole shader.
What I am computing can be very heavy, but it depends a lot on the options you choose. I know I wouldn't get the same performance on the CPU, but at least it would be supported.
So I just want to know if I can run the shader on the CPU, so I don't have to port the code from HLSL to C++.

#5 MJP   Moderators   

Posted 08 November 2012 - 02:26 PM

There's no HLSL virtual machine or anything like that. Even if there were, the CPU/GPU synchronization would have serious performance implications. The only option would be to run *everything* on a WARP device, but this would likely be slower than most GPUs.
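For later readers, the fallback being discussed boils down to: create a hardware device, check whether it can run compute shaders, and recreate the device with the WARP driver type if it can't. A minimal sketch (error handling trimmed; the function name is just for illustration):

#include <d3d11.h>

// Tries a hardware device first; falls back to WARP (software) if the
// hardware can't run compute shaders. WARP can run compute shaders,
// but expect it to be much slower for heavy workloads.
HRESULT CreateComputeCapableDevice(ID3D11Device** outDevice,
                                   ID3D11DeviceContext** outContext)
{
    const D3D_FEATURE_LEVEL levels[] = { D3D_FEATURE_LEVEL_11_0, D3D_FEATURE_LEVEL_10_0 };
    D3D_FEATURE_LEVEL obtained;

    HRESULT hr = D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE, nullptr, 0,
                                   levels, _countof(levels), D3D11_SDK_VERSION,
                                   outDevice, &obtained, outContext);
    if (SUCCEEDED(hr))
    {
        // Feature level 11_0 guarantees compute shader support; on 10_x we must ask.
        bool hasCompute = (obtained >= D3D_FEATURE_LEVEL_11_0);
        if (!hasCompute)
        {
            D3D11_FEATURE_DATA_D3D10_X_HARDWARE_OPTIONS opts = {};
            (*outDevice)->CheckFeatureSupport(D3D11_FEATURE_D3D10_X_HARDWARE_OPTIONS,
                                              &opts, sizeof(opts));
            hasCompute = opts.ComputeShaders_Plus_RawAndStructuredBuffers_Via_Shader_4_x != FALSE;
        }
        if (hasCompute)
            return hr;

        (*outContext)->Release(); *outContext = nullptr;
        (*outDevice)->Release();  *outDevice = nullptr;
    }

    // No usable hardware compute support: fall back to the WARP software device.
    return D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_WARP, nullptr, 0,
                             levels, _countof(levels), D3D11_SDK_VERSION,
                             outDevice, &obtained, outContext);
}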

#6 Such1   Members   

Posted 08 November 2012 - 03:11 PM

OK, thank you, that is what I wanted to know. I just want to try to make it compatible; I don't mind the slower performance.



