kuruczgyurci

Hashing with compute shaders


Hello!

 

I am trying to run a hashing algorithm (SHA-256) on the GPU with a compute shader. I am not an expert in cryptography, so I just took some C++ code and pasted it into HLSL, but I ran into some problems. The main issue is that char types don't exist in HLSL. I tried some other keywords like byte, but none of them worked. (I also tried bool, since it is stored in one byte, but it clamped my values to either 1 or 0.)

 

So my question is:

 

Is there a way to store bytes in HLSL,

or

a way to perform SHA-256 with some other data type (like int) without losing performance?

 

Thanks,
kuruczgyurci


This sounds like a task that would be better suited to having the kernel written in OpenCL or CUDA; not only for writing the actual kernel, but for getting the data in & out (I'm assuming the hashes are being used for something like litecoin/bitcoin mining or hash colliding).


There are tons of both OpenCL and CUDA solutions out there. As far as I know, most graphics cards support DirectX but not OpenCL or CUDA. Also, if compute shaders are meant to be used for more than just graphical purposes, why didn't they add a 1-byte data type? It's pretty basic for general-purpose calculations.


You don't need a char type to implement SHA-256. You can do the padding CPU-side; it's not something the GPU can do well in the general case. But if you have a fixed input size, you can do the padding GPU-side using bit masks (no char types needed, just plain ints).

In practice, though, a general-purpose implementation is always going to perform poorly in a parallel environment, where any existing overhead is magnified a hundred-fold, so you have to tune the code to your needs by optimizing the parts that you know will always work the same. Fixed input sizes, a constant prefix/suffix, characteristics of the output hash you are looking for: all of this you can use to make the GPU code leaner.

 

Essentially, to put it bluntly: if you need a char type to implement SHA-256 on the GPU, you're either not optimizing well enough, or you don't know enough about your target inputs and outputs to make efficient use of the GPU in the first place.

 


There are tons of both OpenCL and CUDA solutions out there. As far as I know, most graphics cards support DirectX but not OpenCL or CUDA. Also, if compute shaders are meant to be used for more than just graphical purposes, why didn't they add a 1-byte data type? It's pretty basic for general-purpose calculations.

 

Sure, but how many graphics cards support DX10/DX11 compute shaders and at the same time don't support some flavour of OpenCL/CUDA? Every NVIDIA card in the past three years or so has had CUDA. OpenCL has been around for about four years. DX11 is not much older than that. And if you want optimal number-crunching performance, you pretty much need to use them to do all the low-level tweaks needed to get the real speed benefits. Compute shaders are not slow, but they were designed with graphics interop in mind and are less configurable than full-blown compute languages.


So you are basically saying that in order to write optimal code, I need to know the algorithm well instead of just copying it.

 

'Bacterius', on 29 Dec 2013 - 01:21 AM, said:

Sure, but how many graphics cards support DX10/DX11 compute shaders and at the same time don't support some flavour of OpenCL/CUDA?

 

Well, you are right. I said compute shader because it's the straightforward method. But because HLSL is the same for all shaders, it's just a copy-paste to convert it into a pixel shader and run it on older devices.

 

And there is another thing I have been thinking about for a long time, and it might be the solution: is it possible (even with some crazy hacking) to write DirectX shaders in assembly? Or is it just too complicated because of the different video card types?



So you are basically saying that in order to write optimal code, I need to know the algorithm well instead of just copying it.

 

And also what you intend to do with it. General purpose implementations are going to be much slower than specialized implementations, especially on a GPU.

 


Well, you are right. I said compute shader because it's the straightforward method. But because HLSL is the same for all shaders, it's just a copy-paste to convert it into a pixel shader and run it on older devices.

 

I don't think compute shaders map 1:1 to pixel shaders. The language may be the same, but you don't have structured buffers, unordered access views, or configurable work groups in DX9 pixel shaders, so it'd be like the old days of GPGPU: writing code that uses vertices and textures to store inputs and outputs.

 

What cards do you need to support? What is your goal? We can't help you choose technologies or ways to implement an algorithm if we don't know what you are looking for.

 


And there is another thing I have been thinking about for a long time, and it might be the solution: is it possible (even with some crazy hacking) to write DirectX shaders in assembly? Or is it just too complicated because of the different video card types?

 

I don't think you can do that, at least not in DX10+. Perhaps in DX9 it's possible, I'm not sure.

'Bacterius', on 29 Dec 2013 - 02:15 AM, said:

What cards do you need to support? What is your goal?

 

I don't have a big project or anything like that, so I don't really have specifications either. I have an old laptop with an integrated graphics card with only DX10 support. My goal is (as Necrolis mentioned) to write a Bitcoin miner. I don't really want to use it for anything; I just want to learn. (There are way better solutions for hashing than a GPU.) This problem just popped into my head, and I thought it would be better to solve it than to ignore it.

 

 

'Bacterius', on 29 Dec 2013 - 02:15 AM, said:

I don't think you can do that, at least not in DX10+. Perhaps in DX9 it's possible, I'm not sure.

 

Is it worth some searching? Because I heard that people used to write shaders in assembly before HLSL, OpenCL, or CUDA existed.



I don't have a big project or anything like that, so I don't really have specifications either. I have an old laptop with an integrated graphics card with only DX10 support. My goal is (as Necrolis mentioned) to write a Bitcoin miner.
What a coincidence! A friend of mine mentioned cryptocoins a couple of weeks ago, so I started looking at them.

Good news: most miners are open source, so you can have a look at them. Their kernels are a bit odd, however. I'm currently playing with BFGminer.

'Krohm', on 30 Dec 2013 - 09:25 AM, said:

What a coincidence! A friend of mine mentioned cryptocoins a couple of weeks ago, so I started looking at them.
Good news: most miners are open source, so you can have a look at them. Their kernels are a bit odd, however. I'm currently playing with BFGminer.

 

The source of the other miners will definitely be helpful for the networking part. (I started about two weeks ago too.)

 

Thanks for the answers! This was my first post on gamedev.net, and I am surprised at how quickly I got help.

