Matrix multiplication using HLSL


Hi, I am trying to multiply two matrices using HLSL. My idea is to pass the two matrices to the shader as textures and multiply them there. But then how do I retrieve the product matrix from the shader and print its values in the application code? Please suggest; I am new to HLSL.

Dr. Ramachandra Innanchaya
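
For reference, the shader side of that idea might look like the HLSL sketch below - a minimal, untested illustration assuming one matrix element per texel in R32_FLOAT textures, with a full-screen quad drawn over an N-wide, M-tall render target. MatA, MatB, K, and the register assignments are illustrative names, not anything given in the original post.

```hlsl
Texture2D<float> MatA : register(t0);   // M x K left matrix, one element per texel
Texture2D<float> MatB : register(t1);   // K x N right matrix

cbuffer Dims : register(b0)
{
    uint K;   // shared inner dimension
};

// Each pixel of the M x N render target computes one element of C = A * B.
// SV_Position holds the pixel center, so truncating .xy gives (column, row).
float main(float4 pos : SV_Position) : SV_Target
{
    uint col = (uint)pos.x;   // output column j
    uint row = (uint)pos.y;   // output row i

    float sum = 0.0f;
    for (uint k = 0; k < K; ++k)
    {
        // C[i][j] += A[i][k] * B[k][j]
        sum += MatA.Load(uint3(k, row, 0)) * MatB.Load(uint3(col, k, 0));
    }
    return sum;   // written to an R32_FLOAT render target
}
```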

You don't.

There are only a few ways to pass data back to the application, and most of them are very slow (you write the "output" out of the pixel shader as a color, and then on the application you bind a render-target texture, render a full-screen quad, and then read the texture data back).
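
As a rough, hypothetical illustration of that flow in D3D10 (the thread contains no code, and the function, device, and dimensions here are assumptions), the render-target side might look like this:

```cpp
// Hypothetical D3D10 setup for the readback path described above: render the
// result matrix into an off-screen float texture instead of the back buffer.
// Nothing here comes from the original post; all names are illustrative.
#include <d3d10.h>

void CreateResultTarget(ID3D10Device* device, UINT M, UINT N,
                        ID3D10Texture2D** outTex,
                        ID3D10RenderTargetView** outRTV)
{
    D3D10_TEXTURE2D_DESC desc = { 0 };
    desc.Width            = N;                     // result matrix columns
    desc.Height           = M;                     // result matrix rows
    desc.MipLevels        = 1;
    desc.ArraySize        = 1;
    desc.Format           = DXGI_FORMAT_R32_FLOAT; // one float per element
    desc.SampleDesc.Count = 1;
    desc.Usage            = D3D10_USAGE_DEFAULT;   // GPU-writable only
    desc.BindFlags        = D3D10_BIND_RENDER_TARGET;

    device->CreateTexture2D(&desc, NULL, outTex);
    device->CreateRenderTargetView(*outTex, NULL, outRTV);

    // Bind the target, then draw a full-screen quad with the multiply shader:
    device->OMSetRenderTargets(1, outRTV, NULL);
    // ... set an N x M viewport, bind the input textures, Draw(...) ...
}
```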

Since you only have the precision of a single fragment to store your "output" in, you probably won't be able to reasonably store the matrix unless you jump through a bunch of hoops arranging the input data for the "matrix" (not sending a "real" matrix).
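
One common example of such a hoop, sketched below with illustrative names (MatA, MatB, and K as in the earlier shader sketch, none of it from the original post), is packing four horizontally adjacent output elements into the RGBA channels of a single texel, so an M x N result fits in an M x N/4 RGBA32_FLOAT render target:

```hlsl
Texture2D<float> MatA : register(t0);   // M x K left matrix
Texture2D<float> MatB : register(t1);   // K x N right matrix

cbuffer Dims : register(b0)
{
    uint K;   // shared inner dimension
};

// Each pixel now produces four consecutive elements of one row of C.
float4 main(float4 pos : SV_Position) : SV_Target
{
    uint row  = (uint)pos.y;
    uint col0 = ((uint)pos.x) * 4;   // first of the four packed columns

    float4 sum = 0.0f;
    for (uint k = 0; k < K; ++k)
    {
        float a = MatA.Load(uint3(k, row, 0));
        sum.x += a * MatB.Load(uint3(col0 + 0, k, 0));
        sum.y += a * MatB.Load(uint3(col0 + 1, k, 0));
        sum.z += a * MatB.Load(uint3(col0 + 2, k, 0));
        sum.w += a * MatB.Load(uint3(col0 + 3, k, 0));
    }
    return sum;   // four elements packed into one RGBA texel
}
```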

It's probably not worth it. Why are you trying to do this? It won't be faster -- in fact, it's likely to be orders of magnitude slower because of all the readback and preparation -- than doing the multiplication on the CPU.

Sounds like a GPGPU situation to me [grin]

Quote:
Original post by jpetrie
It won't be faster -- in fact, it's likely to be orders of magnitude slower because of all the readback and preparation -- than doing the multiplication on the CPU.
I have to disagree (sort of) - I've seen results, where huge matrices were concerned, that were orders of magnitude faster when done on the GPU. But we're talking hundreds if not thousands of rows/columns.

There is a tipping point where the raw power of the GPU, offset against the slow read-back, is faster than using traditional CPU techniques: the arithmetic grows as O(n^3) with the matrix size while the data transfer only grows as O(n^2), so for large enough matrices the compute swamps the readback cost. It's just that you need to be doing some pretty heavyweight mathematical simulations to regularly go beyond that threshold [oh]


Cheers,
Jack

I took a GPU course in college where we tested the relative speed of matrix multiplication on the CPU versus the GPU. Jeff is pretty much correct: for very small matrices it was faster to do the processing on the CPU than it was to send the data back and forth from the GPU. But there was a threshold beyond which the GPU clearly beat the CPU every time, by an order of magnitude or more, even with all the write-back. That alone isn't too surprising, since the GPU has more raw processing power than a CPU and is extremely parallelized.

What surprised me the most was how low this threshold was - I thought it would need to be several thousand rows/columns, but the GPU started to beat the CPU at only around a few hundred. This was on my old Radeon 9800 using AGP 8x.

The only problem with the GPU is that it had lower precision than the CPU, so your results wouldn't match perfectly after all the additions and multiplications. But this error was something like 1%, which might be acceptable in some situations. You also need to use floating-point textures, and I'm not sure what the performance differences are when using those. But with newer hardware these might not even be issues anymore.

Hm, that's interesting. I would have imagined there would be a breaking point, of course (I guess I assumed we were talking about small matrices here), but like Zipster I would have thought it would be much higher.


Thank you all. Let me attempt it.

Quote:
Original post by jpetrie
You don't.

There are only a few ways to pass data back to the application, and most of them are very slow (you write the "output" out of the pixel shader as a color, and then on the application you bind a render-target texture, render a full-screen quad, and then read the texture data back).



How do I bind a render-target texture and read the texture data back using D3D10?

Dr. Ramachandra Innanchaya
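
In outline, the D3D10 version of the readback jpetrie describes copies the render target into a staging texture and maps it on the CPU. The following is an untested, hypothetical sketch: "device" and "resultTex" (the render target from the earlier setup sketch) are assumed to exist, and none of this comes from the original posts.

```cpp
// Hypothetical D3D10 readback: copy the render target into a CPU-readable
// staging texture, map it, and print the floats row by row.
#include <cstdio>
#include <d3d10.h>

void PrintResult(ID3D10Device* device, ID3D10Texture2D* resultTex)
{
    D3D10_TEXTURE2D_DESC desc;
    resultTex->GetDesc(&desc);
    desc.Usage          = D3D10_USAGE_STAGING;     // CPU-accessible copy
    desc.BindFlags      = 0;                       // staging can't be bound
    desc.CPUAccessFlags = D3D10_CPU_ACCESS_READ;

    ID3D10Texture2D* staging = NULL;
    device->CreateTexture2D(&desc, NULL, &staging);
    device->CopyResource(staging, resultTex);      // GPU -> staging copy

    D3D10_MAPPED_TEXTURE2D mapped;
    if (SUCCEEDED(staging->Map(0, D3D10_MAP_READ, 0, &mapped)))
    {
        for (UINT row = 0; row < desc.Height; ++row)
        {
            // RowPitch is in bytes and may exceed Width * sizeof(float)
            const float* elems = (const float*)
                ((const BYTE*)mapped.pData + row * mapped.RowPitch);
            for (UINT col = 0; col < desc.Width; ++col)
                printf("%f ", elems[col]);
            printf("\n");
        }
        staging->Unmap(0);
    }
    staging->Release();
}
```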

