GPGPU Precision

MilfredCubicleX

Hello. I recently wrote two small Mandelbrot apps, one for the GPU and one for the CPU. The GPU version used HLSL under DirectX and ran in real time (~40 fps) at 640x480 with a small maximum iteration count (64). The CPU version takes about 30 seconds to produce the same image.

I'd like to write an application that can zoom very deep into the Mandelbrot fractal (it doesn't have to be real time), but the GPU lacks precision and the CPU is slooooow. I was thinking of going the GPGPU route, rendering images and copying the backbuffer to a file, but I don't know of any way to increase the precision of GPU calculations. On the CPU you can get arbitrary precision (with digit strings, for example), but there are no strings in the GPU world. Would it be possible to use multiple integers as one big fixed-point number? Any thoughts? -Steve

My guess is that your CPU test used poorly optimized code, because this is precisely the kind of workload CPUs are good at.
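
For scale: 640x480 pixels at a 64-iteration cap is at most about 20 million inner iterations, which a plain scalar loop in doubles should finish in a fraction of a second on a modern CPU. A minimal reference sketch (the viewing window and the iteration-sum printout are placeholders, just there to make it self-contained):

```cpp
#include <cstdio>

// Baseline escape-time Mandelbrot in plain doubles; image size and
// iteration cap match the numbers from the question above.
int main() {
    const int width = 640, height = 480, maxIter = 64;
    long total = 0;  // crude checksum so the loop isn't optimized away
    for (int py = 0; py < height; ++py) {
        for (int px = 0; px < width; ++px) {
            // Map the pixel into the classic viewing window.
            double cx = -2.5 + 3.5 * px / width;
            double cy = -1.25 + 2.5 * py / height;
            double x = 0.0, y = 0.0;
            int iter = 0;
            while (x * x + y * y <= 4.0 && iter < maxIter) {
                double xt = x * x - y * y + cx;  // z = z^2 + c, real part
                y = 2.0 * x * y + cy;            // imaginary part
                x = xt;
                ++iter;
            }
            total += iter;
        }
    }
    printf("iteration sum: %ld\n", total);
    return 0;
}
```

If that takes anywhere near 30 seconds, the problem is in the surrounding code, not the math.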

Note that fixed point is fine here as long as you properly detect overflows. Something like a 2.125 fixed-point format (1 sign bit + 2 integer bits + 125 fractional bits = a 128-bit integer) will give you lots of precision.
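
To make "multiple integers as one big fixed-point number" concrete, here is a rough C++ sketch of that 2.125 format built from four 32-bit limbs. The Fix128 name and limb layout are invented for illustration, it only handles non-negative values (handle the sign separately), and the overflow detection mentioned above is left out:

```cpp
#include <cstdint>

// 128-bit fixed point as four 32-bit limbs, least-significant first.
// Format: 3 integer bits (sign included) + 125 fractional bits.
struct Fix128 {
    uint32_t limb[4];
};

// Addition: schoolbook carry propagation, limb by limb.
Fix128 add(const Fix128& a, const Fix128& b) {
    Fix128 r{};
    uint64_t carry = 0;
    for (int i = 0; i < 4; ++i) {
        uint64_t s = (uint64_t)a.limb[i] + b.limb[i] + carry;
        r.limb[i] = (uint32_t)s;
        carry = s >> 32;  // overflow of this limb feeds the next
    }
    return r;  // a carry out of limb[3] here means overflow: detect it!
}

// Multiplication: schoolbook 128x128 -> 256-bit product using 64-bit
// intermediates, then shift right by 125 bits to restore the format.
Fix128 mul(const Fix128& a, const Fix128& b) {
    uint32_t prod[8] = {0};
    for (int i = 0; i < 4; ++i) {
        uint64_t carry = 0;
        for (int j = 0; j < 4; ++j) {
            uint64_t t = (uint64_t)a.limb[i] * b.limb[j] + prod[i + j] + carry;
            prod[i + j] = (uint32_t)t;
            carry = t >> 32;
        }
        prod[i + 4] = (uint32_t)carry;  // slot i+4 is untouched so far
    }
    // Shift the 256-bit product right by 125 = 3 limbs + 29 bits.
    Fix128 r{};
    for (int i = 0; i < 4; ++i) {
        r.limb[i] = (uint32_t)((prod[i + 3] >> 29) |
                               ((uint64_t)prod[i + 4] << 3));
    }
    return r;
}
```

The Mandelbrot inner loop only needs add, subtract, multiply, and a magnitude test against 4.0, so these two routines plus a subtraction are essentially all the math required; for deeper zooms you add more limbs.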

Many algorithms exist to accelerate Mandelbrot rendering at the expense of minor errors.

For real-time zooming you would want to investigate the technique that XaoS uses (basically, it reuses data from the previous frame within some distance tolerance).

