Is OpenCL what I need?

Started by
10 comments, last by tapped 11 years, 3 months ago
I wrote a terrain raycaster that runs entirely on the CPU, but not nearly as fast as I want it to, considering Comanche did it back in 1992. A large bottleneck in my code is that bilinear filtering is done on the CPU. One reason I thought I should move my raycaster to OpenCL is that the algorithm is extremely parallelizable, beyond the capabilities of multicore CPUs. The renderer shoots a ray for every column of pixels, checks the height of every fragment it hits, and renders the heights accordingly. So I have two main questions:

1. Can I use OpenCL to access the dedicated filtering hardware?
2. Is it feasible to use OpenCL for my raycaster?

Attached is a screenshot of my raycaster.

OpenCL is certainly an option, but why not just use graphics shaders? OpenCL most likely won't give you access to hardware filtering, whereas a graphics shader will. OpenCL is more general purpose and can run on any kind of parallel hardware (multiple cores on a CPU, or CUDA cores on an NVIDIA card, for example), but since you're working with a graphics algorithm anyway, you might as well use something like GLSL or HLSL.

The problem with GLSL/HLSL is that they are deeply integrated into the OpenGL/D3D pipeline. A shader operates on a single pixel, but my algorithm draws a column at a time. Ideally I would use a shader that, instead of outputting one pixel, can write pixels wherever it chooses. In short, if I were to use shaders to get the same effect, I would need vastly more rays. This is a 2D raycaster like Wolfenstein 3D or Doom, and almost exactly like Comanche, so it's not a good fit for the shading pipeline.

In that case, why not draw a series of 1-pixel-wide quads on the screen and do your ray casting in the vertex shader? That way the vertex shader is your "column" drawer, and the pixel shader just uses whatever value the vertex shader gives it to calculate the final color.

Another idea is to have your pixel shader figure out which column its pixel falls in and use that information for the color. The pixel shader will always get executed for each pixel that falls on a rasterized primitive, so why not take advantage of that parallelism?

Just some questions you should ask yourself. Sadly, I don't know the answers myself.

Is OpenCL really delivering what it is promising?

Or more detailed:

Can you really run it on all platforms you're targeting?

Is an additional driver installation required? If not, when can you safely assume that OpenCL is there?

Do you still need a non-OpenCL version for platforms where OpenCL is unavailable or can you just tell the user to screw himself?

I would love some of the answers myself, but I guess that would need more research.

Maybe the graphics card vendors are shipping the OpenCL runtime with their GPU drivers; that would be great, because asking the user to install another driver is often too much to ask.

However, what about graphics cards that don't ship with OpenCL? Is OpenCL automatically able to run on the CPU, or are CPU drivers required?

Good luck

-Christoph

You could use either OpenCL or HLSL/GLSL -- those fancy parallax shaders are a form of ray casting, for example.

However, more to the point: as you said yourself, games like Delta Force: Land Warrior did this stuff on CPUs way back in the day -- I recall running Delta Force at relatively low settings on a Pentium II at 233 MHz -- and Comanche did it even earlier. I'm pretty sure I've seen a similar, though limited, voxel terrain running on either the Game Boy Advance or some microcontroller as a tech demo.

It sounds to me like you simply need to dive into optimizing your code -- a single modern CPU core ought to be able to push modern full-screen resolutions if you can achieve efficiency similar to those old games (and I bet a really optimized version could do perhaps twice the frame rate on top of that). Modern CPUs have 2, 4, even 6 cores (discounting AMD's "8-core" Bulldozer, whose individual cores are slower), so there's absolutely no reason you shouldn't be able to get satisfactory results.

I imagine you'd learn more by making your CPU algorithm fast, than by porting your slow CPU algorithm to OpenCL or shaders.

throw table_exception("(╯°□°)╯︵ ┻━┻");

I guess I'll do that then; there are parts of my code that are screaming for more SSE.

One benefit of moving it to the GPU is that you then don't need to stream the picture to the GPU at 60 FPS, which could be important if you want to, say, draw regular rasterized polygon meshes on top of the terrain, or simply use the CPU for something else.

o3o

SSE is good, but that's more of a micro-optimization. Your fundamental problem is more likely algorithmic -- in other words, you need to do less work, not the same amount of work more quickly.

For example:

Are you using Level-of-Detail for far-away landscapes?

Are you calculating/drawing slices that will just get covered by nearer, taller slices?

Are you paying attention to how data is moving through the memory hierarchy?

Have you read anything about the techniques that were used to achieve Comanche/Delta Force/Outcast? I know I've read whitepapers or blog posts about them at some point.
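For what it's worth, the classic Comanche-style renderer addresses the first two points at once: march each column's ray front to back, track the highest screen row drawn so far, and draw only the part of each slice that rises above it; growing the ray step with distance then gives a cheap LOD. A rough sketch under assumed projection and sampling details (the step-growth constant is arbitrary):

```cpp
#include <vector>
#include <functional>

// Render one screen column front to back. Slices hidden behind nearer,
// taller slices are never drawn, and the ray step grows with distance
// as a cheap level-of-detail. Returns how many pixels were written.
int renderColumn(std::vector<int>& out, int screenH, int x, float camHeight,
                 float maxDist, const std::function<float(int, float)>& heightAt) {
    int written = 0;
    int yTop = screenH;              // lowest row still uncovered (screen y grows down)
    float step = 1.0f;
    for (float d = 1.0f; d < maxDist; d += step) {
        float h = heightAt(x, d);
        int top = (int)(screenH * 0.5f + (camHeight - h) * screenH / d);
        if (top < 0) top = 0;
        // Only the part of this slice above everything nearer is visible.
        for (int y = top; y < yTop; ++y) {
            out[y] = (int)d;         // stand-in for a shaded texel
            ++written;
        }
        if (top < yTop) yTop = top;
        if (yTop == 0) break;        // column fully covered: stop marching early
        step += 0.01f;               // coarser sampling far away (LOD)
    }
    return written;
}
```

Each row is written exactly once, and fully occluded far slices cost only the height lookup (or nothing at all once the column fills to the top of the screen).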

throw table_exception("(╯°□°)╯︵ ┻━┻");

All slices are sampled, but they are discarded before being drawn if they are hidden by a taller slice. I tried doing a fast single-channel sample just for the height lookup, but my SSE version was just as fast and I was only making it slower. LOD is probably a good idea; I'll look into that. I know Comanche had a shorter render distance and didn't use any filtering, but then again it ran just as fast on my 25 MHz 386SX.
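One cheap way to get that LOD (and friendlier memory access at distance) is a mip-style chain of half-resolution heightmaps, with distant ray samples reading a coarser level. A sketch of one downsampling step, assuming a square power-of-two map and simple 2x2 averaging (both assumptions, not anything from the thread):

```cpp
#include <vector>
#include <cstdint>

// Build a half-resolution level from a square power-of-two heightmap by
// averaging each 2x2 block. Chaining this gives a mip pyramid; distant
// ray samples can read a coarser level, touching far less memory per step.
std::vector<uint8_t> downsample(const std::vector<uint8_t>& src, int w) {
    int half = w / 2;
    std::vector<uint8_t> dst(half * half);
    for (int y = 0; y < half; ++y)
        for (int x = 0; x < half; ++x) {
            int sum = src[(2 * y) * w + 2 * x]     + src[(2 * y) * w + 2 * x + 1]
                    + src[(2 * y + 1) * w + 2 * x] + src[(2 * y + 1) * w + 2 * x + 1];
            dst[y * half + x] = (uint8_t)(sum / 4);
        }
    return dst;
}
```

Pairing the mip level with the growing ray step keeps far samples both cheaper and closer together in memory.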

This topic is closed to new replies.
