OpenCL to make a game engine's renderer?

Started by
22 comments, last by Krypt0n 12 years, 6 months ago
I have been pondering lately: why couldn't one use OpenCL to do the graphics processing (software rendering, essentially) and then dump the final image to OpenGL as a textured fullscreen quad? Is this not possible, or just not a good idea?

Has anyone tried this or even shown a proof of concept of this idea?

I am also assuming OpenCL can use SLI/Xfire solutions....

Thanks!
Not OpenCL, but along the same track: http://research.nvidia.com/publication/high-performance-software-rasterization-gpus

It is possible, but not really suitable for realtime rendering. For things like raytracers it can give a big performance boost, but you'd still measure the framerate in frames per minute or frames per hour rather than frames per second, unless you keep the scenes extremely simple.
I think that paper glaeken mentioned had results ranging from 2 to 8 times slower than hardware rasterization. So it's definitely not that far off, and probably even feasible for certain cases. However like any generalized system I'd imagine that you'd need to really take advantage of your additional flexibility to make it worth the loss in performance.
My OpenCL rasterizer gets about 10% of the theoretical hardware peak performance (on my GTX 460), and it wasn't all that hard to get it running. Even with 1% of the performance you'd have enough throughput to render decent scenes.

I've written it for the sake of fun, and because my Catmull-Clark tessellation resulted in quite a lot of data organized in a way that isn't compatible with GPUs (e.g. positions were shared, as you need that for displacement to not have cracks, but UVs were per face). Converting all the data would be quite some work: either I'd create a lot of duplicated vertices, or I'd spend quite some time avoiding them... and then the hardware would need to render them anyway, rejecting a lot of micro triangles.




Interesting, Krypt0n. Would you be willing to show some screenshots?

Thanks!
I'm not sure if I'm missing the point, but why not use the GPU the way it was designed and use OpenGL or D3D to render, rather than build an entire system that pretty much is guaranteed to be slower?

It'd be an interesting pet project though.

That's pretty much the reason; unless you want to do raytracing, the only reason right now to do such a thing is the 'because I can' factor... which at times is a good reason, as long as you know what you are getting yourself into :D
Replacing OpenGL/Direct3D with a software implementation running on the GPU (while being fun) wouldn't get you the same performance (although acceptable speeds should be doable). The GPU isn't really a general-purpose computer; it's designed to crunch OpenGL/Direct3D, so you would lose any 3D-specific optimization, plus you would be bypassing the drivers, which would also be optimizing the instructions for the specific hardware.

There are quite a few OpenCL/CUDA realtime raytracers out there that seem to get somewhat decent FPS.




With that said, it would be interesting to see if it is possible to combine traditional 3D graphics with raytraced ones. OpenCL has interop facilities that allow it to talk to OpenGL. You could do one normal render and overlay lower-quality but faster raytraced lighting on top of it (just modify the normal lighting with the reflected/scattered raytraced light, for example). Maybe generate realtime lightmaps that only have to be regenerated when the light source changes, so you get the quality of prebaked lighting plus the flexibility for things like dynamic shadows, and levels can be generated at runtime without running through a 'bake lighting' phase, with little in the way of slowdown (unless you move your lights a lot). Or use it for special effects like multilevel reflections on glass.

It could also be handy for backporting newer GPU features/extensions. Things would be slower, but there's no reason you couldn't emulate something like a geometry shader on older hardware if it supports OpenCL.

It also might be possible to build some kind of massively parallel rendering pipeline, where your game just uploads new position information to a buffer for the game objects. But the GPU does most of that under the hood anyway, and the bits it doesn't do would probably be the linear, non-parallel bits that would suck in OpenCL.

A software renderer would be more portable: you could build an FPGA implementation of it, for example, or run it on a CPU (future ones should have heaps of cores and probably some floating-point units like a GPU's). But it still comes at a fairly hefty performance cost, and chances are any system that runs OpenCL is capable of running OpenGL anyway.

I wonder what the prospects are for a Direct3D-on-OpenCL implementation; then again, performance-wise it would probably be better to just use an OpenGL wrapper or do a native Direct3D implementation.

Next to zero owing to the existence of DirectCompute.


This topic is closed to new replies.
