Agreed. There's definitely some good research being done in this area. One of the main things preventing it from going mainstream is that modern GPU hardware is designed to rasterize triangles very fast. Large voxel worlds (and ray tracing, for that matter) require non-linear memory access patterns that GPUs just weren't designed for. Any significant sea change in how rendering is performed is going to require collaboration with the GPU vendors.
CUDA is a step in the right direction, but what we really need is custom hardware that's good at handling intersections against large spatial databases (think texture unit, but for ray casting). It's a shame Larrabee didn't work out, but it'll happen eventually. And it'll be a hardware vendor that does it, not some upstart with a magical new algorithm they can't describe or even show working properly.
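To make the "non-linear memory access" point concrete, here's a toy sketch of the classic Amanatides-Woo grid traversal (the usual way a ray walks a voxel volume). Note how each step is a data-dependent branch picking which axis to advance, so neighbouring rays end up touching unrelated voxels in memory — exactly the access pattern a triangle rasterizer never has to deal with:

```python
import math

def traverse_voxels(origin, direction, grid_size, max_steps=64):
    """Amanatides-Woo style 3D DDA: yields the integer voxel coordinates a
    ray visits, in order. Each iteration advances along whichever axis hits
    a voxel boundary first -- a data-dependent jump through the grid."""
    step = [1 if d > 0 else -1 for d in direction]
    # t_max: ray distance to the next boundary on each axis.
    # t_delta: ray distance between successive boundaries on each axis
    # (infinite if the ray is parallel to that axis).
    t_max, t_delta = [], []
    for c, d, s in zip(origin, direction, step):
        if d == 0:
            t_max.append(math.inf)
            t_delta.append(math.inf)
        else:
            boundary = math.floor(c) + (1 if s > 0 else 0)
            t_max.append((boundary - c) / d)
            t_delta.append(abs(1.0 / d))
    voxel = [int(math.floor(c)) for c in origin]
    for _ in range(max_steps):
        if not all(0 <= v < n for v, n in zip(voxel, grid_size)):
            return  # ray left the volume
        yield tuple(voxel)
        axis = t_max.index(min(t_max))  # which boundary is crossed next?
        voxel[axis] += step[axis]
        t_max[axis] += t_delta[axis]

# A ray marching diagonally visits a zig-zag of cells, one axis at a time:
path = list(traverse_voxels((0.5, 0.5, 0.5), (1.0, 0.7, 0.0), (4, 4, 4)))
```

Every `yield` here would be a voxel fetch on real hardware, and the zig-zag order is why caches built for streaming triangle data struggle with it.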
This reminds me of a question I have on the subject of hardware and ray casting. Isn't the new AMD Fusion chip what you describe? If I'm not mistaken, the GPU and CPU share memory, and the GPU is programmable in a C++-like way.
Thanks for the replies, guys! They really helped me focus on what motivates me in a more concrete way, and on a more specific project brief.
I think I've settled on a procedural content game. Perhaps a procedurally generated game world that the user can explore.
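As a starting point I've been toying with a seeded heightmap, since that's enough to get an explorable terrain going. A minimal sketch of hash-based value noise (the hash constants and parameters here are arbitrary choices, nothing canonical):

```python
import math

def hash01(ix, iy, seed=1337):
    """Deterministic pseudo-random value in [0, 1) for an integer grid point."""
    h = (ix * 374761393 + iy * 668265263 + seed * 2654435761) & 0xFFFFFFFF
    h = (h ^ (h >> 13)) * 1274126177 & 0xFFFFFFFF
    return (h ^ (h >> 16)) / 0x100000000

def value_noise(x, y, seed=1337):
    """Bilinearly interpolated value noise: smooth, repeatable height field."""
    ix, iy = math.floor(x), math.floor(y)
    fx, fy = x - ix, y - iy
    # Smoothstep fade so the surface has no creases at cell edges.
    sx, sy = fx * fx * (3 - 2 * fx), fy * fy * (3 - 2 * fy)
    top = hash01(ix, iy, seed) + sx * (hash01(ix + 1, iy, seed) - hash01(ix, iy, seed))
    bot = hash01(ix, iy + 1, seed) + sx * (hash01(ix + 1, iy + 1, seed) - hash01(ix, iy + 1, seed))
    return top + sy * (bot - top)

def height(x, y, seed=1337, octaves=4):
    """Sum octaves of noise so the world has both broad hills and fine detail."""
    total, amp, freq, norm = 0.0, 1.0, 1.0 / 16, 0.0
    for _ in range(octaves):
        total += amp * value_noise(x * freq, y * freq, seed)
        norm += amp
        amp *= 0.5
        freq *= 2
    return total / norm  # in [0, 1); same (x, y, seed) always gives same terrain
```

The nice property for an explorable world is that `height(x, y)` depends only on coordinates and seed, so any chunk can be generated (and regenerated) on demand without storing it.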
I've also applied to one of the help-wanted projects on the board that's similar to the above, though I have some doubts I'll be suitable, because I lack extensive professional development experience and there's a limit on how long I can work without income. Fingers crossed, though!