About 4 years ago I wrote a GPU-based raytracer in DirectX 9 and XNA. It was used to evaluate the possibility of using raytracing for the secondary rays in a deferred renderer, with the GBuffer determining the primary rays. For the thesis I used it only for shadow rays from a single directional light. The result was this:
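The core idea is that the GBuffer already tells you where each primary ray hit, so you only need to spawn the secondary (here, shadow) rays. A minimal Python sketch of that per-pixel step, assuming names and an epsilon bias that are mine, not from the original shader:

```python
def shadow_ray(world_pos, normal, light_dir, eps=1e-3):
    """One shadow ray per G-buffer pixel: nudge the origin off the surface
    along the normal to avoid self-shadowing acne, then trace towards the
    directional light. For a directional light the direction is shared by
    every pixel; only the origin varies."""
    origin = tuple(p + eps * n for p, n in zip(world_pos, normal))
    return origin, light_dir
```

In the real renderer the world position is reconstructed from the depth buffer rather than stored directly, but the ray setup is the same.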
The video has three passes: the first without shadows, the second using a shadow buffer, and the third with raytraced shadows. I've linked to the third pass.
As you can see the result was quite slow, dropping to around 5 fps, not exactly realtime. I tested this on an nVidia 470x GPU; modern hardware should be able to cope with this better.
I used a simple, dynamically updated octree with a fixed grid size. Each cell contained object references, which held a transform and an offset pointing into a large texture containing the vertex positions of all the object types in the scene. For this test I used polygonal spheres and a tree model. As the spheres are polygonal, each is equivalent to an object of around 1,000 triangles, if I remember correctly; I deliberately avoided parametric spheres as I wanted the results to equate to using real game assets. Using a 2k texture you could store up to 4 million vertices (2048 × 2048 texels, one position per texel). Most games at the time limited their vertex count to under 10k for hero assets.
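The offset scheme amounts to treating the row-major texture as one big flat array of positions. A small sketch of the index arithmetic, with names of my own choosing:

```python
TEX_SIZE = 2048  # a "2k" texture: 2048 * 2048 = 4,194,304 texels

def vertex_texel(object_offset, vertex_index, tex_size=TEX_SIZE):
    """Map an object's base offset plus a local vertex index to (x, y)
    texel coordinates, assuming one vertex position per texel stored
    row-major. The shader would do the equivalent maths before a tex2D
    fetch (dividing by tex_size to get normalised UVs)."""
    i = object_offset + vertex_index
    return (i % tex_size, i // tex_size)
```

This is why the capacity works out to roughly 4 million vertices: one texel per vertex position over a 2048 × 2048 texture.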
The octree itself was also passed through in texture form; I can't remember the exact implementation I used to ensure multiple objects could inhabit the same cell. With modern GPGPU programming languages you should definitely use better data transfer formats, but as I was limited at the time to DirectX 9 HLSL, I worked with what I had.
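One common way to pack variable-length per-cell object lists into flat, texture-friendly storage (I can't say whether this matches my original implementation) is a two-level layout: a cell table of (start, count) pairs indexing into a single shared list of object references. A sketch:

```python
def flatten_cells(cells):
    """Pack per-cell object lists into two flat arrays: a cell table of
    (start, count) pairs and one concatenated object-reference list.
    Both arrays can then be uploaded as textures and walked in a shader
    with a fixed-iteration loop, which is all DX9 pixel shaders allow.

    cells: list of lists of object ids, one entry per octree cell.
    """
    cell_table, object_refs = [], []
    for objs in cells:
        cell_table.append((len(object_refs), len(objs)))
        object_refs.extend(objs)
    return cell_table, object_refs
```

Empty cells cost only their (start, 0) table entry, and an object appearing in several cells is simply referenced from each.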
Long story short: it is possible, but it's not fast. You're essentially duplicating your scene description, once for rasterisation and once for raytracing. I really like the idea of using the GBuffer to feed secondary rays, but it just doesn't make much sense at the moment, as the "fake" methods look better (I had to downsample before raytracing) and perform much, much better.