In cone tracing you sample prefiltered geometry (mipmaps). As the cone's radius grows with distance, you step down to a coarser mip level, so a single sample gives you the sum of all the information in the smaller voxels further down the mip tree. This is why you don't need to shoot as many rays/cones.
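A minimal sketch of the mip selection described above. The names and the half-angle parameterization are illustrative assumptions, not from any particular engine: the cone's footprint diameter at a given distance picks the mip level whose voxel size best matches it.

```python
import math

def cone_mip_level(distance, cone_half_angle, voxel_size):
    """Pick the mip level whose voxel size matches the cone's footprint.

    The cone's diameter at `distance` is 2 * distance * tan(half_angle);
    each mip level doubles the voxel size, so the level is the log2 of
    the ratio between footprint and base voxel size.
    """
    diameter = 2.0 * distance * math.tan(cone_half_angle)
    return max(0.0, math.log2(max(diameter, voxel_size) / voxel_size))

# Near the apex the cone samples the finest voxels (level 0);
# farther away it samples coarser, prefiltered mips, so one sample
# stands in for many fine-voxel samples.
```

Stepping along the cone and sampling at the level returned here is what replaces the many individual rays a path tracer would otherwise need.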
After a quick skim this seems like a similar idea to mega texturing. The problem is that I can't have a large virtual texture, because it would be too large to hand-paint/generate and store on disk. This is why I'm looking more at procedural splatting methods.
Mega/virtual texturing doesn't mean that everything must be hand-painted. You just create the page data in whatever way you want instead of loading it off disk (splatting, procedural textures, baking GI data just in time, etc.).
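To make that concrete, here is a hypothetical sketch of a virtual-texture page cache whose miss handler generates tile contents procedurally instead of streaming them from disk. All names (`generate_page`, `fetch_texel`, the placeholder pattern) are made up for illustration; the generator could just as well run a splatting pass or bake GI.

```python
PAGE_SIZE = 4  # tiny for illustration; real tiles are e.g. 128x128

def generate_page(page_x, page_y):
    """Procedural stand-in for disk streaming: fill the tile from a
    cheap hash of its absolute texel coordinates (could equally be
    noise, decal splats, or just-in-time baked GI)."""
    tile = []
    for ty in range(PAGE_SIZE):
        row = []
        for tx in range(PAGE_SIZE):
            u = page_x * PAGE_SIZE + tx
            v = page_y * PAGE_SIZE + ty
            row.append((u * 31 + v * 17) % 256)  # placeholder pattern
        tile.append(row)
    return tile

cache = {}

def fetch_texel(u, v):
    """Virtual-texture lookup: find the page, generate it on first miss."""
    key = (u // PAGE_SIZE, v // PAGE_SIZE)
    if key not in cache:
        cache[key] = generate_page(*key)  # create the data, don't load it
    return cache[key][v % PAGE_SIZE][u % PAGE_SIZE]
```

Nothing here ever touches disk: only the pages actually sampled get generated, so the "texture" can be arbitrarily large without being stored anywhere.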