I've now done some experimenting, combining the broad-phase stepping from my screen-space version with the narrow-phase stepping from the view-space version. The result is better in the narrow phase, but still not as clean as in the pure screen-space version. I think this problem comes from my currently using depth reconstruction, as I haven't yet switched back to having a full RGBF16 position texture in the G-buffer. Far away, the differences in the depth value are so small that stepping fails to be accurate in the narrow phase, while in the view-space version the z-difference is precise enough. I'm going to change this once I have the position texture back in the G-buffer.
So right now I'd say my screen-space version still wins in terms of overall quality, once I've fixed the narrow phase to use z instead of depth.
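To make the two-phase idea concrete, here's a toy 1D sketch of broad-phase stepping followed by a binary-search narrow phase. This is an illustration only, not my actual shader code; the scene function, names, and step counts are all made up:

```python
def surface_z(x):
    # Toy stand-in for a depth buffer lookup: flat surface at z = 5.
    return 5.0

def trace(origin, direction, max_t=20.0, coarse_steps=10, refine_steps=16):
    """Coarse broad-phase march; on first crossing, bisect the interval."""
    step = max_t / coarse_steps
    prev_t = 0.0
    for i in range(1, coarse_steps + 1):
        t = i * step
        x = origin[0] + direction[0] * t
        z = origin[1] + direction[1] * t
        if z >= surface_z(x):        # ray went behind the surface
            lo, hi = prev_t, t       # narrow phase: binary search
            for _ in range(refine_steps):
                mid = 0.5 * (lo + hi)
                xm = origin[0] + direction[0] * mid
                zm = origin[1] + direction[1] * mid
                if zm >= surface_z(xm):
                    hi = mid
                else:
                    lo = mid
            return 0.5 * (lo + hi)
        prev_t = t
    return None  # no intersection within max_t

t_hit = trace((0.0, 0.0), (1.0, 1.0))
print(t_hit)  # ~5.0: the ray z(t) = t crosses the surface at z = 5
```

The coarse steps keep the per-pixel cost bounded, and the bisection recovers the precision that large steps throw away, which is exactly why the narrow phase is so sensitive to how precisely the depth comparison can resolve small differences.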
I used view-space position reconstruction too. It's supposed to match the quality of a full-blown position buffer; that's why it's called reconstruction.
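For reference, reconstructing linear view-space z from a stored [0,1] depth is just inverting the projection's depth mapping; mathematically it's exact, but going through a float32 depth value is where quality is lost. A minimal sketch, assuming a D3D-style depth convention (the exact formula depends on your projection matrix):

```python
import numpy as np

near, far = 0.1, 1000.0

def depth_from_view_z(z):
    # One common [0,1] depth convention; your projection may differ.
    return far * (z - near) / (z * (far - near))

def view_z_from_depth(d):
    # Inverse mapping: reconstruct linear view-space z from stored depth.
    return near * far / (far - d * (far - near))

z = 900.0
d = depth_from_view_z(z)

# In double precision the round trip is essentially exact:
print(view_z_from_depth(d))                      # ~900.0

# Through a float32 depth buffer, the reconstructed z drifts:
print(view_z_from_depth(float(np.float32(d))))   # off by a couple tenths of a unit
```

So the reconstruction itself is fine; it's the quantized depth it starts from that limits the quality at distance.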
FYI: I've just tried Call of Juarez: Gunslinger. They use SSR, and it's much worse than either your version or mine...
The problem is the lack of precision. Depth is calculated using a perspective division, and most pixels on screen are not close to the camera, so their depth value sits somewhere above 0.9 and quickly approaches 1. The range of z-values mapping to the same depth value grows rapidly with distance. With your reconstruction you obtain a sort of middle z-value within that range, and comparing that against the test ray doesn't do precision any good. So across most of the screen the depth difference is tiny even though the z-difference is huge. Combine this with stepping (reflectionDir / stepCount), pair it up with 32-bit floats in shaders and their 6-7 significant digits of precision, and you end up with a precision problem.
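You can see the crunch in a few lines. Assuming a D3D-style [0,1] depth mapping and the near/far values below (both made up for illustration), two surfaces half a unit apart in view space land roughly one float32 ulp apart in depth:

```python
import numpy as np

def perspective_depth(z, near=0.1, far=1000.0):
    # One common [0,1] depth convention (D3D-style); the exact formula
    # depends on your projection matrix, but the non-linearity is similar.
    return far * (z - near) / (z * (far - near))

# Two surfaces 0.5 units apart in view space, far from the camera:
d_a = np.float32(perspective_depth(900.0))
d_b = np.float32(perspective_depth(900.5))

print(d_a, d_b)                  # both ~0.99999
print(float(d_b) - float(d_a))   # on the order of one float32 ulp
```

A view-space comparison sees a clean 0.5-unit difference; the depth comparison has to resolve something at the edge of what float32 can represent, which is exactly where the narrow-phase stepping falls apart.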