what are you asking about exactly? rendering custom clouds (or whatever) onto your skydome, or the actual drawing of the skydome with the scene?
for the latter it doesn't really make sense to use two cameras. Just use the translation of the inverse of the camera's view matrix (i.e. the camera's world position) as the skydome's world matrix. This ensures the dome is always centered on the camera. Then if you render it first, you need to disable z-writing, so your vertices don't have to be at near-infinite range (make sure they are beyond the near clip plane though, or they'll get clipped).
if using shaders you can output the xyww components of your transformed vertices and render the skybox after all opaque objects. The xyww swizzle forces z / w to equal 1, which puts the vertex exactly at the far clip plane. You want to render the sky after the opaque objects because the depth test can then reject sky fragments hidden behind the scene (early-z rejection), instead of shading the whole sky and overdrawing it.
>>Some older shooters used 90, which I liked at the time, but it looks weird to me now.
the reason was that it was computationally cheaper to calculate things at 90deg (tan(45°) = 1, which simplifies the projection math). back in the days before GPUs this made a difference; for the last decade or so it's been a non-issue.