Anyone who has tried to render to a dynamic cube map has probably encountered the problem of filtering across the cube faces. Current hardware does not support filtering across different cube faces AFAIK, as it treats each cube face as an independent 2D texture (so when filtering texels on an edge, it doesn't take into account the texels of the adjacent face).
There are various solutions for pre-processing static cube maps, but I've yet to find one for dynamic (renderable) cube maps.
While experimenting, I've found a trick that has come in very handy and is very easy to implement. To render a dynamic cube map, one usually sets up a perspective camera with a field-of-view of 90 degrees and an aspect ratio of 1.0. By slightly widening the field-of-view angle, rendering to the cube map will duplicate the edge texels and ensure that the texel colors match across faces.
The formula assumes that texture sampling is done at the center of texels (as in OpenGL) with a 0.5 offset, so it may not work as-is in DirectX.
The field-of-view angle should equal:
fov = 2.0 * atan(s / (s - 0.5))
where 's' is half the resolution of a cube face (e.g. for a 512x512x6 cube map, s = 256).
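The formula follows from requiring that the center of the outermost texel row project exactly onto the cube edge: with a face resolution of n texels, the edge texel center sits at fraction (n-1)/n of the half-width, so we need tan(fov/2) * (n-1)/n = 1, i.e. tan(fov/2) = n/(n-1) = s/(s-0.5). A minimal sketch of the computation and a numeric check (the function name `adjusted_fov` is my own, not from the original):

```python
import math

def adjusted_fov(resolution):
    """Field-of-view angle (radians) that duplicates edge texels when
    rendering one face of a dynamic cube map of the given resolution.
    Assumes OpenGL-style sampling at texel centers (0.5 offset)."""
    s = resolution / 2.0  # half the face resolution
    return 2.0 * math.atan(s / (s - 0.5))

# Sanity check: with the widened FOV, the center of the outermost
# texel projects exactly onto the cube edge (distance 1 from center).
n = 512
fov = adjusted_fov(n)
edge = math.tan(fov / 2.0) * (n - 1) / n
print(math.degrees(fov), edge)  # slightly more than 90 degrees; edge == 1
```

For a small 8x8 face the correction is substantial (roughly 97.6 degrees instead of 90), while for large faces it shrinks toward 90 degrees, which is why the artifact is most visible on low-resolution cube maps.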
Note that this won't solve the mipmapping case; it only fixes bilinear filtering across edges.
Dynamic 8x8x6 cube without the trick:
Dynamic 8x8x6 cube with the trick: