It would help if we knew what you were having trouble with. The NVIDIA page has source code and all, so I'm a bit baffled by what's confusing you.
Here's the idea though:
- In the vertex shader, compute the ray direction (VertexPos - CameraPos) and output it to the pixel shader.
- Set up the ray. In the NVIDIA sample, they use ray-sphere intersections to determine the ray's length, which gets the best result from a low sample count. In the "from space" version, the ray runs from the front of the atmosphere to the back; in the "from earth" version, it runs from the camera to the back of the atmosphere.
- March along your ray and compute the scattering at each step (essentially it's just an RGB attenuation function that takes depth and height as inputs). You could plug in anything here for different effects: reading a texture, adding a color, etc.
- At each ray step, transform the ray position into shadow space (the standard shadow map "mul(matrix, pos)"). Read the shadow map depth and compare it against the ray position's shadow-space depth (a regular shadow depth test). Multiply the result onto your attenuation result.
- Accumulate the computed color over the loop into a final result, which is rendered to the scene.
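The ray setup and march steps above can be sketched on the CPU to make the structure concrete. This is a hedged illustration, not the NVIDIA sample code: the function names (`ray_sphere_exit`, `march`), the "from earth" ray setup, and the constant-step midpoint sampling are my own assumptions, and `attenuation` and `shadow_test` are hypothetical callbacks standing in for the scattering function and the shadow-map compare.

```python
import math

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1] + a[2] * b[2]

def ray_sphere_exit(origin, direction, radius):
    # Far intersection of the ray with a sphere centered at the planet
    # origin. Assumes `direction` is normalized; returns None on a miss.
    b = 2.0 * dot(origin, direction)
    c = dot(origin, origin) - radius * radius
    disc = b * b - 4.0 * c
    if disc < 0.0:
        return None
    return (-b + math.sqrt(disc)) / 2.0

def march(camera, ray_dir, planet_radius, atmosphere_radius,
          steps, attenuation, shadow_test):
    # "From earth" setup: the ray runs from the camera to the back of
    # the atmosphere shell.
    end = ray_sphere_exit(camera, ray_dir, atmosphere_radius)
    if end is None or end <= 0.0:
        return (0.0, 0.0, 0.0)
    step_len = end / steps
    color = [0.0, 0.0, 0.0]
    for i in range(steps):
        t = (i + 0.5) * step_len                 # midpoint of this step
        pos = tuple(camera[k] + t * ray_dir[k] for k in range(3))
        height = math.sqrt(dot(pos, pos)) - planet_radius
        sample = attenuation(t, height)          # RGB from (depth, height)
        lit = shadow_test(pos)                   # 0 or 1 shadow-map result
        for k in range(3):
            color[k] += sample[k] * lit * step_len
    return tuple(color)
```

With a constant attenuation of white and no shadowing, the accumulated color just equals the in-atmosphere ray length per channel, which is a handy sanity check before wiring in the real scattering math.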
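The shadow-space step can also be sketched outside the shader. Again a hedged approximation: the row-major matrix layout, the [0,1] depth range (D3D convention), the `bias` value, and the "outside the map counts as lit" policy are all assumptions on my part, and the nearest-neighbour lookup stands in for the hardware comparison sampler.

```python
def mat_mul_vec4(m, v):
    # m is a 4x4 row-major matrix (list of rows), v a 4-tuple.
    return tuple(sum(m[r][c] * v[c] for c in range(4)) for r in range(4))

def sample_nearest(shadow_map, u, v):
    # shadow_map is a 2D list of stored light-space depths.
    h, w = len(shadow_map), len(shadow_map[0])
    x = min(int(u * w), w - 1)
    y = min(int(v * h), h - 1)
    return shadow_map[y][x]

def shadow_factor(world_pos, shadow_matrix, shadow_map, bias=1e-3):
    # Standard shadow-map test: transform to light clip space,
    # perspective-divide, remap to texture coordinates, compare depths.
    x, y, z, w = mat_mul_vec4(shadow_matrix, (*world_pos, 1.0))
    if w <= 0.0:
        return 1.0
    u = (x / w) * 0.5 + 0.5
    v = (y / w) * 0.5 + 0.5
    depth = z / w                       # assumes [0,1] depth range
    if not (0.0 <= u <= 1.0 and 0.0 <= v <= 1.0):
        return 1.0                      # outside the map: treat as lit
    stored = sample_nearest(shadow_map, u, v)
    return 1.0 if depth - bias <= stored else 0.0
```

The returned 0-or-1 factor is what gets multiplied onto the attenuation result at each ray step.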
That's basically it. If you just want to get the basics working first, you can replace the atmospheric scattering with a simple depth-based color (use Depth * Color).
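That depth-based stand-in is trivial to sketch. The function name, the clamp to the atmosphere's maximum depth, and the tint value are all hypothetical choices of mine, just to show the Depth * Color idea:

```python
def depth_color(depth, max_depth, tint=(1.0, 0.5, 0.2)):
    # Debug stand-in for the scattering function: the tint simply
    # grows linearly with normalized depth along the ray.
    d = min(depth / max_depth, 1.0)
    return tuple(d * c for c in tint)
```

Once this produces a believable fog-like gradient in the scene, swapping it back out for the real attenuation function is the only remaining change.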