As a start, you must decide how to compute that distance. At its core it is just a subtraction between two values: the light position and the position of the fragment being shaded.
For this operation to give meaningful results, those two values must be in the same coordinate space. There are a few spaces worth mentioning:
- World-space is easy to work with, since light entities are usually managed in world coordinates. It requires some extra work in both the VS and the PS. I have been told the value ranges can be a problem, although I have never observed this issue myself;
- Eye-space is "native" to the processing pipeline and "denser" in terms of the values produced. I believe it is historically the most widely used coordinate space. It needs some massaging CPU-side, but the GPU has it easy;
- Tangent-space (or "surface-local"): the light position is transformed so that each fragment being shaded gets a light vector expressed relative to its own surface. This was the standard for the early DOT3-based bump-mapping implementations.
Are you familiar with those notions? They are the hard part. Once you have the two points in the same coordinate space, it is just a matter of evaluating an attenuation function, typically based on the inverse of the distance (or of the distance squared).
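As a sketch of that last step, here is the classic constant/linear/quadratic attenuation formula in C. The function and parameter names (`kc`, `kl`, `kq`) are my own labels; setting `kc = 0, kl = 0, kq = 1` gives the physically motivated inverse-square falloff, while nonzero constant and linear terms are the usual fixed-function-era tweaks to tame it.

```c
/* Attenuation as a function of distance d:
   1 / (kc + kl*d + kq*d^2).
   kc, kl, kq are the constant, linear, and quadratic coefficients. */
static float attenuation(float d, float kc, float kl, float kq) {
    return 1.0f / (kc + kl * d + kq * d * d);
}
```

In practice the result is also clamped or faded to zero beyond some radius, since a pure inverse-square term never actually reaches zero.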
I realize this may all be somewhat confusing at first. Where would you like to start?