Here's what I'd try if I had all the time in the world: a post-process effect using the depth buffer and a couple of specialized shaders. Use du/dv mapping techniques to create lots of little lenses. Raindrops aren't grey; they're refractive. More descriptively, think of rain as a volume that scatters light. For any one pixel (or group of neighboring pixels), you might be getting light from some location you wouldn't otherwise have gotten it from, if not for the presence of one or many raindrops.
I'd start with an image of some truncated simplex noise, or something similar, to get sparse islands of soft-edged white in a sea of mostly black. The islands represent raindrops, so it should look like a cross section of rain, I guess. Next, use a tool to create a normal map from it (http://wiki.splashdamage.com/index.php/Bump_Maps)
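To make the "sparse islands" idea concrete, here's a rough sketch that stands in simple bilinear value noise for the simplex noise, then truncates it with a smoothstep so only the highest peaks survive as soft-edged drops. All the constants (grid size, thresholds) are made-up starting points, not tuned values:

```python
import random

def smoothstep(e0, e1, x):
    t = max(0.0, min(1.0, (x - e0) / (e1 - e0)))
    return t * t * (3.0 - 2.0 * t)

def value_noise_islands(size=64, cells=8, lo=0.72, hi=0.85, seed=1):
    """Cheap stand-in for truncated simplex noise: bilinearly
    interpolated value noise, thresholded so only the highest peaks
    survive as soft-edged white 'islands' (the raindrops)."""
    rng = random.Random(seed)
    grid = [[rng.random() for _ in range(cells + 1)] for _ in range(cells + 1)]
    img = []
    for y in range(size):
        fy = y * cells / size
        gy, ty = int(fy), fy - int(fy)
        row = []
        for x in range(size):
            fx = x * cells / size
            gx, tx = int(fx), fx - int(fx)
            # bilinear interpolation of the four surrounding lattice values
            a = grid[gy][gx] * (1 - tx) + grid[gy][gx + 1] * tx
            b = grid[gy + 1][gx] * (1 - tx) + grid[gy + 1][gx + 1] * tx
            v = a * (1 - ty) + b * ty
            # truncate: black below `lo`, soft ramp up to white at `hi`
            row.append(smoothstep(lo, hi, v))
        img.append(row)
    return img
```

Real simplex noise would give rounder, less grid-aligned islands, but the truncation step is the same either way.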
After rendering all other elements of your scene, use the depth buffer to create a linearized depth map.
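Linearizing the depth buffer depends on your projection; assuming a standard perspective projection with hypothetical near/far planes, the usual conversion looks like this:

```python
def linearize_depth(d, near=0.1, far=1000.0):
    """Convert a [0,1] hardware depth value back to linear
    view-space distance. Assumes a standard perspective projection;
    `near` and `far` are placeholder clip-plane values."""
    z_ndc = d * 2.0 - 1.0  # [0,1] depth -> NDC z in [-1,1]
    return (2.0 * near * far) / (far + near - z_ndc * (far - near))
```

In the real effect this runs per-pixel in the shader, but the math is the same.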
For each post-processed pixel, two sources will contribute to the resulting color:
- A heavily downsampled and blurred version of the un-processed image, to get the average color of swaths of it. This provides the "grey"-ish fog.
- Sampled texels from the pre-processed image that are distorted using the normal map you made (using du/dv mapping techniques)
If the pixel is distant, rely entirely on #1. If it's very near, use #2. Everywhere in between, interpolate to your preference. Also, as you get nearer, the magnification of the normal map should increase. Some distant raindrops should only get one texel bent by one normal-map island. Up close, you may want to bend entire neighborhoods of pixels in approximately the same direction by heavily magnifying the normal map (relative to the viewer).
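A per-pixel sketch of that blend, with the buffers collapsed into plain values. Everything here (`rain_pixel`, the distance thresholds, the magnification falloff) is a hypothetical stand-in for what the shader would do:

```python
def rain_pixel(depth, fog_color, sample_distorted,
               near_dist=2.0, far_dist=50.0):
    """Blend the two contributions. `depth` is linear view-space
    distance; `fog_color` is a sample of the downsampled/blurred
    image (#1); `sample_distorted(scale)` samples the sharp image
    through the normal map (#2), magnified by `scale`.
    All names and constants are illustrative guesses."""
    # t = 0 at near_dist (all refraction), 1 at far_dist (all fog)
    t = min(1.0, max(0.0, (depth - near_dist) / (far_dist - near_dist)))
    # nearer pixels get a more magnified normal map (bigger lenses)
    magnification = 1.0 / max(depth, near_dist)
    refracted = sample_distorted(magnification)
    return tuple(r * (1.0 - t) + f * t for r, f in zip(refracted, fog_color))
```

The linear interpolation and the 1/depth magnification curve are just two of many reasonable choices to tweak.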
Draw the new color value. Also, write a new linear depth value that's slightly nearer to the viewpoint (this can/should be tweaked, and perhaps even somewhat randomized). You've now completed one layer of rain - the farthest layer.
Perform the process again on your new color and linear-depth buffers. Executing the process multiple times does something like a numeric integration of the effects of rain over the volume of your scene. Of course, it's strictly screen space. You can tweak the number of iterations; just make sure you reach the nearest layer of rain before you're done, otherwise it'll look like there's only rain in the distance. So, bigger depth steps mean fewer passes, and probably less accuracy.
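The whole iterate-towards-the-camera loop, reduced to single floats standing in for the color and depth buffers. The fog factor, layer count, and depth step (including the random jitter on the depth write-back) are all guesses to tweak:

```python
import random

def composite_rain_layers(color, depth, num_layers=6, step=10.0):
    """Sketch of the multi-pass layering: each pass blends in one
    rain layer and nudges the stored linear depth toward the camera.
    `color` and `depth` are single floats standing in for full
    screen buffers; every constant here is a placeholder."""
    rng = random.Random(0)
    for _ in range(num_layers):
        fog = color * 0.9            # stand-in for the blurred-average image
        t = min(1.0, depth / 100.0)  # distant pixels lean on the fog
        color = color * (1.0 - t) + fog * t
        # write back a depth slightly nearer the camera, with jitter
        depth = max(0.0, depth - step * (0.75 + 0.5 * rng.random()))
    return color, depth
```

In the real thing each iteration is a full-screen pass writing new color and depth targets; the loop structure is the point here.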
If you want to improve the effect a little bit, stretch the normal map along the vertical axis (even tilt it a little). The hope is that it will have a somewhat anisotropic effect, like what you see here: http://farm5.static.flickr.com/4113/4832069088_f5098a6661.jpg
The drops to the side of the headlight appear as brighter streaks because they spend more time in a location that refracts light toward you during the course of a single exposure (in camera jargon). This might help provide the illusion of movement.
Anyway, I just made this up so I don't know if it will be performant, or perhaps it will just look disgusting and not work at all. Try it at your own risk. xD
Edit: It might work better to do layers as exponentially larger ranges of depth, working toward the camera. If near drops are rendered before far drops, you might get contributions from the near drops included in the far drops, which would look odd. Also, since the downsampled "grey fog" image is supposed to represent many raindrops over many layers, distant layers should be pretty thick. Only close up do you want to start having a nice array of layers with discrete, visible drops.
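One simple way to get those exponentially growing ranges, with a geometric split of the depth interval (the near/far/layer-count values are illustrative):

```python
def exponential_depth_slices(near=1.0, far=500.0, layers=6):
    """Split [near, far] into ranges whose widths grow exponentially
    with distance: close-up layers are thin (discrete drops),
    distant ones are thick (the fog approximation). Returned
    far-to-near, matching the suggested rendering order.
    All parameter values are placeholders."""
    ratio = (far / near) ** (1.0 / layers)
    edges = [near * ratio ** i for i in range(layers + 1)]
    slices = [(edges[i], edges[i + 1]) for i in range(layers)]
    return slices[::-1]  # farthest (thickest) slice first
```

Each pass would then process only pixels whose linear depth falls inside its slice.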