I figured this might be interesting:
It basically describes a 2D analogue of shadow mapping. What he does is store the distance to the light source for each pixel in an occlusion map, by just drawing the occluders into that map (centered on the light source, I guess, with the view covering everything in range of the light). Then, since he only needs the distance to the closest occluder, he maps this 2D coordinate system down to 1D using the polar angle of the (x,y) point. When drawing stuff, exactly like in 3D shadow mapping, you convert the coordinates of the pixel you're drawing into this 1D coordinate system, look up the "depth" (distance) stored there, and if the stored distance is lower than the current pixel's distance, you're in shadow.
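To make sure I understood it, here's a rough sketch of the idea in Python (names like MAP_SIZE and the bucket resolution are my own assumptions, not from the article; a real implementation would do this on the GPU):

```python
import math

# Number of angular buckets in the 1D "shadow map" (arbitrary choice).
MAP_SIZE = 256

def angle_to_bucket(x, y):
    # Map the polar angle of (x, y), relative to the light at the origin,
    # from [-pi, pi] onto an index in [0, MAP_SIZE - 1].
    angle = math.atan2(y, x)
    return int((angle + math.pi) / (2 * math.pi) * (MAP_SIZE - 1))

def build_shadow_map(occluder_points):
    # For each angular bucket, keep only the distance of the nearest occluder.
    shadow_map = [float("inf")] * MAP_SIZE
    for (x, y) in occluder_points:
        bucket = angle_to_bucket(x, y)
        shadow_map[bucket] = min(shadow_map[bucket], math.hypot(x, y))
    return shadow_map

def in_shadow(shadow_map, x, y):
    # A pixel is shadowed if some occluder in the same direction is closer
    # to the light than the pixel is.
    return shadow_map[angle_to_bucket(x, y)] < math.hypot(x, y)

# Example: one occluder at (10, 0) shadows everything behind it on that ray.
sm = build_shadow_map([(10, 0)])
print(in_shadow(sm, 20, 0))   # behind the occluder
print(in_shadow(sm, 5, 0))    # between light and occluder
```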
By the way, after asking on the math stackexchange, I had an idea: he figures out the angle using atan2(y,x), but I don't think that's necessary. Couldn't you just map (x,y) -> x for positive y and (x,y) -> (x + some_offset) for negative y, where some_offset depends on the width of the texture you're drawing to? E.g. for width 800 you'd store the final depth in a 1D map that goes from 0 to 1600, where the first half contains the mappings for all positive y and the second half contains the mappings for negative y. Am I missing something here? I haven't implemented it, but I figured this would be faster than atan2().
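In code, the mapping I have in mind would be something like this (untested, and WIDTH is just the example texture width from above):

```python
# Hypothetical sketch of the proposed atan2-free mapping: split the 1D map
# into two halves of WIDTH entries each, keyed by the sign of y.
WIDTH = 800

def map_point(x, y):
    # Positive y lands in [0, WIDTH), negative y in [WIDTH, 2 * WIDTH).
    return x if y >= 0 else x + WIDTH

print(map_point(100, 5))    # first half
print(map_point(100, -5))   # second half
```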