Thank you both, those are pretty good answers. Do you think you could find one, though, for SSAO that just gives a simple equation for getting occlusion from an image? I made a program that loads a depth map image and runs an algorithm on each pixel. What should the equation per pixel be?
The reason there's no simple (at least, simpler than what's already been posted) equation is that the "equation per pixel" has to take into account, at a minimum, a bunch of neighboring pixels, because it needs to know not just the absolute depth but the depth relative to some localized area.
There's a perfectly good reason for this: it's called ambient occlusion precisely because you need to figure out how much light is prevented from reaching the target pixel (that is, how much is occluded). There's simply no way to know how much light will be occluded unless you have some information about the neighboring geometry, since it's the neighboring geometry that's actually doing the occluding.
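To make that concrete, here is about the simplest per-pixel "equation" that still qualifies as ambient occlusion: for each pixel, count what fraction of its neighbors are closer to the camera by more than some bias, ignoring neighbors that are so much closer they're probably a different object. This is a minimal sketch in Python/NumPy, assuming a depth map where smaller values are closer; the function name, parameter names, and default values are all illustrative, not from any particular SSAO paper:

```python
import numpy as np

def naive_occlusion(depth, radius=4, bias=0.01, max_range=0.1):
    """Naive per-pixel occlusion estimate from a depth map alone.

    For each pixel, count how many neighbors within `radius` pixels are
    closer to the camera (smaller depth) by more than `bias`. Neighbors
    whose depth differs by more than `max_range` are skipped, since
    they are likely a different, distant object that shouldn't occlude.
    """
    h, w = depth.shape
    occ = np.zeros((h, w), dtype=np.float32)
    offsets = [(dy, dx) for dy in range(-radius, radius + 1)
                        for dx in range(-radius, radius + 1)
                        if (dy, dx) != (0, 0)]
    for y in range(h):
        for x in range(w):
            occluders = 0
            samples = 0
            for dy, dx in offsets:
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    samples += 1
                    diff = depth[y, x] - depth[ny, nx]  # > 0: neighbor is closer
                    if bias < diff < max_range:
                        occluders += 1
            occ[y, x] = occluders / samples if samples else 0.0
    return occ  # 0 = unoccluded, approaching 1 = heavily occluded
```

Note that this samples every pixel in the neighborhood, which is exactly the brute-force approach that's too slow for real time, as described below; real implementations replace the full loop over `offsets` with a small, per-pixel-randomized sample kernel.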
Any additional complexity you perceive in the methods posted is probably a result of one of two things:
a) naively sampling all neighboring pixels at a wide enough radius is too slow for real-time performance, so the algorithms need some way to determine which pixels to sample to get a result that looks appealing. Usually this means picking some fairly arbitrary sample kernel and then changing it for each pixel so that any sampling error appears as high frequency noise.
b) to get something that even sort of looks like ambient occlusion, it's extremely valuable to think of it in terms of normals as well as just depth information. It takes a bit of additional calculation (derivatives) to reconstruct normals from just a depth map.
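The derivative trick in b) can be sketched with finite differences: the depth gradient gives two surface tangents, and their cross product is the normal. This assumes linear depth and ignores the projection, so it's only an approximation of what a real renderer would do:

```python
import numpy as np

def normals_from_depth(depth):
    """Reconstruct approximate view-space normals from a depth map
    using finite-difference derivatives (np.gradient). Assumes depth
    is linear and the image axes map directly to view-space x/y; a
    real implementation would also account for the projection.
    """
    d = depth.astype(np.float32)
    dz_dy, dz_dx = np.gradient(d)  # partial derivatives along rows/columns
    # The surface tangents are (1, 0, dz_dx) and (0, 1, dz_dy); their
    # cross product, pointing toward the camera, is (-dz_dx, -dz_dy, 1).
    n = np.dstack([-dz_dx, -dz_dy, np.ones_like(d)])
    n /= np.linalg.norm(n, axis=2, keepdims=True)
    return n  # (h, w, 3) array of unit normals
```

With normals available, each sample can be weighted by how much of the hemisphere above the surface it actually blocks, which is what pushes the result from "darkened creases" toward something that genuinely resembles ambient occlusion.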