Sure, that's the point. I do luminance downsampling, but the initial sampling is done on the whole HDR image, not on the DownscaleRenderTarget. The downside is that this has a slight performance impact.
Ok, so instead of starting at the 1/16th-size image, you started from the full-size image, calculated the initial luminance using the function you provided, and then downscaled it to 1x1 to get the average? Correct me if I'm wrong, but the code you provided seems to sample the same pixel 9 times. Is that intentional...?
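(For reference, this isn't the function from the thread, but a minimal C++ sketch of what an initial luminance pass typically computes per pixel before the downscale chain averages it. The Rec. 709 weights, the delta constant, and the sample pixel values are all assumptions, not taken from the sample being discussed.)

```cpp
#include <array>
#include <cmath>
#include <cstdio>
#include <vector>

// Hypothetical per-pixel log-luminance, in the style of Reinhard's operator.
float logLuminance(float r, float g, float b)
{
    const float delta = 1e-4f; // assumed small constant; avoids log(0) on black pixels
    float lum = 0.2126f * r + 0.7152f * g + 0.0722f * b; // Rec. 709 luma weights
    return std::log(delta + lum);
}

int main()
{
    // Averaging the log values and exponentiating yields the geometric mean,
    // which is what the downscale-to-1x1 chain approximates on the GPU.
    std::vector<std::array<float, 3>> pixels = {
        {1.0f, 1.0f, 1.0f}, {0.5f, 0.25f, 0.1f}, {2.0f, 2.0f, 2.0f}
    };
    float sum = 0.0f;
    for (const auto& p : pixels)
        sum += logLuminance(p[0], p[1], p[2]);
    std::printf("average luminance: %f\n", std::exp(sum / pixels.size()));
}
```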
The result rendertarget in the function is 1/16 the size of your source image, and PostProcess() is called 4 times. The first 3 times you call it on a temporary rendertarget which is half the size of the previous one. So you go from 1/1 -> 1/2 after the first pass, then 1/2 -> 1/4 -> 1/8, and the final pass renders to the result rendertarget, giving you your 1/16-sized texture.
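(In other words, the intended chain looks like this throwaway sketch; the 1280x720 source size is just an assumed example, not from the sample:)

```cpp
#include <cstdio>

int main()
{
    // Each PostProcess() call is meant to halve the previous target,
    // so 4 calls take the full-size source down to 1/16 in each dimension.
    int width = 1280, height = 720; // assumed full-size HDR source
    for (int pass = 1; pass <= 4; ++pass)
    {
        width /= 2;
        height /= 2;
        std::printf("pass %d target: %dx%d\n", pass, width, height);
    }
    // Prints 640x360, 320x180, 160x90, and finally 80x45 (the 1/16 result).
}
```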
I had figured that was the intention, but it's not actually using the size of the previous RT; it keeps using the size of the source RT. I checked with the debugger: source.width / 2 is the same number for all 3 RTs. I'm guessing that's a bug in the sample, then.
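(A hypothetical reconstruction of the bug described above, just to make the difference concrete. The RenderTargetDesc struct, the loop shape, and the sizes are all made up; the point is whether the halving is applied to the source or carried through the chain.)

```cpp
#include <cstdio>

struct RenderTargetDesc { int width, height; };

int main()
{
    RenderTargetDesc source = { 1280, 720 }; // assumed source size

    // Buggy: each temporary is half of *source*, so all three come out
    // the same size (source.width / 2), as seen in the debugger.
    for (int i = 0; i < 3; ++i)
        std::printf("buggy temp %d: %dx%d\n",
                    i, source.width / 2, source.height / 2);

    // Fixed: carry the previous target's size through the chain,
    // so each temporary is half of the one before it.
    RenderTargetDesc prev = source;
    for (int i = 0; i < 3; ++i)
    {
        RenderTargetDesc temp = { prev.width / 2, prev.height / 2 };
        std::printf("fixed temp %d: %dx%d\n", i, temp.width, temp.height);
        prev = temp;
    }
}
```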