Yes, it would.
Wouldn't it be more meaningful to compare each channel individually and merge the results at the end (possibly weighted by the perceptual intensities of red, green, and blue)? Blue looks nothing like red, yet they'd get a difference of zero.
When comparing images, you should accumulate the squared differences of all pixels for each channel separately, then normalize each channel separately (replace Math.Sqrt(dist) with dist /= nRows * nCols), and finally combine the channels using weights.
The squared values should also lie in the range 0-1: if you read pixel values from 0 to 255, scale each one with pixel *= 1.0f / 255.0f and square that result. That normalized square is what should be accumulated for each channel.
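The steps above (normalize to 0-1, square, accumulate per channel, divide by pixel count, combine with weights) can be sketched as follows. This is a minimal Python illustration, not the original C# code; the weights 0.299/0.587/0.114 are the common Rec. 601 luma coefficients and are an assumption here, since the answer only says "using weights":

```python
def channel_mse(a, b):
    """Mean squared difference of one channel; pixels given as 0-255 values."""
    scale = 1.0 / 255.0
    total = 0.0
    for pa, pb in zip(a, b):
        d = (pa - pb) * scale   # normalize to 0-1 before squaring
        total += d * d          # accumulate squared differences
    return total / len(a)       # divide by pixel count instead of taking a sqrt

def weighted_image_diff(r1, g1, b1, r2, g2, b2,
                        weights=(0.299, 0.587, 0.114)):
    """Combine per-channel MSEs with perceptual weights; 0.0 means identical."""
    wr, wg, wb = weights
    return (wr * channel_mse(r1, r2) +
            wg * channel_mse(g1, g2) +
            wb * channel_mse(b1, b2))
```

With this scheme a pure-red pixel (255, 0, 0) compared to a pure-blue pixel (0, 0, 255) yields a nonzero difference (0.299 + 0.114 = 0.413), whereas a comparison based on averaged grayscale intensity would report zero.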