For each texel in the source image you have an RGB triple, each component in the [0..1] range. This can be treated as a 3D texture coordinate.
By looking that colour up in a volume texture set to NEAREST sampling, you can quantize colours. Memory usage should be fine because the volume texture can be very small: an identity effect stored as R8G8B8A8 gives an 8x8x8 cube, which is 512 texels = 2K of data (1.5K if you drop the alpha channel, 1K with 16-bit colours), so that's our upper limit. Speed ain't great, but the fact that we're using NEAREST sampling should improve things a bit.
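Here's a minimal CPU-side sketch of that lookup, assuming an 8x8x8 cube and NEAREST sampling (which amounts to flooring each coordinate into the grid). The `quantize` / `identity` names are just for illustration; the identity cube here reduces to plain 3-bit-per-channel quantization.

```python
CUBE_SIZE = 8  # 8x8x8 cube -> 512 texels, 2K at 32bpp

def quantize(rgb, cube):
    """rgb: (r, g, b) floats in [0..1]; cube[r][g][b] holds a palette colour.
    NEAREST sampling maps each coordinate to floor(c * CUBE_SIZE),
    clamped so c == 1.0 still lands in the last texel."""
    r, g, b = (min(int(c * CUBE_SIZE), CUBE_SIZE - 1) for c in rgb)
    return cube[r][g][b]

# An 'identity' cube: each cell stores the colour at its own centre,
# so the net effect is straight quantization to 8 levels per channel.
identity = [[[((r + 0.5) / CUBE_SIZE,
               (g + 0.5) / CUBE_SIZE,
               (b + 0.5) / CUBE_SIZE)
              for b in range(CUBE_SIZE)]
             for g in range(CUBE_SIZE)]
            for r in range(CUBE_SIZE)]
```

On the GPU this would just be one dependent texture read per fragment, with the source colour fed in as the 3D texture coordinate.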
Only problem is... how do we fill out our cube in the first place? We can either pick a predefined 'palette' of colours - that might work best - or we can try to analyse the image to determine which palette best represents the colours present in it.
I'm not sure how to do the latter, though. Maybe build a histogram and select colours ordered by count, highest first? Need to see how Photoshop does it...
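For what it's worth, the histogram idea above is the classic 'popularity' algorithm. A minimal sketch, assuming pixels come in as exact colour tuples (`popularity_palette` is a made-up name, and this is a strawman, not what Photoshop actually does):

```python
from collections import Counter

def popularity_palette(pixels, n_colours):
    """Count exact colour occurrences and keep the n most common.
    pixels: iterable of hashable colour tuples, e.g. (r, g, b) bytes."""
    counts = Counter(pixels)
    return [colour for colour, _ in counts.most_common(n_colours)]
```

The known weakness is that rare-but-important colours (highlights, accents) get dropped entirely, which may well be the "doesn't work" below; splitting schemes like median cut handle that better.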
Edit: wait, never mind, doesn't work