I assume the range of numeric values is too large to fit within a 'char' (byte) type's value span??
If it did fit, that would be a straightforward 'natural' compression.
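As a minimal sketch of that 'natural' compression (names here are mine, not from the original post): scan for min/max, and if the span fits in a byte, store each value as a one-byte offset from the minimum.

```c
#include <stdint.h>
#include <stddef.h>

/* If (max - min) fits in a byte, pack each value as a uint8_t
 * offset from min.  Returns 1 on success, 0 if the span is too
 * wide.  Decode with: value == base + dst[i]. */
int try_byte_pack(const int *src, size_t n, uint8_t *dst, int *base_out)
{
    if (n == 0) return 0;
    int min = src[0], max = src[0];
    for (size_t i = 1; i < n; i++) {
        if (src[i] < min) min = src[i];
        if (src[i] > max) max = src[i];
    }
    if ((long)max - (long)min > 255)
        return 0;                    /* span too wide to pack */
    for (size_t i = 0; i < n; i++)
        dst[i] = (uint8_t)(src[i] - min);
    *base_out = min;
    return 1;
}
```

Random reads stay O(1): one add per access, which is about as cheap as 'compression' gets.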
That is the first thing I would check when considering storage. That includes fixed values which DON'T use the entire range between min and max: if significantly fewer distinct values are actually used, a lookup array can shrink the stored value to an index into a separate fixed-value array...
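A sketch of that lookup-array (dictionary) idea, assuming at most 256 distinct values so indices fit in one byte; all names are illustrative:

```c
#include <stdint.h>
#include <stddef.h>

#define MAX_DISTINCT 256

/* The big array holds one-byte indices into a small table of the
 * distinct values actually in use. */
typedef struct {
    int     values[MAX_DISTINCT];  /* distinct values, <= 256 of them */
    size_t  count;
    uint8_t *indices;              /* one byte per array element */
    size_t  n;
} dict_array;

/* Find the table slot for v, adding it if new; -1 if table is full. */
static int dict_slot(dict_array *d, int v)
{
    for (size_t i = 0; i < d->count; i++)
        if (d->values[i] == v) return (int)i;
    if (d->count == MAX_DISTINCT) return -1;
    d->values[d->count] = v;
    return (int)d->count++;
}

int dict_set(dict_array *d, size_t i, int v)
{
    int slot = dict_slot(d, v);
    if (slot < 0) return 0;   /* would need a rebuild or wider index */
    d->indices[i] = (uint8_t)slot;
    return 1;
}

int dict_get(const dict_array *d, size_t i)
{
    return d->values[d->indices[i]];
}
```

Reads are still O(1) (one extra indirection); writes are cheap until the table fills up, which is where the rebuild question below comes in.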
Otherwise, a more complicated 'compression' method usually requires rebuilding the entire data set when substituting new values -- BUT if that happens infrequently, it might still pay off.
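What that rebuild might look like, sketched standalone (my names, and a naive O(n*k) scan for clarity): re-scan the live data, collect the distinct values still in use, and re-encode the whole index array in one pass. Infrequent substitutions amortize this cost.

```c
#include <stdint.h>
#include <stddef.h>

/* Rebuild the lookup table from the decoded data.  Writes the
 * distinct values into table[] and one-byte indices into indices[].
 * Returns the distinct count, or 0 if more than 256 values exist. */
size_t rebuild_dict(const int *data, size_t n,
                    int *table, uint8_t *indices)
{
    size_t count = 0;
    for (size_t i = 0; i < n; i++) {
        size_t s;
        for (s = 0; s < count; s++)
            if (table[s] == data[i]) break;
        if (s == count) {
            if (count == 256) return 0;  /* won't fit in one byte */
            table[count++] = data[i];
        }
        indices[i] = (uint8_t)s;
    }
    return count;
}
```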
You didn't say how large this array is, or what overhead is acceptable (or not) for the different operations
(those affect whether the overhead outweighs the data-size savings -- after all, memory is a lot cheaper than it used to be).
When you say 'read a random' element, do you mean a true random selection, or simply that any access may hit any array element? There might be ways with some optimized compression schemes that are cheap 'at the edges' (treating the array like a circular buffer), but where the value of the index is not locked in (the index itself being a random value used to facilitate a random pick).