GL_INTERNALFORMAT_PREFERRED is supposed to give you the internal format that the driver is actually going to use. So if GL_INTERNALFORMAT_PREFERRED returns GL_RGB then the driver is saying that is how it plans on storing the data internally. So either newer Radeon cards have hardware support for 24-bit texture formats, or AMD's implementation of ARB_internalformat_query2 has its pants on fire.
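For reference, this is a minimal sketch of the query in question, assuming a GL 4.3 (or ARB_internalformat_query2-capable) context is already current and that you're using whatever loader header your project normally uses:

```c
#include <GL/glcorearb.h>  /* or your loader's header, e.g. glad/glew */
#include <stdio.h>

void print_preferred_format(void)
{
    GLint preferred = 0;

    /* Ask the driver which internal format it would actually use
     * when we request GL_RGB8 for a 2D texture. */
    glGetInternalformativ(GL_TEXTURE_2D, GL_RGB8,
                          GL_INTERNALFORMAT_PREFERRED, 1, &preferred);

    printf("Preferred internal format for GL_RGB8: 0x%X\n", preferred);
}
```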
The specification is actually a lot looser than that, and internal formats just specify the minimum that the driver will give you; the driver is allowed to give you more. Section 8.5.1 of the core GL 4.3 spec clarifies that this is even true of the required sized internal formats.
It's not surprising that you get "GL_RGB8" as the preferred internal format here: the defined behaviour of these internal formats is to return the r, g and b components of the texture when sampled, but always return 1 for alpha. The preferred internal format has to match that behaviour, so the driver reports one that does. If it reported an RGBA internal format instead, and you actually used it, the sampling behaviour could change (if, say, your source data had anything other than 255 in the alpha byte).
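To make the difference concrete, here's a rough sketch (the function name is just for illustration) of uploading the same RGBA source data with either internal format; only the internal format argument decides whether the alpha byte ever shows up when you sample:

```c
#include <GL/glcorearb.h>

GLuint make_texture(GLenum internal_format, const unsigned char *rgba_pixels,
                    int width, int height)
{
    GLuint tex = 0;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);

    /* With GL_RGB8 the alpha byte in the source data is ignored and the
     * sampler always returns 1.0 for alpha; with GL_RGBA8 the stored
     * alpha comes back, so a non-255 alpha byte changes what you sample. */
    glTexImage2D(GL_TEXTURE_2D, 0, internal_format, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, rgba_pixels);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    return tex;
}
```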
I think you're viewing all of this as if it were a lower-level description of what actually happens in hardware, whereas it's not. We're still at quite a high level of abstraction here, and GL isn't specifying anything to do with hardware; it's specifying what the implementation does.
Think of this as being somewhat similar to malloc in C: if you need, say, 64 bytes allocated, malloc can satisfy that request by allocating exactly 64 bytes. It can equally satisfy it by allocating 128 bytes if that's friendlier to your hardware (perhaps by aligning the allocation to a hypothetical 128 byte wide cache line).
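A rough illustration of that analogy, using the glibc-specific malloc_usable_size() (so this particular call is an assumption about your platform): the allocator, like the driver, is free to give you more than you asked for.

```c
#include <malloc.h>
#include <stdio.h>
#include <stdlib.h>

int main(void)
{
    void *p = malloc(64);                 /* we asked for 64 bytes... */
    if (p) {
        /* ...but the allocator may have reserved more for its own reasons,
         * just as the driver may store a "24-bit" texture in 32 bits. */
        printf("requested 64, usable %zu\n", malloc_usable_size(p));
        free(p);
    }
    return 0;
}
```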