I don't think they strictly contradict each other. The wording of the buffer usage flags is admittedly hard to parse and somewhat ambiguous. That's intentional, too: the usage flags are very general hints, not mandates (the implementation is free to ignore what you tell it entirely).
STREAM usage suggests that you upload data once, use it once or maybe twice (e.g. to draw something), and then don't need it any more. On the next occasion, very soon (usually the next frame), you do the same thing, but with new data. Think of presenting a video: once displayed, the old frame isn't very interesting any more; you likely want to display a different one next time. A sketch of that pattern follows below.
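As a minimal sketch of that per-frame pattern (the names `vbo`, `frameData`, `frameBytes`, and `vertexCount` are placeholders for your own buffer object and CPU-side data, not anything mandated by the API):

```c
/* Each frame: re-specify the buffer's data store with GL_STREAM_DRAW,
 * draw once, then overwrite it again next frame. */
glBindBuffer(GL_ARRAY_BUFFER, vbo);

/* Optional "orphaning" idiom: re-specifying with NULL first lets the
 * driver hand out a fresh store instead of stalling in case last
 * frame's draw is still in flight (a common trick, not required). */
glBufferData(GL_ARRAY_BUFFER, frameBytes, NULL, GL_STREAM_DRAW);
glBufferData(GL_ARRAY_BUFFER, frameBytes, frameData, GL_STREAM_DRAW);

glDrawArrays(GL_TRIANGLES, 0, vertexCount); /* use once...          */
                                            /* ...repeat next frame. */
```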
That is more or less exactly what Arcsynthesis says ("set data [...] generally once per frame"), and it matches what the official docs say, too ("modified once and used at most a few times").
Why would OpenGL want to know anyway? The implementation/driver might choose not to allocate a dedicated block of GPU memory for your data, but instead DMA it in right when it's being used. Since you're only going to use that data once (or maybe twice), that will probably do; reserving GPU memory pays off for data that is accessed many thousands of times, not for data you throw away after one draw.
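For contrast, here is a sketch of the opposite case, where a dedicated GPU allocation is worth it: data uploaded once at load time and then drawn every frame. Again, `meshVbo`, `meshBytes`, `meshData`, and `meshVertexCount` are hypothetical names:

```c
/* Load time: upload once with GL_STATIC_DRAW. The hint tells the
 * driver we don't intend to modify this again, so placing it in
 * dedicated GPU memory is likely worthwhile. */
glBindBuffer(GL_ARRAY_BUFFER, meshVbo);
glBufferData(GL_ARRAY_BUFFER, meshBytes, meshData, GL_STATIC_DRAW);

/* Every frame thereafter: just draw, no re-upload. */
glBindBuffer(GL_ARRAY_BUFFER, meshVbo);
glDrawArrays(GL_TRIANGLES, 0, meshVertexCount);
```

Either way, remember these are only hints: a driver may well profile actual usage and relocate the buffer regardless of what you declared.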