Are they doing something really different, am I just fooled by low-res textures, or is their whole memory architecture different anyway? Just curious.
Both. Basically, this problem is specific to OpenGL setups that don't allow sharing resources between contexts. PC and Xbox have always used DirectX, so resources can be created and filled entirely on a second thread. Other modern platforms may expose OpenGL through wrappers (the really modern ones are effectively DirectX 11 anyway), but their underlying architecture doesn't enforce OpenGL's state-machine-driven way of handling things. Mobile devices don't have this problem either: their OpenGL ES implementations can share resources across contexts, so a texture can be fully loaded on one thread and then used on the other.
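For illustration, here's a minimal sketch of the shared-context approach mobile platforms allow, assuming EGL with OpenGL ES 2 and an EGLConfig that supports pbuffer surfaces. A second context is created that shares objects with the render thread's context, and the texture upload happens entirely on the loader thread. `load_pixels()` and the texture dimensions are placeholders, not part of any real codebase.

```c
/* Sketch: background texture loading via a shared EGL/GLES 2 context.
 * Run loader_thread() on a worker thread (e.g. via pthread_create). */
#include <EGL/egl.h>
#include <GLES2/gl2.h>
#include <pthread.h>

extern const void *load_pixels(void); /* placeholder image decoder */

typedef struct {
    EGLDisplay display;
    EGLConfig  config;  /* assumed to support pbuffer surfaces */
    EGLContext share;   /* the render thread's context */
    GLuint     texture; /* filled in by the loader */
} LoaderArgs;

static void *loader_thread(void *p)
{
    LoaderArgs *args = p;

    /* Second context sharing objects (textures, buffers, shaders)
     * with the render thread's context. */
    static const EGLint ctx_attribs[] = {
        EGL_CONTEXT_CLIENT_VERSION, 2, EGL_NONE
    };
    EGLContext ctx = eglCreateContext(args->display, args->config,
                                      args->share, ctx_attribs);

    /* A 1x1 pbuffer so this thread can make the context current
     * without owning a window. */
    static const EGLint pb_attribs[] = {
        EGL_WIDTH, 1, EGL_HEIGHT, 1, EGL_NONE
    };
    EGLSurface pbuf = eglCreatePbufferSurface(args->display,
                                              args->config, pb_attribs);
    eglMakeCurrent(args->display, pbuf, pbuf, ctx);

    /* Upload the texture entirely on this thread. */
    glGenTextures(1, &args->texture);
    glBindTexture(GL_TEXTURE_2D, args->texture);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 256, 256, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, load_pixels());

    /* Ensure the upload is complete before the render thread binds
     * the texture name from its own context. */
    glFinish();

    eglMakeCurrent(args->display, EGL_NO_SURFACE, EGL_NO_SURFACE,
                   EGL_NO_CONTEXT);
    eglDestroySurface(args->display, pbuf);
    eglDestroyContext(args->display, ctx);
    return NULL;
}
```

Once `loader_thread` finishes, the render thread can bind `args->texture` directly; because the contexts share objects, no copy or re-upload is needed. On platforms without context sharing, the only option is to stream raw bytes on the worker thread and do the actual `glTexImage2D` on the render thread, which is exactly the hitch the question describes.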