Nothing I could cite, but you can read about unified memory models (UMM) in general online.
Do you have any more information about this bandwidth issue?
What it means for mobile devices is that the GPU and CPU share the same memory, unlike on desktops, where each has its own dedicated memory.
When a GPU has its own memory, it can only access that memory, so whatever you want to draw has to, at some point, be transferred across the bus from CPU RAM to GPU RAM. How much you can transfer, and how fast, is "bandwidth".
So on desktops your index and vertex buffers have to be copied across the bus, and thus smaller is better.
For UMM, no copy has to take place since the GPU can access the vertex/index buffers directly wherever they are in “normal” RAM.
Smaller is still better, but not as significantly.
And there are still things that can cause the driver to make a copy (though it is just "normal" RAM to "normal" RAM, literally via memcpy()):
- If you are not using a VBO, the entire vertex buffer will be copied.
- If you are not using an IBO, the entire index buffer will be copied.
- If your vertex-buffer elements are poorly aligned (for example, 6-byte positions made of three 16-bit components), the entire vertex buffer will be copied, and slowly, since the driver also realigns the vertex data as it copies.