Because of texture-coordinate swizzling (Morton order), a fetch will hopefully touch only two cache lines, one per mip level (not counting anisotropic filtering), and the two fetches will probably be done in parallel. The cache hierarchy is certainly optimized to match common access patterns as well.
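As a minimal sketch of why Morton order helps: the texel index is built by interleaving the bits of the x and y coordinates, so a small 2D neighbourhood of texels maps to a small range of addresses (the exact layout on real hardware is more involved; this is just the basic idea).

```python
def morton_2d(x: int, y: int) -> int:
    """Interleave the bits of x and y into a z-order (Morton) index.

    Neighbouring texels get nearby indices, so a 2x2 bilinear
    footprint usually falls within one cache line.
    """
    result = 0
    for bit in range(16):
        result |= ((x >> bit) & 1) << (2 * bit)       # x bits -> even positions
        result |= ((y >> bit) & 1) << (2 * bit + 1)   # y bits -> odd positions
    return result

# Horizontally adjacent texels end up adjacent in memory:
print(morton_2d(2, 3))  # 14
print(morton_2d(3, 3))  # 15
```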
On GPUs, latency is also very well hidden by many pixels being shaded concurrently (similar to hyperthreading). As a simplified example, say the shader runs as 25 instances per core, and each instance spends 4 cycles computing texture coordinates and issuing a texture read, followed by 100 cycles of latency waiting for the texture data. The first instance does its 4 cycles and starts its fetch; at that point the scheduler suspends it and lets a second instance run its 4 cycles and issue its fetch, and so on, until all 25 instances have finished their first 4 cycles and 100 cycles have passed.
By then the results of the texture fetches have started coming in, so the first instance can resume with its texture data already sitting in a register, and each instance again runs in sequence, doing whatever it needs once the texture color is available. The entire 100-cycle latency is hidden in this case.
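The accounting above can be sketched in a few lines: one pass of setup math across all resident instances supplies instances × cycles of useful work, and a fetch is fully hidden when that covers its latency (this uses the post's simplified numbers, not any real GPU's scheduler).

```python
def cycles_covered(instances: int, alu_cycles: int) -> int:
    """Cycles of fetch latency one pass of setup work can overlap,
    using the simplified round-robin accounting from the example."""
    return instances * alu_cycles

# 25 instances x 4 ALU cycles = 100 cycles of work,
# exactly covering the 100-cycle texture-fetch latency:
print(cycles_covered(25, 4))                 # 100
print(cycles_covered(25, 4) >= 100)          # True -> latency fully hidden
```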
Most of the 25 instances will probably also fetch texture data from shared cache lines, so not all 25 fetches have to result in actual memory reads (how much this matters varies by scenario).
There is often a very large number of registers per core in total; if there are, say, 250 per core, each of the 25 instances can have 10 registers to itself, which makes switching between them possible without wasting any cycles.
If each shader instance only required 5 registers when 250 are available, 50 instances could run on one core instead, allowing 200 cycles of latency to be hidden rather than just 100, for the same 4 cycles of calculation per texture fetch. Exactly how this works probably differs between GPUs.
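Putting the register trade-off into numbers (again using the made-up 250-register core from the example): the number of resident instances is the register file size divided by each instance's register count, and latency coverage scales with it.

```python
REGISTERS_PER_CORE = 250   # register file size assumed in the example
ALU_CYCLES = 4             # setup cycles per instance before its fetch

def occupancy(regs_per_instance: int) -> int:
    """How many instances fit on one core, limited only by registers."""
    return REGISTERS_PER_CORE // regs_per_instance

for regs in (10, 5):
    n = occupancy(regs)
    # registers each, resident instances, cycles of latency coverable
    print(regs, n, n * ALU_CYCLES)
# 10 registers -> 25 instances -> 100 cycles hidden
#  5 registers -> 50 instances -> 200 cycles hidden
```

This is why shader compilers try hard to keep register pressure down: fewer registers per instance means more instances in flight, and hence more latency that can be hidden.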
EDIT: Edited for clarity (I hope)