The calculation of memory addresses for texture sampling is somewhat specialized logic. I wouldn't be surprised if even the latest GPU hardware had dedicated units for this specific purpose.
Conceptually, the operation finds the texels nearest to the sample coordinate(s), loads those texels, blends them according to their distance from the sample point, and stores the result in shader-accessible registers. In practice, there is a bit more complexity to it than that.
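The conceptual operation described above is essentially bilinear filtering. A minimal sketch of that blend step, ignoring mip levels and assuming a clamp-to-edge addressing mode (the function name and texture layout here are my own, not any particular API's):

```python
import math

def bilinear_sample(texture, u, v):
    """Sample a 2D texture (a list of rows of floats) at normalized
    coordinates (u, v) with bilinear filtering."""
    h, w = len(texture), len(texture[0])
    # Map normalized coordinates to texel space, centering on texels.
    x = u * w - 0.5
    y = v * h - 0.5
    x0, y0 = math.floor(x), math.floor(y)  # upper-left of the 2x2 block
    fx, fy = x - x0, y - y0                # fractional distances

    def tex(ix, iy):
        # Clamp addressing mode at the texture edges.
        ix = min(max(ix, 0), w - 1)
        iy = min(max(iy, 0), h - 1)
        return texture[iy][ix]

    # Blend the four nearest texels by their distance from the sample point.
    top = tex(x0, y0) * (1 - fx) + tex(x0 + 1, y0) * fx
    bot = tex(x0, y0 + 1) * (1 - fx) + tex(x0 + 1, y0 + 1) * fx
    return top * (1 - fy) + bot * fy
```

Sampling a 2x2 texture exactly at its center, for example, returns the average of all four texels.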
For selecting the mip levels to sample from, the sampler observes the partial derivatives ddx and ddy of the sample coordinate across the screen, and uses the lengths of these derivative vectors to determine the effective detail level covered by the current output pixel. Depending on the "sampling quality level" (as exposed in the control panels of some drivers), the minimum, the maximum, or an arbitrarily biased combination of ddx and ddy may be used.
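This level-of-detail selection can be sketched roughly as follows. The "max of the two derivative lengths" rule used here is a common convention, not necessarily what any specific piece of hardware does; as noted above, drivers may use the minimum or a biased value instead:

```python
import math

def mip_level(ddx, ddy, bias=0.0):
    """Pick a mip level from the screen-space derivatives of the texel
    coordinate.  ddx and ddy are (du, dv) pairs in texel units per pixel.
    The footprint is taken as the longer of the two derivative vectors."""
    len_x = math.hypot(*ddx)
    len_y = math.hypot(*ddy)
    rho = max(len_x, len_y)  # effective texel footprint per pixel
    # log2 turns the footprint into a mip index: footprint 1 -> level 0,
    # footprint 2 -> level 1 (half resolution), and so on.
    return max(0.0, math.log2(max(rho, 1e-8)) + bias)
```

For instance, a pixel that covers two texels in each direction lands on mip level 1, the half-resolution level.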
Additionally, when anisotropic filtering is enabled, the hardware considers the direction and length of the longer derivative vector and takes n samples along it. This gathers enough data for the sample even when one of the sample coordinate derivatives is considerably smaller than the other (which happens when the geometry slopes steeply with respect to screen space), and effectively improves the detail level of the texture whenever it is not viewed exactly head-on.
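A simplified sketch of that setup: treat the longer derivative as the major axis of the pixel's footprint, derive the tap count from the ratio of the two axes, and pick the mip level from the shorter axis so detail is preserved. This is an idealized model under my own assumptions (the function name and the cap of 16 taps are illustrative, not from any specific GPU):

```python
import math

def aniso_setup(ddx, ddy, max_aniso=16):
    """Choose the tap count, step vector, and mip level for a simple
    anisotropic filter from the two texel-coordinate derivatives."""
    len_x = math.hypot(*ddx)
    len_y = math.hypot(*ddy)
    if len_x >= len_y:
        major, p_major, p_minor = ddx, len_x, len_y
    else:
        major, p_major, p_minor = ddy, len_y, len_x
    p_minor = max(p_minor, 1e-8)
    # Anisotropy ratio decides how many taps to take, capped by hardware.
    n = min(max_aniso, max(1, math.ceil(p_major / p_minor)))
    # Mip level comes from the *minor* axis, keeping the texture sharp.
    lod = max(0.0, math.log2(p_minor))
    # Taps are distributed along the major axis at this spacing.
    step = (major[0] / n, major[1] / n)
    return n, step, lod
```

With ddx four times longer than ddy, for example, this takes 4 taps along ddx at mip level 0, instead of the single blurry mip-2 sample an isotropic filter would use.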