The best way to think of it is to picture what a pixel looks like when projected into texture space. If the surface is aligned with the camera near plane then the pixel maps to a square in texture space. It could be rotated, however, and look like a diamond. For point sampling, the texel closest to the centre of this square is used. For bilinear, it's a weighted average of the 4 nearest texels. For mip mapping, a mip level is selected such that the square covers about 1 texel of area. Think about this square projected onto each mip level: with each lower mip level, each texel effectively doubles in size along each axis.
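To make that selection concrete, here's a minimal sketch in C++ (my own illustration, not any particular API's code; Vec2 and selectMipLevel are made-up names) of picking a mip level from the footprint's texture-space derivatives:

```cpp
#include <cmath>
#include <algorithm>

struct Vec2 { float x, y; };

float length(Vec2 v) { return std::sqrt(v.x * v.x + v.y * v.y); }

// dUVdx / dUVdy: change in texture coordinates per pixel step in screen
// x and y (the sides of the projected pixel footprint), in texel units.
float selectMipLevel(Vec2 dUVdx, Vec2 dUVdy)
{
    // The footprint covers roughly 1 texel of area at the mip level where
    // its longer side spans about one texel, hence the log2.
    float footprint = std::max(length(dUVdx), length(dUVdy));
    return std::max(0.0f, std::log2(footprint));
}
```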
For anisotropic filtering the pixel footprint is elongated and forms a thin rectangle. There is no single mip level that fully represents the whole rectangle. Instead the rectangle can be subdivided into smaller, more square-like rectangles. Each of these subdivisions has its own sample centre and can be bilinear filtered individually, then averaged for a result which better represents all the texels the rectangle touches.
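Here's a hedged sketch of that subdivision idea, written software-style for clarity rather than as what the hardware actually does; bilinearSample is just a stand-in, and a single float channel is used for brevity:

```cpp
#include <cmath>
#include <algorithm>

struct Vec2 { float x, y; };

// Stand-in for real bilinear filtering at a given mip level.
float bilinearSample(Vec2 uv, float mip)
{
    (void)uv; (void)mip;
    return 0.0f; // placeholder: a real version would filter the texture
}

// majorAxis / minorAxis: the long and short sides of the footprint
// rectangle, in texel units.
float anisotropicSample(Vec2 uv, Vec2 majorAxis, Vec2 minorAxis, int maxAniso)
{
    float majorLen = std::sqrt(majorAxis.x * majorAxis.x + majorAxis.y * majorAxis.y);
    float minorLen = std::sqrt(minorAxis.x * minorAxis.x + minorAxis.y * minorAxis.y);

    // Number of square-ish sub-footprints = how elongated the rectangle is.
    int n = std::min(maxAniso, std::max(1, (int)std::ceil(majorLen / minorLen)));

    // Mip level comes from the short side, so each sub-sample is ~1 texel.
    float mip = std::max(0.0f, std::log2(minorLen));

    // Space the sample centres evenly along the long axis and average.
    float sum = 0.0f;
    for (int i = 0; i < n; ++i)
    {
        float t = (i + 0.5f) / n - 0.5f; // -0.5..0.5 along the major axis
        Vec2 p = { uv.x + majorAxis.x * t, uv.y + majorAxis.y * t };
        sum += bilinearSample(p, mip);
    }
    return sum / n;
}
```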
Hope this helps. I know pictures would explain it better; I'm sure there are some good visual explanations out there.
The point is that the ALU processing capabilities far exceed those of the fixed-function triangle setup and rasterizer. Using compute you can prune the set of triangles to get rid of the ones that don't lead to any shaded pixels (back-facing, off-screen, or too small to cover a sample) and would therefore be discarded anyway. It's purely there to get a bit more performance.
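As a rough illustration (my own sketch, not any particular engine's code), the per-triangle tests such a compute pass runs look something like this over already-projected screen-space positions; a real version runs per triangle on the GPU and compacts the survivors into a new index buffer:

```cpp
#include <cmath>
#include <algorithm>

struct Vec2 { float x, y; };

bool triangleContributesPixels(Vec2 a, Vec2 b, Vec2 c,
                               float screenW, float screenH)
{
    // Back-face cull: signed area <= 0 means it faces away. (The sign
    // convention depends on winding and whether screen y points down.)
    float area = (b.x - a.x) * (c.y - a.y) - (c.x - a.x) * (b.y - a.y);
    if (area <= 0.0f)
        return false;

    // Screen-space bounding box in pixel units.
    float minX = std::min({a.x, b.x, c.x}), maxX = std::max({a.x, b.x, c.x});
    float minY = std::min({a.y, b.y, c.y}), maxY = std::max({a.y, b.y, c.y});

    // Viewport cull: box entirely off screen.
    if (maxX < 0.0f || maxY < 0.0f || minX >= screenW || minY >= screenH)
        return false;

    // Small-primitive cull (approximate test): if the box rounds to the
    // same boundary on either axis it encloses no pixel centre, so the
    // rasterizer would emit nothing for it.
    if (std::round(minX) == std::round(maxX) ||
        std::round(minY) == std::round(maxY))
        return false;

    return true;
}
```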
The main difference is that camera motion blur can be done via the depth buffer and some static per-frame constants, whereas object motion blur needs a velocity per pixel, or at least some per-pixel data from which to calculate it. If you are doing motion blur on skinned objects then this requires extra work: either skinning with both the current and previous frame's bones in the VS, or caching off the skinned data for at least one frame.
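For the camera-only case, the reprojection looks roughly like this (sketched here with GLM on the CPU for clarity; in a real renderer it runs per pixel in a shader, and the matrix names are my own):

```cpp
#include <glm/glm.hpp>

// uv: pixel coordinate in [0,1]^2 (ignoring any y-flip between UV and
// NDC conventions); depth: raw depth buffer value, assuming a D3D-style
// 0..1 clip depth (GL conventions would remap it to -1..1 as well).
glm::vec2 cameraVelocity(glm::vec2 uv, float depth,
                         const glm::mat4& invViewProj,  // current frame
                         const glm::mat4& prevViewProj) // previous frame
{
    // Rebuild the clip-space position of this pixel from the depth buffer.
    glm::vec4 clip(uv * 2.0f - 1.0f, depth, 1.0f);

    // Back to world space: these are the static per-frame constants.
    glm::vec4 world = invViewProj * clip;
    world /= world.w;

    // Project with last frame's camera to find where the pixel used to be.
    glm::vec4 prevClip = prevViewProj * world;
    glm::vec2 prevUv = (glm::vec2(prevClip) / prevClip.w) * 0.5f + 0.5f;

    // Screen-space velocity: the blur is sampled along this vector.
    return uv - prevUv;
}
```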
Ultimately it's because of efficiency and ease of integration.
I wrote an impostor-based system for a sports game where we would draw up to 20,000 instances of the crowd. I would then swap in the 3D model when you got close enough that the impostor texture would start to magnify, and managed to do this with no noticeable pop.
I have a blog post half written on how I did this and will update this thread when I finish it. The secret sauce, however, is to differentiate between internal and external perspective with regard to the impostor meshes. By this I mean we rendered the models to the impostor texture with an ortho projection, and the quads were then rendered into the world with a perspective projection. Then when drawing the 3D mesh I would do this same ortho-in-local-space, persp-in-world-space operation. This internal ortho projection was then lerped to a full persp projection as it got closer to the camera. The downside is you get a weird rotation of the 3D model as it approaches the camera, but it's something you have to look for.
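Until the post is done, the per-vertex blend looked something like this (a simplified sketch of the idea rather than the shipping code; GLM, with illustrative names like flattenDir and blend):

```cpp
#include <glm/glm.hpp>

// localPos:   vertex position in the model's local space.
// flattenDir: normalized local-space direction towards the camera (the
//             axis the impostor was ortho-projected along).
// blend:      0 = fully flattened (matches the impostor), 1 = true 3D,
//             driven by distance to the camera.
glm::vec4 blendedVertex(glm::vec3 localPos, glm::vec3 flattenDir, float blend,
                        const glm::mat4& model, const glm::mat4& viewProj)
{
    // Ortho-project the vertex onto the plane through the local origin:
    // removing the depth along flattenDir reproduces the flat impostor.
    glm::vec3 flat = localPos - flattenDir * glm::dot(localPos, flattenDir);

    // Lerp from the flattened shape back to the real one as we approach.
    glm::vec3 p = glm::mix(flat, localPos, blend);

    // The world/perspective part is shared: the same kind of transform
    // the impostor quad goes through.
    return viewProj * model * glm::vec4(p, 1.0f);
}
```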