There are a number of ways to do it:
You can render the object slightly larger than it is, in black, with depth testing disabled, and then render the object normally on top; the black pass shows through around the silhouette, producing an outline. I believe this approach is probably outdated, though.
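The "slightly larger" pass is usually done by pushing each vertex outward along its normal. A minimal sketch with numpy (the function name, the `thickness` parameter, and the sample data are illustrative, not from the answer above):

```python
import numpy as np

def expand_for_outline(vertices, normals, thickness=0.02):
    """Offset each vertex along its (normalized) normal.

    Rendering the expanded mesh in flat black, then the real mesh
    on top, leaves a rim of black around the silhouette.
    """
    # Normalize so the outline thickness is uniform regardless of
    # how the source normals are scaled.
    n = normals / np.linalg.norm(normals, axis=1, keepdims=True)
    return vertices + thickness * n

# Two example vertices with unnormalized normals (assumed data):
verts = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
norms = np.array([[1.0, 0.0, 0.0], [0.0, 2.0, 0.0]])
expanded = expand_for_outline(verts, norms, thickness=0.1)
# Each vertex moves 0.1 units along its normal direction.
```

In a real renderer this offset would live in the vertex shader of the outline pass, with front-face culling enabled so only the back-facing "shell" is visible.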
You can render the scene as usual, and then use edge detection to draw the outline. The color buffer, the depth buffer, or both can serve as the data source. You could also render a separate buffer solely for edge detection, in which you rasterize each distinct object with a value strongly differentiated from the others.
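Using the depth buffer as the data source, the edge pass boils down to flagging pixels where depth changes abruptly. A rough numpy sketch (the function name, the `threshold` value, and the toy depth buffer are assumptions for illustration):

```python
import numpy as np

def depth_edges(depth, threshold=0.1):
    """Mark pixels where the depth buffer jumps sharply.

    Simple finite differences stand in for a fuller Sobel filter;
    large depth discontinuities correspond to object silhouettes.
    """
    gx = np.abs(np.diff(depth, axis=1, prepend=depth[:, :1]))
    gy = np.abs(np.diff(depth, axis=0, prepend=depth[:1, :]))
    return (gx + gy) > threshold

# Toy scene: background at depth 1.0, a square object at depth 0.5.
depth = np.ones((8, 8))
depth[2:6, 2:6] = 0.5
edges = depth_edges(depth)
# `edges` is True along the square's border and False in flat regions.
```

In practice this runs as a full-screen fragment shader sampling the depth texture; the same structure applies to an object-ID buffer, where any difference between neighboring samples marks an edge.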
For the animation-style shading, you basically use the normal lighting models, except that you transition between discrete steps in a color palette rather than smoothly. You can discretize a single base color directly in the shader, use the dot product of the normal and light direction to index a one-dimensional palette, or use the normal of each pixel to index into a 2D texture that is a polar-coordinate representation of the discrete transitions.
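The first variant, quantizing the lighting term directly, can be sketched like this (function name and the `steps` count are illustrative assumptions):

```python
import numpy as np

def toon_shade(normal, light_dir, steps=4):
    """Lambert diffuse term snapped to a small number of bands."""
    n = np.asarray(normal) / np.linalg.norm(normal)
    l = np.asarray(light_dir) / np.linalg.norm(light_dir)
    # Standard smooth diffuse term, clamped to [0, 1].
    ndotl = max(float(np.dot(n, l)), 0.0)
    # Quantize: floor into `steps` discrete bands, rescale to [0, 1].
    return np.floor(ndotl * steps) / steps

full = toon_shade([0.0, 0.0, 1.0], [0.0, 0.0, 1.0])   # 1.0 (brightest band)
mid = toon_shade([0.0, 0.0, 1.0], [0.8, 0.0, 0.6])    # 0.6 snaps down to 0.5
```

Replacing the `floor` division with a lookup into a 1D ramp texture gives the palette-indexing variant described above, and lets an artist paint the band boundaries instead of spacing them evenly.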