Personally I have always followed the approach outlined in this paper by Walter et al., although I am by no means an expert in path tracing or Monte Carlo techniques. Basically you generate a microfacet normal based on your normal distribution function, and then you reflect the view vector about that normal to compute the sampling direction. That paper gives the formulas for generating microfacet normals for 3 distributions (Beckmann, Blinn-Phong, and GGX), and also shows how to roll the PDF and BRDF together into a single, simplified sample weight.
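As a rough sketch of that first step, here's GGX half-vector sampling in tangent space (macrosurface normal along +Z), using the θ/φ sampling equations from the Walter et al. paper; the function names are mine, not from the paper:

```python
import math

def sample_ggx_half_vector(alpha, u1, u2):
    """Sample a microfacet normal m from the GGX distribution with
    roughness alpha, given uniform random numbers u1, u2 in [0, 1).
    Returned in tangent space where the macrosurface normal is +Z."""
    theta = math.atan(alpha * math.sqrt(u1) / math.sqrt(1.0 - u1))
    phi = 2.0 * math.pi * u2
    sin_t = math.sin(theta)
    return (sin_t * math.cos(phi), sin_t * math.sin(phi), math.cos(theta))

def reflect(v, m):
    """Reflect direction v about normal m: 2 (v . m) m - v.
    Used to turn the sampled half-vector into a sampling direction."""
    d = 2.0 * (v[0] * m[0] + v[1] * m[1] + v[2] * m[2])
    return (d * m[0] - v[0], d * m[1] - v[1], d * m[2] - v[2])
```

You would then weight the sample using the paper's combined weight rather than evaluating the BRDF and PDF separately, which cancels several terms and avoids numerical trouble at grazing angles.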
If you need to combine your specular BRDF with a diffuse term, you can use the technique outlined in Physically Based Rendering: for every sample, randomly choose between the specular and diffuse lobes with equal probability, and account for that choice in the PDF of the resulting direction. For Lambertian diffuse, you can importance sample the BRDF by simply drawing directions from a cosine-weighted hemisphere.
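A sketch of the diffuse half of that, again in tangent space about +Z (names are mine; the disk-projection trick is the standard cosine-weighted sampling construction, and the PDF-averaging helper reflects the 50/50 lobe choice described above):

```python
import math

def sample_cosine_hemisphere(u1, u2):
    """Cosine-weighted hemisphere sample about +Z: sample the unit
    disk, then project up onto the hemisphere. pdf = cos(theta) / pi."""
    r = math.sqrt(u1)
    phi = 2.0 * math.pi * u2
    z = math.sqrt(max(0.0, 1.0 - u1))  # cos(theta)
    return (r * math.cos(phi), r * math.sin(phi), z)

def combined_pdf(pdf_specular, pdf_diffuse):
    """With a 50/50 choice between the two lobes, the effective pdf of
    the drawn direction is the average of the two lobe pdfs, evaluated
    for that direction (regardless of which lobe generated it)."""
    return 0.5 * (pdf_specular + pdf_diffuse)
```

Note that whichever lobe you pick, you still evaluate the full BRDF (specular + diffuse) for the sampled direction and divide by the combined PDF, otherwise the estimator is biased.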