What would be the motivation to do this in shaders?
1) To reduce the number of constants that need to be sent to the shader (16 floats for the full matrix versus only the 6 floats needed to build it).
2) To reduce the number of calculations in the shader. Most of the values in the matrix (10 of them) are constant 1s and 0s, and the HLSL compiler will not generate code for multiplying vector or other matrix components by those values; it can also fold them together with other constant values in the same calculation into a single constant that is applied only once, plus whatever other optimisations it may do that I cannot possibly imagine. (See the sketch below for how few of the entries actually vary.)
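For illustration, a minimal sketch of that "6 floats" idea, assuming a D3D-style off-center perspective projection (the function and parameter names here are hypothetical): only six of the sixteen entries actually vary, the rest are the constant 0s and 1s the compiler can fold away.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical helper: assemble the projection from the six values that actually vary.
// The remaining ten entries are constant 0s and 1s; if the matrix were built inside
// the shader instead, the HLSL compiler could fold those away entirely.
XMMATRIX BuildProjFromSixFloats(float xScale, float yScale,
                                float xOffset, float yOffset,
                                float zScale, float zOffset)
{
    return XMMATRIX(
        xScale,  0.0f,    0.0f,    0.0f,   // row 0: one varying value
        0.0f,    yScale,  0.0f,    0.0f,   // row 1: one varying value
        xOffset, yOffset, zScale,  1.0f,   // row 2: three varying values + constant 1
        0.0f,    0.0f,    zOffset, 0.0f);  // row 3: one varying value
}
```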
Even if your camera moves a lot, you only need to build the projection matrix once per frame. If you move the code into a vertex shader, you will be building the matrix once per VERTEX, once for every single vertex in your scene. Or you could also put it in a pixel shader... ;)
Hmmm. I didn't think of it this way. Good point. Please ignore my last comment.
(Though, in my defense, I was thinking of building the projection matrix in a Geometry shader, so only once per primitive...).
You don't need to rebuild a projection matrix at all: just create one and pass it along to the shader. You only need to set the per-frame constant buffer once a frame, and that is all. If you start recreating your projection matrix every frame, you are just wasting CPU time.
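A minimal sketch of that idea, assuming a D3D11 setup with a default-usage constant buffer (the names and cbuffer layout are placeholders): the projection is built once, and per frame you only refresh the constant buffer with the matrices that actually change.

```cpp
#include <d3d11.h>
#include <DirectXMath.h>
using namespace DirectX;

struct PerFrameCB               // assumed to match the cbuffer layout in the shader
{
    XMFLOAT4X4 viewProj;
};

XMMATRIX g_proj;                // built once, reused every frame

void OnInit(float fovY, float aspect, float zn, float zf)
{
    g_proj = XMMatrixPerspectiveFovLH(fovY, aspect, zn, zf);
}

void OnFrame(ID3D11DeviceContext* ctx, ID3D11Buffer* perFrameCB, const XMMATRIX& view)
{
    PerFrameCB data;
    XMStoreFloat4x4(&data.viewProj, XMMatrixTranspose(view * g_proj));

    // One constant-buffer update per frame; the projection itself is never rebuilt.
    ctx->UpdateSubresource(perFrameCB, 0, nullptr, &data, 0, 0);
    ctx->VSSetConstantBuffers(0, 1, &perFrameCB);
}
```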
Generally you build all the projection matrices you need in code once, at application or level initialisation, and then set them when needed. You may need to switch projection matrices multiple times per frame, for example when you have a cubemap renderer or other post effects in your render pipeline; generally you use a different projection for those than for the normal game camera.
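A sketch of that pattern, assuming DirectXMath (the names are placeholders): build every projection you will need up front, including a separate 90-degree, square-aspect one for cubemap faces, and at draw time just pick which one goes into the constant buffer.

```cpp
#include <DirectXMath.h>
using namespace DirectX;

// Hypothetical globals; built once at application or level init.
XMMATRIX g_cameraProj;   // normal game camera
XMMATRIX g_cubemapProj;  // used when rendering the faces of a cubemap

void InitProjections(float fovY, float aspect, float zn, float zf)
{
    g_cameraProj  = XMMatrixPerspectiveFovLH(fovY, aspect, zn, zf);
    // Cubemap faces need a 90-degree FOV and a square (1:1) aspect ratio.
    g_cubemapProj = XMMatrixPerspectiveFovLH(XM_PIDIV2, 1.0f, zn, zf);
}
// Per frame (or per render pass) you only choose which matrix to upload to the
// constant buffer; neither projection is ever recomputed.
```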