With shaders, vertices are not transformed automatically by the model-view-projection matrix; you have to build that matrix yourself, send it to the shader, and transform the vertices in your vertex shader.
Lighting is not done for you, so you will need to write your own lighting equations and send your own lighting data.
Textures are not applied for you. You need to sample them manually in the pixel shader.
Fog is not done for you. You need to write your own equation and send the parameters manually to the shaders.
The list continues.
Basically, unless you already keep copies of all of these parameters (light data, fog data, etc.), and you should, you will need to gut a large amount of your existing code to feed that data to the shaders, and then write all the shaders from scratch to replicate whatever existing functionality you still need, on top of the new feature you actually wanted to add.
That is in addition to building a framework for working with shaders in general: compiling them, linking them, applying them, setting uniforms, and so on. You may also need to write your own matrix math routines or find a library online.
It may make more sense to rewrite to an entirely shader-based solution despite the amount of work entailed. At the end of the day, the fixed-function pipeline is universally deprecated for a reason, and from a design, organisation, and consistency standpoint the worst possible case is a mix of fixed-function and programmable-pipeline code. You may as well join us in the 21st century and go fully shader-based.