Thanks for the replies. I understand why wrappers are used; maybe "encapsulation" was the wrong word. What I'm talking about is providing a common physics interface that would allow different physics engines to be used with the same game engine. I know that if this interface is designed by a programmer who knows only one physics engine, it may not fit other engines that well, but the interface could be updated in the future if the team decided to switch physics engines. At least the game engine would never point directly to a specific physics engine but rather to the interface, meaning the code would be easier to update. Then again, it might be over-engineering.
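To make the idea concrete, here's a minimal sketch of what such a common interface might look like. All the names (`IPhysicsWorld`, `NullPhysicsWorld`, `makePhysicsWorld`) are hypothetical, not from any real engine; the point is just that the game engine only ever talks to the abstract interface, and the backend can be swapped in one place.

```cpp
#include <memory>

// Hypothetical engine-agnostic physics interface; all names are illustrative.
struct Vec3 { float x, y, z; };

class IPhysicsWorld {
public:
    virtual ~IPhysicsWorld() = default;
    virtual void step(float dt) = 0;
    virtual bool rayCast(const Vec3& origin, const Vec3& dir,
                         float maxDist, Vec3& hitPoint) const = 0;
};

// One concrete backend per physics engine; the game engine never sees this type.
class NullPhysicsWorld : public IPhysicsWorld {
public:
    void step(float) override {}
    bool rayCast(const Vec3& origin, const Vec3&, float,
                 Vec3& hitPoint) const override {
        hitPoint = origin;   // stub: no geometry, so nothing is ever hit
        return false;
    }
};

std::unique_ptr<IPhysicsWorld> makePhysicsWorld() {
    // Swap the backend here (e.g. a Bullet- or PhysX-backed world)
    // without touching any calling code.
    return std::make_unique<NullPhysicsWorld>();
}
```

The trade-off is exactly the one described above: the interface will inevitably lean toward whichever engine its author knows best, but at least the dependency lives behind one seam.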
While on the subject of game engine design, I have another question regarding the level editor. Most level editors have snap functionality that lets the user snap an object to the ground. For the editor to know where the ground is, it has to do a collision query, perhaps a ray cast downwards to find the nearest point of intersection. Would a game engine use the physics engine for this ray cast, or would it traverse its own spatial hierarchy and do the intersection tests itself? For example, say the user wants to place a chair on the ground and can hit a key that instantly places the chair on the floor. The floor will be a static actor in the physics engine because it's not going to move in the game. But what if the user wants to move the floor after placing the chair? Would the editor make all static objects dynamic actors in the physics engine for the sake of editing, or would the physics engine not be used by the editor at all? Sorry for the long-winded question.
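For reference, the snap operation itself is simple whichever system answers the query. Here's a sketch under the assumption of a flat ground plane standing in for the real scene query (which the editor could route through either the physics engine or its own spatial hierarchy); `rayCastDown` and `snapToGround` are made-up names for illustration.

```cpp
#include <optional>

struct Vec3 { float x, y, z; };

// Hypothetical scene query: intersect a ray cast straight down from
// `origin` with a ground plane at height groundY.
std::optional<Vec3> rayCastDown(const Vec3& origin, float groundY) {
    if (origin.y < groundY) return std::nullopt;   // ray starts below the floor
    return Vec3{origin.x, groundY, origin.z};      // nearest intersection point
}

// Snap-to-ground: move the object's base onto the hit point, if any.
Vec3 snapToGround(const Vec3& objectPos, float groundY) {
    if (auto hit = rayCastDown(objectPos, groundY)) return *hit;
    return objectPos;                              // nothing below: leave it alone
}
```

Whether the editor backs this query with the physics engine (making every object an actor just for editing) or with its own editor-side acceleration structure is exactly the design question being asked.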