What are you expecting vs. what you get?
I would expect the world to shrink on screen if the camera scales up, and vice versa. If that is what you are getting, then your results are not thrown off at all.
If that is not what you want (which is a separate question from whether it is the correct result), normalize the world matrix's first 3 rows (i.e., strip the camera's scale) before inverting it to get the view matrix.
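A minimal sketch of that suggestion, using NumPy and assuming row-major matrices with the basis vectors in the first three rows and translation in the fourth row (a common D3D-style layout); `view_from_world` is a hypothetical helper name:

```python
import numpy as np

def view_from_world(world: np.ndarray) -> np.ndarray:
    """Build a view matrix from a camera's world matrix, ignoring its scale.

    Normalizes the three basis rows (stripping scale) before inverting,
    so a scaled camera node no longer scales the whole scene.
    """
    m = world.copy()
    for i in range(3):
        m[i, :3] /= np.linalg.norm(m[i, :3])  # strip scale from each axis row
    return np.linalg.inv(m)
```

With a camera world matrix of uniform scale 2 and translation (1, 2, 3), this returns a view matrix that translates by (-1, -2, -3) with an unscaled identity basis.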
After a closer look, this appears to be working correctly. I originally thought it was affecting my perspective matrix, but I tested that in another scenario and it worked fine. The only weird thing is that I only get correct results when I take the plain inverse of my camera node's world-space matrix. If I take the inverse-transpose of that matrix instead, nothing draws. At least, the single quad of geometry in my scene doesn't display. I flew around searching for it with the FPS camera I've got set up on my gamepad, but no luck haha.
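That is expected: the inverse-transpose is the matrix you use to transform normal vectors, not a view matrix. Transposing the inverse moves the translation out of its row and into the last column, so the bottom row is no longer (0, 0, 0, 1) and transformed points get a corrupted w component, which wrecks the perspective divide. A small NumPy demonstration with the row-vector convention (my assumption about your setup):

```python
import numpy as np

# Inverse of a pure translation by (1, 2, 3), row-vector convention:
# the view matrix carries (-1, -2, -3) in its fourth row.
view = np.eye(4)
view[3, :3] = [-1.0, -2.0, -3.0]

it = np.linalg.inv(view).T  # inverse-transpose of the view matrix

# The translation now sits in the last *column*, so transforming a point
# scales its w component instead of moving it:
p = np.array([1.0, 1.0, 1.0, 1.0])
print(p @ it)  # position unchanged, but w becomes 7 instead of 1
```

After the perspective divide, that bogus w shrinks everything toward the origin (or pushes it out of the frustum entirely), which matches "nothing draws."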
Anyway, I was adding onto my original post earlier, but things came up and I couldn't save it. A big question I've been trying to answer is how to get an object's position, orientation, and scale back out of its final transform matrix. My Transform class does store position, rotation, and scale, but those are only used to calculate its local transform matrix, which is then combined with the parent's final transform to produce the transform's final matrix for that frame.
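For a plain TRS matrix (no shear, no negative scale — those cases make the decomposition ambiguous), you can recover the parts directly: translation is the fourth row, scale is the length of each of the first three rows, and the rotation is what's left after dividing the scale back out. A sketch in NumPy, again assuming the row-major/row-vector convention; `decompose_trs` is a hypothetical helper name:

```python
import numpy as np

def decompose_trs(m: np.ndarray):
    """Split a row-major 4x4 TRS matrix (translation in the fourth row)
    into position, a 3x3 rotation matrix, and per-axis scale.

    A sketch only: assumes no shear and no negative scale factors.
    """
    position = m[3, :3].copy()
    scale = np.linalg.norm(m[:3, :3], axis=1)  # length of each basis row
    rotation = m[:3, :3] / scale[:, None]      # normalized basis = rotation
    return position, rotation, scale
```

If you need the orientation as a quaternion rather than a 3x3 matrix, you'd convert `rotation` afterwards with a standard matrix-to-quaternion routine.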