On old HW, the GPU may waste a few cycles per vertex fetching those unused attributes (or the driver will waste a bunch of CPU cycles per draw call comparing the IA/VS config and disabling the unused attribs).
On new HW, IA doesn't exist, so the IA config is compiled into VS code and glued onto the front of your vertex shader. Changing input layouts should be treated the same as a shader change on the GPU side (potential pipeline flush if done too frequently).
Whether unused attribs cause harm depends on the driver -- either it compiles each "IA shader" once, in which case all attribs are always fetched... OR, it recompiles per "IA shader"+VS pair, in which case unused attribs can be optimized out.
I'm not sure what modern PC drivers do... FWIW, in my current-gen console ports, I use the former option (compile each IA once), and assume the graphics programmer will use the best IA config for their VS (no unused attribs).