"D3D won't remove unused attributes/varyings, which reduces the performance of your shaders..."

Really? If they are unused then any compiler worth anything should do proper dead-code elimination. I've never used D3D myself, but I find it extremely difficult to believe that the D3D compiler and the GPU vendor's recompiler (which is free of whatever restrictions exist on the D3D side) both choose not to do that. There has to be some confusion here about terms or something.

Perhaps your D3D driver can perform this optimization, but D3D itself can't, for correctness reasons. Vertex shader input structures ('attributes') have to match up with the 'input layout' descriptor (I'm not sure of the OpenGL name; it's the code that binds your attributes). D3D represents the way that data is read from buffers/streams into vertex-shader attribute inputs as this descriptor object, which relies on the shader author hard-coding their "attribute locations" and then putting the same hard-coded values into the descriptor, without querying.
GL, on the other hand, requires you to reflect on the shader after compilation to discover attribute locations, allowing them to move around or disappear.
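To make the D3D side concrete, here is a minimal sketch (the struct and semantic names are illustrative, not from any particular codebase) of a vertex-shader input whose semantics are hard-coded by the author, with the matching C++-side input-layout entries shown in a comment:

```hlsl
// Hypothetical vertex shader input: each attribute carries a
// hard-coded semantic chosen by the shader author.
struct VsInput
{
    float3 position : POSITION;   // must match an input-layout element
    float3 normal   : NORMAL;
    float2 uv       : TEXCOORD0;
};

// The matching D3D11_INPUT_ELEMENT_DESC entries on the C++ side name
// the same semantics, so no reflection/query step is ever needed:
//   { "POSITION", 0, DXGI_FORMAT_R32G32B32_FLOAT, 0,  0, ... },
//   { "NORMAL",   0, DXGI_FORMAT_R32G32B32_FLOAT, 0, 12, ... },
//   { "TEXCOORD", 0, DXGI_FORMAT_R32G32_FLOAT,    0, 24, ... },
```

Because both sides agree on the semantic names up front, the descriptor can be built without ever asking the compiled shader where its attributes landed.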
With 'varyings' (interpolated vertex outputs and pixel inputs), these are usually described as a struct in D3D, where each member is given a hard-coded location/register number by the shader author. This structure has to match exactly in both pixel and vertex shader. D3D compiles its shaders in isolation, so when compiling the vertex shader it has no way to know whether a varying is actually used in the pixel shader, and therefore it can't remove any of them. If any are unused in the vertex shader, you'll get a big warning about returning uninitialized variables. If any are unused in the pixel shader, the compiler can't cull them, because the interface with the vertex shader would no longer match up.
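A minimal sketch of that shared struct (names here are hypothetical; the point is only that both stages declare the identical interface):

```hlsl
// Varying block shared verbatim by both stages.
struct Varyings
{
    float4 position : SV_POSITION;
    float3 normal   : TEXCOORD0;
    float2 uv       : TEXCOORD1;  // unused by the PS below, but neither
                                  // compiler is allowed to drop it
};

Varyings VsMain(float4 pos : POSITION)
{
    Varyings o;
    o.position = pos;
    o.normal   = float3(0, 0, 1);
    o.uv       = float2(0, 0);    // must still be written
    return o;
}

// The pixel shader only reads 'normal'; 'uv' still occupies an
// interpolator because the interface must match the VS exactly.
float4 PsMain(Varyings i) : SV_TARGET
{
    return float4(i.normal * 0.5 + 0.5, 1.0);
}
```

Since each entry point is compiled in isolation, the VS compiler can't know `uv` goes unread, and the PS compiler can't shrink the struct without breaking the match.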
This design choice allows you to pre-compile all your shaders individually offline, and then use them in many ways at runtime with very little error checking or linking code inside the driver.
GL can cull variables because it requires both compilation and an expensive linking step to occur at runtime. Basically, D3D traded a small amount of shader-author effort for a much simpler runtime and better CPU performance.