But even still, isn't the act of changing shaders a very expensive operation?
Why do so many people think that switching shaders is some horrible thing to do? GPUs can overlap lots of different kinds of work at different stages of the pipeline with different costs. Fragment programs can be prefetched, and can be a virtually free operation, depending on how long the previous draw call takes to complete (if it's really fast, there's less opportunity to hide the time to load the new shader).
Back then, when the first 3D GPUs arrived, even texture switching was expensive, and the same issues carried over to shaders.
The GPU pipeline is split into different sub-pipelines, each working on its own "jobs". Those jobs are not draw calls and not primitives; they are simply jobs. If you draw two objects with exactly the same settings, it is very likely that they share the same pipelines to some degree.
If you change the setup of some sub-pipeline, that part of the pipeline (and sometimes the whole pipeline) has to be flushed, because the GPU doesn't track which job belongs to which draw call or setup (that would be very expensive for little gain). In some lucky cases just a fence is added, which flushes only partially. So the GPU-side cost isn't really the switching of a resource itself; it's the stall you get from the flush, which leaves the sub-pipelines idle.
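One practical consequence is that engines usually sort their draw calls so that draws with the same setup run back to back, paying the flush/switch cost once per group instead of once per draw. A minimal sketch of that idea, with hypothetical `Draw`, `countSwitches`, and `sortByShader` names (none of these come from a real API):

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical draw record: 'shaderId' stands in for whatever piece of
// state forces a sub-pipeline flush when it changes between draws.
struct Draw {
    uint32_t shaderId;
    uint32_t meshId;
};

// Count how many shader switches (potential flush points) a draw list causes.
int countSwitches(const std::vector<Draw>& draws) {
    int switches = 0;
    for (size_t i = 1; i < draws.size(); ++i)
        if (draws[i].shaderId != draws[i - 1].shaderId)
            ++switches;
    return switches;
}

// Sort by shader so identical setups run consecutively and can share
// the same pipeline configuration.
void sortByShader(std::vector<Draw>& draws) {
    std::stable_sort(draws.begin(), draws.end(),
        [](const Draw& a, const Draw& b) { return a.shaderId < b.shaderId; });
}
```

An alternating list like shader 1, 2, 1, 2 causes three switches; after sorting it causes only one.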
On the CPU side, switching shaders often means they need to be prepared first. Not all features you see at the API level are real hardware features; they are often just patched shaders. So something that looks to you like a simple shader switch might actually be a recompilation, because you have some "weird" texture set or a vertex format that the shader has to "emulate".
As an example, D3D10/11 hardware does not have alpha test, which is why the API does not expose it either; but you can still run DX9 software, which then obviously needs a new (patched) shader.
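The driver-side behavior described above can be sketched as a cache keyed by (API shader, bound state): the first time a combination is seen, the real GPU program has to be patched and compiled; after that the switch is cheap. This is a hypothetical illustration, not an actual driver interface; `VariantKey`, `VariantCache`, and `bind` are made-up names:

```cpp
#include <cstdint>
#include <map>
#include <string>

// Hypothetical driver-side cache: the API-level shader plus the currently
// bound state (vertex format, alphatest on/off, ...) selects the real GPU
// program. The first time a combination appears, the driver must patch
// and recompile the shader.
struct VariantKey {
    uint32_t shaderId;
    uint32_t stateBits;  // e.g. bit 0 = alpha test emulated via discard
    bool operator<(const VariantKey& o) const {
        return shaderId != o.shaderId ? shaderId < o.shaderId
                                      : stateBits < o.stateBits;
    }
};

class VariantCache {
public:
    // Returns true if this bind forced a (slow) recompile.
    bool bind(uint32_t shaderId, uint32_t stateBits) {
        VariantKey key{shaderId, stateBits};
        if (cache_.count(key)) return false;  // cheap: variant already built
        cache_[key] = compile(key);           // expensive: patch + compile
        return true;
    }

private:
    // Stand-in for the real patch/compile step.
    std::string compile(const VariantKey& k) {
        return "program_" + std::to_string(k.shaderId) + "_" +
               std::to_string(k.stateBits);
    }
    std::map<VariantKey, std::string> cache_;
};
```

This is also why the same shader switch can be free one frame and a hitch the next: it depends on whether the state combination has been seen before.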
There are tons of possibilities; I think most people actually struggle to limit their 'bits':
- shadows on/off
- lightsource type (point/directional/spot)
- fog on/off
- in forward rendering you might have n lights
- detail layer
- vertex shading (instead of per-pixel, for some distance LODs?)
- parallax mapping
- switching between normalmap and bumpmap
- back lighting like on thin sails, flags, vegetation, paper
- some sine-wave swaying, like vegetation underwater or in wind
- rim lighting
- cubemap lighting
- clip (DX10+ hardware has no alpha test)
I'm not saying you have to have all of these, but some engines do.
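Feature toggles like the ones above are typically packed into a bitmask that keys the shader permutation, which also makes it obvious why the count explodes. A small sketch with hypothetical names (`Feature`, `makeKey`, `worstCaseVariants` are illustrative, not from any real engine):

```cpp
#include <cstdint>

// Hypothetical feature bits matching some of the toggles listed above.
// Each enabled combination is one shader permutation the engine may build.
enum Feature : uint32_t {
    kShadows   = 1u << 0,
    kFog       = 1u << 1,
    kParallax  = 1u << 2,
    kNormalMap = 1u << 3,
    kRimLight  = 1u << 4,
    kAlphaClip = 1u << 5,  // clip/discard, since DX10+ has no alpha test
};

// The light type (point/directional/spot) needs 2 bits of its own.
constexpr uint32_t kLightShift = 6;

// Pack on/off features and the light type into one permutation key.
constexpr uint32_t makeKey(uint32_t features, uint32_t lightType) {
    return features | (lightType << kLightShift);
}

// With b independent on/off bits plus a 3-way light type, the worst-case
// permutation count is (1 << b) * 3 -- which is exactly why engines try
// to limit their bits.
constexpr uint32_t worstCaseVariants(uint32_t bits, uint32_t lightTypes) {
    return (1u << bits) * lightTypes;
}
```

Even this toy setup with six bits and three light types already allows 192 variants in the worst case.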