I thought I'd add a quick description of how I handle these problems in my rendering pipeline, because it's slightly different from what everyone else seems to be doing.
To start, I use a fragment stitcher that, as well as stitching fragments together, generates some of the final shader code, including all of the constant and sampler definitions for the final shader. So I already know exactly what parameters the final shader will need.
The shader fragments form a hierarchy that mirrors how they are stitched together. All of the actual parameter data is stored in the fragments themselves, so to change some parameter data you parent a fragment with a new fragment and override the data in that new fragment. This makes the hierarchy structure very useful.
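In very rough C++ terms, the idea looks something like this (this is just a sketch, the type and field names here are simplified and not my actual code):

    #include <string>
    #include <unordered_map>

    struct ParamValue { float data[16]; };                    // constants, vectors, matrices, etc.

    struct ShaderFragment
    {
        const ShaderFragment* base = nullptr;                 // the fragment this one is parented over
        std::unordered_map<std::string, ParamValue> params;   // parameter data lives on the fragment itself
        std::string code;                                      // the shader snippet this fragment contributes

        // Nearest definition wins: look on this fragment first, then fall
        // through to the fragment it overrides.
        const ParamValue* find(const std::string& name) const
        {
            for (const ShaderFragment* f = this; f != nullptr; f = f->base)
            {
                auto it = f->params.find(name);
                if (it != f->params.end())
                    return &it->second;
            }
            return nullptr;
        }
    };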
The material is nothing more than a list of rendering layers. Each layer is just a name (z-only, opaque, distortion, blur, translucent, whatever-you-want, etc.) paired with a shader fragment pointer. Later, the rendering pipeline fetches all visible objects that define a particular layer. The fragments for those objects are then stitched together with lighting (or other) fragments that represent the current lighting conditions for each individual object.
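Continuing the sketch above (again, names are only illustrative), the material really is just that list of pairs:

    #include <string>
    #include <vector>

    struct MaterialLayer
    {
        std::string     layer;     // "z-only", "opaque", "distortion", "translucent", ...
        ShaderFragment* fragment;  // root of the fragment hierarchy for this layer
    };

    struct Material
    {
        std::vector<MaterialLayer> layers;

        const ShaderFragment* fragmentForLayer(const std::string& name) const
        {
            for (const MaterialLayer& l : layers)
                if (l.layer == name)
                    return l.fragment;
            return nullptr;        // this object doesn't render in that layer
        }
    };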
Lights have fragments that store the light instance's data as well as the shader code to perform the lighting. The camera has fragments that perform projection into screen space. Scene objects also have fragments that transform them, perform animation, or decompress/de-normalize the mesh (these are separate from the material fragments).
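So a draw just gathers whatever fragments apply to an object under the current conditions. A rough sketch of that gathering step, with invented types and names:

    #include <string>
    #include <vector>

    struct Light  { ShaderFragment* lightingFragment; };      // light data + lighting code
    struct Camera { ShaderFragment* projectionFragment; };    // projection into screen space
    struct Object
    {
        Material*       material;
        ShaderFragment* transformFragment;                    // transform/animation/mesh decompression
    };

    std::vector<const ShaderFragment*> gatherFragments(const Object& obj,
                                                       const Camera& cam,
                                                       const std::vector<const Light*>& lights,
                                                       const std::string& layer)
    {
        std::vector<const ShaderFragment*> frags;
        frags.push_back(obj.material->fragmentForLayer(layer));   // material's fragment for this layer
        frags.push_back(obj.transformFragment);                   // object-specific fragments
        frags.push_back(cam.projectionFragment);                  // camera projection
        for (const Light* light : lights)
            frags.push_back(light->lightingFragment);             // current lighting conditions
        return frags;
    }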
All of these fragments are stitched together dynamically as needed, and the compiled shader is cached for fast lookup under changing conditions; any combination of fragments can still be compiled at runtime if required. When rendering, the shader parameter data is read from the fragment hierarchy and copied over to the GPU.
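The caching side is roughly this (again a sketch, and the stand-in stitcher below is only there to show where the generated constant/sampler definitions come from, it isn't my real compiler path):

    #include <algorithm>
    #include <map>
    #include <string>
    #include <vector>

    struct CompiledShader { /* GPU program handle, parameter layout, ... */ };

    // Stand-in for the real stitcher: emit declarations for every parameter,
    // append each fragment's code body, then hand the source to the shader compiler.
    CompiledShader stitchAndCompile(const std::vector<const ShaderFragment*>& fragments)
    {
        std::string source;
        for (const ShaderFragment* f : fragments)
            for (const auto& p : f->params)
                source += "float4 " + p.first + ";\n";      // generated constant/sampler definitions
        for (const ShaderFragment* f : fragments)
            source += f->code + "\n";
        return CompiledShader{ /* compile `source` here */ };
    }

    struct ShaderCache
    {
        std::map<std::vector<const ShaderFragment*>, CompiledShader> cache;

        const CompiledShader& get(std::vector<const ShaderFragment*> fragments)
        {
            std::sort(fragments.begin(), fragments.end()); // make the key order-independent
            auto it = cache.find(fragments);
            if (it == cache.end())
                it = cache.emplace(fragments, stitchAndCompile(fragments)).first;
            return it->second;
        }
    };

At draw time the parameter values are then looked up through the fragment hierarchy (the find() walk in the first sketch) and uploaded to the GPU for whichever compiled shader the cache returned.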
Sorry about the rushed description; I've left out a lot of details.