Currently I've got a scheme where I load shaders on another thread and marshal them back to the main thread as an ID3D11[InsertTypeHere]Shader. This presents a problem with vertex shaders: by that point I've discarded the bytecode blobs, but I still (maybe) need them to create input layouts later, when I create the vertices those shaders will operate on. This seems like an architectural mess, because shaders are independent of geometry and I want to be able to load them as such. I've come up with a bunch of ideas for how to approach this, but none of them seem great:
1. Keep the blobs around forever so that I can pull them up when I want to couple the geometry to the shaders later. This seems wasteful, because all I really need is the input signature.
2. Create "dummy" interface shaders that I know about at compile-time and can reliably generate input layouts with, then just bucket my real shaders with the dummies so that I know what input layouts to use. I think this would work, and requires me to store far fewer blobs, but it does seem pretty silly.
3. Bind shaders tightly to geometry in data and don't load shaders independently at all, but have the loader recursively figure all that out on loading geometry and generate the appropriate layouts all at the same time. I'd rather not do this because of the loss of flexibility.
4. Just use a couple standardized vertex input formats for all of my shaders and don't worry about figuring things out dynamically. I will probably have this anyway, but relying on this knowledge seems more like a hack than anything else.
5. Use shader reflection. I know this exists and that you can get at the input signature, but I don't know whether what reflection returns maps cleanly onto CreateInputLayout(), or what the performance implications of reflecting every shader at load time would be.
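For what option 5 might look like: below is a minimal sketch of deriving D3D11_INPUT_ELEMENT_DESCs from a vertex shader's bytecode via D3DReflect. It assumes all attributes live in input slot 0 as per-vertex data (instanced streams would need extra handling), and note that CreateInputLayout() still wants the bytecode (or at least its input-signature part) passed in alongside the element array, so reflection alone doesn't remove the need to keep something from the blob around.

```cpp
#include <d3d11.h>
#include <d3dcompiler.h>
#include <vector>
#include <wrl/client.h>
#pragma comment(lib, "d3dcompiler.lib")

// Sketch: build an input layout by reflecting the vertex shader's input
// signature. Assumptions: slot 0, per-vertex data, 32-bit component types.
HRESULT CreateInputLayoutFromBytecode(ID3D11Device* device,
                                      const void* bytecode, SIZE_T length,
                                      ID3D11InputLayout** outLayout)
{
    Microsoft::WRL::ComPtr<ID3D11ShaderReflection> reflector;
    HRESULT hr = D3DReflect(bytecode, length, IID_PPV_ARGS(&reflector));
    if (FAILED(hr)) return hr;

    D3D11_SHADER_DESC shaderDesc = {};
    reflector->GetDesc(&shaderDesc);

    std::vector<D3D11_INPUT_ELEMENT_DESC> elements;
    elements.reserve(shaderDesc.InputParameters);

    for (UINT i = 0; i < shaderDesc.InputParameters; ++i) {
        D3D11_SIGNATURE_PARAMETER_DESC param = {};
        reflector->GetInputParameterDesc(i, &param);

        // Skip system-generated inputs like SV_VertexID / SV_InstanceID;
        // they aren't fed from vertex buffers.
        if (param.SystemValueType != D3D_NAME_UNDEFINED) continue;

        D3D11_INPUT_ELEMENT_DESC elem = {};
        elem.SemanticName      = param.SemanticName; // valid while reflector lives
        elem.SemanticIndex     = param.SemanticIndex;
        elem.InputSlot         = 0;
        elem.AlignedByteOffset = D3D11_APPEND_ALIGNED_ELEMENT;
        elem.InputSlotClass    = D3D11_INPUT_PER_VERTEX_DATA;

        // Count the used components from the mask, then pick a DXGI format.
        UINT comps = 0;
        for (BYTE m = param.Mask; m; m >>= 1) comps += m & 1;
        static const DXGI_FORMAT fmts[3][4] = {
            { DXGI_FORMAT_R32_FLOAT, DXGI_FORMAT_R32G32_FLOAT,
              DXGI_FORMAT_R32G32B32_FLOAT, DXGI_FORMAT_R32G32B32A32_FLOAT },
            { DXGI_FORMAT_R32_UINT, DXGI_FORMAT_R32G32_UINT,
              DXGI_FORMAT_R32G32B32_UINT, DXGI_FORMAT_R32G32B32A32_UINT },
            { DXGI_FORMAT_R32_SINT, DXGI_FORMAT_R32G32_SINT,
              DXGI_FORMAT_R32G32B32_SINT, DXGI_FORMAT_R32G32B32A32_SINT } };
        switch (param.ComponentType) {
            case D3D_REGISTER_COMPONENT_FLOAT32: elem.Format = fmts[0][comps - 1]; break;
            case D3D_REGISTER_COMPONENT_UINT32:  elem.Format = fmts[1][comps - 1]; break;
            case D3D_REGISTER_COMPONENT_SINT32:  elem.Format = fmts[2][comps - 1]; break;
            default: return E_FAIL;
        }
        elements.push_back(elem);
    }

    return device->CreateInputLayout(elements.data(),
                                     static_cast<UINT>(elements.size()),
                                     bytecode, length, outLayout);
}
```

Performance-wise this only runs once per shader at load time, not per frame, so the reflection cost seems unlikely to matter in practice.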
Am I approaching this all wrong, and should I just give up on trying to load shaders independently of geometry? Is there an obvious solution staring me in the face that I've somehow missed?
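For reference, a slimmer variant of option 1 that I'm considering: rather than keeping whole shader blobs forever, extract just the input-signature part with D3DGetBlobPart and keep only that, since CreateInputLayout() accepts the signature blob in place of the full bytecode. A sketch, assuming d3dcompiler is linked at runtime:

```cpp
#include <d3dcompiler.h>
#include <wrl/client.h>
#pragma comment(lib, "d3dcompiler.lib")

// Sketch for a slimmer option 1: keep only the input-signature portion of the
// compiled shader, which is all CreateInputLayout() actually needs.
Microsoft::WRL::ComPtr<ID3DBlob> ExtractInputSignature(ID3DBlob* shaderBlob)
{
    Microsoft::WRL::ComPtr<ID3DBlob> signature;
    HRESULT hr = D3DGetBlobPart(shaderBlob->GetBufferPointer(),
                                shaderBlob->GetBufferSize(),
                                D3D_BLOB_INPUT_SIGNATURE_BLOB,
                                0, &signature);
    return SUCCEEDED(hr) ? signature : nullptr;
}

// Later, when the geometry shows up, the stored signature stands in for the
// full bytecode:
//
// device->CreateInputLayout(elems, numElems,
//                           signature->GetBufferPointer(),
//                           signature->GetBufferSize(),
//                           &layout);
```

The signature blob is a fraction of the size of the full shader, so the "keep it around forever" cost mostly goes away.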