Though in some of the later OpenGL specs, apparently you can reuse a compiled (binary) shader rather than compiling it from source at runtime.
Basically, if the shaders were JIT compiled as they were being used, rather than being compiled to native code up front, it is highly likely you would get worse performance. This is exactly the same with .NET.
Shaders are the perfect analogy here -- they're some of the most performance-critical code in a modern game, and yet they're written in a "JIT" language that follows the same pattern as C# code.
In GL, you can't pre-compile them (except in the newest GL spec, which only lets you cache the compiled binary for the current GPU, so you still need to distribute the GLSL source to each user and parse it at least once per GPU/driver combo), while in D3D you can pre-compile them to an intermediate assembly format. C#/Java/Python/etc. are also pre-compiled to an intermediate assembly format! This pre-compilation step can perform optimisations and greatly reduce the workload on the JIT compiler.
At runtime, this intermediate (shader/C#/Java) assembly code is then JITed to real machine code on demand.
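To make the Java case concrete: the "intermediate assembly" is the .class bytecode that javac produces ahead of time, and it's a binary format you can inspect at runtime. A small sketch (the class name `Intermediate` is just for illustration) that reads back its own pre-compiled bytecode and checks the class-file magic number:

```java
import java.io.InputStream;

public class Intermediate {
    // Reads the first four bytes of this class's own pre-compiled
    // .class file from the classpath and returns them as an int.
    public static int magic() throws Exception {
        try (InputStream in =
                Intermediate.class.getResourceAsStream("Intermediate.class")) {
            int b0 = in.read(), b1 = in.read(), b2 = in.read(), b3 = in.read();
            return (b0 << 24) | (b1 << 16) | (b2 << 8) | b3;
        }
    }

    public static void main(String[] args) throws Exception {
        // Every valid class file starts with the magic number 0xCAFEBABE,
        // showing the source was compiled to a binary format ahead of time.
        System.out.printf("0x%08X%n", magic()); // prints 0xCAFEBABE
    }
}
```

The JVM then JITs that bytecode to machine code per-method as it gets hot, much like a driver compiling a cached shader binary the first time it's used.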
However, this JITing can cause spikes in your frame-times, so you need to be able to control when it happens. With shaders that's pretty straightforward: in D3D you tell the runtime to compile it, and in GL you do the same but also draw an off-screen triangle with the shader to force the driver to actually do what you asked... I'm not sure whether it's controllable in C#/Java, though, so that would be my only concern (not the final speed of the JITed code).
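For Java at least, the usual workaround is the same trick as the off-screen triangle: warm the method up yourself during a loading screen, calling it enough times that the JIT (on a HotSpot-style VM, which compiles methods after an invocation threshold) kicks in before gameplay. A minimal sketch, where the `lighting` function and the 20,000-iteration count are made-up illustrations rather than any documented API:

```java
public class WarmUp {
    // Hypothetical performance-critical per-frame method.
    static float lighting(float nDotL) {
        return Math.max(0f, nDotL) * 0.8f + 0.2f;
    }

    // Call the hot method many times up front (e.g. behind a loading
    // screen) so the VM compiles it before it matters, instead of
    // causing a frame-time spike mid-game. The iteration count is a
    // guess; the real compile threshold is VM-specific.
    static void warmUp() {
        float sink = 0f;
        for (int i = 0; i < 20_000; i++) {
            sink += lighting(i * 1e-4f);
        }
        // Use the result so the loop can't be dead-code eliminated.
        if (sink == Float.NEGATIVE_INFINITY) System.out.println(sink);
    }

    public static void main(String[] args) {
        warmUp();
        System.out.println(lighting(1.0f)); // prints 1.0
    }
}
```

This only controls *when* the spike happens, not whether it happens at all, which is exactly the property you get from the D3D/GL pre-warming tricks above.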