I think I have the basics figured out. To get SLI working with an OpenGL game, you need to use NVIDIA Inspector.
1. Create a new profile for your game.
2. Add the game's executable(s) to the profile.
3. Set "SLI rendering mode" to "SLI_RENDERING_MODE_FORCE_AFR2", hex value 0x00000003. Do NOT touch the "NVIDIA predefined---" values, and there is no need to choose a GPU count.
4. (Optional) Enable the SLI indicator. The scaling bar it shows is pretty worthless, but it can at least tell you whether SLI is enabled at all.
At this point, you can try running your game. Make sure you run it in true fullscreen, not just a borderless window covering the whole screen, or SLI won't scale (although an empty SLI indicator will still show!). If all goes well, you should see roughly a 90% boost in FPS (assuming you're GPU limited) and the SLI indicator should fill with green. However, many operations can inhibit SLI performance (most notably rendering to textures through FBOs, mipmap generation, etc.; see the sketch after these steps for an example of such a pass), and in some cases the driver may completely kill your scaling by forcing synchronization between the GPUs, often leading to negative scaling. In that case, there is a special compatibility setting you can apply which seems to disable most of the SLI synchronization and give you proper scaling. If you see no scaling, try these last few steps:
5. Click the "Show unknown settings from NVIDIA predefined profiles" button on the menu bar (the icon with two cogwheels and a magnifying glass).
6. Scroll down pretty far until you reach a category called "Unknown".
7. Find the setting called "MULTICHIP_OGL_OPTIONS (0x209746C1)". Change it from the default 0x00000000 to 0x00000002 by typing in the value by hand.
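As an aside on the FBO point above: here is a minimal sketch (plain C++/OpenGL; drawScene() and drawFullscreenQuadUsing() are made-up placeholders, and it assumes an active GL context and the usual GL headers/loader) of the kind of render-to-texture pass I mean. Under AFR the driver can't always prove that the texture is produced and consumed within the same frame, so it may copy/synchronize it between the GPUs every frame, which appears to be exactly what the override above suppresses.

// Sketch of a render-to-texture pass, not a complete program.
GLuint fbo, colorTex;
glGenTextures(1, &colorTex);
glBindTexture(GL_TEXTURE_2D, colorTex);
glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA8, 1280, 720, 0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);

glGenFramebuffers(1, &fbo);
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, colorTex, 0);

// Per frame: render the scene into the texture...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glClear(GL_COLOR_BUFFER_BIT);
drawScene();                       // hypothetical scene draw

// ...then sample that texture in a later pass. If the driver can't tell the
// contents are frame-local, it may transfer/sync them between GPUs each frame.
glBindFramebuffer(GL_FRAMEBUFFER, 0);
glBindTexture(GL_TEXTURE_2D, colorTex);
drawFullscreenQuadUsing(colorTex); // hypothetical post-processing pass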
Explanation:
The MULTICHIP_OGL_OPTIONS setting seems to serve the same purpose for OpenGL that the SLI compatibility bits serve for DirectX games. I tried all of the predefined values in the dropdown list for the setting, but many of them gave either no scaling or graphical artifacts. What I realized is that changing it to a value that is NOT on the list seems to disable the synchronization, regardless of which value you pick. I was expecting each bit to serve some specific function, but that does not seem to be the case; the hex value may be some kind of hash code or identifier that alters the driver's behavior. Setting it to anything that isn't predefined (0x00000002 being the first "free" value) seems to disable all synchronization between the GPUs.
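If you'd rather apply the override from code than by clicking through Inspector, NVIDIA's driver settings API (NVAPI DRS) can in principle write the same value. The sketch below is untested and assumes the standard DRS entry points from nvapi.h; the profile-name lookup and all error handling are only hinted at.

// Sketch only: writes MULTICHIP_OGL_OPTIONS = 0x00000002 into an existing
// driver profile via NVAPI DRS. Error checking omitted for brevity.
#include <nvapi.h>

void forceMultichipOverride()
{
    NvAPI_Initialize();

    NvDRSSessionHandle session = 0;
    NvAPI_DRS_CreateSession(&session);
    NvAPI_DRS_LoadSettings(session);

    // Look up the game's profile by the name it has in Inspector.
    NvDRSProfileHandle profile = 0;
    NvAPI_UnicodeString profileName = {0};
    // ...fill profileName with the profile's name (UTF-16)...
    NvAPI_DRS_FindProfileByName(session, profileName, &profile);

    NVDRS_SETTING setting = {};
    setting.version         = NVDRS_SETTING_VER;
    setting.settingId       = 0x209746C1;   // MULTICHIP_OGL_OPTIONS
    setting.settingType     = NVDRS_DWORD_TYPE;
    setting.u32CurrentValue = 0x00000002;   // any non-predefined value seems to do
    NvAPI_DRS_SetSetting(session, profile, &setting);

    NvAPI_DRS_SaveSettings(session);
    NvAPI_DRS_DestroySession(session);
    NvAPI_Unload();
}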
Sadly, I still haven't figured out the GL_TEXTURE_2D_ARRAY problem. It seems to be a driver issue where the generated mipmaps are not copied to both GPUs when glGenerateMipmap() is called on an array texture. This may be intended behavior, but the same function certainly works fine for normal GL_TEXTURE_2D textures.
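For clarity, this is the call pattern I mean (the sizes, layer count and surrounding setup are made up for illustration; it assumes an active GL context). The final glGenerateMipmap() call is the one whose result apparently ends up on only one GPU when the target is GL_TEXTURE_2D_ARRAY:

// Minimal illustration of the problematic pattern, not a complete program.
// With GL_TEXTURE_2D this works fine under SLI; with GL_TEXTURE_2D_ARRAY the
// generated mip levels seem to end up on only one of the GPUs.
GLuint texArray;
glGenTextures(1, &texArray);
glBindTexture(GL_TEXTURE_2D_ARRAY, texArray);
glTexImage3D(GL_TEXTURE_2D_ARRAY, 0, GL_RGBA8,
             256, 256, 16,            // width, height, layers
             0, GL_RGBA, GL_UNSIGNED_BYTE, nullptr);
// ...upload layer data with glTexSubImage3D(), or render into the layers...
glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR_MIPMAP_LINEAR);

// Works as expected for GL_TEXTURE_2D, but for GL_TEXTURE_2D_ARRAY the
// result apparently isn't mirrored to the second GPU.
glGenerateMipmap(GL_TEXTURE_2D_ARRAY);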