Depends on what you mean by instancing? I'm mostly using OpenGL, so the D3D equivalent is not available, if that's what you meant.
The LOD algorithm itself is not particularly original (it's based on edge collapse), but I've been very careful about how I implemented it. I introduced the notion of virtual vertices (basically a set of vertices all sharing the same position): if you are collapsing a corner of a cube mesh, for example, you don't want to collapse only one triangle, as that would create a hole. I also have some fairly advanced error metrics based on the change of topology, surface area, normals, texture coordinates and materials. The error equation still has to be tuned by hand, though, since it involves a constant weight for each parameter.
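To make the weighted-sum idea concrete, here is a minimal sketch of what such a collapse cost could look like. The term names and the weight values are purely illustrative assumptions, not the actual implementation:

```cpp
// Hypothetical per-collapse error terms; the fields and weights below are
// illustrative only, chosen to match the parameters mentioned above.
struct CollapseError {
    double topology;   // penalty for changing mesh topology (e.g. opening a hole)
    double area;       // change in total surface area
    double normal;     // deviation of face normals after the collapse
    double texcoord;   // stretch introduced in texture coordinates
    double material;   // 1.0 if the edge crosses a material boundary, else 0.0
};

// Combine the terms into a single collapse cost using constant weights.
// These weights are arbitrary and would need hand-tuning per data set,
// which is exactly the adjustment problem described above.
double collapseCost(const CollapseError& e) {
    const double wTopology = 4.0;
    const double wArea     = 1.0;
    const double wNormal   = 2.0;
    const double wTexcoord = 0.5;
    const double wMaterial = 8.0;
    return wTopology * e.topology
         + wArea     * e.area
         + wNormal   * e.normal
         + wTexcoord * e.texcoord
         + wMaterial * e.material;
}
```

Edges would then be collapsed in increasing order of cost until the target triangle count for the level is reached.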
The LOD algorithm generates N index arrays (N is the number of levels and is given by the user, at pre-processing time for example), but the original vertices are not modified. Rendering an LOD mesh simply involves a distance calculation (to determine which level to use) and a draw call with the correct index buffer.
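A minimal sketch of that selection step, assuming levels are ordered from most detailed (0) to coarsest (N-1) and that each level covers a fixed, uniform distance interval (both assumptions are mine, for illustration):

```cpp
#include <cstddef>
#include <vector>

// One index array per level; the vertex buffer is shared by all levels
// and never modified, as described above.
struct LodMesh {
    std::vector<std::vector<unsigned int>> indexArrays;
    float levelStep; // distance interval covered by each level (assumed uniform)
};

// Pick the index array to draw with, based on distance to the camera.
std::size_t selectLevel(const LodMesh& mesh, float distanceToCamera) {
    std::size_t level =
        static_cast<std::size_t>(distanceToCamera / mesh.levelStep);
    std::size_t maxLevel = mesh.indexArrays.size() - 1;
    return level > maxLevel ? maxLevel : level; // clamp to the coarsest level
}
```

Rendering would then bind the shared vertex buffer once and issue something like glDrawElements with the selected level's index array.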
In retrospect, it was a slightly silly question. I just meant whether you could draw them all in a single draw call, or whether you had to issue one draw call per object. Of course it has to be one draw call per object, because the number of polygons is not the same for all objects.
If your algorithm is based on edge-collapse, perhaps you could extend it to support progressive LOD?
I could implement progressive LOD, but I'm not going to, for a variety of reasons. One of them is that, at the moment, all the geometry is shared by all the instances, including the index arrays. If I wanted to implement progressive LOD, I'd have to give each instance its own index array and use the CPU to refill it dynamically every frame or so, which wouldn't be terribly efficient. However, I'm investigating geomorphing, which could easily be implemented in a vertex shader.
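For readers unfamiliar with geomorphing: the core of it is a per-vertex interpolation between a vertex's position at the finer level and the position it collapses to at the coarser level, driven by a blend factor derived from camera distance. Here is a CPU-side sketch of that blend (in a vertex shader this would just be a lerp/mix on two position attributes); the names are mine:

```cpp
struct Vec3 { float x, y, z; };

// Geomorphing blend: t goes from 0 (fully at the finer level) to 1 (fully
// at the coarser level), typically computed from where the camera distance
// falls within the current level's transition range.
Vec3 geomorph(const Vec3& fine, const Vec3& coarse, float t) {
    return { fine.x + (coarse.x - fine.x) * t,
             fine.y + (coarse.y - fine.y) * t,
             fine.z + (coarse.z - fine.z) * t };
}
```

Because the blend depends only on the shared vertex data and a uniform blend factor, it fits the shared-geometry constraint above: no per-instance index arrays are needed.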