You've got an arbitrary mesh. Traditional progressive mesh systems are built around the idea of primitive 'operations' (basically, edge collapse and vertex split). A VIPM (view-independent progressive mesh) is a mesh plus a sequence of these operations that takes the mesh from maximum detail to minimum detail. You change the LOD dynamically by moving back and forth along that sequence.
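The VIPM idea above can be sketched as a mesh plus an ordered list of collapses, with a cursor you slide along it. All the names here (`EdgeCollapse`, `VIPM`, `set_lod`) are hypothetical, and the actual mesh surgery is stubbed out:

```python
from dataclasses import dataclass

@dataclass
class EdgeCollapse:
    keep: int     # vertex that survives the collapse
    remove: int   # vertex that disappears
    # ...plus whatever is needed to undo it (the vertex split)

@dataclass
class VIPM:
    collapses: list   # ops ordered from full detail down to minimum detail
    applied: int = 0  # how far along the sequence we currently are

    def set_lod(self, n):
        """Apply or undo collapses until exactly n are in effect."""
        while self.applied < n:
            self.apply(self.collapses[self.applied])
            self.applied += 1
        while self.applied > n:
            self.applied -= 1
            self.undo(self.collapses[self.applied])

    def apply(self, op):  # collapse: merge op.remove into op.keep
        pass

    def undo(self, op):   # vertex split: restore op.remove
        pass
```

The point is just that LOD selection is a single integer: how far along the precomputed sequence you've walked.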
A VDPM (view-dependent progressive mesh) is harder, because the chain of operations depends on where the camera is. A VIPM brings detail down uniformly across the model, but a VDPM aims to keep the sections nearest the camera at highest detail for longest.
So here's what I'm thinking. We want some way of organising our primitive operations such that we can query with a direction and get back a sequence of ops. We'd then execute operations along that sequence to achieve the desired LOD. So, what if we were to attach each of these LOD operations to a point in space, and arrange those points within a sphere?
The basic principle is this: If you're looking at a model from one direction, the first operations you want to perform are the ones right around the backside of the model - they're not much of a win because they would have been backface culled anyway, but they cut down your vertex processing costs (and that's really what this is all about). The last operations you want to perform are the ones affecting geometry right in front of you. If you were to go around to the other side of the model, the operations you want to perform would be exactly reversed - the ops that were previously your top priority are now your last resort, and vice versa.
So say you arrange all operations in a sphere. You take the direction that the camera is looking in and project (orthogonally) the sphere in that direction onto a plane. What you get is a circular arrangement of point-ops. Each one has a 'depth' relative to the others, which allows you to sort operations, or to apply operations up to a specific depth. The thing I'm not sure about is what the distance from the center of the circle means... possibly 'certainty' (operations near the center will be very definitely totally-no-visual-effect or totally-screw-everything-up, while operations towards the outside are more 'meh').
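That projection-and-sort step is cheap to express: the 'depth' of a point-op is just its dot product with the view direction, and sorting on that gives the execution order. A minimal sketch, assuming each op carries a position on the unit sphere (the names `depth_order` and the sample ops are made up):

```python
def depth_order(ops, view_dir):
    """Sort point-ops by projected depth along the view direction.

    ops: list of (name, (x, y, z)) with positions on the unit sphere.
    view_dir: the direction the camera is looking in.
    Ops deepest along view_dir (the model's back side) come first,
    so they are the first collapses to perform.
    """
    def depth(pos):
        return sum(a * b for a, b in zip(pos, view_dir))
    return sorted(ops, key=lambda op: depth(op[1]), reverse=True)

front = ("front_op", (0.0, 0.0, -1.0))  # faces the camera
back  = ("back_op",  (0.0, 0.0,  1.0))  # on the far side of the model
assert depth_order([front, back], (0.0, 0.0, 1.0))[0][0] == "back_op"
# Walk round to the other side and the priorities exactly reverse:
assert depth_order([front, back], (0.0, 0.0, -1.0))[0][0] == "front_op"
```

Note this also captures the reversal property: negating the view direction negates every depth, so the sort order flips end for end.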
What this essentially does is group primitive operations by their spatial effect. Which leads me to think some more...
One problem is dependencies. Certain ops don't really exist after other ops have taken place, or at least, their effects will be different depending on what has already happened. An edge-collapse causes two vertices to be united at the midpoint of that edge; if another edge-collapse operation referred to one of those vertices, that operation is now going to have a different effect.
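One common way to keep such later ops valid is a remap table: when a collapse removes a vertex, record where it went, and resolve through the chain at runtime. This doesn't solve the deeper 'different effect' problem, just the dangling-reference part of it; the sketch below is an assumption, not part of the scheme above:

```python
def resolve(remap, v):
    """Follow the remap chain to the vertex that currently exists."""
    while remap.get(v, v) != v:
        v = remap[v]
    return v

remap = {}
# Collapse edge (2, 5): vertices 2 and 5 unite; say vertex 2 survives.
remap[5] = 2
# A later collapse was recorded against vertex 5 at build time;
# at runtime it must act on vertex 2 instead.
assert resolve(remap, 5) == 2
```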
So perhaps we can give the primitive operations more than just position values within the sphere. Perhaps we need to give them volume too. If each operation is a small sphere, then when we evaluate its center for projection, that center is attracted by other spheres that intersect it. If performing an operation is going to make a number of other operations less effective, that operation could have a large sphere that, when applied, 'attracts' those operations in the direction of less-effectiveness.
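A purely speculative sketch of that attraction idea: each op is a (center, radius) sphere, and before projecting, intersecting spheres pull each other's centers together, so ops that interfere end up with similar projected depths. The attraction rule (linear in the overlap fraction, with a made-up `strength` knob) is entirely my assumption:

```python
import math

def attract(ops, strength=0.5):
    """Return adjusted centers: intersecting spheres drift toward each other.

    ops: list of ((x, y, z), radius). Non-intersecting spheres don't move.
    """
    adjusted = []
    for i, (ci, ri) in enumerate(ops):
        dx = dy = dz = 0.0
        for j, (cj, rj) in enumerate(ops):
            if i == j:
                continue
            d = math.dist(ci, cj)
            if 0 < d < ri + rj:                      # spheres intersect
                overlap = (ri + rj - d) / (ri + rj)  # 0..1, bigger = deeper
                dx += strength * overlap * (cj[0] - ci[0])
                dy += strength * overlap * (cj[1] - ci[1])
                dz += strength * overlap * (cj[2] - ci[2])
        adjusted.append((ci[0] + dx, ci[1] + dy, ci[2] + dz))
    return adjusted

# Two overlapping unit spheres drift toward each other...
a, b = ((0.0, 0.0, 0.0), 1.0), ((1.0, 0.0, 0.0), 1.0)
adj = attract([a, b])
assert adj[0][0] > 0.0 and adj[1][0] < 1.0
```

You'd run this once at build time (or whenever the op arrangement changes), then feed the adjusted centers into the depth projection as before.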
Bah, I'm going home.