If you take a D3D application, it's actually sitting on top of several layers. The first layer is the D3D runtime itself. What's underneath that changes somewhat between XP and Vista. In XP, D3D sits on top of a kernel mode driver (KMD), which interacts with the HAL and a miniport driver (also in the kernel). There's a splitting of responsibilities between the various pieces. D3D essentially accumulates and validates information, and then forwards it to the kernel driver at appropriate times (draw calls, present calls, etc.). The miniport driver handles low-level communication details; everything else is handled by the main kernel driver. The driver is very much on its own here, and has a relatively direct connection to the application.
In Vista, things shifted around and the split changed. The main graphics driver no longer lives in the kernel, but rather in user mode as a normal DLL which D3D loads for you. D3D communicates with this layer through a series of calls and callbacks. There's an equivalent, separate layer for OpenGL. Both the OGL and D3D user mode drivers communicate with a common kernel mode layer, and the KMD interacts with Vista's backend graphics scaffolding. In particular, Vista takes over on two fronts: resource management and GPU scheduling. In XP and earlier, the driver was solely responsible for deciding how allocations and deallocations were placed and managed, how DMA was handled, and when the GPU did what. All of those responsibilities have now been taken over by the OS*, which provides the scheduling, virtual memory, and so on. The driver negotiates with Vista over all of that, and it's worth noting that direct pointers are no longer manipulated, because physical addresses are not always available to the driver. (Quite rarely, actually.) Kernel mode switches in this setup only happen when information needs to be communicated down to the kernel layer; this happens only when the user mode command buffer is full, when a present happens, or in a couple of other situations that force the KMD to become involved.
It should be clear by this point that the interactions between the various components are more complex under Vista. That shouldn't be a surprise, of course -- virtual memory, scheduling, etc. weren't going to come for free. For the most part, the existing NVIDIA driver architecture seems to have fit in quite well; I assume that the designs of the ATI and NVIDIA drivers were both major factors in the design of the new Windows Display Driver Model (WDDM, formerly LDDM for Longhorn). Admittedly I don't know what things looked like pre-Vista, but right now it all looks fairly sane. The behavior of the driver isn't that different between XP and Vista; the main difference is that under Vista, some of the pieces aren't written by us anymore.
One particularly uncomfortable question is whether or not Vista is inherently slower than XP when it comes to graphics. That's a sticky discussion, and like all performance-related discussions, it's hideously complex and intertwined. All I can really tell you is that Vista is for the most part slower than XP right now -- benchmarks by hardware sites have established that quite soundly. There are many issues to be worked through, both in our drivers and in Vista itself. We have plenty of tuning to do, and MS has some mistakes to fix. I suspect that things will get much, much better after Vista SP1 is released, thanks to work on the part of both companies. (Also, I can't share the expected timeline for that. Sorry.) Things are getting better every day and will continue to improve.
That's enough for now. In the next part, I'll discuss the actual process of working on the driver.
* This is described in internal documentation under the sub-heading: "All Your Bufs are Belong to OS".