I ended up creating an IDX11Graphics interface which encapsulates the device context, the device, and the factory object. I have a simple Init() function which enumerates all mode data from all of the adapters and files it away into a data structure that lets me filter modes by adapter, output, and format. Pretty simple stuff. Initialization is the easiest part to encapsulate, which is obvious when you look at the DXUT library that Microsoft provides.
I'm starting to realize that there is little you can do to improve the API without moving up a software abstraction layer. The farther up you go, however, the more flexibility you lose. For instance, I was trying to create a render target wrapper class, but I realized that the components of a render target really weren't meant to be coupled, which is undoubtedly why Microsoft kept them separate in the first place. D'oh! There is a nice relationship between how DirectX handles resources and resource views; it seems clunky to just plop them both together into one helper class.
For instance, how do we handle render targets in a flexible way that actually simplifies the process? We could build a class that creates a texture, render target view, and shader resource view, but then how do we handle swap chains, which have a different creation process? The class could take a pointer to the texture as input to allow the user to create it however they want; or we could remove the shader resource view and put it in another supporting class. As another example, I considered writing one IDX11Texture class to encapsulate all types of textures, but one big problem I immediately ran into was the high potential for bloat. In order to accommodate all the different types of textures (1D, 2D, 3D, etc.), the class would have to include ALL of the data. Ultimately, I think these options detract from the elegance of the SDK and make things more confusing and less flexible. This is one aspect of object oriented programming that drives me crazy. There just doesn't seem to be a Right Way to do it.
On the other hand, in order to simplify things, we have to sacrifice flexibility at some point. It's just going to happen. I guess where I struggle is finding the happy medium where an abstraction makes the programmer's life lots easier, but still retains the features they need for their graphics algorithms. It's definitely hard to know what features are needed when you're just learning the SDK--a fact I know firsthand.
I think this is the challenge that appeals to me the most about game engine development. There is so much software engineering involved. If you don't carefully plan out the intricate relationships between components and subsystems, the entire thing quickly degrades into chaos. I have high respect for people who can grasp it all.
In any case, I have a graphics interface in place that takes care of initialization and device enumeration. I am still deciding what other tasks I should give it--whether it should just be a glorified ID3D11Device, or more of an abstract renderer. Because I am still going to retain DirectX 11 access within my engine, I am hoping that it can become both. What really bugs me is when I create a helper interface whose functions end up being five lines apiece that just parrot calls back to the Direct3D device. That doesn't seem constructive to me at all.
In any case, that's all I've got for now.