The first method I tried was image-space discontinuity edge detection: I rendered the geometry's normals and depth values to an intermediate render target, then rendered that over the scene with alpha testing so that only the appropriate edges showed up.
This worked OK, but it wasn't very fast, and it had trouble resolving small depth differences on the object when the normal vectors were nearly identical. So I then tried an A16B16G16R16 render target to increase the precision of the depth values.
This worked better, but did nothing to improve the rendering speed. I also wanted this method to work on shader model 1.1 if at all possible, and the amount of sampling required to get decent results was a bit prohibitive.
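To make the image-space idea concrete, here's a rough CPU-side sketch of the kind of discontinuity test the shader performs. The thresholds, the forward-difference neighborhood, and NumPy itself are all my assumptions for illustration, not the actual shader code:

```python
import numpy as np

def detect_edges(depth, normals, depth_thresh=0.01, normal_thresh=0.9):
    """Flag pixels whose depth or normal differs sharply from the
    pixel to the right or below (a simple forward-difference filter).

    depth:   (H, W) array of scene depth values
    normals: (H, W, 3) array of unit normals
    Thresholds are illustrative guesses, not values from the post.
    """
    h, w = depth.shape
    edges = np.zeros((h, w), dtype=bool)

    # Depth discontinuity: large difference with the right/down neighbour.
    # This is the test that struggles at low precision, motivating the
    # switch to a 16-bit-per-channel render target.
    edges[:, :-1] |= np.abs(depth[:, 1:] - depth[:, :-1]) > depth_thresh
    edges[:-1, :] |= np.abs(depth[1:, :] - depth[:-1, :]) > depth_thresh

    # Normal discontinuity: dot product of neighbouring normals
    # falls below the threshold (i.e. the surfaces bend sharply).
    edges[:, :-1] |= np.sum(normals[:, 1:] * normals[:, :-1], axis=-1) < normal_thresh
    edges[:-1, :] |= np.sum(normals[1:, :] * normals[:-1, :], axis=-1) < normal_thresh
    return edges
```

Note that each output pixel needs several texture samples of the interim target, which is exactly the cost that made this approach a poor fit for shader model 1.1's limited sampling budget.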
Next I decided to look into some geometry-based methods. Since the geometry was relatively small, it wasn't such a big time delay to process it at load time, find the important edges, and keep a list of them. I'll describe the several methods I tried and where I finally ended up in the next post (with some screenshots of the results, too!)
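As a rough sketch of what that load-time pass might look like, the code below walks a triangle mesh, maps each edge to the faces that share it, and keeps boundary edges plus creases where adjacent face normals diverge past an angle threshold. The function name, the crease-angle parameter, and the exact edge classification are my own illustrative choices, not necessarily what the post's final method does:

```python
from collections import defaultdict
import numpy as np

def find_feature_edges(vertices, faces, crease_angle_deg=30.0):
    """Precompute 'important' edges at load time: boundary edges
    (only one adjacent face) and crease edges (adjacent face normals
    differ by more than crease_angle_deg). Hypothetical sketch.

    vertices: list of (x, y, z) tuples
    faces:    list of (i0, i1, i2) vertex-index triangles
    """
    verts = np.asarray(vertices, dtype=float)

    def face_normal(f):
        a, b, c = verts[f[0]], verts[f[1]], verts[f[2]]
        n = np.cross(b - a, c - a)
        return n / np.linalg.norm(n)

    normals = [face_normal(f) for f in faces]

    # Map each undirected edge to the indices of the faces sharing it.
    edge_faces = defaultdict(list)
    for fi, f in enumerate(faces):
        for i in range(3):
            e = tuple(sorted((f[i], f[(i + 1) % 3])))
            edge_faces[e].append(fi)

    cos_thresh = np.cos(np.radians(crease_angle_deg))
    feature = []
    for e, fs in edge_faces.items():
        if len(fs) == 1:
            feature.append(e)  # boundary edge: only one face uses it
        elif len(fs) == 2 and np.dot(normals[fs[0]], normals[fs[1]]) < cos_thresh:
            feature.append(e)  # crease edge: faces meet at a sharp angle
    return feature
```

Keeping a list like this around means the per-frame work drops to deciding which of these candidate edges are visible, rather than scanning every pixel of an interim render target.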