My first iteration of the renderer used a single [font=courier new,courier,monospace]IDirect3DDevice9[/font] object and called [font=courier new,courier,monospace]IDirect3DDevice9::CreateAdditionalSwapChain()[/font] once per "screen", rendering multiple 'windows' positioned and sized to simulate full screen.
This worked well; however, I'd get lots of screen tearing because I used [font=courier new,courier,monospace]D3DPRESENT_INTERVAL_IMMEDIATE[/font] in the presentation parameters, since I don't want to stall updates waiting for the vertical blank. I also assume that with a single device querying a single adapter, I can only rely on the vBlank period of the 'main' monitor, not both.
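For context, the setup above looks roughly like this. This is a minimal sketch, not my exact code; [font=courier new,courier,monospace]hWndSecondScreen[/font] is a placeholder for the window covering the second monitor:

```cpp
#include <d3d9.h>

// Presentation parameters for one extra "screen". IMMEDIATE disables vsync,
// which is exactly what causes the tearing described above.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed             = TRUE;
pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat     = D3DFMT_X8R8G8B8;
pp.hDeviceWindow        = hWndSecondScreen;               // hypothetical HWND
pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;  // no vBlank wait

IDirect3DSwapChain9* extraChain = nullptr;
device->CreateAdditionalSwapChain(&pp, &extraChain);

// Per frame, per chain: bind that chain's back buffer, draw, then present it.
IDirect3DSurface9* backBuffer = nullptr;
extraChain->GetBackBuffer(0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
device->SetRenderTarget(0, backBuffer);
// ...draw this screen's scene...
extraChain->Present(nullptr, nullptr, hWndSecondScreen, nullptr, 0);
backBuffer->Release();
```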
So this led me to rewrite the renderer to use multiple devices, one per monitor, each created on the adapter that drives its respective monitor. For my purposes this works just as well as the additional-swap-chain method, but I'm back to the same tearing issue, since the alternative is stalling the main thread twice, once per vBlank, which leads to unacceptable framerate jittering.
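The multi-device version is set up roughly like this. Again a sketch, with [font=courier new,courier,monospace]windowForMonitor()[/font] standing in for my own window lookup:

```cpp
#include <d3d9.h>
#include <vector>

IDirect3D9* d3d = Direct3DCreate9(D3D_SDK_VERSION);
std::vector<IDirect3DDevice9*> devices;

// One device per adapter, so each monitor is driven by "its" adapter.
for (UINT i = 0; i < d3d->GetAdapterCount(); ++i) {
    HMONITOR mon = d3d->GetAdapterMonitor(i);  // map adapter -> monitor

    D3DPRESENT_PARAMETERS pp = {};
    pp.Windowed             = TRUE;
    pp.SwapEffect           = D3DSWAPEFFECT_DISCARD;
    pp.hDeviceWindow        = windowForMonitor(mon);  // hypothetical helper
    pp.PresentationInterval = D3DPRESENT_INTERVAL_IMMEDIATE;

    IDirect3DDevice9* dev = nullptr;
    d3d->CreateDevice(i, D3DDEVTYPE_HAL, pp.hDeviceWindow,
                      D3DCREATE_HARDWARE_VERTEXPROCESSING, &pp, &dev);
    devices.push_back(dev);
}
```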
[hr]
If I were to put forth the considerable effort of rewriting my engine as a multi-threaded renderer, would that solve the screen tearing? More specifically, can I create a thread for each device and render/present them independently? And if all D3D API calls must stay on the same thread (which I believe is the case), will stalling multiple times waiting for each vBlank still let me hold a steady 60 FPS?
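To make the question concrete, here's the shape of what I'm imagining. This assumes (and I'd like confirmation) that a D3D9 device may live entirely on its own thread as long as it is created and used only from that thread, with [font=courier new,courier,monospace]D3DCREATE_MULTITHREADED[/font] needed only if a single device were touched from several threads:

```cpp
#include <d3d9.h>
#include <thread>

// Hypothetical per-monitor render loop: each device is confined to one thread,
// so a vsynced Present() (D3DPRESENT_INTERVAL_ONE in that device's present
// parameters) would block only its own thread while waiting for that
// monitor's vBlank, not the whole engine.
void renderThread(IDirect3DDevice9* dev) {
    for (;;) {
        dev->Clear(0, nullptr, D3DCLEAR_TARGET, 0, 1.0f, 0);
        dev->BeginScene();
        // ...draw this monitor's scene...
        dev->EndScene();
        dev->Present(nullptr, nullptr, nullptr, nullptr);  // stalls this thread only
    }
}

// std::thread t0(renderThread, deviceForMonitor0);  // hypothetical devices
// std::thread t1(renderThread, deviceForMonitor1);
```

Is that a legal and sensible use of the API, or am I misunderstanding the threading rules?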