Let's see if I got this right. So the behind-the-scenes paging in and out that Windows does for virtual GPU memory sharing is not something DX12 game developers will see (i.e., if Windows wants to evict something to make room for another process)?
Instead, DX12 game developers can predict which textures they will want to draw soon and start streaming those textures in beforehand. But DX12 does not guarantee that a texture currently in use won't be tossed out of VRAM by Windows? And that eviction is transparent to the programmer (it happens in the background)? In which case the GPU stalls until the resource is paged back in?
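For reference, the explicit residency controls I have in mind are `ID3D12Device::MakeResident` and `ID3D12Device::Evict`. A rough sketch of how a streaming system might use them (assuming `device` and `streamingHeap` already exist; the names are made up for illustration):

```cpp
// Sketch only: `device` is an ID3D12Device*, `streamingHeap` is an
// ID3D12Heap* holding textures we predict we'll sample soon.
// Both are assumed to have been created elsewhere.

// Ask the OS video memory manager to page the heap's backing
// allocation into VRAM ahead of use, so draws don't stall on it.
ID3D12Pageable* pageables[] = { streamingHeap };
HRESULT hr = device->MakeResident(1, pageables);

// ... record and submit draws that sample textures in that heap ...

// When we no longer expect to need these textures, hand the VRAM
// back proactively instead of waiting for the OS to demote the
// allocation under memory pressure.
device->Evict(1, pageables);
```

As I understand it, `MakeResident`/`Evict` are hints for the app's own allocations; they don't stop the OS from demoting residency system-wide when another process needs the memory, which is the part I'm asking about.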
So we can expect better performance as long as Windows isn't paging things in and out in the background?