To get a decent-looking render you're going to have to use textures at some point. That's the whole reason behind "pixel" shaders.
What is the refresh rate on both monitors? Have you tried turning off Aero in windows 7 (the see through glass around windows)?
Vsync blocks Present() until the next vertical refresh. With a slower monitor refresh rate (lower than the app's frame rate), your CPU runs ahead of the GPU. DX buffers up to a maximum of 3 frames of graphics commands (triple buffering); beyond that you get frame-rate drops and input lag. Turning on the second monitor only increases this lag because the OS has to vsync two desktops (copying back buffers to surfaces).
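To see why this quantizes your frame rate, here's a toy model (not real DirectX calls; the helper names are my own) of a vsynced Present(): the effective frame time is the work time rounded up to the next refresh boundary, so barely missing a vblank halves your frame rate.

```cpp
#include <cmath>

// Toy model of a vsynced Present(): the frame can only be shown at a
// refresh boundary, so the effective frame time is the CPU/GPU work time
// rounded up to a whole multiple of the refresh interval.
double effectiveFrameMs(double workMs, double refreshHz) {
    double interval = 1000.0 / refreshHz;            // e.g. ~16.67 ms at 60 Hz
    return std::ceil(workMs / interval) * interval;  // blocked until next vblank
}

double effectiveFps(double workMs, double refreshHz) {
    return 1000.0 / effectiveFrameMs(workMs, refreshHz);
}
```

For example, on a 60 Hz monitor a 10 ms frame still takes ~16.67 ms (60 FPS), and a 17 ms frame, just over one interval, drops you straight to 30 FPS. That cliff is why a second, slower display can suddenly tank an app that was comfortably fast before.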
To my knowledge, there is no way in DX to get the exact time the refresh happens, and even if you could, you'd waste a lot of cycles looping and doing nothing.
Is there a specific reason you want to use vSync? What are you trying to accomplish?
If you have to use it, then you'll have to process your mouse and keyboard input on another thread, which can be rather tricky because of deadlocks while the main thread (where Present() lives) is blocked by vsync.
In my experience, I never use vSync. I lock the frame rate in code, measure the elapsed time between frames, and use that elapsed time to calculate the speed of movements and animations. Since I never use vSync, I'm only familiar with what the documentation says it does, not how it actually behaves with other objects, code, or situations.
In my applications I force a frame-rate limit and measure the elapsed time to smooth movement and animations. Fluctuating frame rates (within reason) are not uncommon. Without seeing any code, I can only guess.
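The elapsed-time approach can be sketched like this (the struct and names are my own, not anyone's engine code): speed is expressed in units per second and multiplied by the frame's delta time, so movement covers the same distance per second regardless of frame rate.

```cpp
// Frame-rate-independent movement: a fast machine takes many small steps,
// a slow machine takes fewer, larger steps; both cover the same distance
// over the same wall-clock time.
struct Mover {
    double position = 0.0;
    double unitsPerSecond = 5.0;  // speed in world units per second

    void update(double elapsedSeconds) {
        position += unitsPerSecond * elapsedSeconds;
    }
};
```

In a real loop you'd feed `update()` the measured time between frames (e.g. from `std::chrono::steady_clock`), which is exactly what makes fluctuating frame rates tolerable: the simulation doesn't care how long each frame took.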
If you really want to know what is going on, I'd recommend you profile your code with PIX.
To narrow down your problem I would remove everything from your render loop, then add functionality back piece by piece, testing for the lag. There are really only a couple of things that could be the problem:
Garbage collection (if the lag is random)
CPU is outrunning the GPU (consider a separate thread with a message queue to process the input)
In any event you should use PIX to profile your code and see where the delay is coming from.
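The message-queue idea from the list above can be sketched as a small thread-safe queue (the event type and class names are hypothetical): the window/input thread pushes events, and the render thread drains them once per frame, so input handling never sits blocked behind a vsynced Present().

```cpp
#include <mutex>
#include <queue>

// Hypothetical input event; a real app would carry mouse deltas, etc.
struct InputEvent { int key; bool pressed; };

// Minimal thread-safe queue: producer = input thread, consumer = render thread.
class InputQueue {
    std::queue<InputEvent> events_;
    std::mutex mutex_;
public:
    void push(const InputEvent& e) {
        std::lock_guard<std::mutex> lock(mutex_);
        events_.push(e);
    }
    // Returns false when no event is pending, so the render thread never blocks.
    bool poll(InputEvent& out) {
        std::lock_guard<std::mutex> lock(mutex_);
        if (events_.empty()) return false;
        out = events_.front();
        events_.pop();
        return true;
    }
};
```

The lock is held only for the push/pop itself and `poll()` never waits, which sidesteps the deadlock risk mentioned earlier: neither thread ever blocks on the other.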
I have 2 monitors and have never had this problem. When I ask DX for a list of adapters, there are 2 adapters returned with the same name, and if I choose the second one, device creation fails, so I use the default without issue. DX will give you the default adapter.
It might also help if you post more information about your rig: your graphics card, monitors, and OS.
I think the MSDN docs are referring to the case where your game's installer has to install the DirectX runtime because it is not present on the end user's machine. If the end user already has DirectX installed, there is no need to display the EULA because they already agreed to it when they installed DirectX.
To get 100% clarity I'd suggest contacting Microsoft directly.
As for the format, it depends on what your game engine's importer supports. Natively, DX 10 and earlier versions support the .X file format; DX11+, however, doesn't. There are third-party plug-ins for 3ds Max that will let you export to the .X format.
When modeling vegetation, most large studios use SpeedTree. It's expensive, and I don't think there is a demo or any licensing for indies. Usually when I model a tree I start out with a cylinder and then extrude, move, and scale until I get the trunk and branches the way I want them. I add the leaves with a separate material; my game engine imports them as a subset and lets me tag them as transparent, setting the appropriate blend stage when rendering.
There are also several free (or low-cost) tree generators that let you quickly create trees and export them in the .X file format. I think Tree[d] is a good one. Just make sure you read the license and usage terms carefully.
You could be clearing the target before rendering the next iteration.
Your blending operations could be wrong.
If you're using shaders, the shader might not be updated with the new data on the next draw call.
Or there could be an issue with your draw loop.
Use PIX to debug so you can pinpoint where the issue is happening in code.
DirectX, SlimDX, XNA: they're all based on pretty much the same platform, so the only real differences are the language used to access them and who is responsible for memory cleanup. In other words, you can take C++ DirectX code and easily turn it into XNA or SlimDX code. Just keep in mind that the XNA framework uses a right-handed coordinate system.
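That handedness difference is the one porting gotcha worth spelling out. The usual recipe for moving geometry from a left-handed (classic Direct3D) to a right-handed (XNA-style) system is to negate Z and flip each triangle's winding order; a minimal sketch (types and names are mine):

```cpp
struct Vec3 { float x, y, z; };

// Mirror a point across the XY plane: left-handed Z-forward becomes
// right-handed Z-backward.
Vec3 toRightHanded(const Vec3& v) {
    return { v.x, v.y, -v.z };
}

// Negating Z mirrors the geometry, which reverses triangle facing, so the
// index buffer's winding must be flipped too (swap two indices per triangle).
void flipWinding(unsigned* tri) {  // tri points at 3 indices
    unsigned tmp = tri[1];
    tri[1] = tri[2];
    tri[2] = tmp;
}
```

If you skip the winding flip, back-face culling will reject the front faces and your mesh appears inside-out.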
Again, no help from this forum… not complaining, just saying. Am I on a black list or is my question not worthy or worded correctly?
Anyway, I solved my problem on my own. For anyone who is having the same/similar issue:
My rendering process uses multiple passes to render lighting, meaning I use additive blending for each light. When I started adding shadow mapping, I rigged my light shader to also calculate the shadow. This left me with a decent shadow for the first light, but after that each shadow was blended with the same additive blending used by my additional lights, making the shadow too light (barely visible) and inconsistent with the first shadow. I also started noticing how limiting the coupling of lights and shadows was.

To decouple them and solve my issue, I first render the scene with ambient light only. Then I render my shadows in the next pass (for each light) with a modulative blend: Source Blend = ZERO and Destination Blend = SOURCE COLOR. In my shadow-rendering shader I return white for non-shadowed areas and black for shadowed areas. Then I render each light with Blend ONE for both Source and Destination. This is working well, but there's more tweaking to be done (e.g. handling transparency for the shadow, as well as billboards with transparency).
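The two blend states above can be checked with a toy per-channel model of the fixed-function blend equation, result = srcColor * srcBlend + destColor * destBlend (function names are mine; values are one channel in [0,1]):

```cpp
// Shadow pass: SrcBlend = ZERO, DestBlend = SRCCOLOR
//   result = src*0 + dest*src = dest * src  (modulative)
// A shader output of 1.0 (lit) leaves the frame buffer untouched;
// 0.0 (shadowed) darkens it all the way to black.
double shadowPassBlend(double src, double dest) {
    return src * 0.0 + dest * src;
}

// Light pass: SrcBlend = ONE, DestBlend = ONE
//   result = src + dest  (additive; hardware clamps to 1.0, omitted here)
// Each light's contribution accumulates on top of what's already there.
double lightPassBlend(double src, double dest) {
    return src * 1.0 + dest * 1.0;
}
```

This also shows why the shadow pass is consistent for every light: multiplying by the white/black shadow mask scales the existing color down uniformly instead of adding an ever-fainter term.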
I've figured this out on my own. For those of you struggling with cubic shadow mapping, TiagoCosta's PS code does work, but it's the basic building block. You'll have to massage the code to work with your existing engine/app. To keep my box from being completely encapsulated in shadow, I added object IDs in the green channel of the texture and then checked the IDs when rendering the box normally. This gives me the flexibility to mark an object as caster and/or receiver, and also allows objects to shadow each other.
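The ID check described above might look roughly like this (a hypothetical sketch; the encoding, names, and parameters are my own, not the actual shader): a fragment counts as shadowed only if it fails the depth comparison, the stored caster ID differs from its own (so an object doesn't falsely self-shadow), and the object is flagged as a receiver.

```cpp
#include <cstdint>

// casterId comes from the shadow map's green channel (stored as 0..255);
// receiverId is the ID of the object currently being shaded.
bool isShadowed(float fragmentDepth, float mapDepth,
                std::uint8_t casterId, std::uint8_t receiverId,
                bool receiverReceivesShadows) {
    if (!receiverReceivesShadows) return false;     // object opted out
    if (fragmentDepth <= mapDepth) return false;    // fragment is the closest surface
    return casterId != receiverId;                  // ignore self-occlusion
}
```

The same comparison, run per-texel in the pixel shader, is what lets objects shadow each other while keeping a box from ending up entirely inside its own shadow.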
I'm still having issues with jagged edges and with softening the shadows without increasing memory usage; it's a fine line you have to balance.
I chose shadow mapping because of its simplicity, low CPU overhead, and its ability to handle transparency like trees and billboards, something very difficult with shadow volumes. On the downside, shadow mapping is very hardware dependent: it needs the R32 texture format, and a cube texture for point lights (that's 6 textures). The larger your textures, the better the shadow accuracy, but at a cost.
Thanks to TiagoCosta for his help. I'll continue to watch this post in the event I could help someone else.