But as I've used it, I've started to fall out with D3D; the biggest problem is that whatever I'm doing is broken and I've no idea why.
I've got "if(FAILED()) { }" around ALL my drawing calls and nothing is turning up. Switching D3D to debug mode with all boxes checked and maximum debug output gives me a couple of warnings about a VB not being write-only and two infos about my index buffers not being supported in hardware, or some such. Neither of these is a show-stopping error; however, things still aren't doing what I expect.
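For anyone wondering why empty if(FAILED()) blocks tell you nothing: HRESULT is just a 32-bit signed value with the top bit meaning failure, so at minimum you want the failing expression and code logged. A minimal sketch, with HRESULT, D3D_OK, E_FAIL and FAILED() mocked so it compiles outside the Windows SDK (in real code they come from <windows.h>/<d3d9.h>), and CHECK_HR and FakeDrawPrimitive being my own hypothetical names:

```cpp
#include <cstdint>
#include <cstdio>

// Mocked COM bits; in real code these come from <windows.h>/<d3d9.h>.
// HRESULT is a 32-bit signed value; top bit set means failure.
typedef std::int32_t HRESULT;
static const HRESULT D3D_OK = 0;
static const HRESULT E_FAIL = (HRESULT)0x80004005L;
#define FAILED(hr) ((HRESULT)(hr) < 0)

// Wrap each call so the failing expression and code get logged, instead
// of an empty if(FAILED()) {} silently swallowing it.
#define CHECK_HR(expr)                                          \
    do {                                                        \
        HRESULT hr_ = (expr);                                   \
        if (FAILED(hr_))                                        \
            std::fprintf(stderr, "%s failed: 0x%08x\n",         \
                         #expr, (unsigned)hr_);                 \
    } while (0)

// Hypothetical stand-in for a draw call.
HRESULT FakeDrawPrimitive(bool ok) { return ok ? D3D_OK : E_FAIL; }
```

Then `CHECK_HR(FakeDrawPrimitive(false));` prints the offending expression rather than passing in silence.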
The theory is that I'm pulling two vertex streams from two blank textures (created, bound as render targets, cleared and then unbound again); the practice is that instead of the lovely flat plane I was expecting (from a height map which should be all zeros) I get a jagged mess. Swapping it to visualise as colours by writing to the red channel gives me a series of red smudges which get bigger as you move from bottom left to top right. Visualising the normal buffer, which should also be black, as three colour components gives a black bottom left with the top right white, and the top and left sides green and purple (iirc).
There is no way in hell this makes any sense, so it's cheesed me off somewhat, as I've no errors but it's not remotely playing ball. I guess I'll have to try visualising it as a normal texture (I'm producing texture coords as well as vertex coords) and see what I get; if that comes out black then I might just cry.
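The expectation is easy enough to pin down on the CPU side, which at least tells you what the GPU path *should* produce. A minimal sketch, treating the "cleared render target" as a zero-filled float buffer and turning each texel into one grid vertex — Vtx and makeGrid are my own names, not part of any API:

```cpp
#include <cstddef>
#include <vector>

// CPU-side stand-in for the R2VB idea: each texel of the height texture
// becomes one vertex; a cleared (all-zero) texture must yield a flat plane.
struct Vtx { float x, y, z; };

std::vector<Vtx> makeGrid(const std::vector<float>& height, int w, int h)
{
    std::vector<Vtx> out;
    out.reserve(static_cast<std::size_t>(w) * h);
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x)
            out.push_back({ float(x), height[std::size_t(y) * w + x], float(y) });
    return out;
}
```

If this reference grid is flat but the GPU version is jagged, the suspects narrow to the texture readback into the vertex stream, not the mesh generation itself.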
I'm also not getting along too well with various D3D concepts:
- the render targets seem like a joke when you're coming from OGL's FBOs
- .fx/HLSL and me have had a barney due to a slight case of unclear docs. Oh, and the semantics crap is just that: crap. GLSL's 'bind stream to varying' system is MUCH better imo. It also doesn't lead to dumb situations where the compiler can't tell the difference between an incoming stream and an outgoing one (POSITION semantic on SM3.0, I'm looking at you...) because there are certain things you have to write to.
- Vertex declarations make me cry. The fact that you have to create them separately from the point of use just makes fiddling a small nightmare. OK, great, I've said that stream 0 will have two vec2s and that the second has a certain offset in the buffer; now explain to me why, having declared this earlier and set said declaration as current, I then have to resubmit the stride for the thing? Surely this could have been set at declaration creation time? Heck, an overload which works it out from the current declaration would have been nice. But no, if I want to change the vertex format for testing I have to remember to edit one extra place.
Knock OGL's bind system all you like; at least you don't end up doing things in two locations and repeating yourself with it.
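The repeated-stride gripe can at least be papered over yourself: the declaration already contains everything needed to work the stride out, so compute it once and feed that to SetStreamSource. A sketch, with the struct mirroring d3d9.h's D3DVERTEXELEMENT9 and the enum values matching D3DDECLTYPE (in real code you'd include <d3d9.h> instead); strideForStream and typeSize are my own helpers:

```cpp
#include <cstdint>

// Mirrors d3d9.h's D3DVERTEXELEMENT9 closely enough for this sketch.
struct VertexElement {
    std::uint16_t Stream;
    std::uint16_t Offset;
    std::uint8_t  Type;        // D3DDECLTYPE_* value
    std::uint8_t  Method;
    std::uint8_t  Usage;       // D3DDECLUSAGE_* value
    std::uint8_t  UsageIndex;
};

enum { DECLTYPE_FLOAT1 = 0, DECLTYPE_FLOAT2 = 1,
       DECLTYPE_FLOAT3 = 2, DECLTYPE_FLOAT4 = 3,
       DECLTYPE_UNUSED = 17 };   // values match D3DDECLTYPE

static unsigned typeSize(std::uint8_t t)
{
    switch (t) {
    case DECLTYPE_FLOAT1: return 4;
    case DECLTYPE_FLOAT2: return 8;
    case DECLTYPE_FLOAT3: return 12;
    case DECLTYPE_FLOAT4: return 16;
    default:              return 0;   // extend for the other types as needed
    }
}

// Stride for one stream = end of its furthest element; this is the number
// you'd otherwise have to retype at SetStreamSource time.
unsigned strideForStream(const VertexElement* decl, unsigned stream)
{
    unsigned stride = 0;
    for (; decl->Stream != 0xFF; ++decl)     // 0xFF marks D3DDECL_END
        if (decl->Stream == stream) {
            unsigned end = decl->Offset + typeSize(decl->Type);
            if (end > stride) stride = end;
        }
    return stride;
}

// Two vec2s in stream 0 (position + texcoord), as described above.
const VertexElement kDecl[] = {
    { 0,    0, DECLTYPE_FLOAT2, 0, 0, 0 },   // 0 = D3DDECLUSAGE_POSITION
    { 0,    8, DECLTYPE_FLOAT2, 0, 5, 0 },   // 5 = D3DDECLUSAGE_TEXCOORD
    { 0xFF, 0, DECLTYPE_UNUSED, 0, 0, 0 }    // D3DDECL_END
};
```

With that, changing the vertex format for testing means editing the declaration array only; `strideForStream(kDecl, 0)` gives the 16 bytes to pass along.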
That said, some of the stuff behind the .fx system I do like, and an .fx-a-like for OGL would be nice... just preferably without the nonsense I encountered when trying to use it with HLSL.
I'm going to have one, maybe two, more attempts at working out how to debug this (anyone know of a system whereby, each frame, a copy of all the textures and the framebuffer can be grabbed? I'm thinking something like glIntercept but for D3D) and if I can't get it to work then I'm running back to OGL and my own windowing framework; at least I know how that works.
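Failing a ready-made tool, a poor man's version isn't much code: each frame, walk every surface you care about and dump it under a sortable name. In D3D9 the actual readback would be IDirect3DDevice9::GetRenderTargetData into a system-memory surface followed by D3DXSaveSurfaceToFile; here only the bookkeeping half is sketched, with the writer left as a hypothetical callback:

```cpp
#include <cstdio>
#include <functional>
#include <string>
#include <vector>

// Bookkeeping half of a glIntercept-style grabber.  dumpName and
// dumpFrame are my own names; the writer callback stands in for the
// real GetRenderTargetData + D3DXSaveSurfaceToFile readback.
std::string dumpName(int frame, const std::string& what)
{
    char buf[64];
    std::snprintf(buf, sizeof(buf), "frame%04d_%s.dds", frame, what.c_str());
    return buf;
}

void dumpFrame(int frame,
               const std::vector<std::string>& surfaces,
               const std::function<void(const std::string&)>& write)
{
    for (const auto& s : surfaces)
        write(dumpName(frame, s));    // e.g. "frame0003_normals.dds"
}
```

Zero-padded frame numbers keep the dumps sorting correctly in a file browser, which matters more than you'd think when you're flicking through fifty frames of smudges.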
D3D was only winning over OGL for this project because of its:
- R2VB
- DXUT GUI
- Novelty factor of learning a new API
However, ATI now have PBO in their drivers, which nullifies the R2VB advantage, and the novelty factor is all well and good when you don't have a deadline and things aren't being strange at your head. The lack of a GUI for the final product is a tad annoying but, worst comes to worst, I'll just do what the previous person did and embed the renderer into a Win32 app, or make a second window to control it... assuming I can't make one of the free GUIs play ball in a sane amount of time.
My rantings are done, D3D can bite me, I'm going to bed.