Since Borland is such a pain to get working with DirectX 9, and isn't even supported, I've decided to spend this weekend learning DirectX 9 at home with Visual Studio, which I much prefer. That way I have support, IntelliSense, documentation, and actual tutorials to go by, I can build what I need from the ground up in an environment I know works, and then I'll attempt the conversion at work next week.
I've been working through the samples and tutorials provided in the SDK, and so far I've created and initialized the device, displayed a solid blue background, and started on the vertex example to get a triangle on the screen. My next step is to turn that triangle into a square; I believe I'll simply need to add another vertex so that instead of drawing 3 vertices I'm drawing 4. Once I have that working, I'll try to load an image texture onto the quad.
So I'm hoping for some advice and suggestions as I progress. My first question: can someone explain vertices and the coordinate system to me a little better?
The example defines its vertices for the triangle as:
Vertex vertices[] =
{
    { 150.0f,  50.0f, 0.5f, 1.0f, 0xffff0000, }, // x, y, z, rhw, color
    { 250.0f, 250.0f, 0.5f, 1.0f, 0xff00ff00, },
    {  50.0f, 250.0f, 0.5f, 1.0f, 0xff00ffff, },
};
Which is simply using the structure:
struct Vertex
{
    FLOAT x, y, z, rhw; // The transformed position for the vertex
    DWORD color;        // The vertex color
};
How do the x, y, z, and rhw work here? Are x, y, z the pixel coordinates on the screen? So does 150.0f for x simply mean 150 pixels from the top left, which is 0,0?