
Direct3D design questions


Sync Views    139
1) Which of these two methods for drawing basic shapes (e.g. a rectangle for a sprite) is better? a) Have a static vertex buffer that stores all the basic information for triangles, rectangles, etc. and use "D3dDevice->SetTransform(D3DTS_WORLD, &Matrix);" to draw these where I want, e.g.:
//...vertex buffer populated with vertices for a 200*200 square around 0,0

//draw a rotated square in the centre of a 800*600 screen
D3DXMATRIX Rot, Trans;
D3DXMatrixTranslation(&Trans, 400,300,0);
D3DXMatrixRotationZ  (&Rot  , (float)DegToRad(45));
//rotate and THEN translate
D3DXMATRIX World = Rot * Trans;
D3dDevice->SetTransform(D3DTS_WORLD, &World);

D3dDevice->SetStreamSource(0, VextexBufferPrim, 0, sizeof(D3dVertex));
D3dDevice->SetFVF(D3dVertex::FVF);
D3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, RectOffset, 2);
b) Use a dynamic vertex buffer to "directly" draw the shapes
//draw a rotated rectangle in the centre of a 800*600 screen
D3dDevice->SetStreamSource(0, VextexBufferDynamic, 0, sizeof(D3dVertex));
D3dDevice->SetFVF(D3dVertex::FVF);
D3dVertex* vertices;
VextexBufferDynamic->Lock(0, 0, (void**)&vertices, D3DLOCK_DISCARD);
vertices[0] = D3dVertex(400,250, 2, cWhite, 0,0);
vertices[1] = D3dVertex(450,300, 2, cWhite, 1,0);
vertices[2] = D3dVertex(400,350, 2, cWhite, 1,1);
vertices[3] = D3dVertex(350,300, 2, cWhite, 0,1);
VextexBufferDynamic->Unlock();
D3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, 0, 2);
2) Are buttons and other in-game menu controls generally implemented with the Win API, or by drawing the buttons with Direct3D and using custom code to check if a button has been clicked on, etc.?

MJP    19755
1) Definitely a). If you can get away with using transformation matrices, then do it. Making dynamic vertex buffers work well involves being very careful about how and when you lock your buffers, in order to avoid a performance hit.
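For what it's worth, a rough sketch of that lock discipline, reusing the names from your snippet (the writeOffset/numVerts bookkeeping here is made up for illustration; the buffer is assumed to be created with D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY in D3DPOOL_DEFAULT):

D3dVertex* v = NULL;
if (writeOffset + numVerts > MAX_VERTS_IN_BUFFER)
{
    // Buffer is full: DISCARD hands you a fresh block so the GPU can keep reading the old one.
    VextexBufferDynamic->Lock(0, numVerts * sizeof(D3dVertex), (void**)&v, D3DLOCK_DISCARD);
    writeOffset = 0;
}
else
{
    // Still room: NOOVERWRITE promises you won't touch data the GPU may still be using.
    VextexBufferDynamic->Lock(writeOffset * sizeof(D3dVertex), numVerts * sizeof(D3dVertex),
                              (void**)&v, D3DLOCK_NOOVERWRITE);
}
// ...fill v[0..numVerts-1]...
VextexBufferDynamic->Unlock();
D3dDevice->DrawPrimitive(D3DPT_TRIANGLEFAN, writeOffset, 2);
writeOffset += numVerts;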

2) Usually the GUI is custom. Making the Win32 controls work in fullscreen D3D apps can be a pain from a performance standpoint, and most games want their UI to fit with their overall "look" and feel.

[Edited by - MJP on May 16, 2008 4:24:45 PM]

Sync Views    139
OK, with a) is there a way to use a different colour (say I have a greyscale sprite I want to use for both the red and the blue team), or will I need 2 sets in the buffer, one for each colour?

Evil Steve    2017
Quote:
Original post by Sync Views
OK, with a) is there a way to use a different colour (say I have a greyscale sprite I want to use for both the red and the blue team), or will I need 2 sets in the buffer, one for each colour?
Assuming you're not using shaders, you can do the following:

DWORD dwColor = D3DCOLOR_XRGB(255, 0, 0); // Red
pDevice->SetRenderState(D3DRS_TEXTUREFACTOR, dwColor);

pDevice->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
pDevice->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
pDevice->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_TFACTOR);



That sets the "TFactor" stage to the colour you want to use, and then modulates the texture colour with that colour.
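So for the red vs. blue team case it's just a matter of changing the TFACTOR between batches, something like (sketch; the draw calls are whatever you already use):

pDevice->SetRenderState(D3DRS_TEXTUREFACTOR, D3DCOLOR_XRGB(255, 0, 0)); // red team tint
// ...draw the red team's sprites...
pDevice->SetRenderState(D3DRS_TEXTUREFACTOR, D3DCOLOR_XRGB(0, 0, 255)); // blue team tint
// ...draw the blue team's sprites...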

Sync Views    139
Is it practical to have lots of smaller buffers (e.g. one for each object type) rather than having one large one, or does that really start to hit performance when switching between several different buffers during rendering?

Evil Steve    2017
Quote:
Original post by Sync Views
Is it practical to have lots of smaller buffers (e.g. one for each object type) rather than having one large one, or does that really start to hit performance when switching between several different buffers during rendering?
The number of buffers doesn't matter so much; it's the number of draw calls you make that does. You shouldn't be making more than about 500 DrawPrimitive (or DrawIndexedPrimitive) calls per frame, and putting more objects into one vertex buffer means you can draw multiple objects in one call.

However, for simplicity it's perfectly fine to stick with one vertex buffer per object. But if you find you need more performance, you'll want multiple objects per buffer.
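To illustrate what "multiple objects in one call" looks like (just a sketch; WriteQuad, sprites and numSprites are made-up names): you pre-transform each quad on the CPU, write them all into one buffer, and use a triangle list so the quads stay separate:

D3dVertex* v = NULL;
VextexBufferDynamic->Lock(0, 0, (void**)&v, D3DLOCK_DISCARD);
for (int i = 0; i < numSprites; ++i)
    WriteQuad(&v[i * 6], sprites[i]);   // 6 vertices per quad (two triangles), positions already in screen space
VextexBufferDynamic->Unlock();

D3dDevice->SetStreamSource(0, VextexBufferDynamic, 0, sizeof(D3dVertex));
D3dDevice->SetFVF(D3dVertex::FVF);
D3dDevice->DrawPrimitive(D3DPT_TRIANGLELIST, 0, numSprites * 2);   // one call for the whole batch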

Sync Views    139
OK. Before I start writing any code I just want to check that this plan is a good one:

-I'm making a basic 2D space shooter just to learn how the basic aspects of Direct3D work (2D because I fail at making models but can draw half-decent sprites lol).

-The current plan is to call DrawPrimitive/SetTexture for every instance of each object. I can only see two ways of doing this, and this seems the better of the two:
a) Rebuild the vertex buffer every frame, doing all the transformations myself rather than with matrices, so every instance is in the buffer and can be drawn at once - seems a bad idea to me
b) Have a vertex buffer containing the vertices for that object and call DrawPrimitive for each instance with the appropriate transform matrix each time - a lot of DrawPrimitive calls, but I suppose it's better than doing all the maths on the CPU and rebuilding the entire buffer every frame
Quote:

and putting more objects into one vertex buffer means you can draw multiple objects in one call.

I only see the above two methods. This seems like a method c) but I have no idea how it's meant to work :(

-Background objects will be batched into static buffers by texture (so everything with the star texture, then everything with the blueplanet texture). This can be done at game start and will just need a few DrawPrimitive calls each frame.

A few other points:
If I call DrawPrimitive on a set of vertices with a transform matrix that is clearly out of view (e.g. the view is 800*600 from 0,0 and the matrix translates the vertices by 2000,5000 from the origin), will DirectX just discard it, or should I check the location of the object myself and not call the DirectX functions at all if it's clear the object is completely offscreen?

MJP    19755
Any particular reason why you don't want to use ID3DXSprite for this? It will handle batching of sprites for you.

Quote:
Original post by Sync Views
A few other points:
If I call DrawPrimitive on a set of vertices with a transform matrix that is clearly out of view (e.g. the view is 800*600 from 0,0 and the matrix translates the vertices by 2000,5000 from the origin), will DirectX just discard it, or should I check the location of the object myself and not call the DirectX functions at all if it's clear the object is completely offscreen?


The vertices will get transformed on the GPU, and will be clipped when they're determined to be off-screen. It's better if you can cull that quad yourself, as you'll save yourself the time of a DrawPrimitive call.
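A simple bounds check is enough for that (sketch only; the object fields are made up, and 800x600 is your view size):

// Skip the SetTransform/DrawPrimitive work entirely when the sprite can't touch the screen.
bool OnScreen(const Object& obj)
{
    return obj.x + obj.halfWidth  >= 0 && obj.x - obj.halfWidth  <= 800 &&
           obj.y + obj.halfHeight >= 0 && obj.y - obj.halfHeight <= 600;
}

if (OnScreen(obj))
    DrawSprite(obj);   // sets the world matrix and calls DrawPrimitive as before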

Sync Views    139
Quote:

Any particular reason why you don't want to use ID3DXSprite for this? It will handle batching of sprites for you.

Once I've got something working I intend to start using 3D graphics, which a mate of mine said he would make IF I had a game built to use them in, with an angled view for the game rather than a plain top-down view (e.g. like Space Empires V combat uses). I'm still going to need to solve the same problems then, only about how best to render lots of small models rather than small sprites...

Anyway, I've got the sprite stuff working now by having a 1*1 quad around 0,0 in the vertex buffer and transforming it as needed. The question now is about drawing stuff that I can't see a way to "predefine".

1) For beam lasers I want the texture to repeat along the quad between the start and end. This means the u,v's need to be different pretty much every frame though.
2) Drawing line lists. I can't really see a way to transform a list of, say, 11 vertices to get my line to follow the right path.

Are dynamic buffers the way to go for these things, rebuilding them only when I actually need to (e.g. if any of the data they're built from has changed)?

EDIT: I'm still not sure what the best way of reducing the number of calls is:
a) Batch everything using the same texture into one dynamic buffer, doing the world transforms myself, and draw the whole lot with a single DrawPrimitive call
b) Use the static vertex buffer and do everything with a ton of DrawPrimitive calls (e.g. one for every sprite, because they need different transforms...)

[Edited by - Sync Views on May 17, 2008 4:14:03 AM]

Sync Views    139
So am I basically stuck with calling DrawPrimitive once for every texture in every object, or is there some way to batch several objects, each with their own transform matrix?

MJP    19755
If you want to batch sprites that use different textures, you'll have to combine those textures into a single texture atlas and handle the addressing. Also, if you want to render multiple quads with a single DrawPrimitive call but without using a dynamic vertex buffer, you should look into the various methods of instancing available in D3D9. There's a sample in the SDK, and also a good one on Humus's page.

However I really don't think you're going to need to worry about this too much...unless you're rendering 1000's of sprites a frame I doubt you're going to incur too much CPU overhead. In 99% of cases you should be fine with the batching ID3DXSprite offers, which batches together sprites of the same texture.
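To give an idea of the addressing an atlas involves (sketch only; the cell layout variables are invented), each sprite just uses a sub-rectangle of the texture's 0..1 UV range:

// Atlas of square cells, cellsPerRow across, each cell cellSize wide in UV space (e.g. 0.25f for a 4x4 atlas).
float u0 = (cell % cellsPerRow) * cellSize;
float v0 = (cell / cellsPerRow) * cellSize;
float u1 = u0 + cellSize;
float v1 = v0 + cellSize;

vertices[0] = D3dVertex(x0, y0, z, cWhite, u0, v0);
vertices[1] = D3dVertex(x1, y0, z, cWhite, u1, v0);
vertices[2] = D3dVertex(x1, y1, z, cWhite, u1, v1);
vertices[3] = D3dVertex(x0, y1, z, cWhite, u0, v1);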

Sync Views    139
I don't mean batching different textures, just objects in different places.
E.g. is there a way I could draw these quads in a single DrawPrimitive call, but also without having to build a new vertex/index buffer every frame?

x     y     rot
100   50    30
-50   200   300
200   75    180
180   70    100


[Edited by - Sync Views on May 18, 2008 4:16:05 AM]

superpig    1825
Use a custom vertex shader, and build a vertex buffer that has multiple rectangles in it. Each vertex of each rectangle would include an integer to tell you which "number" rectangle it was a part of, and then you could use that number to choose which set of transform params to use.



Sync Views    139
OK... I have no idea where to start or how what you're saying works, because I thought vertex shaders were just about the colours of something, not where something is?

So you're saying I can use a custom shader to send, say, 20 different transform matrices and have them transform different quads, rather than one matrix that affects all the quads in the buffer I'm drawing?

superpig    1825
Quote:
Original post by Sync Views
OK... I have no idea where to start or how what you're saying works, because I thought vertex shaders were just about the colours of something, not where something is?
Pixel shaders can only affect the colour (and depth), but vertex shaders can mess with any aspect of the vertex. Everything that the fixed-function transform and lighting pipeline does - SetTransform/SetLight stuff - can be done in vertex shaders.

Quote:
So you're saying I can use a custom shader to send, say, 20 different transform matrices and have them transform different quads, rather than one matrix that affects all the quads in the buffer I'm drawing?
Yep, pretty much.

Sync Views    139
OK, well I've started looking at how vertex shaders work and how to create and use them.

What I don't get is how I'm meant to add "custom" data to a vertex, as there are no "custom" entries for the D3DDECLUSAGE_* types.

D3DVERTEXELEMENT9 vShaderElements[] =
{
{0, 0, D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},//xyz - what's the deal with -1,1 ranges? Should they always use that? Because I've been using like -1000 to 1000 for large stuff like backgrounds
{0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},//uv
D3DDECL_END()
};



It looks as if I can only draw as many vertices as are in the buffer (e.g. I can't draw a quad for every matrix using the same 4 input vertices). Should I just create like 100 quads in the buffer or something, so I'm pretty sure I'm unlikely to run out of quads and have to split drawing for that texture into 2 blocks? Also, how fast is sending matrices and stuff for use in a vertex shader?

superpig    1825
Quote:
Original post by Sync Views
What I don't get is how I'm meant to add "custom" data to a vertex, as there are no "custom" entries for the D3DDECLUSAGE_* types.
From the docs:
Quote:
D3DDECLUSAGE_TEXCOORD can be used for user-defined fields (which don't have an existing usage defined).


-1..+1 ranges are only involved if the data type is one of the normalized formats (e.g. D3DDECLTYPE_UBYTE4N). None of the float formats are normalized.

Quote:
It looks as if I can only draw as many vertices as are in the buffer (e.g. I can't draw a quad for every matrix using the same 4 input vertices). Should I just create like 100 quads in the buffer or something, so I'm pretty sure I'm unlikely to run out of quads and have to split drawing for that texture into 2 blocks?
Yes, that's the idea. If you happen to have > 100 quads, then you do it in two batches; but this won't happen often if you've got enough quads in the buffer.

Quote:
Also, how fast is sending matrices and stuff for use in a vertex shader?
It's pretty fast. However, if the quads are always the same size and always have the same texture coordinates, then you don't need to send a full matrix; if you only need to send the position on screen, that's just an x/y pair - two float values. Packing in rotations is a little more involved, but still, if this is 2D then you should only need a 2x3 matrix, which you can pack into two registers. As a general rule, the smaller the size of the data you're sending to the vertex shader, the faster it will be to send.

You could also look into the possibility of preserving data from frame to frame - you don't necessarily have to reupload /all/ the params again every time, you might be able to get away with only reuploading those that have changed.
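On the C++ side, uploading the per-quad data is one SetVertexShaderConstantF call per batch (sketch only; the register number, the packed layout, MAX_QUADS_PER_BATCH and the quads[] fields are assumptions that just have to match your shader):

// One float4 per quad: x, y, rotation, alpha, starting at register c8 (arbitrary choice here).
float perQuad[MAX_QUADS_PER_BATCH][4];
for (int i = 0; i < numQuads; ++i)
{
    perQuad[i][0] = quads[i].x;
    perQuad[i][1] = quads[i].y;
    perQuad[i][2] = quads[i].rotation;
    perQuad[i][3] = quads[i].alpha;
}
D3dDevice->SetVertexShaderConstantF(8, &perQuad[0][0], numQuads);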

Sync Views    139
Does writing my own vertex shader bypass the world, projection and view transforms, or is it just the first two? (I'm not completely sure what the difference between the last two is anyway...)

superpig    1825
Quote:
Original post by Sync Views
Does writing my own vertex shader bypass the world, projection and view transforms, or is it just the first two?
It bypasses all three. It also bypasses the Direct3D lighting engine (D3DLIGHT9, SetLight/LightEnable).

Quote:
(I'm not completely sure what the difference between the last two is anyway...)
Take a look at the SDK docs - DirectX Graphics -> Programming Guide -> Getting Started -> Transforms.

Loosely speaking, the view transform is used to account for the camera's position and orientation in the world, while the projection transform is used to simulate the camera's "lens."
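For the 2D case here, the usual setup is something like this (sketch; cameraX/cameraY are whatever scrolling offset you use):

D3DXMATRIX view, proj;
// In 2D the "camera" is just a translation opposite to where the camera sits.
D3DXMatrixTranslation(&view, -cameraX, -cameraY, 0.0f);
// Orthographic projection: one world unit per pixel, y increasing downwards, z in 0..1.
D3DXMatrixOrthoOffCenterLH(&proj, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
D3dDevice->SetTransform(D3DTS_VIEW, &view);
D3dDevice->SetTransform(D3DTS_PROJECTION, &proj);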

Sync Views    139
So I need my shader to:
-Use the appropriate world transform according to the vertex's quad index
-Use the current view transform (so camera xyz, angle)
-Use the current projection transform (perspective?)

Also, all the 2D D3D stuff I've seen has drawn stuff in the order it's wanted.
Is there a reason people do it that way rather than:
-Setting a projection with no perspective
-Drawing quads using the z axis so the graphics card can decide if things are above or below each other, thus allowing me to draw everything with a single texture in one go.

superpig    1825
Quote:
Original post by Sync Views
So I need my shader to:
-Use the appropriate world transform according to the vertex's quad index
-Use the current view transform (so camera xyz, angle)
-Use the current projection transform (perspective?)
Correct. Given that this is 2D, both the world and view transforms can probably just be 2x3 matrices; if you pack that into two 4-vector constants, that leaves you two components free that you could pack a Z offset and one other piece of information into (alpha value?). Projection would probably want to be an orthographic projection rather than a perspective projection.
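Unpacking that in the shader might look roughly like this (HLSL sketch only; the register layout and the 100-quad limit are assumptions):

// Two float4 constants per quad: row 0 = (m00, m01, translateX, zOffset), row 1 = (m10, m11, translateY, alpha).
float4 QuadRow0[100] : register(c10);
float4 QuadRow1[100] : register(c110);

float4 TransformQuadVertex(float2 pos, int quad)
{
    float4 r0 = QuadRow0[quad];
    float4 r1 = QuadRow1[quad];
    float x = dot(float3(pos, 1.0f), r0.xyz);   // 2x3 world transform, row 0
    float y = dot(float3(pos, 1.0f), r1.xyz);   // 2x3 world transform, row 1
    return float4(x, y, r0.w, 1.0f);            // r0.w carries the Z offset; view/projection still to be applied
}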

Quote:

Also, all the 2D D3D stuff I've seen has drawn stuff in the order it's wanted.
Is there a reason people do it that way rather than:
-Setting a projection with no perspective
-Drawing quads using the z axis so the graphics card can decide if things are above or below each other, thus allowing me to draw everything with a single texture in one go.
Using the Z buffer means you're allocating a Z buffer (more memory usage) and in theory it'd be possible for a card to render non-Z-tested stuff slightly faster than Z-tested stuff (because it doesn't have to do the Z-test, natch). In practice, though, the extra memory usage isn't an issue and the speed difference is actually probably zero, because non-Z-tested stuff is often treated as a Z-test of 'always pass'. Being able to use a single texture and skipping the sort-by-depth is a bigger win.
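If you do go the Z route, the states involved are just these (sketch; assumes the device was created with EnableAutoDepthStencil so a depth buffer exists):

D3dDevice->SetRenderState(D3DRS_ZENABLE, D3DZB_TRUE);
D3dDevice->SetRenderState(D3DRS_ZWRITEENABLE, TRUE);
// Clear both the colour target and the depth buffer each frame.
D3dDevice->Clear(0, NULL, D3DCLEAR_TARGET | D3DCLEAR_ZBUFFER, D3DCOLOR_XRGB(0, 0, 0), 1.0f, 0);
// Each sprite then gets a z value in 0..1 to control what ends up on top.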

Sync Views    139
OK, well I've written this vertex shader, now just to find out how to actually use it...


struct VS_INPUT
{
    vector position : POSITION;
    float2 uv       : TEXCOORD0;
    float  quad     : TEXCOORD1;
};
struct VS_OUTPUT
{
    vector position : POSITION;
    float2 uv       : TEXCOORD0;
    vector diffuse  : COLOR;
};

matrix WorldMatrix[100];//max 100 quads per batch
matrix ViewMatrix;
matrix ProjMatrix;

VS_OUTPUT Main(VS_INPUT input)
{
    VS_OUTPUT output = (VS_OUTPUT)0;

    //move the vertex to clip space: world, then view, then projection
    vector position = input.position;
    position = mul(position, WorldMatrix[(int)input.quad]);
    position = mul(position, ViewMatrix);
    position = mul(position, ProjMatrix);

    output.position = position;
    output.diffuse = float4(1.0f, 1.0f, 1.0f, 1.0f);//white
    output.uv = input.uv;
    return output;
}

And is this the right vertex declaration for that?

struct VertexQuad
{
float x, y, z;//float3 position0
float u,v; //float2 texcords0
float quad; //int texcords1
VertexQuad(const float x, const float y, const float z, const float u, const float v, const float quad);
};
...
D3DVERTEXELEMENT9 decl[] =
{
{0, 0 , D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},//x,y,z
{0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},//u,v
{0, 24, D3DDECLTYPE_FLOAT1, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},//quad
D3DDECL_END()
};





[Edited by - Sync Views on May 21, 2008 1:14:07 AM]

superpig    1825
Not quite. The offset and data type for the third element are wrong. (Failing that, the type of the third component of the structure is wrong).
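For reference, one way to make the declaration agree with the struct as written (keeping the quad index as a float, which puts it at byte offset 20 after the 12-byte position and 8-byte UV) would be:

D3DVERTEXELEMENT9 decl[] =
{
    {0, 0 , D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0},//x,y,z
    {0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0},//u,v
    {0, 20, D3DDECLTYPE_FLOAT1, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 1},//quad index
    D3DDECL_END()
};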
