noodleBowl

Loading texture issue help

7 posts in this topic

I currently have a sprite font texture that is a non-power-of-two texture. On disk its dimensions are 256 x 96.

 

I have heard that DirectX 9 will attempt to resize it to a power-of-two texture. To prevent this from happening, I'm using this code:

HRESULT hr = D3DXCreateTextureFromFileEx(device,
                texture.c_str(),
                D3DX_DEFAULT_NONPOW2, // width: keep the file's width
                D3DX_DEFAULT_NONPOW2, // height: keep the file's height
                D3DX_DEFAULT,         // mip levels
                0,                    // usage
                D3DFMT_UNKNOWN,       // take the pixel format from the file
                D3DPOOL_MANAGED,
                D3DX_DEFAULT,         // filter
                D3DX_DEFAULT,         // mip filter
                0,                    // no color key
                NULL,                 // source info
                NULL,                 // palette
                &texture);

Which should load the texture without resizing it, leaving it at 256 x 96. But when it is rendered it looks distorted; look specifically at the 'Y', '!', and '4' characters. Any ideas why this may be happening?

 

I have included the original image and screen capture of the rendered texture

 

Original Image

[attachment=18279:origSpriteFont.png]

 

Distorted render

[attachment=18280:renderedSpriteFont.PNG]

 


Your artifact happens at the top-left-to-bottom-right diagonal and at the bottom. I rather suspect something wrong with how you set up your quad vertices than a pow2 glitch. Things to try:

 

- Check with a pow2 texture first (a photograph/test picture rather than a bitmap font, just something that doesn't have empty space) to exclude a pow2 glitch

- Make sure you have considered this (a classic D3D9 pitfall): Directly Mapping Texels to Pixels (Direct3D 9).

 

Also show us how you set up your vertices (and what transformations you use).

Edited by unbird

Your artifact happens at the top-left-to-bottom-right diagonal and at the bottom. I rather suspect something wrong with how you set up your quad vertices than a pow2 glitch. Things to try:

 

- Check with a pow2 texture first (a photograph/test picture rather than a bitmap font, just something that doesn't have empty space) to exclude a pow2 glitch

- Make sure you have considered this (a classic D3D9 pitfall): Directly Mapping Texels to Pixels (Direct3D 9).

 

Also show us how you set up your vertices (and what transformations you use).

 

I used this texture, which is 256 x 256, and everything seems to be normal.

 

Original 256 x 256 Texture

[attachment=18306:picture.png]

 

Screen Cap

[attachment=18307:Capture.PNG]

 

So this must mean that it is not a pow2 issue, correct? Which I believe is the case, since I read that link and subtracted 0.5 from my X and Y positions and it drew with no distortion.

 

Screen Cap of same texture with -0.5 offset

[attachment=18308:Capture2.PNG]

 

Assuming that was the correct fix, does this mean that I should apply the -0.5 offset only when I set up my vertices, or should it be before I pass in the X and Y positions? Is there any impact on my position calculations (e.g. collision detection) if I were to do it one way over the other?

 

This is my current function to store my vertex data (I have no transformations applied):

void vBatcher::draw(float x, float y, float width, float height, LPDIRECT3DTEXTURE9 texture)
{

	#pragma region Vertex Data

	//Vertex 0
	quadData.verts[0].x = x;
	quadData.verts[0].y = y;
	quadData.verts[0].z = 1.0f;
	quadData.verts[0].rhw = 1.0f;
	quadData.verts[0].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[0].u = 0.0f;
	quadData.verts[0].v = 0.0f;

	//Vertex 1
	quadData.verts[1].x = x + width;
	quadData.verts[1].y = y;
	quadData.verts[1].z = 1.0f;
	quadData.verts[1].rhw = 1.0f;
	quadData.verts[1].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[1].u = 1.0f;
	quadData.verts[1].v = 0.0f;


	//Vertex 2
	quadData.verts[2].x = x + width;
	quadData.verts[2].y = y + height;
	quadData.verts[2].z = 1.0f;
	quadData.verts[2].rhw = 1.0f;
	quadData.verts[2].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[2].u = 1.0f;
	quadData.verts[2].v = 1.0f;

	//Vertex 3
	quadData.verts[3].x = x;
	quadData.verts[3].y = y + height;
	quadData.verts[3].z = 1.0f;
	quadData.verts[3].rhw = 1.0f;
	quadData.verts[3].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[3].u = 0.0f;
	quadData.verts[3].v = 1.0f;

	#pragma endregion

	#pragma region texture

	quadData.texture = texture;

	#pragma endregion

	drawData.push_back(quadData);
}


Assuming that was the correct fix, does this mean that I should apply the -0.5 offset only when I set up my vertices, or should it be before I pass in the X and Y positions?


Doesn't really matter; that's rather a design question for your batcher. I for one wouldn't hardwire it: how about providing dedicated functions, or an optional bool parameter like [tt]pixelTexelCorrection[/tt], for this?

As an aside: if you weren't using pretransformed vertices but an orthographic projection instead, you wouldn't even need to do the above, but could instead offset the pixels globally. This might be worth considering anyway. Not only do you want to get familiar with transformations, it also saves you bandwidth for your vertices (for 2D you don't even need a z coordinate).
 

Is there any impact on my position calculations (e.g. collision detection) if I were to do it one way over the other?


Collision (or physics) should not really depend on how you render anyway, so no. Your game logic (model) operates decoupled in its own "game coordinate system"; rendering is only visualization.

Also: this pixel-texel offset is only sensible for pixel-perfect rendering (e.g. text), i.e. when using integer (screen) coordinates. One can "render at subpixel positions" though, and combined with linear texture filtering, objects (sprites) will appear smooth. Depends on your goal and artwork, I guess. For a retro look you probably want such pixel/integer snapping.
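A tiny sketch of that snapping for the pretransformed-vertex case (the helper name is made up): round the position to the nearest whole pixel, then apply the half-pixel offset when filling the vertices.

```cpp
#include <cmath>

// Hypothetical helper: snap a subpixel position to the nearest integer
// pixel, then subtract the 0.5 that D3D9 expects for exact
// texel-to-pixel mapping with pretransformed (RHW) vertices.
inline float snapForD3D9(float pos)
{
    return std::floor(pos + 0.5f) - 0.5f;  // round, then half-pixel offset
}
```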

Doesn't really matter; that's rather a design question for your batcher. I for one wouldn't hardwire it: how about providing dedicated functions, or an optional bool parameter like [tt]pixelTexelCorrection[/tt], for this?


As an aside: if you weren't using pretransformed vertices but an orthographic projection instead, you wouldn't even need to do the above, but could instead offset the pixels globally. This might be worth considering anyway. Not only do you want to get familiar with transformations, it also saves you bandwidth for your vertices (for 2D you don't even need a z coordinate).
 

 

Eventually I want to create a 2D camera. I assume I would need an orthographic projection, or should I be using a perspective projection for this?

Can you provide steps on how these are set up, as I am not sure how this should be done properly?

 

Since we are on this subject, could you also go over transforms? I understand what they do and what they are for, but I cannot get them to work. I assume this is because of my render/draw function or the way my CUSTOMFVF is set up.

 

Also, you say I don't need my Z coord, which is true unless I want to support 3D models in the future. But assuming I will only be doing 2D, I would think that it is not as simple as just removing the z float from my vertex structure. What changes do I need to make? Also, if I did decide to keep it for 3D use, would this affect the type of projection that I end up using?

 

Here is my current vertex structure and CUSTOMFVF:

//Custom FVF
CUSTOMFVF = D3DFVF_XYZRHW | D3DFVF_TEX1 | D3DFVF_DIFFUSE;

//Vertex structure used
struct vertex
{
	float x;
	float y;
	float z;
	float rhw;
	D3DCOLOR color;
	float u;
	float v;
};

Here is my complete code, just for good measure:

main.cpp - http://pastebin.com/qrYP1urN

vBatcher.cpp - http://pastebin.com/1J8dwFS3

vBatcher.h - http://pastebin.com/88D1DrWZ

Edited by noodleBowl
It's actually quite scary how similar your font texture is to mine: same font, same shadowing, same layout. And I've been having an almost identical issue.

Bizarre.

Eventually I want to create a 2D camera. I assume I would need an orthographic projection, or should I be using a perspective projection for this?

Nope, perspective is for 3D.
 

Can you provide steps on how these are set up, as I am not sure how this should be done properly?

Sorry, not inclined to give a full tutorial (and you definitely need one; a forum post is just not enough). A good starting point, though, is the MSDN entry about Transforms. But it essentially boils down to this: your vertex positions get transformed (duh?) to finally be in screen space (pixel coordinates). With the so-called fixed-function pipeline, the individual transformation stages are set with IDirect3DDevice9::SetTransform and IDirect3DDevice9::SetViewport. Alternatively, you use a vertex shader to perform the transformation calculation yourself. Which brings us to...
 

Since we are on this subject, could you also go over transforms? I understand what they do and what they are for, but I cannot get them to work. I assume this is because of my render/draw function or the way my CUSTOMFVF is set up.

Precisely. You're currently using pretransformed vertices (D3DFVF_XYZRHW). Such vertex positions won't be changed by SetTransform (or a vertex shader) at all; they're supposed to be in screen space already.
 

Also, you say I don't need my Z coord, which is true unless I want to support 3D models in the future. But assuming I will only be doing 2D, I would think that it is not as simple as just removing the z float from my vertex structure. What changes do I need to make?

Hmmm, maybe I've gone a bit too far now. It doesn't look like FVF supports 2D coords. You could switch to vertex declarations (they're more flexible, but less easy to set up). I also wonder if the fixed-function pipeline supports 2D coords at all. Use D3DFVF_XYZ for now. (*)
 

Also, if I did decide to keep it for 3D use, would this affect the type of projection that I end up using?

For 3D one can use both perspective and orthographic. For 2D, only orthographic makes sense IMO. As said, stick with 3D and set e.g. z = 0.
 

Here is my current vertex structure and CUSTOMFVF...


Change it to
CUSTOMFVF = D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_DIFFUSE;
struct vertex
{
    float x;
    float y;
    float z;
    D3DCOLOR color;
    float u;
    float v;
};
It still won't be that simple, though. Your first shot might result in no rendering. Get used to it (if you haven't already); that's what the first steps with a low-level graphics API are all about. Know what the individual transformations mean (world, view, projection) and use PIX/graphics debugger. Then make a camera, e.g. with a zoom/pan functionality.

(*) In the long run you will want to switch to proper vertex declarations and vertex and pixel shaders. Fixed function is maybe ok to get familiar with transformations, but as soon as you want to play with more complex texturing you want to use shaders.


I'm really a forum addict. Even after 10 hours of exhausting work I still help around here :P

 

Can you provide steps on how these are set up, as I am not sure how this should be done properly?

Sorry, not inclined to give a full tutorial (and you definitely need one; a forum post is just not enough). A good starting point, though, is the MSDN entry about Transforms. But it essentially boils down to this: your vertex positions get transformed (duh?) to finally be in screen space (pixel coordinates). With the so-called fixed-function pipeline, the individual transformation stages are set with IDirect3DDevice9::SetTransform and IDirect3DDevice9::SetViewport. Alternatively, you use a vertex shader to perform the transformation calculation yourself. Which brings us to...
 

Since we are on this subject, could you also go over transforms? I understand what they do and what they are for, but I cannot get them to work. I assume this is because of my render/draw function or the way my CUSTOMFVF is set up.

Precisely. You're currently using pretransformed vertices (D3DFVF_XYZRHW). Such vertex positions won't be changed by SetTransform (or a vertex shader) at all; they're supposed to be in screen space already.

 

If you do give me a tutorial, how will I learn? :P

 

Anyway, I have been reading up on this all day, since this is a very annoying issue. But there are some things I'm unsure about.

Here is what I understand so far, so please correct anything if I'm wrong.

 

[+]--------------------------------------------------------[+]

 

First let's start with the transforms, as everything is based on this.

 

So we have 4 types of spaces:

Local: How an object is positioned in its own space.

World: Where everything is in the world. A model, or a quad in my case, is transformed here from its original local space.

View: The space that belongs to the current view (camera). All items in world space are adjusted based on this space.

Projection: How the camera functions. This space handles the field of view (FoV) and the near and far clipping planes.

 

If we use this simplified sample code:

//Somewhere at the top where the variables are

//UN-transformed vertices
DWORD CUSTFVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

void initDirectX()
{
    /* Create the DirectX 9 device here, etc. */

    //Set the render states we use
    device->SetRenderState(D3DRS_LIGHTING, FALSE);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
}

void render()
{
    static float x = 0.0f;
    static float y = 0.0f;
    static float z = 0.0f;
    static float w = 32.0f;
    static float h = 32.0f;

    vertex vertices[] =
    {
        { x,   y,   z, D3DCOLOR_XRGB(0, 0, 255), },
        { x+w, y,   z, D3DCOLOR_XRGB(0, 255, 0), },
        { x+w, y+h, z, D3DCOLOR_XRGB(255, 0, 0), },
        { x,   y,   z, D3DCOLOR_XRGB(0, 0, 255), },
        { x,   y+h, z, D3DCOLOR_XRGB(0, 255, 0), },
        { x+w, y+h, z, D3DCOLOR_XRGB(255, 0, 0), }
    };

    //Note: for illustration only - creating the vertex buffer every
    //frame like this leaks; in real code create it once and Release it
    device->CreateVertexBuffer(6 * sizeof(vertex),
                               0,
                               CUSTFVF,
                               D3DPOOL_MANAGED,
                               &vB,
                               NULL);

    VOID* pVoid;
    vB->Lock(0, 0, (void**)&pVoid, 0);
    memcpy(pVoid, vertices, sizeof(vertices));
    vB->Unlock();

    device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);
    device->BeginScene();

    device->SetFVF(CUSTFVF);
    device->SetStreamSource(0, vB, 0, sizeof(vertex));
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}

We will see this rendered on the screen:

[attachment=18330:Capture.PNG]

 

Which makes sense, because the camera is currently not set up, so it points at position (0,0).

[attachment=18331:cam.png]

 

Here is where I am unsure of some things.

 

Things I understand:

1. DirectX uses a left-hand system BUT

2. D3DFVF_XYZRHW vertices are already transformed

3. When using D3DFVF_XYZRHW everything is based at the top left 0,0

4. How transformations work at the local and world space level

5. Orthographic projections convert 3D space to 2D space

 

Things I don't know / don't understand:

1. How to manipulate the view space

2. How to manipulate and setup the projection space

 

Other questions:

1. Are these the only things I need to do for orthographic projections? The projections seem to work. The only thing that does not work is rotation; it is rotating around the world origin. How can I rotate in local space?

 

Working code with no rotation involved

/* Clear and begin scene code */

//Sets up an orthographic projection where the origin is the center of the screen
D3DXMATRIX out;
D3DXMatrixOrthoLH(&out, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/*Draw primitive and endScene code */


Alternative orthographic projection code (based on top left as (0,0)):

/* Clear and begin scene code */

//Sets up an orthographic projection based on the top left (0,0) using a 800 x 600 screen
D3DXMATRIX out;
D3DXMatrixOrthoOffCenterLH(&out, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/*Draw primitive and endScene code */

Code when trying to rotate (works, but rotates around the (0,0) of the projection):

How do I get it to rotate around the local space so the object is rotated about its center?

/* Clear and begin scene code */

D3DXMATRIX rot;
D3DXMatrixRotationX(&rot, D3DXToRadian(45.0f));
device->SetTransform(D3DTS_WORLD, &rot);

D3DXMATRIX out;
D3DXMatrixOrthoOffCenterLH(&out, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/*Draw primitive and endScene code */
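One standard approach I've seen for this (sketch, so correct me if I'm off): translate the quad's center to the origin, rotate, translate back, i.e. W = T(-cx,-cy) * Rz * T(cx,cy) in D3D's row-vector order; D3DXMatrixTransformation2D can also build such a matrix given a rotation center. Note that for a 2D quad in the X/Y plane the rotation would be about the Z axis (D3DXMatrixRotationZ), not X. The bare math:

```cpp
#include <cmath>

// Rotate point (px, py) by 'rad' radians about center (cx, cy):
// shift the center to the origin, rotate, shift back. This is the
// same composition as W = T(-cx,-cy) * Rz(rad) * T(cx,cy) with
// D3D9's row-vector matrices.
void rotateAboutCenter(float px, float py, float cx, float cy,
                       float rad, float& ox, float& oy)
{
    const float c = std::cos(rad), s = std::sin(rad);
    const float dx = px - cx, dy = py - cy;   // translate to origin
    ox = cx + dx * c - dy * s;                // rotate, translate back
    oy = cy + dx * s + dy * c;
}
```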

2. In order to make a movable 2D camera, would I be moving everything based on the camera's position, like this?

 

Is there a better way?

//Transform everything in the world based on the camera's position
D3DXMATRIX tran;
D3DXMatrixTranslation(&tran, Object.x - cam.x, Object.y - cam.y, 0.0f);
device->SetTransform(D3DTS_WORLD, &tran);

3. What would I do to rotate models or quads on an individual basis? Surely it can't be:

//Perform a rotation on the X-Axis
D3DXMATRIX rot;
D3DXMatrixRotationX(&rot, D3DXToRadian(45.0f));
device->SetTransform(D3DTS_WORLD, &rot);

This would rotate everything the transform applies to. I would think doing something like:

1. If we need to rotate a quad/model, use the rotation transform; otherwise do not do the transform

2. Draw the quad

3. Move on to the next quad to draw; check to see if it needs to be rotated, etc.

would be highly inefficient.

 

4. The original problem is fixed by setting a -0.5 pixel offset. So now that I have an orthographic projection, how do I set a global offset? Or, because I use D3DFVF_XYZ for my vertices, can I perform it with transformations?

 

Something like this:

D3DXMATRIX tran;
D3DXMatrixTranslation(&tran, Object.x - 0.5f, Object.y - 0.5f, 0.0f);
device->SetTransform(D3DTS_WORLD, &tran);

How is this applied to multiple objects?

-------------------

 

I'll add more as I think of it, but for now I got to grab some food!

Edited by noodleBowl
