Loading texture issue help

6 comments, last by noodleBowl 10 years, 6 months ago

I currently have a sprite font texture that is a non-power-of-two texture. On disk its dimensions are 256 x 96.

I have heard that DirectX 9 will attempt to resize it to a power-of-two texture. To prevent this from happening I'm using this code:


// Note: 'texture' here is the file name string; the output
// LPDIRECT3DTEXTURE9 needs to be a separate variable.
HRESULT hr = D3DXCreateTextureFromFileEx(device,
                texture.c_str(),      // source file
                D3DX_DEFAULT_NONPOW2, // width: keep as stored in the file
                D3DX_DEFAULT_NONPOW2, // height: keep as stored in the file
                D3DX_DEFAULT,         // mip levels
                0,                    // usage
                D3DFMT_UNKNOWN,       // format: take it from the file
                D3DPOOL_MANAGED,      // pool
                D3DX_DEFAULT,         // filter
                D3DX_DEFAULT,         // mip filter
                0,                    // color key (0 = disabled)
                NULL,                 // source info
                NULL,                 // palette
                &fontTexture);        // out: the LPDIRECT3DTEXTURE9

Which should load the texture without resizing it, leaving it at 256 x 96. But when it is rendered it looks distorted, specifically if you look at the 'Y', '!', and '4' characters. Any ideas why this may be happening?

I have included the original image and screen capture of the rendered texture

Original Image

[attachment=18279:origSpriteFont.png]

Distorted render

[attachment=18280:renderedSpriteFont.PNG]


Your artifact happens along the top-left-to-bottom-right diagonal and at the bottom. I rather suspect something wrong with how you set up your quad vertices than a pow2 glitch. Things to try:

- Check with a pow2 texture first (a photograph/test picture rather than a bitmap font, just something that doesn't have empty space) to rule out a pow2 glitch

- Make sure you have considered this (a classic D3D9 pitfall): Directly Mapping Texels to Pixels (Direct3D 9). A sketch of the fix follows below.

Also show us how you set up your vertices (and what transformations you use).
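
To illustrate the pitfall: with pretransformed (XYZRHW) vertices the usual fix is to shift the quad corners by -0.5 so texel centers line up with pixel centers. A minimal sketch; x, y, width, and height are placeholder values, not code from this thread:


// Half-pixel correction for pretransformed (XYZRHW) vertices:
// shift the quad by -0.5 so texel centers map onto pixel centers.
float left   = x - 0.5f;
float top    = y - 0.5f;
float right  = x + width  - 0.5f;
float bottom = y + height - 0.5f;
// Use (left, top)..(right, bottom) as the corner positions;
// the UVs stay at 0 and 1 and are not offset.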


I used this texture, which is 256 x 256, and everything seems to be normal.

Original 256 x 256 Texture

[attachment=18306:picture.png]

Screen Cap

[attachment=18307:Capture.PNG]

So this must mean that it is not a pow2 issue, correct? I believe that is the case, since I read that link, subtracted 0.5 from my X and Y positions, and it drew with no distortion.

Screen Cap of same texture with -0.5 offset

[attachment=18308:Capture2.PNG]

Assuming that was the correct fix, does this mean I should apply the -0.5 offset only when I set up my vertices, or should it be applied before I pass in the X and Y positions? Is there any impact on my position calculations (e.g. collision detection) if I do it one way over the other?

This is my current function for storing my vertex data (I have no transformations applied):


void vBatcher::draw(float x, float y, float width, float height, LPDIRECT3DTEXTURE9 texture)
{

	#pragma region Vertex Data

	//Vertex 0
	quadData.verts[0].x = x;
	quadData.verts[0].y = y;
	quadData.verts[0].z = 1.0f;
	quadData.verts[0].rhw = 1.0f;
	quadData.verts[0].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[0].u = 0.0f;
	quadData.verts[0].v = 0.0f;

	//Vertex 1
	quadData.verts[1].x = x + width;
	quadData.verts[1].y = y;
	quadData.verts[1].z = 1.0f;
	quadData.verts[1].rhw = 1.0f;
	quadData.verts[1].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[1].u = 1.0f;
	quadData.verts[1].v = 0.0f;


	//Vertex 2
	quadData.verts[2].x = x + width;
	quadData.verts[2].y = y + height;
	quadData.verts[2].z = 1.0f;
	quadData.verts[2].rhw = 1.0f;
	quadData.verts[2].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[2].u = 1.0f;
	quadData.verts[2].v = 1.0f;

	//Vertex 3
	quadData.verts[3].x = x;
	quadData.verts[3].y = y + height;
	quadData.verts[3].z = 1.0f;
	quadData.verts[3].rhw = 1.0f;
	quadData.verts[3].color = D3DCOLOR_XRGB(0, 0, 0);
	quadData.verts[3].u = 0.0f;
	quadData.verts[3].v = 1.0f;

	#pragma endregion

	#pragma region texture

	quadData.texture = texture;

	#pragma endregion

	drawData.push_back(quadData);
}

Assuming that was the correct fix, does this mean I should apply the -0.5 offset only when I set up my vertices, or should it be applied before I pass in the X and Y positions?


Doesn't really matter; that's rather a design question for your batcher. I for one wouldn't hardwire it: how about providing dedicated functions - or an optional bool parameter like pixelTexelCorrection - for this?
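
A sketch of what that could look like on the draw function from above (the parameter name and this overall shape are assumptions, not an established API):


// Hypothetical variant of vBatcher::draw with an opt-in half-pixel correction.
// (A default value for pixelTexelCorrection would go on the declaration in
// vBatcher.h, not on this out-of-class definition.)
void vBatcher::draw(float x, float y, float width, float height,
                    LPDIRECT3DTEXTURE9 texture, bool pixelTexelCorrection)
{
	if (pixelTexelCorrection)
	{
		x -= 0.5f; // shift so texel centers line up with pixel centers
		y -= 0.5f;
	}
	// ...fill quadData.verts exactly as before, using the adjusted x/y...
}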

As an aside: if you weren't using pretransformed vertices but an orthographic projection instead, you wouldn't even need to do the above; you could offset the pixels globally instead. This might be worth considering anyway. Not only do you want to get familiar with transformations, it also saves you bandwidth for your vertices (for 2D you don't even need a z coordinate).

Is there any impact on my position calculations (e.g. collision detection) if I do it one way over the other?


Collision (or physics) should not really depend on how you render anyway, so no. Your game logic (model) operates decoupled, in its own "game coordinate system". Rendering is only visualization.

Also: this pixel-texel offset is only sensible for pixel-perfect rendering (e.g. text), i.e. when using integer (screen) coordinates. One can "render at subpixel positions", though, and combined with linear texture filtering, objects (sprites) will appear smooth. It depends on your goal and artwork, I guess. For a retro look you probably want such pixel/integer snapping.
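
A sketch of such snapping for pretransformed vertices (snapToPixel is a hypothetical helper, not a D3DX function):


#include <math.h>

// Snap a coordinate to a whole pixel, then apply the -0.5 shift
// needed by pretransformed vertices.
static float snapToPixel(float v)
{
    return floorf(v + 0.5f) - 0.5f; // round to the nearest pixel, then shift
}

// usage: quadData.verts[0].x = snapToPixel(x);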


Eventually I want to create a 2D camera. I assume I would need an orthographic projection, or should I be using a perspective projection for this?

Can you provide steps on how these are set up? I am not sure how this should be done properly.

Since we are on this subject, could you also go over transforms? I understand what they do and what they are for, but I cannot get them to work. I assume this is because of my render / draw function or the way my CUSTOMFVF is set up.

Also, you say I don't need my z coordinate, which is true unless I want to support 3D models in the future. But assuming I will only be doing 2D, I would think it is not as simple as just removing the z float from my vertex structure. What changes do I need to make? And if I did decide to keep it for 3D use, would this affect the type of projection that I end up using?

Here is my current vertex structure and CUSTOMFVF


//Custom FVF
CUSTOMFVF = D3DFVF_XYZRHW | D3DFVF_TEX1 | D3DFVF_DIFFUSE;

//Vertex structure used
struct vertex
{
	float x;
	float y;
	float z;
	float rhw;
	D3DCOLOR color;
	float u;
	float v;
};

Here is my complete code, just for good measure:

main.cpp - http://pastebin.com/qrYP1urN

vBatcher.cpp - http://pastebin.com/1J8dwFS3

vBatcher.h - http://pastebin.com/88D1DrWZ

It's actually quite scary how similar your font texture is to mine: same font, same shadowing, same layout, and I've been having an almost identical issue.

Bizarre.

Eventually I want to create a 2D camera. I assume I would need an orthographic projection, or should I be using a perspective projection for this?

Nope, perspective is for 3D.

Can you provide steps on how these are set up? I am not sure how this should be done properly.

Sorry, I'm not inclined to give a full tutorial (and you definitely need one; a forum post is just not enough). A good starting point, though, is the MSDN entry about Transforms. It essentially boils down to this: your vertex positions get transformed (duh?) to finally be in screen space (pixel coordinates). With the so-called fixed function pipeline, the individual transformation stages are set with IDirect3DDevice9::SetTransform and IDirect3DDevice9::SetViewport. Alternatively, you use a vertex shader to perform the transformation calculation yourself. Which brings us to...
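
As a rough sketch of those stages (the identity world/view matrices and the ortho values below are illustrative assumptions, not code from this thread):


// Fixed function transform stages: world -> view -> projection.
D3DXMATRIX world, view, proj;
D3DXMatrixIdentity(&world); // object placement in the world (identity = none)
D3DXMatrixIdentity(&view);  // camera (identity = camera at the origin)
D3DXMatrixOrthoOffCenterLH(&proj, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);

device->SetTransform(D3DTS_WORLD, &world);
device->SetTransform(D3DTS_VIEW, &view);
device->SetTransform(D3DTS_PROJECTION, &proj);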

Since we are on this subject, could you also go over transforms? I understand what they do and what they are for, but I cannot get them to work. I assume this is because of my render / draw function or the way my CUSTOMFVF is set up.

Precisely. You're currently using pretransformed vertices (D3DFVF_XYZRHW). Such vertex positions won't be changed by SetTransform (or a vertex shader) at all; they're supposed to be in screen space already.

Also, you say I don't need my z coordinate, which is true unless I want to support 3D models in the future. But assuming I will only be doing 2D, I would think it is not as simple as just removing the z float from my vertex structure. What changes do I need to make?

Hmmm, maybe I've gone a bit too far now. It doesn't look like FVF supports 2D coords. You could switch to vertex declarations (they're more flexible, but less easy to set up). I also wonder if the fixed function pipeline supports 2D coords at all. Use D3DFVF_XYZ for now. (*)

And if I did decide to keep it for 3D use, would this affect the type of projection that I end up using?

For 3D one can use either perspective or orthographic. For 2D only orthographic makes sense, IMO. As said: stick with 3D coordinates and set e.g. z = 0.

Here is my current vertex structure and CUSTOMFVF...


Change it to

//Untransformed vertices: positions now go through the transform pipeline
CUSTOMFVF = D3DFVF_XYZ | D3DFVF_TEX1 | D3DFVF_DIFFUSE;

struct vertex
{
    float x;
    float y;
    float z;      // keep z and just set it to 0 for 2D
    D3DCOLOR color;
    float u;
    float v;
};
It still won't be that simple, though. Your first shot might result in no rendering. Get used to it (if you haven't already); that's what the first steps with a low-level graphics API are all about. Know what the individual transformations mean (world, view, projection) and use PIX or a graphics debugger. Then make a camera, e.g. with zoom/pan functionality.
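
A sketch of such a zoom/pan camera (camX, camY, and zoom are assumed variables): put the camera into the view transform so everything drawn afterwards is affected.


// Hypothetical 2D camera: pan with (camX, camY), zoom with a uniform scale.
D3DXMATRIX pan, scale, view;
D3DXMatrixTranslation(&pan, -camX, -camY, 0.0f); // move the world opposite the camera
D3DXMatrixScaling(&scale, zoom, zoom, 1.0f);     // zoom around the view origin
view = pan * scale;                              // pan first, then zoom
device->SetTransform(D3DTS_VIEW, &view);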

(*) In the long run you will want to switch to proper vertex declarations and vertex and pixel shaders. Fixed function is maybe OK to get familiar with transformations, but as soon as you want to play with more complex texturing you want to use shaders.


I'm really a forum addict. Even after 10 hours of exhausting work I still help around here :P


If you did give me a tutorial, how would I learn? :P

Anyway, I have been reading up on this all day, since this is a very annoying issue. But there are some things I'm unsure about.

Here is what I understand so far, so please correct anything if I'm wrong.

[+]--------------------------------------------------------[+]

First, let's start with the transforms, as everything is based on these.

So we have 4 types of spaces:

Local: literally how an object is positioned (etc.) in its own space.

World: where everything is in the world. A model (or quad, in my case) is transformed here from its original local space.

View: the space that belongs to the current view (camera). All items in world space are adjusted based on this space.

Projection: how the camera functions. This space is controlled by the view space and handles the field of view (FoV) and the near and far clipping planes.

If we use this sample-ish code:


//Somewhere at the top where the variables are

//UN-transformed vertices
DWORD CUSTOMFVF = D3DFVF_XYZ | D3DFVF_DIFFUSE;

//Matching vertex structure and the objects used below
struct vertex
{
    float x, y, z;
    D3DCOLOR color;
};

LPDIRECT3DDEVICE9 device = NULL;
LPDIRECT3DVERTEXBUFFER9 vB = NULL;

void initDirectX()
{
    /* Create the DirectX 9 device here, etc. */

    //Set the render states we use
    device->SetRenderState(D3DRS_LIGHTING, FALSE);
    device->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
}

void render()
{
    static float x = 0.0f;
    static float y = 0.0f;
    static float z = 0.0f;
    static float w = 32.0f;
    static float h = 32.0f;

    vertex vertices[] =
    {
        { x,     y,     z, D3DCOLOR_XRGB(0, 0, 255) },
        { x + w, y,     z, D3DCOLOR_XRGB(0, 255, 0) },
        { x + w, y + h, z, D3DCOLOR_XRGB(255, 0, 0) },
        { x,     y,     z, D3DCOLOR_XRGB(0, 0, 255) },
        { x,     y + h, z, D3DCOLOR_XRGB(0, 255, 0) },
        { x + w, y + h, z, D3DCOLOR_XRGB(255, 0, 0) }
    };

    //Note: creating the vertex buffer every frame is wasteful;
    //in real code create it once and only refill it here
    device->CreateVertexBuffer(6 * sizeof(vertex),
                               0,
                               CUSTOMFVF,
                               D3DPOOL_MANAGED,
                               &vB,
                               NULL);

    VOID* pVoid;
    vB->Lock(0, 0, (void**)&pVoid, 0);
    memcpy(pVoid, vertices, sizeof(vertices));
    vB->Unlock();

    device->Clear(0, NULL, D3DCLEAR_TARGET, D3DCOLOR_XRGB(0, 40, 100), 1.0f, 0);
    device->BeginScene();

    device->SetFVF(CUSTOMFVF);
    device->SetStreamSource(0, vB, 0, sizeof(vertex));
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, 2);

    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}

We will see this rendered on the screen:

[attachment=18330:Capture.PNG]

Which makes sense, because the camera is currently not set up; it points at position 0,0.

[attachment=18331:cam.png]

Here is where I am unsure of some things.

Things I understand:

1. DirectX uses a left-handed system, BUT

2. D3DFVF_XYZRHW vertices are already transformed

3. When using D3DFVF_XYZRHW, everything is based at the top left (0,0)

4. How transformations work at the local and world space level

5. Orthographic projections convert 3D space to 2D space

Things I don't know / don't understand:

1. How to manipulate the view space

2. How to manipulate and set up the projection space

Other questions:

1. Are these the only things I need to do for orthographic projections? The projections seem to work. The only thing that does not work is rotation; it is rotating around the world origin. How can I rotate in local space?

Working code with no rotation involved


/* Clear and begin scene code */

//Sets up an orthographic projection where the origin is the center of the screen
D3DXMATRIX out;
D3DXMatrixOrthoLH(&out, SCREEN_WIDTH, SCREEN_HEIGHT, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/* Draw primitive and EndScene code */


Alternative orthographic projection code (based on the top left as (0,0)):


/* Clear and begin scene code */

//Sets up an orthographic projection based on the top left (0,0) using an 800 x 600 screen
D3DXMATRIX out;
D3DXMatrixOrthoOffCenterLH(&out, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/* Draw primitive and EndScene code */

Code when trying to rotate (works, but rotates around the (0,0) of the projection):

How do I get it to rotate in local space so the object is rotated about its center? (A sketch follows the code below.)


/* Clear and begin scene code */

D3DXMATRIX rot;
D3DXMatrixRotationX(&rot, D3DXToRadian(45.0f));
device->SetTransform(D3DTS_WORLD, &rot);

D3DXMATRIX out;
D3DXMatrixOrthoOffCenterLH(&out, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
device->SetTransform(D3DTS_PROJECTION, &out);

/* Draw primitive and EndScene code */
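
For what it's worth, the usual way to pivot an object about its own center is to compose translate-rotate-translate; a sketch, assuming a center at (cx, cy), and noting that in-plane 2D rotation is a rotation about the Z axis:


// Rotate a quad about its own center (cx, cy): move the center to the
// origin, rotate (about Z for in-plane 2D rotation), then move it back.
D3DXMATRIX toOrigin, rot, back, world;
D3DXMatrixTranslation(&toOrigin, -cx, -cy, 0.0f);
D3DXMatrixRotationZ(&rot, D3DXToRadian(45.0f));
D3DXMatrixTranslation(&back, cx, cy, 0.0f);
world = toOrigin * rot * back; // D3DX matrices compose left to right
device->SetTransform(D3DTS_WORLD, &world);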

2. In order to make a movable 2D camera, would I be moving everything based on the camera's position, like this?

Is there a better way?


//Transform everything in the world based on the camera's position
D3DXMATRIX tran;
D3DXMatrixTranslation(&tran, Object.x - cam.x, Object.y - cam.y, 0.0f);
device->SetTransform(D3DTS_WORLD, &tran);

3. What would I do to rotate models or quads on an individual basis? Surely it can't be:


//Perform a rotation on the X-axis
D3DXMATRIX rot;
D3DXMatrixRotationX(&rot, D3DXToRadian(45.0f));
device->SetTransform(D3DTS_WORLD, &rot);

This would rotate everything the transform applies to. I would think doing something like:

1. If we need to rotate a quad / model, set the rotation transform; otherwise do not set it

2. Draw the quad

3. Move on to the next quad to draw; check to see if it needs to be rotated, etc.

would be highly inefficient.

4. The original problem is fixed by setting a -0.5 pixel offset. Now that I have an orthographic projection, how do I set a global offset? Or is it because I use D3DFVF_XYZ for my vertices that I can perform transformations?

Something like this:


D3DXMATRIX tran;
D3DXMatrixTranslation(&tran, Object.x - 0.5f, Object.y - 0.5f, 0.0f);
device->SetTransform(D3DTS_WORLD, &tran);

How is this applied to multiple objects?
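
One way to realize the "global offset" suggested earlier, as a sketch: fold the half-pixel shift into the projection once, so every object drawn with it is affected (the 800 x 600 size matches the example above):


// Shift the whole scene by half a pixel via the projection matrix,
// instead of translating each object's world matrix individually.
D3DXMATRIX halfPixel, ortho, proj;
D3DXMatrixTranslation(&halfPixel, -0.5f, -0.5f, 0.0f);
D3DXMatrixOrthoOffCenterLH(&ortho, 0.0f, 800.0f, 600.0f, 0.0f, 0.0f, 1.0f);
proj = halfPixel * ortho; // apply the shift first, then project
device->SetTransform(D3DTS_PROJECTION, &proj);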

-------------------

I'll add more as I think of it, but for now I've got to grab some food!

