TheFallenOne


Topics I've Started

Mipmaps and Texture Seams

31 May 2013 - 12:35 AM

Hey guys,

I'm having a bit of an issue with texture mipmaps.

I have a model that uses a diffuse texture that doesn't stretch to the edge of the image file. To clarify, the edges of the texture are padded with a solid black color, and the texture coordinates on the model never hit 0 or 1; they use a smaller subset of the texture.

When I disable mipmaps, everything works great, but with mipmaps enabled, a line from the black pixels bordering the texture shows up along its edge, making it look like the model has a seam in it.

My guess is that either the mipmaps are sampling some of the black pixels, or the texture coordinates are producing a different result on the mipmapped texture.

Does anyone have any suggestions on how to fix this?

I can post images and code if needed.
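For what it's worth, the usual fix I've seen suggested is to dilate the real texels out into the black gutter before the mip chain is generated, so that the downsampled levels average the texture's own colors instead of black. A minimal sketch of that idea (the buffer layout and function are illustrative, not my actual code):

#include <cstdint>
#include <vector>

// Replicate the border pixels of the used sub-rectangle outward into the
// padding, one ring at a time, so mip downsampling never averages in black.
// pixels is a w x h ARGB buffer; (x0, y0)-(x1, y1) is the used region
// (inclusive-exclusive).
void DilateGutter(std::vector<uint32_t> &pixels, int w, int h,
                  int x0, int y0, int x1, int y1, int rings)
{
    for (int r = 0; r < rings; ++r)
    {
        // Expand the rectangle by one pixel, clamped to the image.
        int nx0 = x0 > 0 ? x0 - 1 : 0;
        int ny0 = y0 > 0 ? y0 - 1 : 0;
        int nx1 = x1 < w ? x1 + 1 : w;
        int ny1 = y1 < h ? y1 + 1 : h;

        // Copy the old edge rows outward (top and bottom).
        for (int x = x0; x < x1; ++x)
        {
            if (ny0 < y0) pixels[ny0 * w + x] = pixels[y0 * w + x];
            if (ny1 > y1) pixels[(ny1 - 1) * w + x] = pixels[(y1 - 1) * w + x];
        }
        // Copy the old edge columns outward (left and right), corners included.
        for (int y = ny0; y < ny1; ++y)
        {
            if (nx0 < x0) pixels[y * w + nx0] = pixels[y * w + x0];
            if (nx1 > x1) pixels[y * w + nx1 - 1] = pixels[y * w + x1 - 1];
        }

        x0 = nx0; y0 = ny0; x1 = nx1; y1 = ny1;
    }
}

Running a few rings of this over the source image before the mips are generated (e.g., before a D3DXFilterTexture call) should at least shrink the bleed on the coarse levels.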


Sprite Sheets and Texture Splitting

14 March 2013 - 10:52 PM

Hey guys, I need some opinions/input on the best way to handle this.

I'm using DirectX 9, for reference.

I've recently started implementing animated textures in my engine, which included the addition of sprite sheets.

I've made it to the point that the resource hierarchy (texture management) will let me reference static textures and animated textures interchangeably, and the sprite sheet textures (plus separation data) are loaded properly.

Now I've come to a bit of a crossroads, and I'm hoping there's a better way to handle this.

So, I've got my sprite sheet loaded as a texture, and now I need to split it into individual frames.

I can set up a system that modifies texture coordinates and rotation properties to grab the proper image off of the sprite sheet, without cutting the texture up in memory at all. However, this would add some per-frame calculation overhead and increase the amount of data being sent down the shader pipeline a bit, especially since I'll need to re-add transparent pixel data around the edges of most of the grabbed sections. It'll also be a decent bit of coding work.
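To make that first option concrete, here is a minimal sketch of the UV mapping it needs, ignoring D3D9's half-texel offset details for clarity (the frame-table layout and names are hypothetical):

#include <d3dx9.h>

// Per-frame source rectangle on the sheet, in pixels (this would come
// from the separation data).
struct SpriteFrame
{
    int x, y, width, height;
};

// Build a UV offset/scale for one frame so the quad's [0,1] texture
// coordinates land on the frame's sub-rectangle of the sheet.
// Passed to the vertex shader as a single float4 constant:
//   uv' = uv * scale + offset
D3DXVECTOR4 FrameToUVTransform(const SpriteFrame &frame,
                               int sheetWidth, int sheetHeight)
{
    float su = (float)frame.width  / (float)sheetWidth;
    float sv = (float)frame.height / (float)sheetHeight;
    float ou = (float)frame.x / (float)sheetWidth;
    float ov = (float)frame.y / (float)sheetHeight;
    return D3DXVECTOR4(su, sv, ou, ov); // xy = scale, zw = offset
}

In the shader that is one float4 constant and a multiply-add per vertex, so the per-frame work is mostly just updating the constant.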

 

The second option is to split the texture up when the sprite sheet is loaded, copying each frame to a new texture resource, so I don't have to modify texture coordinates (et cetera) at all; I just pass a different texture pointer. This would add some memory overhead, though, and essentially eliminate 80% of the purpose of using sprite sheets in the first place.
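For completeness, the second option would amount to something like this at load time (a sketch: it assumes a lockable pool and a 32-bit format, and omits error handling):

#include <d3d9.h>
#include <cstring>

// Copy one frame of the sheet into its own texture by locking both
// level-0 surfaces and copying rows.
IDirect3DTexture9 *ExtractFrame(IDirect3DDevice9 *device,
                                IDirect3DTexture9 *sheet,
                                const RECT &src)
{
    UINT w = src.right - src.left;
    UINT h = src.bottom - src.top;

    IDirect3DTexture9 *frame = NULL;
    device->CreateTexture(w, h, 1, 0, D3DFMT_A8R8G8B8,
                          D3DPOOL_MANAGED, &frame, NULL);

    D3DLOCKED_RECT srcLock, dstLock;
    sheet->LockRect(0, &srcLock, &src, D3DLOCK_READONLY);
    frame->LockRect(0, &dstLock, NULL, 0);

    for (UINT y = 0; y < h; ++y)
    {
        memcpy((BYTE *)dstLock.pBits + y * dstLock.Pitch,
               (BYTE *)srcLock.pBits + y * srcLock.Pitch,
               w * 4); // 4 bytes per A8R8G8B8 texel
    }

    frame->UnlockRect(0);
    sheet->UnlockRect(0);
    return frame;
}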

 

So, I'm hoping there's a third option I'm missing. I was really hoping there was a way to simply set up a texture "reference" that points to a specific subset of a texture, without having to physically make a new texture to store it, but I haven't seen anything of the sort.

Anyways, any input is greatly appreciated. Thanks in advance!


Skybox Rendering Issue

05 January 2013 - 02:06 AM

Hey guys,

I'm in the process of writing some code to render a cube-mapped skybox, used to draw a star map in space.

I'm running into some issues; the texture isn't cooperating, as you can see in the following screenshot:

http://img502.imageshack.us/img502/4474/skyboxbug.jpg

The top and right-hand sides seem to be rendering okay, but the rest isn't.

I've looked over everything ten times, double-checked all the vertex and index definitions, and rewritten the shader, all to no avail. So obviously I'm missing something.

It feels like a misplaced texture coordinate, but I've tried replacing the texCUBE in the pixel shader with a color derived from the texture coordinates, and all of them come out as intended, so I'm going crazy here. Figured I'd get some outside help.

I'm only sending texture coordinates down; the vertex positions are built in the vertex shader.

The following is my cube definition (note that my camera's viewing vector is down the Y axis instead of the usual Z axis, so the comments on the vertex positions may seem misleading):

	pPrim->LockVertices((void **)&pVertex);
	pPrim->LockIndices((void **)&pIndex);

	// 0 = Top far left
	pVertex[0].fU = 0.0f;
	pVertex[0].fV = 0.0f;
	pVertex[0].fW = 1.0f;

	// 1 = Bottom far left
	pVertex[1].fU = 0.0f;
	pVertex[1].fV = 0.0f;
	pVertex[1].fW = 0.0f;

	// 2 = Top close left
	pVertex[2].fU = 0.0f;
	pVertex[2].fV = 1.0f;
	pVertex[2].fW = 1.0f;

	// 3 = Bottom close left
	pVertex[3].fU = 0.0f;
	pVertex[3].fV = 1.0f;
	pVertex[3].fW = 0.0f;

	// 4 = Top far right
	pVertex[4].fU = 1.0f;
	pVertex[4].fV = 0.0f;
	pVertex[4].fW = 1.0f;

	// 5 = Bottom far right
	pVertex[5].fU = 1.0f;
	pVertex[5].fV = 0.0f;
	pVertex[5].fW = 0.0f;

	// 6 = Top close right
	pVertex[6].fU = 1.0f;
	pVertex[6].fV = 1.0f;
	pVertex[6].fW = 1.0f;

	// 7 = Bottom close right
	pVertex[7].fU = 1.0f;
	pVertex[7].fV = 1.0f;
	pVertex[7].fW = 0.0f;

	// 0 = Top far left
	// 1 = Bottom far left
	// 2 = Top close left
	// 3 = Bottom close left
	// 4 = Top far right
	// 5 = Bottom far right
	// 6 = Top close right
	// 7 = Bottom close right

	// Left Side
	pIndex[0] = 0;
	pIndex[1] = 1;
	pIndex[2] = 2;
	pIndex[3] = 1;
	pIndex[4] = 3;
	pIndex[5] = 2;

	// Right side
	pIndex[6] = 4;
	pIndex[7] = 6;
	pIndex[8] = 5;
	pIndex[9] = 5;
	pIndex[10] = 6;
	pIndex[11] = 7;

	// Top Side
	pIndex[12] = 0;
	pIndex[13] = 2;
	pIndex[14] = 4;
	pIndex[15] = 2;
	pIndex[16] = 6;
	pIndex[17] = 4;

	// Bottom Side
	pIndex[18] = 1;
	pIndex[19] = 5;
	pIndex[20] = 3;
	pIndex[21] = 3;
	pIndex[22] = 5;
	pIndex[23] = 7;

	// Far Side
	pIndex[24] = 0;
	pIndex[25] = 4;
	pIndex[26] = 1;
	pIndex[27] = 1;
	pIndex[28] = 4;
	pIndex[29] = 5;

	// Near Side
	pIndex[30] = 2;
	pIndex[31] = 3;
	pIndex[32] = 6;
	pIndex[33] = 3;
	pIndex[34] = 7;
	pIndex[35] = 6;

	pPrim->UnlockIndices();
	pPrim->UnlockVertices();

And... My shader:
struct VertexOut
{
    float4 Position     : POSITION;    
    float3 TexCoords    : TEXCOORD0;
};


float4x4 xProjection;
float4x4 xView;

float3	xCameraPos;

Texture Texture;
samplerCUBE TextureSampler = sampler_state
{
    texture   = <Texture>;
    magfilter = LINEAR;
    minfilter = LINEAR;
    mipfilter = LINEAR;
    AddressU  = CLAMP;
    AddressV  = CLAMP;
    AddressW  = CLAMP;
};


VertexOut CubeMapVS(float3 TexCoords : TEXCOORD0)
{
    VertexOut Output = (VertexOut)0;
    float4x4 Final = (float4x4)0;
    
    Output.Position.x = (((TexCoords.x - 0.5f) * 2) * 50000.0f) + xCameraPos.x;
    Output.Position.y = (((TexCoords.y - 0.5f) * 2) * 50000.0f) + xCameraPos.y;
    Output.Position.z = (((TexCoords.z - 0.5f) * 2) * 50000.0f) + xCameraPos.z;
    Output.Position.w = 1.0f;
    
    Final = mul(xView, xProjection);
        
    Output.Position = mul(Output.Position, Final);
    Output.TexCoords = TexCoords;

    return Output;
}


float4 CubeMapPS(float3 TexCoords : TEXCOORD0) : COLOR0
{
    return texCUBE(TextureSampler, TexCoords);
}


technique CubeMapShader
{
    pass Pass0
    {
         VertexShader = compile vs_2_0 CubeMapVS();
         PixelShader = compile ps_2_0 CubeMapPS();
    }
}
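For reference while debugging this: texCUBE picks the cube face from the largest-magnitude component of its argument, treating it as a direction from the cube's center rather than as a [0,1] coordinate. A centered-vector variant of the pixel shader, just as a sketch under that reading, would be:

float4 CubeMapPS_Centered(float3 TexCoords : TEXCOORD0) : COLOR0
{
    // Remap the [0,1] corner encoding to a [-1,1] direction so all six
    // cube faces are reachable, not just the positive-axis ones.
    float3 dir = (TexCoords - 0.5f) * 2.0f;
    return texCUBE(TextureSampler, dir);
}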

 

 

 

Any assistance would be greatly appreciated!!

Thanks in advance. :)


DX9 Ngon Handling (Mesh Creation)

14 October 2011 - 03:51 PM

Hey guys,

I've recently written my own custom Wavefront OBJ model importer (in C++) that creates a DX9 mesh based on the OBJ/MTL information supplied.

The only problem I'm having is with faces that have more than 4 vertices (n-gons). I can handle quads just fine, but beyond that I'm not 100% sure how I should handle things when creating a new mesh, and information seems to be pretty scarce on the web.

Does anyone have any pointers on where to start?
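For reference, the simplest scheme that gets past quads is fan triangulation, which only holds up for convex faces; a minimal sketch (types simplified, not tied to any particular importer):

#include <vector>

// Fan-triangulate one OBJ face: a convex polygon with vertex indices
// face[0..n-1] becomes n-2 triangles (face[0], face[i], face[i+1]).
// Only valid for convex faces; concave n-gons need ear clipping instead.
void TriangulateFace(const std::vector<unsigned> &face,
                     std::vector<unsigned> &indicesOut)
{
    for (size_t i = 1; i + 1 < face.size(); ++i)
    {
        indicesOut.push_back(face[0]);
        indicesOut.push_back(face[i]);
        indicesOut.push_back(face[i + 1]);
    }
}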

Creating a window at proper resolution

28 October 2010 - 07:09 PM

Hey all,

First post here. I browsed the sections of the forums and this seemed like the appropriate one; I apologize if I dropped it into the wrong section by accident.

Anyways, I'm currently working on a game engine and recently noticed an issue that I've thus far been unable to solve (despite several hours of searching). The engine is written in C++ in conjunction with DirectX, for reference.

So, here's the problem. When creating a window using CreateWindow or CreateWindowEx, I've discovered that you *cannot* create a window that extends past the edge of the screen. Normally this is a non-issue; however, I'll give a quick example of where I hit a snag.

Say a user selects windowed mode and decides to use the same resolution as the desktop. Let's say... 1280x800. Since it's windowed mode, I'd like a title bar to be available, so I've included the following style flags:

WS_CAPTION | WS_MINIMIZEBOX | WS_SYSMENU

Now, using AdjustWindowRect gives me the proper size that I should pass in to get the desired client area just fine. However, when I pass it in, CreateWindow returns a window of size 1280x786, because the title bar increases the size of the window to the point that a 1280x800 client area would extend offscreen. Even if I pass in something like 1280x9000 for the size, it will still create a window of size 1280x786 so that it doesn't go offscreen.
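For reference, the sequence in question looks roughly like this (window class and title are placeholders, and class registration is omitted):

#include <windows.h>

// Ask Windows what total window size yields a 1280x800 client area
// under these styles, then request exactly that size.
HWND CreateGameWindow(HINSTANCE hInstance)
{
    const DWORD style = WS_CAPTION | WS_MINIMIZEBOX | WS_SYSMENU;

    RECT rc = { 0, 0, 1280, 800 };       // desired client area
    AdjustWindowRect(&rc, style, FALSE); // grow by frame and title bar

    // The resulting client area still comes back 14 pixels short.
    return CreateWindowEx(
        0, TEXT("MyWindowClass"), TEXT("Engine"), style,
        CW_USEDEFAULT, CW_USEDEFAULT,
        rc.right - rc.left,              // requested outer width
        rc.bottom - rc.top,              // requested outer height
        NULL, NULL, hInstance, NULL);
}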

So, I'm missing 14 pixels. This causes some major texture issues when I render and present the backbuffer in DirectX, since the backbuffer has a height of 800, not 786. Obviously I could resize the backbuffer to match the client area, but I would rather not throw off the aspect ratio just because Windows decides it doesn't want to give me the client area I want.

Is there ANY way to get Windows to give me the properly sized window? I've tried calculating the proper size myself (using GetClientRect to compare pre- and post-CreateWindow sizes) and then adjusting it after the CreateWindow call using SetWindowPos, but that still won't let me exceed the screen size.

Any help is greatly appreciated! Thanks in advance,
-Kyle
