kamal7

Member

  • Content Count: 32
  • Community Reputation: 140 Neutral
  • Rank: Member
  1. I fixed it thanks to this suggestion! I adjusted my projection matrix and vertex positions to range from (0,0) to (640,480) instead of the original (-1,-1) to (1,1), and then applied the -0.5 offsets to the vertex position values. This solved my issue; I can't tell you how thankful I am!
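For anyone who finds this later, the final setup looks roughly like this (a simplified sketch with illustrative names, not my exact code; position and sSpriteWidth/sSpriteHeight are as in my earlier posts):

// Pixel-space orthographic projection: (0,0) at the top-left and (640,480) at
// the bottom-right, matching the backbuffer.
D3DXMATRIX matOrtho;
D3DXMatrixOrthoOffCenterLH(&matOrtho, 0.0f, 640.0f, 480.0f, 0.0f, 0.0f, 1.0f);
// (this matrix reaches the vertex shader as part of WorldViewProj, as in the posts below)

// Vertex positions become plain pixel coordinates, shifted by -0.5 so texel
// centers line up with pixel centers (the classic D3D9 half-pixel offset).
float left   = position.x - 0.5f;
float top    = position.y - 0.5f;
float right  = position.x + (float)sSpriteWidth  - 0.5f;
float bottom = position.y + (float)sSpriteHeight - 0.5f;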
  2. Changing my matrix transformations doesn't seem to affect anything at all :S I know 100% that my vertex/pixel shaders are in use, though. What could possibly be overriding my matrix transformations? Is it the fact that I handle transformations outside of my shaders, so that they have no effect at all? Would this account for the loss of quality, if the transformations aren't being applied by my calculations? And if they're not being applied, what is performing these transformations? Is it mere luck that I happen to be able to see the sprites rendered without calculating the matrices myself?

Sorry for asking so many questions; I just want to understand how these things work so I won't run into any more problems in the future.
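(My working theory, which I'd love someone to confirm: once SetVertexShader is called, the fixed-function SetTransform matrices are bypassed, so the only transform the shader sees is whatever I upload as a constant myself, e.g.:

// With a vertex shader bound, SetTransform(D3DTS_WORLD/VIEW/PROJECTION, ...) has
// no effect on the shader; the combined matrix must be uploaded as a constant.
D3DXMATRIXA16 matWorldViewProj = matWorld * matView * matProj;
g_pConstantTableVS->SetMatrix(g_pD3DDevice, "WorldViewProj", &matWorldViewProj);

If that constant were never set, or left at identity, vertices already expressed in clip space would pass straight through, which could explain seeing the sprites render even without my matrices being applied.)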
  3. I'm using DX9. Also, I don't think the issue is mapping texels to pixels, although I can't guarantee that since this is my first project using a vertex buffer. Here is how I do it:

// rtWidth and rtHeight are the width/height of the render target.
// position.x/y are the pixel positions of the sprite.
// sSpriteWidth/sSpriteHeight are the width/height of the sprite.
// Note the vertex positions range from -1.0 to +1.0 in clip space, with
// position (0,0) at the center of the screen.

// Vertex position calculations:
positionLeft   = (position.x - (rtWidth/2.0f)) / (rtWidth/2.0f);
positionRight  = (position.x + (float)sSpriteWidth - (rtWidth/2.0f)) / (rtWidth/2.0f);
positionTop    = ((rtHeight/2.0f) - position.y) / (rtHeight/2.0f);                          // top/bottom flipped
positionBottom = ((rtHeight/2.0f) - (position.y + (float)sSpriteHeight)) / (rtHeight/2.0f); // top/bottom flipped

// m_wWidth/m_wHeight are the width/height of the spritesheet (1024x1024 after
// being resized into a power-of-2 texture from its original 619x598).
// The texture coordinates range from 0.0 to 1.0.

// Texture coordinate calculations:
srcLeft   = (float)srcRect.left   / (float)m_wWidth;
srcTop    = (float)srcRect.top    / (float)m_wHeight;
srcRight  = (float)srcRect.right  / (float)m_wWidth;
srcBottom = (float)srcRect.bottom / (float)m_wHeight;

Do I still need to offset the vertex positions and texture coordinates even though they range from -1.0 to +1.0 and 0.0 to 1.0 respectively? If so, how much should I offset them by? Do I need to change them to range over the size of the backbuffer/spritesheet to implement a 0.5 offset?

Here is my vertex buffer array:

CUSTOM2DVERTEX vertices[] =
{
    { D3DXVECTOR3(positionLeft,  positionTop,    1.0f), D3DXVECTOR2(srcLeft,  srcTop),    D3DXCOLOR(255, 255, 255, alpha) }, // left top
    { D3DXVECTOR3(positionLeft,  positionBottom, 1.0f), D3DXVECTOR2(srcLeft,  srcBottom), D3DXCOLOR(255, 255, 255, alpha) }, // left bottom
    { D3DXVECTOR3(positionRight, positionTop,    1.0f), D3DXVECTOR2(srcRight, srcTop),    D3DXCOLOR(255, 255, 255, alpha) }, // right top
    { D3DXVECTOR3(positionRight, positionBottom, 1.0f), D3DXVECTOR2(srcRight, srcBottom), D3DXCOLOR(255, 255, 255, alpha) }, // right bottom
};

// Here is the draw call; I skipped the vertex buffer lock/unlock (memory
// copying) and SetTexture steps for simplicity.
DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);

Also, to verify that my vertex position/texture coordinate calculations are correct, this is what is calculated for the specific "Character Info" dialog box sprite:

positionLeft = -0.687500
positionRight = 0.156250
positionTop = 0.791667
positionBottom = -0.775000

Since the actual size of the "Character Info" sprite is 270x376 pixels and the render screen/backbuffer is 640x480 pixels: abs(-0.687500)*(640/2) + abs(0.156250)*(640/2) = 270.00000 pixels, and abs(0.791667)*(480/2) + abs(-0.775000)*(480/2) = 376.00008 pixels. Therefore the vertex positions are calculated properly, right?

srcLeft = 0.000000
srcTop = 0.000000
srcRight = 0.263672
srcBottom = 0.367188

Since the actual size of the "Character Info" sprite is 270x376 pixels and the spritesheet is 1024x1024 pixels after being extended: 0.263672*1024 = 270.000128 pixels, and 0.367188*1024 = 376.000512 pixels. Therefore the texture coordinates are calculated properly, right?

I hadn't used AdjustWindowRect; this is my first time hearing about it. Anyway, I did what you suggested, so the window size is now adjusted to make the client area the same size as the backbuffer (640x480).

This didn't resolve the issue, unfortunately; the issue also occurs in fullscreen as well as windowed mode, so it makes sense that this wasn't the problem. I still thank you for pointing out this function, since I would prefer the client size to match the backbuffer in windowed mode anyhow.

For further investigation, here is how it looks when passing D3DX_DEFAULT_NONPOW2 for the image/texture size:

As you can see, it looks perfect, but other issues arise with the tiles in the background (black outlines, etc.). All the images here use the same rendering process, so I don't see what could be causing this with non-power-of-2 textures either, especially since my graphics card supports all texture sizes without limitations (checked via the device TextureCaps), although that could be a risky assumption since I expect nothing to be perfectly accurate at this point.

But if you look closely, it seems as though something is wrong with the projection, which may be causing these lines/ridges? It's as if everything is at a slight angle, but not quite; more as if it were magnified by some very small interval. Should I post my matrix transformations?
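For completeness, this is roughly how I checked the texture caps (a simplified sketch of the check, not my exact code):

D3DCAPS9 caps;
g_pD3DDevice->GetDeviceCaps(&caps);

// If D3DPTEXTURECAPS_POW2 is NOT set, the device supports non-power-of-2
// textures unconditionally. If it is set together with
// D3DPTEXTURECAPS_NONPOW2CONDITIONAL, non-pow2 textures work only under
// restrictions (clamp addressing, no mipmaps, etc.).
bool pow2Required       = (caps.TextureCaps & D3DPTEXTURECAPS_POW2) != 0;
bool nonPow2Conditional = (caps.TextureCaps & D3DPTEXTURECAPS_NONPOW2CONDITIONAL) != 0;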
  4. Hello everyone,

I am trying to fix an issue that causes some (not all) sprites to have smudges/ridges when rendered. I am using a vertex buffer to render the sprites in 3D space. They all render at the correct size, etc., so no resizing occurs at render time. Here is an image of the issue I am clueless about (see the "Character Info" dialog box):

Here is the texture/spritesheet that the game is reading from (as you can see, the original doesn't have these ridges on the "Character Info" dialog box):

I have been testing and discovered this issue may be related to non-power-of-2 textures being resized to power-of-2 sizes. That said, since this texture (the above spritesheet) is 619x598 and is forced to 1024x1024 when the texture is created, I manually changed the spritesheet image file itself to 1024x1024 to see whether resizing during texture creation was the issue; however, the issue remained with no improvement.

Also, the positions of the vertices on the screen and on the spritesheet are all consistent with the size of the dialog box (270x376):

Screen(-0.687500~0.156250, -0.775000~0.791667)
Spritesheet(0.000000~0.263672, 0.000000~0.367188)

Both the screen positions and texture coordinates equate to the 270x376 dialog box size, so it is not being stretched at all, yet it is definitely losing quality for some unknown reason. If you want to verify that it's not being stretched: the screen size is 640x480 in this case, and the texture size is 1024x1024 after "extending" (not stretching) to a power-of-2 size.

I am using the following line to create the texture:

D3DXCreateTextureFromFileInMemoryEx(device, imageBuffer, m_imageSize,
    D3DX_DEFAULT, D3DX_DEFAULT, 1, 0, format, D3DPOOL_DEFAULT,
    D3DX_FILTER_NONE, D3DX_FILTER_NONE, 0, NULL, NULL, &m_texture);

Since I am passing D3DX_FILTER_NONE for the filter parameters, "No scaling or filtering will take place. Pixels outside the bounds of the source image are assumed to be transparent black," as stated by the official MSDN documentation. That being said, I don't know why the image is losing quality even though it's not being stretched, only extended, so no harm should be done, right? The weird thing is that when I passed D3DX_DEFAULT_NONPOW2 for the image sizes instead of D3DX_DEFAULT, it looked perfect. But other issues arise, and besides, I don't want to use non-power-of-2 textures, since they are not entirely compatible with the older graphics cards I'm targeting.

Any help on the matter is greatly appreciated. Also, feel free to ask more questions to clear things up!

Edit: This just came to mind: is it possible the ridges/smudges are caused by a vertical flip of the sprites, since the bottom and top are flipped to display the sprites properly? If so, how can I approach this another way so DirectX won't flip the sprites upside-down?
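One safeguard I plan to try, on the assumption that D3DX may silently round the texture up to a power of 2: ask the created texture for its actual top-level size and derive the UVs from that, rather than from the source image size. A rough sketch:

// Query the size the texture actually ended up with (possibly rounded up to a
// power of 2 by D3DX) and compute UVs against that, not the source image size.
D3DSURFACE_DESC desc;
m_texture->GetLevelDesc(0, &desc);

float srcLeft   = (float)srcRect.left   / (float)desc.Width;
float srcTop    = (float)srcRect.top    / (float)desc.Height;
float srcRight  = (float)srcRect.right  / (float)desc.Width;
float srcBottom = (float)srcRect.bottom / (float)desc.Height;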
  5. I found out how to do it. The hint I needed was that the maximum alpha value in the shader is represented by 1.0 (shader colors are floats in the 0.0 to 1.0 range, not 0 to 255).
  6. How do I go about changing the transparency of a texture in my pixel shader? Which parameters can I manipulate (does COLOR0 contain an alpha value)? Do I need to do some "blending" for it to take effect, or is blending the only part of it (and if so, does that imply it's impossible to do this in a pixel shader, since blending with the render target happens outside the shader)?
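(What eventually worked for me, roughly: alpha-blend states on the device plus the shader writing an alpha in the 0.0 to 1.0 range. A simplified sketch, not my exact code:)

// Device side: enable alpha blending so the shader's output alpha takes effect.
g_pD3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, TRUE);
g_pD3DDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
g_pD3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);

// Shader side (HLSL): the COLOR0 output's .a member is the alpha, in 0.0-1.0:
//   Out.Color = tex2D(Tex0, In.Texture);
//   Out.Color.a *= 0.5f; // e.g. render at 50% opacity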
  7. I resolved the issue. One of the main problems I had before was that DirectX was resizing the textures to power-of-2 dimensions.
  8. My projection matrix is set during application initialization with the following lines of code:

D3DXMATRIX matOrthoProj;
D3DXMatrixOrthoLH(&matOrthoProj, (float)g_wX, (float)g_wY, 0.0f, 1.0f); // immediately overwritten by the off-center version below
D3DXMatrixOrthoOffCenterLH(&matOrthoProj, 0.0f, (float)g_wX, (float)g_wY, 0.0f, 0.0f, 1.0f);
g_pD3DDevice->SetTransform(D3DTS_PROJECTION, &matOrthoProj);

The g_wX and g_wY variables are just the resolution of the application (in this case assume g_wX = 640 and g_wY = 480). Or perhaps you meant what is posted in the first post and are referring to my D3DTS_VIEW setting:

D3DXMatrixIdentity(&matView);
g_pD3DDevice->SetTransform(D3DTS_VIEW, &matView);

In that case, no, I am not setting it to anything, really. Am I supposed to?

The position values used in the vertex parameters are the relative (pixel) coordinates of the sprite frame RECT within the individual texture image referenced. I am not 100% sure this is the correct way to do this, as it may be part of the issue at hand.

Also, thanks for clarifying that I don't need to use XYZRHW.
  9. Sorry for taking a long while to respond; I have been busy with school. Anyway, here are the changes I made. (I hadn't been using vertex/pixel shaders before; I transformed beforehand instead. But since you asked for it, I presume it may be required, so I set them up.)

Here are the changes made to include the vertex/pixel shaders (the issue still stands after implementing them):

// Declarations/initialization for the vertex/pixel shaders
LPDIRECT3DVERTEXDECLARATION9 vbDecl;
D3DVERTEXELEMENT9 VertexPosElements[] =
{
    {0, 0,  D3DDECLTYPE_FLOAT3, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_POSITION, 0}, // x, y, z
    {0, 12, D3DDECLTYPE_FLOAT2, D3DDECLMETHOD_DEFAULT, D3DDECLUSAGE_TEXCOORD, 0}, // u, v
    D3DDECL_END()
};
if (FAILED(hr = g_pD3DDevice->CreateVertexDeclaration(VertexPosElements, &vbDecl)))
    return hr;
if (FAILED(hr = g_pD3DDevice->SetVertexDeclaration(vbDecl))) // reverted from using SetFVF
    return hr;
if (FAILED(hr = g_pD3DDevice->SetStreamSource(0, g_pVertexBuffer, 0, sizeof(CUSTOM2DVERTEX))))
    return hr;

LPD3DXBUFFER pBuffer, pErrors;
if (FAILED(hr = D3DXCompileShaderFromResource(
        NULL,
        MAKEINTRESOURCE(IDR_RCDATA3),
        NULL,                  // CONST D3DXMACRO* pDefines
        NULL,                  // LPD3DXINCLUDE pInclude
        "RenderSceneVS",
        "vs_1_1",
        D3DXSHADER_USE_LEGACY_D3DX9_31_DLL,
        &pBuffer,
        &pErrors,              // error messages
        &g_pConstantTableVS)))
{
    MessageBox(g_hWnd, (LPCSTR)pErrors->GetBufferPointer(), "Vertex Shader Compile Error!", MB_ICONEXCLAMATION | MB_OK);
    SendMessage(g_hWnd, WM_DESTROY, NULL, NULL);
    return hr;
}
if (FAILED(hr = g_pD3DDevice->CreateVertexShader((DWORD *)pBuffer->GetBufferPointer(), &g_pVertexShader)))
    return hr;
pBuffer->Release();
if (FAILED(hr = g_pD3DDevice->SetVertexShader(g_pVertexShader)))
    return hr;

D3DXMATRIX matOrtho;
D3DXMatrixOrthoLH(&matOrtho, (float)g_wX, (float)g_wY, 0.0f, 1.0f); // immediately overwritten by the off-center version below
D3DXMatrixOrthoOffCenterLH(&matOrtho, 0.0f, (float)g_wX, (float)g_wY, 0.0f, 0.0f, 1.0f);
g_pD3DDevice->SetTransform(D3DTS_PROJECTION, &matOrtho);

if (FAILED(hr = D3DXCompileShaderFromResource(
        NULL,
        MAKEINTRESOURCE(IDR_RCDATA2),
        NULL,                  // CONST D3DXMACRO* pDefines
        NULL,                  // LPD3DXINCLUDE pInclude
        "RenderScenePS",
        "ps_1_1",
        D3DXSHADER_USE_LEGACY_D3DX9_31_DLL,
        &pBuffer,
        &pErrors,              // error messages
        &g_pConstantTablePS)))
{
    MessageBox(g_hWnd, (LPCSTR)pErrors->GetBufferPointer(), "Pixel Shader Compile Error!", MB_ICONEXCLAMATION | MB_OK);
    SendMessage(g_hWnd, WM_DESTROY, NULL, NULL);
    return hr;
}
if (FAILED(hr = g_pD3DDevice->CreatePixelShader((DWORD *)pBuffer->GetBufferPointer(), &g_pPixelShader)))
    return hr;
pBuffer->Release();
if (FAILED(hr = g_pD3DDevice->SetPixelShader(g_pPixelShader)))
    return hr;

// Matrix transformations; these calculations are currently done every time a
// single sprite frame is rendered.
D3DXMATRIX scaleMatrix, transMatrix, matWorld, matView, matProj;
D3DXMatrixScaling(&scaleMatrix, scale.x, scale.y, scale.z);
D3DXMatrixTranslation(&transMatrix, position.x, position.y, position.z);
D3DXMatrixMultiply(&matWorld, &scaleMatrix, &transMatrix);
DxDraw.g_pD3DDevice->SetTransform(D3DTS_WORLD, &matWorld);
DxDraw.g_pD3DDevice->GetTransform(D3DTS_WORLD, &matWorld);
DxDraw.g_pD3DDevice->GetTransform(D3DTS_VIEW, &matView);
DxDraw.g_pD3DDevice->GetTransform(D3DTS_PROJECTION, &matProj);
D3DXMATRIXA16 matWorldViewProj = matWorld * matView * matProj;
DxDraw.g_pConstantTableVS->SetMatrix(DxDraw.g_pD3DDevice, "WorldViewProj", &matWorldViewProj);

Here are the vertex shader .hlsl file contents:

// Vertex shader input structure
struct VS_INPUT
{
    float4 Position : POSITION;
    float2 Texture  : TEXCOORD0;
};

// Vertex shader output structure
struct VS_OUTPUT
{
    float4 Position : POSITION;
    float2 Texture  : TEXCOORD0;
};

// Global variables
float4x4 WorldViewProj;

VS_OUTPUT RenderSceneVS(in VS_INPUT In)
{
    VS_OUTPUT Out;
    Out.Position = mul(In.Position, WorldViewProj); // apply vertex transformation
    Out.Texture  = In.Texture;                      // copy original texcoords
    return Out;
}

And here are the pixel shader .hlsl file contents:

sampler2D Tex0;

// Pixel shader input structure
struct PS_INPUT
{
    float4 Position : POSITION;
    float2 Texture  : TEXCOORD0;
};

// Pixel shader output structure
struct PS_OUTPUT
{
    float4 Color : COLOR0;
};

PS_OUTPUT RenderScenePS(in PS_INPUT In)
{
    PS_OUTPUT Out;                       // create an output pixel
    Out.Color = tex2D(Tex0, In.Texture); // do a texture lookup
    return Out;
}

I also changed D3DFVF_TEX2 to D3DFVF_TEX1 as suggested. I had it set to D3DFVF_TEX2 previously while testing something else, and had changed a lot for simplicity before posting it on the forum; I seem to have forgotten to set D3DFVF_TEX1 in the process. Nevertheless, it has been changed for correctness.

Also, Aardvajk, I am not sure what you mean when you say I am mixing the (u,v) texture coordinates with the position (x,y,z) coordinates. Are you implying I must use the following code instead?

CUSTOM2DVERTEX vertices[] =
{
    { D3DXVECTOR2(minwidthFactor, minheightFactor), D3DXVECTOR3((float)srcRect.left-0.5f,  (float)srcRect.top-0.5f,    0.0f) }, // left top
    { D3DXVECTOR2(minwidthFactor, maxheightFactor), D3DXVECTOR3((float)srcRect.left-0.5f,  (float)srcRect.bottom-0.5f, 0.0f) }, // left bottom
    { D3DXVECTOR2(maxwidthFactor, minheightFactor), D3DXVECTOR3((float)srcRect.right-0.5f, (float)srcRect.top-0.5f,    0.0f) }, // right top
    { D3DXVECTOR2(maxwidthFactor, maxheightFactor), D3DXVECTOR3((float)srcRect.right-0.5f, (float)srcRect.bottom-0.5f, 0.0f) }, // right bottom
};

Or perhaps?

CUSTOM2DVERTEX vertices[] =
{
    { D3DXVECTOR3(0.0f, 0.0f, 0.0f), D3DXVECTOR2((float)srcRect.left,  (float)srcRect.top)    }, // left top
    { D3DXVECTOR3(0.0f, 1.0f, 0.0f), D3DXVECTOR2((float)srcRect.left,  (float)srcRect.bottom) }, // left bottom
    { D3DXVECTOR3(1.0f, 0.0f, 0.0f), D3DXVECTOR2((float)srcRect.right, (float)srcRect.top)    }, // right top
    { D3DXVECTOR3(1.0f, 1.0f, 0.0f), D3DXVECTOR2((float)srcRect.right, (float)srcRect.bottom) }, // right bottom
};

I tried both, so either I'm misunderstanding you or it simply doesn't work out as hoped. Please clarify, thanks!

PS: I've noticed that the vertex/pixel shaders won't compile, or cause even more issues, if I use float3 (as opposed to float4) for my POSITION declaration. Does this imply that I must use D3DFVF_XYZRHW instead of D3DFVF_XYZ in my custom vertex?
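(For what it's worth, my current understanding, which I'd appreciate someone confirming: a float4 POSITION input in HLSL is compatible with D3DDECLTYPE_FLOAT3 stream data, because the runtime fills the missing w component with 1.0. A sketch of what I mean:)

// The vertex struct stays 3-component...
struct CUSTOM2DVERTEX
{
    D3DXVECTOR3 pos;      // matches D3DDECLTYPE_FLOAT3 in the declaration
    D3DXVECTOR2 texCoord; // matches D3DDECLTYPE_FLOAT2
};

// ...while the HLSL input can still declare:
//   float4 Position : POSITION;
// The missing w component defaults to 1.0, so no switch to D3DFVF_XYZRHW
// should be needed.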
  10. Does anyone have any feedback?
  11. Hello!

I've been having trouble with 3D-space transformations in my 2D game in DirectX 9. I have tried countless measures and researched everywhere, but nothing I attempt resolves the issue, so I decided to come here for direct help. Any help is greatly appreciated!

Here is a visual of what is currently happening in the game with the following code:

Here is what it is supposed to look like (I'm using the DX9 sprite interface (D3DXSPRITE) to render this, but I don't want to use it in further development, which is why I want my own sprite rendering system with vertex/pixel shaders):

As you can see, the square tiles are not being scaled/framed properly in the first image compared to the second. The tiles are read from a spritesheet. The reason the trees and mouse sprites look fine is that they come from single-image spritesheets, so no scaling is needed. I have tried to scale the tiles properly, and although it improves the rendering, it still isn't right: the large images become a little too small and the small images a little too big. I have tried many different calculations to fix the scaling and positioning of the sprites within the spritesheet, but have failed to resolve the issue. I have also tried running the game in fullscreen mode, but nothing changes, so I'm assuming the window-mode screen size doesn't affect the scaling. What I do know is that the resolution set in DX9 does affect the scaling, but I cannot derive a calculation to handle that. I have searched everywhere and tried many methods, to no avail.

EDIT: Actually, the trees aren't scaled properly either, as you can see from the difference between them in the two images. And since the trees are individual spritesheet images, it seems everything is out of proportion. Also, to make things clear, the square tiles are showing the whole spritesheet scaled into a single sprite frame (32x32 pixels), so you can see it's not positioning and scaling itself to the location of the sprite frame within the spritesheet.
Here is the code I use for the first image (I'll try to post the key code, not all of it):

// Here I set my render/sampler states for safety/efficiency purposes, although
// I'm not sure whether this will cause problems in rendering the sprites.
g_pD3DDevice->SetRenderState(D3DRS_ALPHABLENDENABLE, true);
g_pD3DDevice->SetRenderState(D3DRS_SRCBLEND, D3DBLEND_SRCALPHA);
g_pD3DDevice->SetRenderState(D3DRS_DESTBLEND, D3DBLEND_INVSRCALPHA);
g_pD3DDevice->SetRenderState(D3DRS_CULLMODE, D3DCULL_NONE);
g_pD3DDevice->SetRenderState(D3DRS_LIGHTING, FALSE);
g_pD3DDevice->SetRenderState(D3DRS_ZENABLE, FALSE);
g_pD3DDevice->SetRenderState(D3DRS_ZWRITEENABLE, FALSE);
g_pD3DDevice->SetSamplerState(0, D3DSAMP_ADDRESSU, D3DTADDRESS_WRAP);
g_pD3DDevice->SetSamplerState(0, D3DSAMP_ADDRESSV, D3DTADDRESS_WRAP);
g_pD3DDevice->SetSamplerState(0, D3DSAMP_MINFILTER, D3DTEXF_NONE); // note: D3DTEXF_NONE is only valid for the mip filter; D3DTEXF_POINT is the unfiltered option for min/mag
g_pD3DDevice->SetSamplerState(0, D3DSAMP_MAGFILTER, D3DTEXF_NONE);
g_pD3DDevice->SetSamplerState(0, D3DSAMP_MIPFILTER, D3DTEXF_NONE);

// Here are my vertex buffer definitions.
typedef struct _CUSTOM2DVERTEX
{
    D3DXVECTOR3 pos;      // D3DFVF_XYZ (x,y,z)
    //D3DCOLOR color;
    D3DXVECTOR2 texCoord; // D3DFVF_TEX1 (u,v)
} CUSTOM2DVERTEX, *LPCUSTOM2DVERTEX;
#define D3DFVFCUSTOM2DVERTEX (D3DFVF_XYZ | D3DFVF_TEX2/* | D3DFVF_DIFFUSE*/)

LPDIRECT3DVERTEXBUFFER9 g_pVertexBuffer;

// Here is my vertex buffer creation code.
if (FAILED(hr = g_pD3DDevice->CreateVertexBuffer(4*sizeof(CUSTOM2DVERTEX),
        usageProcessing | D3DUSAGE_DYNAMIC | D3DUSAGE_WRITEONLY,
        D3DFVFCUSTOM2DVERTEX, D3DPOOL_DEFAULT, &g_pVertexBuffer, NULL)))
{
    MessageBox(g_hWnd, "Failed to create Vertex buffer.", "ERROR!", MB_ICONEXCLAMATION | MB_OK);
    SendMessage(g_hWnd, WM_DESTROY, NULL, NULL);
    return hr;
}

// I set up my projection here at application startup, since it doesn't change.
// g_wX and g_wY are the screen resolution (assume 640x480), so the scale is
// based on the resolution size and not on 1.0x1.0.
D3DXMATRIX matOrtho, matProj;
D3DXMatrixOrthoLH(&matOrtho, (float)g_wX, (float)g_wY, 0.0f, 1.0f); // immediately overwritten by the off-center version below
D3DXMatrixOrthoOffCenterLH(&matOrtho, 0.0f, (float)g_wX, (float)g_wY, 0.0f, 0.0f, 1.0f);
g_pD3DDevice->SetTransform(D3DTS_PROJECTION, &matOrtho);
D3DXMatrixIdentity(&matProj);
g_pD3DDevice->SetTransform(D3DTS_VIEW, &matProj);

// As you can see, I recycle my vertex buffer: I set it only once for
// efficiency, so you can expect that I don't recreate it each render pass.
g_pD3DDevice->SetStreamSource(0, g_pVertexBuffer, 0, sizeof(CUSTOM2DVERTEX));
g_pD3DDevice->SetFVF(D3DFVFCUSTOM2DVERTEX);

// And finally, here is the code that renders each sprite.
// You can assume the scale.xyz values are all 1.0f.
// The position.xyz values are the position of the sprite frame relative to the
// screen; position.z will always be 0.0f in my code.
D3DXMatrixScaling(&scaleMatrix, scale.x, scale.y, scale.z);
D3DXMatrixTranslation(&transMatrix, position.x, position.y, position.z);
D3DXMatrixMultiply(&matWorld, &scaleMatrix, &transMatrix);
DxDraw.g_pD3DDevice->SetTransform(D3DTS_WORLD, &matWorld);

// As you can see, the UV factors are at their defaults, so no scaling is done
// to "select" the sprite frame from a spritesheet; this simplifies the code,
// as I tried to scale it before with no success.
float minwidthFactor  = 0.0f;
float minheightFactor = 0.0f;
float maxwidthFactor  = 1.0f;
float maxheightFactor = 1.0f;

// The srcRect variable holds the RECT position of the sprite frame relative to the spritesheet.
CUSTOM2DVERTEX vertices[] =
{
    { D3DXVECTOR3((float)srcRect.left-0.5f,  (float)srcRect.top-0.5f,    0.0f), D3DXVECTOR2(minwidthFactor, minheightFactor) }, // left top
    { D3DXVECTOR3((float)srcRect.left-0.5f,  (float)srcRect.bottom-0.5f, 0.0f), D3DXVECTOR2(minwidthFactor, maxheightFactor) }, // left bottom
    { D3DXVECTOR3((float)srcRect.right-0.5f, (float)srcRect.top-0.5f,    0.0f), D3DXVECTOR2(maxwidthFactor, minheightFactor) }, // right top
    { D3DXVECTOR3((float)srcRect.right-0.5f, (float)srcRect.bottom-0.5f, 0.0f), D3DXVECTOR2(maxwidthFactor, maxheightFactor) }, // right bottom
};

LPVOID lpVertices;
DxDraw.g_pVertexBuffer->Lock(0, sizeof(vertices), &lpVertices, D3DLOCK_DISCARD);
memcpy(lpVertices, vertices, sizeof(vertices));
DxDraw.g_pVertexBuffer->Unlock();
DxDraw.g_pD3DDevice->SetTexture(0, m_texture);
DxDraw.g_pD3DDevice->DrawPrimitive(D3DPT_TRIANGLESTRIP, 0, 2);

On a side note, I want this sprite rendering system to work similarly to D3DXSPRITE (as you can see in the second image). The reason for the new system is so I can use vertex/pixel shaders with maximum freedom in the next stage of development.

Thanks in advance for any help! Please feel free to ask questions about my code in case I missed something or it is confusing.
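An idea I've been meaning to try, since the tiles show the whole sheet squeezed into one frame: compute the UV factors from srcRect and the spritesheet size, instead of leaving them at the 0.0 to 1.0 defaults. A rough sketch, where sheetWidth/sheetHeight are illustrative names for the spritesheet dimensions:

// Select just the sprite frame from the sheet by mapping srcRect into the
// 0.0-1.0 texture space, rather than sampling the whole sheet.
float minwidthFactor  = (float)srcRect.left   / (float)sheetWidth;
float maxwidthFactor  = (float)srcRect.right  / (float)sheetWidth;
float minheightFactor = (float)srcRect.top    / (float)sheetHeight;
float maxheightFactor = (float)srcRect.bottom / (float)sheetHeight;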
  12. I have many spritesheets in .bmp format stored within a .pak file. I also write other information, such as rectangle coordinates for individual sprites and bitmap header data, into the same .pak file. I am currently working towards implementing .png image support; however, I am having trouble. I will try to explain. I render all the sprites in my game using DirectX 9 with no problem, but when I try to render the .png images I get a data-error return value when reading the .png image memory. It is clear to me that the image is not read completely and the image header data is incorrect (i.e. image size/height/width).

Here is how I write my image files into the .pak file, using a separate Windows Forms app written in C#:

fs = new FileStream(file, FileMode.Open, FileAccess.Read); // reads .pak file
BinaryReader br = new BinaryReader(fs);                    // .pak file as binary data
//... some data read in between to get the sprite frame coords for each sprite in the spritesheet
br.BaseStream.Seek(bitmapStart, SeekOrigin.Begin);         // beginning of spritesheet/image header
BitmapFile BF = new BitmapFile();
BitmapFileHeader BFH = new BitmapFileHeader();
BFH.type = br.ReadUInt16();
BFH.size = br.ReadUInt32();
BFH.reserved1 = br.ReadUInt16();
BFH.reserved2 = br.ReadUInt16();
BFH.offBits = br.ReadUInt32();
BF.bmpHeader = BFH;
br.BaseStream.Seek(-14, SeekOrigin.Current);               // back to the start of the header (.bmp has a 14-byte file header)
byte[] bitmapData = br.ReadBytes((int)BF.bmpHeader.size);  // reads spritesheet/image data
return new Bitmap(new MemoryStream(bitmapData));           // spritesheet/image data returned and displayed

I have read in multiple places that .png images have 8-byte headers, but I cannot find how they are structured.

PS: For some reason, the program (the Windows Forms app) that writes the .pak files can read the .png images within the .pak files after it creates them, and they can be displayed in a picture box on the form; however, my main game, written in C++, cannot render them in DirectX 9 and gets invalid data. Unless the image written into the .pak file is a .bmp, my game cannot render it because it cannot read the data. Here is how I read the .pak file in my game application:

BITMAPFILEHEADER fileHeader;
BYTE *imageBuffer;
HANDLE hPakFile = CreateFile("Sprites\\Interface.pak", GENERIC_READ, NULL, NULL, OPEN_EXISTING, NULL, NULL); // opens .pak file
//... some data read in between to get the sprite frame coords for each sprite in the spritesheet,
//    which is read correctly, so I know the file isn't corrupt, just that the way I write the image into the .pak file is suspect
SetFilePointer(hPakFile, m_dwBitmapFileStartLoc, NULL, FILE_BEGIN);      // pointer to beginning of spritesheet/image header
ReadFile(hPakFile, (char *)&fileHeader, 14, &bytesRead, NULL);           // reads header data (14 bytes)
imageBuffer = new BYTE[fileHeader.bfSize-14];
ReadFile(hPakFile, imageBuffer, fileHeader.bfSize-14, &bytesRead, NULL); // reads raw image data
D3DXCreateTextureFromFileInMemoryEx(device, imageBuffer, bytesRead,
    D3DX_DEFAULT, D3DX_DEFAULT, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
    D3DX_FILTER_NONE, D3DX_FILTER_NONE, 0, NULL, NULL, &texture);        // sets image data to texture

When I try to read .png file data, D3DXCreateTextureFromFileInMemoryEx returns an invalid-data error, and the image size, height, width, etc. are all wrong. When I read a .bmp image, it works perfectly. So obviously the way I am writing (in the form application that builds my .pak files) or reading (in my game application) the .png image data is incorrect. But I am curious why my form app still outputs the .png image data correctly after reading it from a .pak file it wrote, while my main game app written in C++ doesn't read it properly. I know this is long to read, but I hope someone can guide me to a proper solution. Thanks a lot; I will appreciate any help very much. I tried to be as straight to the point as possible, but if you need more info, please ask.
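What I suspect, for anyone who can confirm: the 8 bytes at the start of a .png are a fixed signature, not a header with a total-size field like BITMAPFILEHEADER.bfSize, so my reader has no way of knowing how many bytes to pull. A sketch of the workaround I'm considering, where the .pak stores an explicit byte length before each image (imageSize is this hypothetical stored field):

// A .png starts with the fixed 8-byte signature 0x89 'P' 'N' 'G' 0x0D 0x0A 0x1A 0x0A;
// unlike a .bmp, there is no file-size field to read out of a header.
static const BYTE PNG_SIG[8] = { 0x89, 'P', 'N', 'G', 0x0D, 0x0A, 0x1A, 0x0A };

// Hypothetical .pak layout: [DWORD imageSize][imageSize bytes of raw file data].
DWORD imageSize = 0, bytesRead = 0;
ReadFile(hPakFile, &imageSize, sizeof(imageSize), &bytesRead, NULL);

BYTE *imageBuffer = new BYTE[imageSize];
ReadFile(hPakFile, imageBuffer, imageSize, &bytesRead, NULL);

// Optional sanity check that the blob really is a .png.
bool isPng = (imageSize >= 8) && (memcmp(imageBuffer, PNG_SIG, 8) == 0);

// D3DX detects the format (BMP or PNG) on its own, as long as the complete,
// untruncated file bytes are handed over.
D3DXCreateTextureFromFileInMemoryEx(device, imageBuffer, imageSize,
    D3DX_DEFAULT, D3DX_DEFAULT, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
    D3DX_FILTER_NONE, D3DX_FILTER_NONE, 0, NULL, NULL, &texture);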
  13. Hello. I want to know how I can remove all the black shading from the sprites in the provided spritesheet. I have alpha blending (the color-keying method) enabled, which renders as shown in the image below; however, I wish to achieve the end result shown at the bottom of the image. How can I approach this? I have done hours of research but am still unsure how to attack the problem. I know this is possible, but I don't know whether this end result can be achieved easily. Thanks to anyone who can point me down the correct path.

I know it is recommended to use an alpha channel for this, but I am working on an old game with thousands of sprites stored as .bmp files, and I just want to convert the game to DirectX 9 (it currently uses DirectDraw7 to achieve this). I am unsure how to do this in DirectX 9 without an alpha channel. It is impossible for me to add an alpha channel to the assets; converting the black to alpha is the way to go given the circumstances. I currently use the color-keying method in alpha blending to remove any pixel with opaque black ( D3DCOLOR_ARGB(255,0,0,0) ). Is there a way to remove the black shades but keep the other color in that pixel, as it shows in the image? In other words: remove any black contained in a single pixel over an intensity range of 1~255 without altering any other color within that pixel, so that the more black is removed, the more transparent the color contained within that pixel becomes.
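The closest approach I've come across so far, though I haven't verified it's what the original DirectDraw path did: after loading, lock the texture once and derive each pixel's alpha from its brightest channel, so pure black becomes fully transparent and dark shades become partially transparent. A rough sketch, assuming an A8R8G8B8 texture created in a lockable pool, with width/height standing in for the texture level's dimensions (e.g. from GetLevelDesc):

// Derive per-pixel alpha from the brightest color channel: black -> transparent,
// darker pixels -> more transparent.
D3DLOCKED_RECT lr;
if (SUCCEEDED(texture->LockRect(0, &lr, NULL, 0)))
{
    for (UINT y = 0; y < height; ++y)
    {
        DWORD *row = (DWORD *)((BYTE *)lr.pBits + y * lr.Pitch);
        for (UINT x = 0; x < width; ++x)
        {
            DWORD c = row[x];
            BYTE r = (BYTE)(c >> 16), g = (BYTE)(c >> 8), b = (BYTE)c;
            BYTE a = max(r, max(g, b)); // brightest channel becomes the alpha
            row[x] = ((DWORD)a << 24) | (c & 0x00FFFFFF);
        }
    }
    texture->UnlockRect(0);
}

// Then render with SRCBLEND = D3DBLEND_ONE and DESTBLEND = D3DBLEND_INVSRCALPHA
// (premultiplied-alpha style), so the surviving color is not darkened a second time.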
  14. I believe the solution I need is color keying, hence all the variations of "colorkey" in the variable names in the code I just posted, lol. Here's a similar thread I found on these forums, although it is based on OpenGL: http://www.gamedev.net/topic/419337-opengl-and-sprites/ Any suggestions or guidance are appreciated while I continue to research color keying. Thanks a lot for all your help.
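For reference, the basic color-key route I'm experimenting with: D3DX can key out a specific color at load time via the ColorKey parameter of D3DXCreateTextureFromFileInMemoryEx, which replaces matching pixels with transparent black:

// Pass opaque black as the ColorKey argument; D3DX replaces matching pixels
// with transparent black in the created texture.
D3DXCreateTextureFromFileInMemoryEx(device, imageBuffer, m_imageSize,
    D3DX_DEFAULT, D3DX_DEFAULT, 1, 0, D3DFMT_A8R8G8B8, D3DPOOL_DEFAULT,
    D3DX_FILTER_NONE, D3DX_FILTER_NONE,
    D3DCOLOR_ARGB(255, 0, 0, 0), // ColorKey: fully opaque black becomes transparent
    NULL, NULL, &m_texture);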