About Shenjoku

  1. Did you ever get around to creating the server+client example? I've been banging my head against a wall for days trying to set up a simple proxy application that just accepts a connection, does nothing with the data, and sends it off to wherever it needs to go. It seems like this could be just what I've been looking for, if only it were a bit more complete.
  2. Shenjoku

    Fringes around textures

    Well that seems to have done it! I changed the texture address mode to clamp, since wrap is the default, and added code to copy the last row/column as suggested and the problem is gone. Thanks a lot for the help. This was a very annoying bug.
  3. Shenjoku

    Fringes around textures

    Interesting. I didn't know that. I'll give that a shot and see what happens.

    I know. It's just a test texture that copies the size of another image that the bug was reported with.

    I'm using DirectX 9. I guess I should have put that in the original post; I'll update it now. Thanks for the tip, though. The default texture addressing mode is wrap, so we were wrapping. Changing it to clamp fixed part of the problem.
  4. I'm getting some strange fringes around some textures at specific sizes that I cannot figure out how to get rid of. I'm rendering the primitives to an off-screen surface, then rendering the result of that texture on another primitive on the screen. The off-screen rendering supports MSAA, but the problem persists with or without it. It doesn't seem to affect all textures either: if the texture is larger than 512 in either dimension, it doesn't have the problem.

    You can see the fringes around the outside of the two textures in this screenshot. One is a solid white 311x100 JPEG and the other is a 152x95 PNG. The image format doesn't seem to matter, just the size. Whether or not the image has an alpha channel doesn't affect it either.

    [attachment=15272:RenderingError.png]

    I've tried debugging the problem in PIX and noticed that there is a bunch of garbage data around the textures, but even if I zero out the texture memory it doesn't get rid of it for some of them, so I'm not sure what's going on there. You can see in the screenshot below that there's a bunch of white where there should be nothing but empty space to the right of the texture, since it's only using 311x100 of the available space.

    [attachment=15274:PixTextureData.png]

    P.S. This is all with DirectX 9, by the way.
  5. Interesting, I didn't think you could draw a single-pixel-thick quad. I tried it and it seems to be working perfectly. I had an idea last night that I'd like to try first before committing to that option: is there an easy way to detect what kind of card is being used? That way I can modify the code to do one thing for NVIDIA cards and another for AMD or whatever else is different. EDIT: Never mind that last part. I just tried on a computer with an integrated Intel GPU, and the only thing that looks correct is creating a quad, so I think I'm going to go with that solution. Thanks a lot Hodgman :)
  6. I was hoping for a solution that didn't involve shaders, since the engine doesn't have any shader support at all, and I would have to figure out how to do it for OpenGL as well.

    I guess it's worth a shot though. I'll try to throw a simple test together and see how it goes.
  7. I'm having some strange issues rendering single-pixel-thick lines with DirectX 9 and MSAA, but only on AMD cards. I'm already adjusting the coordinates to convert them from texels to pixels by subtracting 0.5, which works perfectly fine and fixes rendering bugs on NVIDIA cards, but when doing the same on a system with an AMD card, the line ends up rendering partially transparent because it's being anti-aliased when it shouldn't be.

    Here's some sample code to show you how the coordinates are calculated and how the line is being rendered:

    // v is the locked vertex data, which contains two points and uses an FVF of D3DFVF_XYZ | D3DFVF_DIFFUSE.
    //
    // x,y,w,h is the position and size of the line. For one-thickness lines, either w or h is zero
    // depending on whether it's horizontal or vertical.
    //
    // kVertexModifier is used to convert from texel to pixel coordinates based on the documentation found here:
    // http://msdn.microsoft.com/en-us/library/windows/desktop/bb219690%28v=vs.85%29.aspx
    const float kVertexModifier = -0.5f;

    if (h == 0.0f)
    {
        x += kVertexModifier;
        w -= kVertexModifier;
    }
    if (w == 0.0f)
    {
        y += kVertexModifier;
        h -= kVertexModifier;
    }

    v[0].mSet(fColor, x, y);
    v[1].mSet(fColor, x + w, y + h);

    // Rendering is done like the following:
    device->SetFVF(fFVF);
    device->SetStreamSource(0, fVertexBuffer, 0, fVertexSize);
    device->DrawPrimitive(D3DPT_LINELIST, 0, 1);

    I took some screenshots from a test project to show what the problem looks like. Pay attention to the border around the blue rectangle (you'll most likely have to view the full-size image and zoom in to be able to see it):

    This is how it looks on a machine with an NVIDIA card.
    [attachment=14874:one_thickness_correct.png]

    This is how it looks on a machine with an AMD card. Notice the border is very transparent compared to the other one.
    [attachment=14873:one_thickness_broken.png]

    I'm banging my head against a brick wall trying to figure out this problem, mainly because any fix I find that makes it work on AMD cards doesn't work on NVIDIA cards, and vice versa. If anyone has any information or leads on how to fix this, it would be greatly appreciated.
  8. I need to implement clipping of primitives at any rotation, but I'm unsure of how to do it. Currently I'm using IDirect3DDevice9::SetScissorRect to clip things, but that obviously won't work at an arbitrary rotation. So my question to all of you is: how do I clip using a rotated rectangle? I've been searching for a while and found some people suggesting IDirect3DDevice9::SetClipPlane, but I'm not 100% sure how to use that, or whether it would be the best thing to use. The important thing to note is that the scene is not 3D at all; this is for a completely flat 2D application.

    EDIT: Feel free to close this thread. I spent the whole day getting clipping planes to work, which worked fine until I realized that clipping needs to handle weird, funky shapes, so it got a LOT more complicated really fast. I did, however, find a free third-party library that handles clipping all sorts of things, which I'm going to try when I have free time in the future. The library is called Clipper.
  9. Shenjoku

    DirectX 9 Device Creation Failing

    Sure thing. I'm just going to post the parameters being used, though, since a lot of the initialization stuff is engine code I really shouldn't be posting. Taken directly from the watch window in VC9:

    BackBufferWidth            3862
    BackBufferHeight           1222
    BackBufferFormat           D3DFMT_X8R8G8B8
    BackBufferCount            1
    MultiSampleType            D3DMULTISAMPLE_NONMASKABLE
    MultiSampleQuality         2
    SwapEffect                 D3DSWAPEFFECT_DISCARD
    hDeviceWindow              0x000201c2 {unused=0 }
    Windowed                   1
    EnableAutoDepthStencil     1
    AutoDepthStencilFormat     D3DFMT_D16
    Flags                      0
    FullScreen_RefreshRateInHz 0
    PresentationInterval       2147483648

    Though these are just the values I'm getting on my system. I can't really know what they are getting, but I don't see any reason it would be different. I want to remind you also that this is only happening on their computers. We have plenty of other computers running Windows XP and Windows 7, and none of them have issues. So it has to be some hardware limitation, but I don't really know where to look for something like that, since I'm not really a hardware guru or anything.

    Also of note: I thought it might be the AutoDepthStencilFormat, but I already added error checking before setting the value that queries the device via CheckDeviceFormat and CheckDepthStencilMatch to make damn sure it's supported, and that isn't failing on their system. Unless my error checking function is broken somehow. I copied it directly from MSDN, though:

    iTruth mIsDepthFormatSupported(D3DFORMAT adapterFormat, D3DFORMAT renderTargetFormat, D3DFORMAT depthStencilFormat)
    {
        iTruth result = kFalse;

        if (f_pD3D == NULL)
        {
            f_pD3D = Direct3DCreate9(D3D_SDK_VERSION);
        }

        if (f_pD3D != NULL)
        {
            // Verify that the depth format exists.
            HRESULT hr = f_pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat,
                                                   D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, depthStencilFormat);
            if (SUCCEEDED(hr))
            {
                // Verify that the depth format is compatible.
                hr = f_pD3D->CheckDepthStencilMatch(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat,
                                                    renderTargetFormat, depthStencilFormat);
                result = SUCCEEDED(hr);
            }
        }

        return result;
    }

    P.S. In case you're wondering about the massive backbuffer size, it's because I have rather large dual monitors and our engine creates the buffer to fill the entire space, in order to avoid having to recreate it every time the window gets resized.
  10. A client is having some issues trying to run our app on a Windows XP machine, wherein the device creation process is repeatedly failing. I don't have access to the computer yet, so I know nothing about the hardware specs other than that they said, "The graphics card is rather new." I'm a little skeptical, but that's a different problem. They sent me a log, and the error messages they're getting look like this:

    Direct3D9: (INFO) :Direct3D9 Debug Runtime selected.
    Direct3D9: (WARN) :driver set D3DDEVCAPS_TEXTURENONLOCALVIDMEM w/o DDCAPS2_NONLOCALVIDMEM:turning off D3DDEVCAPS_TEXTURENONLOCALVIDMEM
    D3D9 Helper: Enhanced D3DDebugging disabled; Application was not compiled with D3D_DEBUG_INFO
    Direct3D9: (WARN) :driver set D3DDEVCAPS_TEXTURENONLOCALVIDMEM w/o DDCAPS2_NONLOCALVIDMEM:turning off D3DDEVCAPS_TEXTURENONLOCALVIDMEM
    Direct3D9: (ERROR) :Device cannot perform hardware processing.
    ValidateCreateDevice failed.
    Direct3D9: (WARN) :driver set D3DDEVCAPS_TEXTURENONLOCALVIDMEM w/o DDCAPS2_NONLOCALVIDMEM:turning off D3DDEVCAPS_TEXTURENONLOCALVIDMEM
    Direct3D9: (WARN) :driver set D3DDEVCAPS_TEXTURENONLOCALVIDMEM w/o DDCAPS2_NONLOCALVIDMEM:turning off D3DDEVCAPS_TEXTURENONLOCALVIDMEM
    Direct3D9: (INFO) :======================= Hal SWVP device selected
    Direct3D9: (INFO) :HalDevice Driver Style 8
    Direct3D9: (ERROR) :Failed to create driver surface
    Direct3D9: (ERROR) :Failed to initialize primary swapchain
    Direct3D9: (ERROR) :Failed to initialize Framework Device.
    CreateDevice Failed.

    Now, I've been looking into the problem for a while, and I can't seem to find any solid conclusions as to what is causing it. The device creation is indeed first trying to create with D3DCREATE_HARDWARE_VERTEXPROCESSING, and then if that fails it's changing it to D3DCREATE_SOFTWARE_VERTEXPROCESSING and trying again, or at least it should be. So can anyone shine some light on this problem? Is this purely a hardware thing? I'll know more once I get a chance to check out the machine, but for now I'm just trying to draw some conclusions so I know what to test.
  11. Oh that's cool! That would be helpful if I needed them to be exactly the same, which I do for now, but eventually they are going to have to be calculated separately and will have different values potentially.
  12. You are correct. After spending a bit of time looking into the resulting texture data, it is indeed being converted to an equivalent of D3DFMT_A8R8G8B8. So this thread can be closed. I just need to treat the image as a 32-bit ARGB image and it's fine.
  13. I'm working on an application that needs to support using P8 textures, but the way it needs to use them is by converting them into a 32-bit ARGB image. To do this I need to somehow get the palette of the texture after it has been loaded, but I can't find anything after looking through the DirectX docs. So, does anyone know how to get the palette of a texture? That's the last piece of the puzzle I'm missing.
  14. Well, that did it; now it's working perfectly. Thanks a lot for your help, Nik02. In case anyone else needs help, here's some pseudocode for how to get this working:

    // The vertex struct to use has two sets of texture coordinates,
    // one for the main texture and one for the mask.
    struct tDXVertexTextureMask
    {
        FLOAT x, y, z;
        D3DCOLOR color;
        FLOAT u1, v1;
        FLOAT u2, v2;
    };

    // Define the FVF to use.
    #define D3DFVF_TEXTURE_MASK (D3DFVF_XYZ | D3DFVF_DIFFUSE | D3DFVF_TEX2)

    // Create your vertex buffer like usual using the above struct and FVF. When applying the texture
    // coordinates, make sure to set u1 = u2 and v1 = v2, so the textures will line up exactly.

    // Time to render! Set up the FVF and texture states.
    device->SetFVF(D3DFVF_TEXTURE_MASK);
    device->SetTexture(0, fTexture);
    device->SetTexture(1, fMaskTexture);

    device->SetTextureStageState(0, D3DTSS_COLOROP, D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_COLORARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_COLORARG2, D3DTA_DIFFUSE);
    device->SetTextureStageState(0, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
    device->SetTextureStageState(0, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(0, D3DTSS_ALPHAARG2, D3DTA_DIFFUSE);

    // Use the color from the previous texture, and blend the alpha from the mask.
    device->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_SELECTARG1);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
    device->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_ALPHAARG2, D3DTA_CURRENT);

    // Render your primitive and voila! Masked texture!
  15. Well, I sort of figured out what the problem is, but I don't know how to fix it. After tinkering a bit, I noticed that for some reason it's not picking up the alpha in the mask at all. If I set the states to:

    // Combine the colors and use alpha from the mask only.
    device->SetTextureStageState(1, D3DTSS_COLOROP, D3DTOP_MODULATE);
    device->SetTextureStageState(1, D3DTSS_COLORARG1, D3DTA_CURRENT);
    device->SetTextureStageState(1, D3DTSS_COLORARG2, D3DTA_TEXTURE);
    device->SetTextureStageState(1, D3DTSS_ALPHAOP, D3DTOP_SELECTARG2);
    device->SetTextureStageState(1, D3DTSS_ALPHAARG1, D3DTA_CURRENT);
    device->SetTextureStageState(1, D3DTSS_ALPHAARG2, D3DTA_TEXTURE);

    the texture gets tinted green as you would expect, except that the areas that are supposed to have zero alpha are also tinted green. Those areas should remain the same, shouldn't they? The only logical explanation is that the alpha in the mask isn't getting applied, or is getting ignored for some reason.

    EDIT: After even more investigation, I've narrowed down what the problem is. For some reason it's only using a single pixel from the mask, the top-left pixel at (0, 0), to apply to the entire texture. Anyone know why it would be doing this?

    EDIT2: Think I figured it out! I took a lunch break to clear my head and realized something extremely important was missing: a second set of texture coordinates in the vertex data. I haven't tested it yet, but I'm pretty sure this is why it isn't working. I'll post again once I confirm.