

Member Since 28 Aug 2006
Offline Last Active Sep 13 2013 06:22 PM

Posts I've Made

In Topic: Fringes around textures

29 April 2013 - 03:39 PM

Well that seems to have done it! I changed the texture address mode to clamp, since wrap is the default, and added code to copy the last row/column as suggested and the problem is gone. Thanks a lot for the help. This was a very annoying bug.
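For reference, the row/column copy can be sketched like this. This is a minimal sketch assuming a tightly packed 32-bit texel buffer with the valid image in the top-left corner of a larger texture; the function and parameter names here are illustrative, not from the engine:

```cpp
#include <cstdint>
#include <vector>

// Extend the last row/column of an image into the unused padding of a
// larger texture, so that linear filtering at the edges samples the same
// color instead of uninitialized (or transparent) padding texels.
void ExtendImageEdges(std::vector<uint32_t>& texels,
                      int texW, int texH,   // allocated texture size
                      int imgW, int imgH)   // valid image size (top-left)
{
    // Copy the last valid column one texel to the right.
    if (imgW < texW)
        for (int y = 0; y < imgH; ++y)
            texels[y * texW + imgW] = texels[y * texW + (imgW - 1)];

    // Copy the last valid row one texel down (including the new corner).
    if (imgH < texH)
        for (int x = 0; x <= imgW && x < texW; ++x)
            texels[imgH * texW + x] = texels[(imgH - 1) * texW + x];
}
```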

In Topic: Fringes around textures

29 April 2013 - 01:08 PM

Linear filtering means that, for the border pixels, it's probably taking a sample from just outside the edge of the texture. I think when you create your texture, you need to "continue" the border into the "empty" areas of the texture. Or else, when you're drawing your texture, push your texture coordinates just inside the valid part of the texture (of course, then when you draw at smaller sizes, the artifact may creep in again).

Interesting. I didn't know that. I'll give that a shot and see what happens.


(why are you using a 311x100 solid white texture? A 1x1 white texture would serve just as well).

I know. It's just a test texture that copies the size of another image that the bug was reported with.

Are you sure AddressU and AddressV in D3D11_SAMPLER_DESC for your sampler are set to D3D11_TEXTURE_ADDRESS_CLAMP? I had a border issue with a skybox but when I set it to clamp the border went away.

I'm using DirectX 9. I guess I should have put that in the original post. I'll update it now. Thanks for the tip though. The default for texture addressing mode is wrap so we were wrapping. Changing it to clamp fixed part of the problem.

In Topic: DirectX 9 MSAA single-pixel thick line rendering problems

16 April 2013 - 12:29 PM

It's inefficient, but you could also try replacing every line with a quad (made of 2 triangles) placed/shaped exactly as you want.
The triangle-covers-pixel/sample tests are very well defined, so it should work the same on every device/driver.

Interesting; I didn't think you could draw a single-pixel-thick quad. I tried it and it seems to be working perfectly.

I had an idea last night that I'd like to try first before committing to that option: is there an easy way to detect what kind of card is being used? That way I can modify the code to do one thing for NVIDIA cards and another for AMD or any other vendor that behaves differently.

EDIT: Never mind that last part. I just tried on a computer with integrated Intel graphics and the only thing that looks correct is creating a quad, so I think I'm going to go with that solution. Thanks a lot Hodgman :)
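The line-to-quad conversion mentioned above can be sketched as follows. This is an illustrative sketch, not the engine's code: it computes the four screen-space corners of a quad (two triangles) covering a segment, by offsetting each endpoint half the thickness along the segment's perpendicular.

```cpp
#include <cmath>

struct Vec2 { float x, y; };

// Build the four corners of a quad (triangles 0-1-2 and 2-1-3) covering
// a line segment from a to b with the given thickness in pixels. The
// offset is half the thickness along the segment's unit perpendicular.
void LineToQuad(Vec2 a, Vec2 b, float thickness, Vec2 out[4])
{
    float dx = b.x - a.x, dy = b.y - a.y;
    float len = std::sqrt(dx * dx + dy * dy);
    // Unit perpendicular (-dy, dx), scaled to half the thickness.
    float nx = -dy / len * (thickness * 0.5f);
    float ny =  dx / len * (thickness * 0.5f);
    out[0] = { a.x + nx, a.y + ny };
    out[1] = { a.x - nx, a.y - ny };
    out[2] = { b.x + nx, b.y + ny };
    out[3] = { b.x - nx, b.y - ny };
}
```

Because triangle rasterization coverage rules are well specified, the quad fills exactly the intended pixels on every device, unlike line primitives under MSAA.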

In Topic: DirectX 9 MSAA single-pixel thick line rendering problems

15 April 2013 - 09:18 PM

I was hoping for a solution that didn't involve shaders since the engine doesn't have any shader support at all, and I would have to figure out how to do it for OpenGL as well.


I guess it's worth a shot though. I'll try to throw a simple test together and see how it goes.

In Topic: DirectX 9 Device Creation Failing

11 August 2012 - 06:44 PM

Sure thing. I'm just going to post the parameters being used though since a lot of the initialization stuff is engine code I really shouldn't be posting. Taken directly from the watch window in VC9.

  BackBufferWidth 3862
  BackBufferHeight 1222
  BackBufferFormat D3DFMT_X8R8G8B8
  BackBufferCount 1
  MultiSampleQuality 2
  hDeviceWindow 0x000201c2 {unused=0 }
  Windowed 1
  EnableAutoDepthStencil 1
  AutoDepthStencilFormat D3DFMT_D16
  Flags 0
  FullScreen_RefreshRateInHz 0
  PresentationInterval 2147483648 (0x80000000 = D3DPRESENT_INTERVAL_IMMEDIATE)

Though these are just the values I'm getting on my system. I can't know for certain what values they're getting, but I don't see any reason they would be different.

I also want to remind you that this is only happening on their computers. We have plenty of other computers running Windows XP and Windows 7, and none of them have issues. So it has to be some hardware limitation, but I don't really know where to look for something like that since I'm not a hardware guru or anything.

Also of note: I thought it might be the AutoDepthStencilFormat, but I already added error checking before setting the value that queries the device via CheckDeviceFormat and CheckDepthStencilMatch to make damn sure it's supported, and that isn't failing on their system. Unless my error-checking function is broken somehow. I copied it directly from MSDN though:

iTruth mIsDepthFormatSupported(D3DFORMAT adapterFormat, D3DFORMAT renderTargetFormat, D3DFORMAT depthStencilFormat)
{
    iTruth result = kFalse;

    if (f_pD3D == NULL)
        f_pD3D = Direct3DCreate9(D3D_SDK_VERSION);

    if (f_pD3D != NULL)
    {
        // Verify that the depth format exists.
        HRESULT hr = f_pD3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat, D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_SURFACE, depthStencilFormat);

        if (SUCCEEDED(hr))
        {
            // Verify that the depth format is compatible with the render target.
            hr = f_pD3D->CheckDepthStencilMatch(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, adapterFormat, renderTargetFormat, depthStencilFormat);
        }

        result = SUCCEEDED(hr);
    }

    return result;
}

P.S. In case you're wondering about the massive backbuffer size, it's because I have rather large dual monitors and our engine is creating the buffer to fill the entire space in order to avoid having to recreate it every time the window gets resized.