
dealing with driver bugs


5 replies to this topic

#1 Geoffrey   Members   -  Reputation: 454

Posted 13 April 2012 - 10:01 AM

Hi,

I've found a texture corruption issue in my game (DirectX 9.0c) that occurs on just two of the many machines I've tried it on. After some investigation I believe it's caused by a graphics driver bug, and I can work around it by regularly flushing the pipeline (using CreateQuery...). Obviously this has a negative effect on performance, so I think I want to apply the workaround only on the affected cards. I have two questions:

1. How do I identify whether a particular graphics card or family of cards is present? I'm aware that you can get a D3DADAPTER_IDENTIFIER9 structure from IDirect3D9::GetAdapterIdentifier, but how do you use this information?

2. Is what I'm doing remotely sane? Should I be looking for a different kind of fix? I would really appreciate some advice about how to handle this kind of issue.
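
For reference, the flush I'm doing is along these lines (a simplified sketch, not my exact code):

#include <d3d9.h>

// Simplified sketch of the event-query flush.
void FlushPipeline(IDirect3DDevice9* device)
{
    IDirect3DQuery9* query = NULL;
    if (SUCCEEDED(device->CreateQuery(D3DQUERYTYPE_EVENT, &query)))
    {
        query->Issue(D3DISSUE_END);
        // D3DGETDATA_FLUSH submits the command buffer to the driver; S_FALSE means
        // the GPU hasn't reached the event yet, so spin until it has.
        while (query->GetData(NULL, 0, D3DGETDATA_FLUSH) == S_FALSE)
        {
            // busy-wait (could Sleep(0) here to be kinder to the CPU)
        }
        query->Release();
    }
}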

Thanks.

---
The Trouble With Robots - http://www.facebook....285075818204908
The Trouble With Robots - www.digitalchestnut.com/trouble


#2 Adam_42   Crossbones+   -  Reputation: 2179

Posted 13 April 2012 - 05:03 PM

You probably want to compare the following data from the D3DADAPTER_IDENTIFIER9 structure:

- VendorId - Integer representing ATI / NVIDIA / Intel / etc.
- DeviceId / SubSysId / Revision - Integers representing the specific card model. The more of them you check the more specific you're being.
- DriverVersion - Compare this too if the bug isn't present in the latest drivers, so the workaround only applies to driver versions known to be broken.

You might also want to keep the DeviceIdentifier around, and redo your checks if it ever changes. This lets you know when the user has installed a new card or driver.
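
Something along these lines (untested sketch - the specific device id and driver version below are made-up examples, not entries from a real bug database):

#include <d3d9.h>

// Untested sketch: decide whether the current adapter matches a known-bad card/driver combo.
bool IsKnownBuggyDriver(IDirect3D9* d3d)
{
    D3DADAPTER_IDENTIFIER9 id;
    if (FAILED(d3d->GetAdapterIdentifier(D3DADAPTER_DEFAULT, 0, &id)))
        return false;

    const DWORD VENDOR_ATI = 0x1002;     // AMD/ATI PCI vendor id
    if (id.VendorId != VENDOR_ATI)
        return false;

    if (id.DeviceId != 0x9442)           // example device id only
        return false;

    // DriverVersion is a LARGE_INTEGER; only flag versions up to the last one the bug was seen in.
    const LONGLONG lastKnownBadVersion = 0x0008000F000A0437;   // example value only
    if (id.DriverVersion.QuadPart > lastKnownBadVersion)
        return false;

    // id.DeviceIdentifier is a GUID that changes when the card or driver changes -
    // cache it and redo this whole check whenever it differs from the cached value.
    return true;
}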

My suggestion would be:

- On startup detect if you think it's a buggy driver and turn on the workaround.
- Have a documented setting somewhere so that users can override the auto-detection, either to fix a card you didn't test or to disable the workaround once the driver is fixed (see the sketch below).
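
A minimal sketch of what I mean by the override (the file name, keys and function names are just placeholders):

#include <cstdio>
#include <cstring>

// Sketch only - file name, keys and function names are placeholders.
enum WorkaroundMode { Workaround_Auto, Workaround_ForceOn, Workaround_ForceOff };

WorkaroundMode ReadWorkaroundSetting()
{
    // Expects a line like "workaround=auto", "workaround=on" or "workaround=off".
    FILE* f = fopen("settings.txt", "r");
    if (!f)
        return Workaround_Auto;
    char line[64] = {0};
    fgets(line, sizeof(line), f);
    fclose(f);
    if (strstr(line, "workaround=on"))
        return Workaround_ForceOn;
    if (strstr(line, "workaround=off"))
        return Workaround_ForceOff;
    return Workaround_Auto;
}

bool ShouldUseWorkaround(bool autoDetectedBuggyDriver)
{
    WorkaroundMode mode = ReadWorkaroundSetting();
    if (mode == Workaround_ForceOn)
        return true;
    if (mode == Workaround_ForceOff)
        return false;
    return autoDetectedBuggyDriver;   // Workaround_Auto: trust the detection
}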

As for #2:

Make sure the debug runtimes don't report any errors on the hardware it's failing on, and try testing with the reference rasterizer just in case.
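
For the reference rasterizer it's just a different device type at creation time - something like this (sketch; the window handle and present parameters are whatever you already pass):

#include <d3d9.h>

// Sketch: create the device on the reference rasterizer instead of HAL, which takes
// the driver out of the equation (very slow, but useful for ruling the driver in or out).
IDirect3DDevice9* CreateRefDevice(IDirect3D9* d3d, HWND hWnd, D3DPRESENT_PARAMETERS* presentParams)
{
    IDirect3DDevice9* device = NULL;
    HRESULT hr = d3d->CreateDevice(D3DADAPTER_DEFAULT,
                                   D3DDEVTYPE_REF,    // instead of D3DDEVTYPE_HAL
                                   hWnd,
                                   D3DCREATE_SOFTWARE_VERTEXPROCESSING,
                                   presentParams,
                                   &device);
    return SUCCEEDED(hr) ? device : NULL;
}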

It may be worth contacting the appropriate developer support people to report the issue, if it's happening with the latest drivers.

If you're using the 12.2 ATI drivers to do the testing, the corruption may well be fixed in 12.3 - I saw corruption problems in two games with 12.2 myself.

#3 Bacterius   Crossbones+   -  Reputation: 7030

Posted 16 April 2012 - 05:43 AM

Personally I would just leave it as is. If the driver has a bug, that sucks, but it's not your game's responsibility to hack around it. At best, I would detect the card/driver and inform the user that their card model/driver version may result in texture corruption, possibly ask them to update their drivers, then carry on regardless, without any special fix. For all you know there are a hundred different card/driver combinations out there for which the fix won't work or that have a completely different bug. It's just not worth the trouble IMHO.

You should assume correct operation from the driver and API. If this assumption turns out to be invalid on some systems, your game can't do anything about it in general. That you can work around *this* particular issue by doing such and such is good to know, but you won't always be able to patch up every driver or hardware bug. That's the driver developers' job.

Of course, you could include a hidden switch in your program (/safemode or /failsafe perhaps) which would perform a check against every bug/failure the users have reported, possibly querying them from the internet, and attempt to fix them using a slightly different code path, and hope for the best. But that should not be done automatically. And it's still a hack.
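
If you did go that route, the switch itself is only a command-line check, something like (sketch; the flag names are just examples):

#include <cstring>

// Sketch: detect a hidden safe-mode switch on the command line (flag names are examples only).
bool HasSafeModeFlag(int argc, char** argv)
{
    for (int i = 1; i < argc; ++i)
    {
        if (strcmp(argv[i], "/safemode") == 0 || strcmp(argv[i], "/failsafe") == 0)
            return true;
    }
    return false;
}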

"The best comment is a deleted comment."


#4 Geoffrey   Members   -  Reputation: 454

Posted 19 April 2012 - 05:16 PM

Thanks for the advice. It took me a while to get around to implementing this (it's almost like I have more enjoyable tasks on my TODO list than resolving driver issues...), and I've yet to go back and fully test it on the affected systems.

I've gone with Adam_42's suggestion: identifying the offending device, but with a setting that can override this to either always on or always off (in an easy-to-edit .txt file). My check only uses VendorId and DeviceId because I don't have enough data points to be more specific - and the issue does occur on the latest drivers.

Bacterius, your attitude is appealing and it's true that there could be hundreds of variations I don't know about (it's a shame I don't have the resources to test many more systems).  However, I feel that I should do my best to make the game work on as many systems as possible, and since the fix shouldn't cause much harm when it's wrongly applied, I think I want it.
The Trouble With Robots - www.digitalchestnut.com/trouble

#5 Zoner   Members   -  Reputation: 232

Posted 04 May 2012 - 06:17 PM

I would argue flushing the pipeline improves performance as it dramatically reduces latency. The cost is making the rendering less async-y, so the CPU spins its wheels a while waiting on the GPU to finish, but then you get things like the most up-to-date user input before rendering the next frame and whatnot.


We've long since stopped looking at the adapter identifier string except for logging/troubleshooting purposes. In D3D9 land you can get most of the useful information from AMD and NVIDIA format extensions and you can figure out which kind of card you have (more or less) from that:


Here is some code that should explain things a bit more quickly:


///////////////////////////////////////////////////////////////////////
// Radeon defines from Advanced DX9 Capabilities for ATI Radeon Cards
// http://developer.amd.com/gpu_assets/Advanced%20DX9%20Capabilities%20for%20ATI%20Radeon%20Cards_v2.pdf

#define ATI_FOURCC_INTZ ((D3DFORMAT)(MAKEFOURCC('I','N','T','Z')))
#define ATI_FOURCC_NULL ((D3DFORMAT)(MAKEFOURCC('N','U','L','L')))
#define ATI_FOURCC_RESZ ((D3DFORMAT)(MAKEFOURCC('R','E','S','Z')))
#define ATI_FOURCC_DF16 ((D3DFORMAT)(MAKEFOURCC('D','F','1','6')))
#define ATI_FOURCC_DF24 ((D3DFORMAT)(MAKEFOURCC('D','F','2','4')))
#define ATI_FOURCC_ATI1N ((D3DFORMAT)MAKEFOURCC('A','T','I','1'))
#define ATI_FOURCC_ATI2N ((D3DFORMAT)MAKEFOURCC('A','T','I','2'))
#define ATI_ALPHA_TO_COVERAGE_ENABLE (MAKEFOURCC('A','2','M','1'))
#define ATI_ALPHA_TO_COVERAGE_DISABLE (MAKEFOURCC('A','2','M','0'))
#define ATI_FETCH4_ENABLE ((DWORD)MAKEFOURCC('G','E','T','4'))
#define ATI_FETCH4_DISABLE ((DWORD)MAKEFOURCC('G','E','T','1'))

///////////////////////////////////////////////////////////////////////
// Nvidia defines (GPU Programming Guide G80)
// http://developer.download.nvidia.com/GPU_Programming_Guide/GPU_Programming_Guide_G80.pdf

#define NVIDIA_FOURCC_INTZ ((D3DFORMAT)(MAKEFOURCC('I','N','T','Z')))
#define NVIDIA_FOURCC_RAWZ ((D3DFORMAT)(MAKEFOURCC('R','A','W','Z')))
#define NVIDIA_FOURCC_NULL ((D3DFORMAT)(MAKEFOURCC('N','U','L','L')))
#define NVIDIA_DEPTH_BOUND ((D3DFORMAT)(MAKEFOURCC('N','V','D','B')))

void d3d9Render::DetectHardwareSpecificOptions()
{
    D3DDISPLAYMODE mode;
    VERIFYD3D9RESULT(D3D->GetAdapterDisplayMode(D3DADAPTER_DEFAULT, &mode));

    HRESULT HasBC4 = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, 0, D3DRTYPE_TEXTURE, ATI_FOURCC_ATI1N);
    DeviceSupports_BC4 = SUCCEEDED(HasBC4);

    HRESULT HasBC5 = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, 0, D3DRTYPE_TEXTURE, ATI_FOURCC_ATI2N);
    DeviceSupports_BC5 = SUCCEEDED(HasBC5);

    HRESULT HasFetch4 = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, D3DUSAGE_DEPTHSTENCIL, D3DRTYPE_TEXTURE, ATI_FOURCC_DF24);
    DeviceSupports_Fetch4 = SUCCEEDED(HasFetch4);

    RENDER_COMPILE_ASSERT(ATI_FOURCC_NULL == NVIDIA_FOURCC_NULL, ATI_And_NVIDIA_FOURCC_For_NULL_IsIdentical);
    HRESULT HasNULL = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, D3DUSAGE_RENDERTARGET, D3DRTYPE_SURFACE, ATI_FOURCC_NULL);
    DeviceSupports_NullColorBuffer = SUCCEEDED(HasNULL);

    HRESULT HasDepthBounds = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, 0, D3DRTYPE_SURFACE, NVIDIA_DEPTH_BOUND);
    DeviceSupports_DepthBounds = SUCCEEDED(HasDepthBounds);

    // GBX:Zoner - NVDB is the best test for hardware PCF, since the alternatives are to guess and scan the adapter id string
    HRESULT HasHardwarePCF = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, 0, D3DRTYPE_SURFACE, NVIDIA_DEPTH_BOUND);
    DeviceSupports_HardwarePCF = SUCCEEDED(HasHardwarePCF);

    HRESULT FilteringFP16 = D3D->CheckDeviceFormat(D3DADAPTER_DEFAULT, D3DDEVTYPE_HAL, mode.Format, D3DUSAGE_RENDERTARGET | D3DUSAGE_QUERY_FILTER, D3DRTYPE_TEXTURE, D3DFMT_A16B16G16R16F);
    DeviceSupports_FilteringFP16 = SUCCEEDED(FilteringFP16);
}


Basically:

Fetch4 = Radeon (but not some of the older ones)
Depth Bounds = NVIDIA
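
Or as a rough heuristic in code (my own shorthand, not bulletproof):

// Rough vendor guess based on the checks above (sketch, not exhaustive):
bool probablyRadeon = DeviceSupports_Fetch4;       // Fetch4 (DF24) is exposed by Radeon parts
bool probablyNvidia = DeviceSupports_DepthBounds;  // the depth bounds test (NVDB) is NVIDIA-only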

#6 kubera   Members   -  Reputation: 760

Posted 05 May 2012 - 05:14 AM

Maybe you could contact the driver and GPU vendors...
Such an issue should be reported.
Every vendor offers support over the Internet.



