D3D + Intel 945 - Display corruption?

Started by Tape_Worm
8 comments, last by arbitus 15 years, 4 months ago
I got a bug report for Gorgon (which uses SlimDX, and in turn Direct3D 9, November 2008 SDK) claiming a corrupted display on an Intel 945 Express chipset. The reporter says the problem goes away when the device is reset (or, as they put it, when the window is resized or switched to fullscreen).

I've never seen this happen on any of the chipsets I've tried (Intel included, although I'm unable to test a 945). Has anyone else had this issue with that particular chipset and DirectX 9? Could this be a problem in my D3D code, or is it a result of the drivers (they claim to have the latest)? If it is code, any idea what would cause this?

Since the corruption appears on startup, I can't help but think the faulting code would be in the device setup. But my present parameters are pretty bare bones (no multisampling or anything), and if the presentation parameters were at fault, why would a resize/mode switch fix it? Those reuse the same parameters. It's all very confusing.

Also, if someone with an Intel 945 could reproduce this issue for me (by grabbing the Gorgon Examples from my page), that'd be swell.
That looks a lot like uninitialized data. Are you checking that every single SlimDX call is succeeding? Is there a chance that locks are failing (for whatever reason) and data is not being correctly sent to the card?
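For what it's worth, here's a minimal sketch of that kind of per-call checking using the raw D3D9 API (SlimDX reports the same failures through Result/exceptions); the function and buffer here are illustrative only, not Gorgon's actual code:

#include <windows.h>
#include <d3d9.h>
#include <cstring>

// Check the lock result instead of assuming it succeeded; a failed lock
// means no vertex data ever reaches the card.
bool FillBuffer(IDirect3DVertexBuffer9* vb, const void* src, UINT bytes)
{
    void* dest = NULL;
    HRESULT hr = vb->Lock(0, bytes, &dest, D3DLOCK_DISCARD);
    if (FAILED(hr))
    {
        OutputDebugStringA("Vertex buffer lock failed\n");
        return false;
    }
    memcpy(dest, src, bytes);
    vb->Unlock();
    return true;
}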

Also, perhaps the rendertarget is being incorrectly set on device creation, which is causing you to render to the wrong target. Then a reset might create a new buffer and correctly set it as the target.

If you have the ability to apply a fix on their machine, I'd try to set the backbuffer as the rendertarget at startup (GetBackbuffer -> SetRendertarget). I guess it's worth a shot.
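A rough sketch of that startup fix, assuming the raw D3D9 API (SlimDX's Device.GetBackBuffer and Device.SetRenderTarget wrap the same calls); treat it as an illustration rather than a drop-in patch:

#include <windows.h>
#include <d3d9.h>

// Explicitly re-bind the back buffer as render target 0 right after
// device creation.
HRESULT BindBackBufferAsTarget(IDirect3DDevice9* device)
{
    IDirect3DSurface9* backBuffer = NULL;
    HRESULT hr = device->GetBackBuffer(0, 0, D3DBACKBUFFER_TYPE_MONO, &backBuffer);
    if (SUCCEEDED(hr))
    {
        hr = device->SetRenderTarget(0, backBuffer);
        backBuffer->Release();  // the device holds its own reference to the target
    }
    return hr;
}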

That's all I can think of.
Sirob Yes.» - status: Work-O-Rama.
Quote:Original post by sirob
That looks a lot like uninitialized data. Are you checking that every single SlimDX call is succeeding? Is there a chance that locks are failing (for whatever reason) and data is not being correctly sent to the card?

Also, perhaps the rendertarget is being incorrectly set on device creation, which is causing you to render to the wrong target. Then a reset might create a new buffer and correctly set it as the target.

If you have the ability to apply a fix on their machine, I'd try to set the backbuffer as the rendertarget at startup (GetBackbuffer -> SetRendertarget). I guess it's worth a shot.

That's all I can think of.


Thanks sirob, I'll look into those suggestions. The part about setting the initial render target might be what I'm looking for.

[Edit]
Apparently it works with older versions of the library. This is puzzling, because that code hasn't changed much over the lifetime of the library.

[Edited by - Tape_Worm on December 2, 2008 4:31:11 PM]
I would suggest you ask them to check that their drivers are absolutely up to date - I remember having a DirectX 9 app come out garbled and weird-looking on some old, crappy onboard chipset, and the drivers were something like 4 years out of date. Downloading the latest drivers fixed it.
Construct (Free open-source game creator)
I've managed to solve the problem. It seems that if I bind a vertex buffer to the device via SetStreamSource, modify the buffer contents via a Discard lock and then draw, the device keeps using what looks to me like an "older" copy of the buffer (I confirmed this with PIX by reading the VB data). Calling SetStreamSource just before every frame render seems to make the problem go away.

I've never seen this behaviour before; I've always been able to bind the vertex buffer once (and again after any subsequent device resets) and forget about it. This video chipset uses software vertex processing, so that might have something to do with it, but it'd be nice to know whether I should be calling SetStreamSource every frame.
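In case it helps anyone else hitting this, here's roughly what the workaround looks like with the raw D3D9 API (Gorgon itself goes through SlimDX, and the stride/primitive values below are placeholders):

#include <windows.h>
#include <d3d9.h>
#include <cstring>

// Per-frame rendering with the workaround: the vertex buffer object never
// changes, but SetStreamSource is re-issued every frame anyway, so a driver
// that silently "renames" the buffer on a DISCARD lock can't leave the
// device pointing at an old copy.
void RenderFrame(IDirect3DDevice9* device, IDirect3DVertexBuffer9* vb,
                 const void* vertices, UINT sizeInBytes, UINT stride)
{
    void* data = NULL;
    if (SUCCEEDED(vb->Lock(0, sizeInBytes, &data, D3DLOCK_DISCARD)))
    {
        memcpy(data, vertices, sizeInBytes);
        vb->Unlock();
    }

    device->SetStreamSource(0, vb, 0, stride);   // the workaround: rebind every frame

    device->BeginScene();
    device->DrawPrimitive(D3DPT_TRIANGLELIST, 0, (sizeInBytes / stride) / 3);
    device->EndScene();
    device->Present(NULL, NULL, NULL, NULL);
}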
Quote:Original post by Tape_Worm
I've managed to solve the problem. It seems that if I bind a vertex buffer to the device via SetStreamSource, modify the buffer contents via a Discard lock and then draw, the device keeps using what looks to me like an "older" copy of the buffer (I confirmed this with PIX by reading the VB data). Calling SetStreamSource just before every frame render seems to make the problem go away.

I've never seen this behaviour before; I've always been able to bind the vertex buffer once (and again after any subsequent device resets) and forget about it. This video chipset uses software vertex processing, so that might have something to do with it, but it'd be nice to know whether I should be calling SetStreamSource every frame.
When you lock with Discard(), you're telling the driver that you're not going to use any of the vertices in the buffer for subsequent draw calls until you re-fill it.

So, your code may be doing the following:
Frame 0:
   Lock vertices 0-3 as DISCARD
   Draw a quad using vertices 0-3
Frame 1:
   Lock vertices 4-7 as NOOVERWRITE
   Draw a quad using vertices 0-3
   Draw a quad using vertices 4-7
Frame 2:
   Lock vertices 0-3 as DISCARD
   Draw a quad using vertices 0-3
   Draw a quad using vertices 4-7

In that case, the second quad is using undefined vertices. The Debug Runtimes should help against this sort of thing, since they tend to deliberately crap up the entire buffer when you lock it - some drivers will let you "get away with it".
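For reference, the same pattern written out as code (a sketch with the raw D3D9 API; the Vertex struct and counts are only illustrative):

#include <windows.h>
#include <d3d9.h>
#include <cstring>

struct Vertex { float x, y, z, rhw; DWORD color; };

// Append one quad to a dynamic vertex buffer. NOOVERWRITE promises not to
// touch vertices already drawn from this frame; once the buffer is full,
// DISCARD throws the whole thing away and starts over - after which any
// draw call still referencing the old vertices reads undefined data.
void AppendQuad(IDirect3DVertexBuffer9* vb, UINT& nextVertex, UINT capacity,
                const Vertex quad[4])
{
    DWORD flags = D3DLOCK_NOOVERWRITE;
    if (nextVertex + 4 > capacity)
    {
        nextVertex = 0;
        flags = D3DLOCK_DISCARD;
    }

    void* data = NULL;
    if (SUCCEEDED(vb->Lock(nextVertex * sizeof(Vertex), 4 * sizeof(Vertex),
                           &data, flags)))
    {
        memcpy(data, quad, 4 * sizeof(Vertex));
        vb->Unlock();
        nextVertex += 4;
    }
}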

Actually, now that I re-read the post, the above is probably irrelevant if calling SetStreamSource() every frame fixes it.

In theory, SetStreamSource calls should be cached by the driver, so if the stream source doesn't actually change there won't be a performance problem. What might be happening is that the driver is buggy and has problems with buffer renaming: when you lock with DISCARD, some drivers will actually allocate a completely new internal chunk of memory and give you that (which is why all the other vertices in the buffer become invalid). It's possible that the driver is doing something wrong here and not updating its pointer to the new internal buffer when you lock with DISCARD.

As far as I know, you shouldn't need to bind the buffer more than once, so long as it's the same buffer (You haven't destroyed it and created a new one).
Quote:Original post by Evil Steve
Actually, now that I re-read the post, the above is probably irrelevant if calling SetStreamSource() every frame fixes it.

In theory, SetStreamSource calls should be cached by the driver, so if the stream source doesn't actually change there won't be a performance problem. What might be happening is that the driver is buggy and has problems with buffer renaming: when you lock with DISCARD, some drivers will actually allocate a completely new internal chunk of memory and give you that (which is why all the other vertices in the buffer become invalid). It's possible that the driver is doing something wrong here and not updating its pointer to the new internal buffer when you lock with DISCARD.

As far as I know, you shouldn't need to bind the buffer more than once, so long as it's the same buffer (You haven't destroyed it and created a new one).

Yeah, I thought it was unusual to have to force it to call SetStreamSource all the time. The vertex buffer used by Gorgon never changes; it only ever needs the one, so I figured calling SetStreamSource more than once was a waste. Seeing as this error occurs on an Intel chipset, I'm leaning towards "driver is broke".
Quote:Original post by Tape_Worm
Yeah, I thought it was unusual to have to force it to call SetStreamSource all the time. The vertex buffer used by Gorgon never changes; it only ever needs the one, so I figured calling SetStreamSource more than once was a waste. Seeing as this error occurs on an Intel chipset, I'm leaning towards "driver is broke".
Yup, Intel drivers are generally shite.

You could try running with the debug runtimes and the reference rasterizer. If it works fine using both of them, you can almost definitely write it off as a driver bug.
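If it helps, the reference rasterizer test is just a one-line change at device creation (sketched here with the raw D3D9 API; 'hwnd' and 'pp' stand in for whatever window handle and present parameters the app already uses):

#include <windows.h>
#include <d3d9.h>

// Create the device on the reference rasterizer instead of the HAL. Combine
// this with the debug runtime (enabled in the DirectX Control Panel); if the
// corruption disappears, the hardware driver is the likely culprit.
IDirect3DDevice9* CreateRefDevice(IDirect3D9* d3d, HWND hwnd,
                                  D3DPRESENT_PARAMETERS* pp)
{
    IDirect3DDevice9* device = NULL;
    d3d->CreateDevice(D3DADAPTER_DEFAULT,
                      D3DDEVTYPE_REF,                       // reference rasterizer
                      hwnd,
                      D3DCREATE_SOFTWARE_VERTEXPROCESSING,  // REF is software-only anyway
                      pp,
                      &device);
    return device;
}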

Unfortunately, this is one of the annoying cases where developers have to compensate for crappy drivers and assume the worst case - i.e. call SetStreamSource every frame (although it shouldn't make a noticeable performance difference at all).
Quote:Original post by Evil Steve
Quote:Original post by Tape_Worm
Yeah, I thought it was unusual to have to force it to call SetStreamSource all the time. The vertex buffer used by Gorgon never changes; it only ever needs the one, so I figured calling SetStreamSource more than once was a waste. Seeing as this error occurs on an Intel chipset, I'm leaning towards "driver is broke".
Yup, Intel drivers are generally shite.

You could try running with the debug runtimes and the reference rasterizer. If it works fine using both of them, you can almost definitely write it off as a driver bug.

Unfortunately, this is one of the annoying cases where developers have to compensate for crappy drivers and assume the worst case - i.e. call SetStreamSource every frame (although it shouldn't make a noticeable performance difference at all).


Yeah, I just tried using the reference device + debug runtimes. Everything worked like a charm (calling SetStreamSource only once). So it's likely a buggy Intel driver. I know, I'm as shocked as the rest of the world... [rolleyes]
I stopped supporting Intel integrated video chipsets a long time ago and convinced my company to do likewise. They stretch the truth about their specs and make buggy drivers.

