Floating point precision issues on AMD Cards

6 comments, last by Lewa 7 years, 7 months ago

Note: I changed the title of the topic, as we were able to figure out what the potential cause is (the issue itself is still not fixed, though.)

(Previously it was called "Texture judder on AMD Cards")

A little backstory before I start:

Up to this point I was using a very old PC (by today's standards) to work on my project.

It had an Nvidia GeForce 9600GT (512MB GDDR3) in it.

Now I've built a new PC rig with an AMD GPU (Radeon RX 480), and here is where the problem starts.

I experience a very weird texture-judder problem on my new rig which wasn't really present on my old GeForce 9600GT (and neither on my laptop with an Intel HD 4000).

Note: I'm using DX9 (I'm limited to that, as that's what the engine is using.)

I made a short video showing this issue (play it in fullscreen to see it properly):

[media]https: [/media]

Note that the white wall is literally one cube (no subdivisions), while each "tile" is one repetition of the texture (the wall uses texture repeat).

The floor on the other hand is subdivided.

I know that issues like this can occur if you use very high UV coordinates (lots of repeats) and if the vertices of a polygon are very far apart (floating-point precision issues). This could also happen on my old GeForce GPU, but I really had to push it to achieve this effect (scaling a polygon by nearly a million and using A LOT of repeats.)
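As a quick standalone illustration of why the magnitude of the UV values matters (this is just a sketch, not code from the engine): the gap between adjacent 32-bit floats grows with the value, so a UV coordinate in the hundreds or millions has far less sub-texel resolution than one in [0, 1):

```python
import numpy as np

# Spacing (ULP) between adjacent float32 values at different magnitudes.
# The bigger the UV value, the coarser the grid it can land on.
for v in [1.0, 375.0, 1e6]:
    print(v, float(np.spacing(np.float32(v))))

# Near 1.0 the step is ~1.2e-7 (plenty of sub-texel resolution),
# near 375 it is ~3.1e-5, and near 1e6 it is 0.0625 -- a sixteenth
# of a whole texture repeat, which is easily visible as judder.
```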

Other than that, my GeForce didn't have these issues at all.

On my AMD rig this issue crops up everywhere. I can even notice it with smaller polygons and fewer repeats (around 100-200 repeats on a smaller cube is enough to still notice minor judder.)

I tested it on other rigs (one also had an RX 480 and another had a GPU from the Radeon 7xxx series), and both had the exact same issue (both were AMD cards.)

What's weird is that the judder is completely gone when facing the surface head-on. (Maybe AMD is having issues with texture filtering? I'm using mipmapping with 16x anisotropic filtering.)

I tried subdividing the geometry more, but that only reduces the judder (it's still noticeable and isn't eliminated completely). It seems like I'd have to create a very high-poly mesh to get rid of it with somewhat "acceptable" results. (I literally subdivided the floor like a madman and it was still noticeable.)

Is this a known GPU or a Driver problem? (Are there any known AMD issues in that regard?)

Would appreciate any kind of help. :)

/Edit:

Just noticed that this only really happens (at least on the floor) if you place the floor mesh and the camera away from the origin (0/0/0).

But still, the old GeForce doesn't have nearly the same issues as the AMD card. (The precision problems start to appear waaayyyyyy sooner than on the Nvidia GeForce.)

/Edit2:

Did a bit of digging and debugging. It seems that the UV coordinates (by the time they arrive at the pixel-shader stage) already have wrong/imprecise values, which then get fed into the texture function. So the texture sampling is not the cause; it's the attribute interpolation between the three vertices of the polygon that produces these values.

No idea how to fix this though.


If I'm not mistaken, DX9 has less strict requirements for floating point precision. It's likely that AMD does what the spec requires it to do, while NV just enables the same precision as you have for later DX versions.

Sorry, couldn't find data to back this up, it's been a while since I did DX code.

shaken, not stirred

I see...

If that's the case, is it somehow possible to force the AMD card to use 32-bit floating-point calculations on the GPU?

I made further tests, and it turns out that it isn't a texture-mapping issue but a precision issue in the shader calculations.

Values which are passed from the vertex to the pixel shader (and should be interpolated) are interpolated with precision loss, which causes this judder problem.

This is a huge problem as this creates really nasty artifacts on AMD cards. (at least on my project.)

How big are your UV values? How big is your position?

If your UVs are just tiling/repeating then try sending from vertex shader fmod( uv, 1.0 ); or frac( uv ); (you'll need to duplicate a few vertices because when it wraps from 1 to 0 you'll get mirrored sampling)

Try also scaling them down in the vertex shader (e.g. send uv * 0.002) then scale them back up (e.g. receive uv * 500)
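To make the frac() suggestion concrete, here's a small standalone simulation. It assumes the interpolator behaves like fp16 (the actual hardware precision is unknown; `lerp_fp16` is just a stand-in), and compares interpolating raw 0..375 UVs against per-tile UVs kept in [0, 1):

```python
import numpy as np

def lerp_fp16(a, b, t):
    """Linear interpolation with every intermediate rounded to half
    precision -- a stand-in for a hypothetical low-precision interpolator."""
    a16, b16, t16 = np.float16(a), np.float16(b), np.float16(t)
    return float(np.float16(a16 + t16 * (b16 - a16)))

t = 0.713  # some point inside the polygon

# Raw UVs spanning 375 repeats: fp16 near 267 only resolves steps of 0.25,
# so the sampled position lands an eighth of a whole tile away from the
# exact answer (267.375).
raw = lerp_fp16(0.0, 375.0, t)   # 267.25

# frac()'d UVs (with vertices duplicated per tile) stay in [0, 1),
# where the same low-precision math still resolves ~1/2048 of a tile.
tiled = lerp_fp16(0.0, 1.0, t)   # ~0.7129

print(abs(raw - 267.375), abs(tiled - t))
```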

Even in DX11 you don't get much precision either with AMD cards.

How are you passing the values from VS to PS, exactly?

How big are your UV values? How big is your position?

Position is around 3500 and the uv-coordinates range from 0-375 on the x-axis (width) and 0-1 on the y-axis.

If your UVs are just tiling/repeating then try sending from vertex shader fmod( uv, 1.0 ); or frac( uv ); (you'll need to duplicate a few vertices because when it wraps from 1 to 0 you'll get mirrored sampling)

I tried that too. It works (more or less). The problem is that I'd need to subdivide the geometry A LOT to reduce this effect, which would immensely increase the polycount.

I only use simple "Blocks" as the level geometry.

As an example: I already start seeing those precision errors on a block (placed at the origin of the coordinate system, 0/0/0) at a scaling of x = 244 (the UV coordinate is 122, which means 122 repeats).

Example:

Here i see those errors starting to appear:

[image: amd precision 1.png]

If I scale this block up, the precision issues seem to be reduced (although they don't go away completely). Note: you may not see those errors in the screenshots, but during camera movement they can be spotted rather easily.

[image: amd precision 2.png]

Scaling the block up or increasing the distance/location at which this block is placed decreases precision.

Again, my old Nvidia Geforce doesn't seem to have those issues.

Try also scaling them down in the vertex shader (e.g. send uv * 0.002) then scale them back (e.g. receive uv * 500)

Didn't work (I think it made it even worse.)

Even in DX11 you don't get much precision either with AMD cards.

Is there a particular reason as to why this is the case?

How are you passing the values from VS to PS, exactly?

This is the shader I used in the screenshots above (which also exhibits the same precision issues).

Vertex Shader:


struct VS_INPUT { // Input to VS
    float4 Position : POSITION;
    float4 Color    : COLOR0;
    float2 Texcoord : TEXCOORD0;
};


struct VS_OUTPUT { // Output to PS from VS
    float4 Position : SV_POSITION;
    float4 Color    : COLOR0;
    float2 Texcoord : TEXCOORD0;
};



VS_OUTPUT main(VS_INPUT IN)
{
    VS_OUTPUT OUT;
    OUT.Position = mul(gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION], IN.Position);
    OUT.Color = IN.Color;
    OUT.Texcoord = IN.Texcoord;
    return OUT;
}

Fragment Shader:


struct PS_INPUT { // Input from VS to PS
    float4 Color    : COLOR0;
    float2 Texcoord : TEXCOORD0;
};

struct PS_OUTPUT { // Output from PS
    float4 Color0 : SV_TARGET0;
};


PS_OUTPUT main(PS_INPUT IN)
{       
    PS_OUTPUT OUT;
    OUT.Color0 = float4(frac(IN.Texcoord.x),0.0,0.0,1.0);
    
    return OUT;
}

So just to go over the basic points:

- You are confident the matrix gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] is updated every frame correctly (no silly error in the camera class)?

- You are confident this is a UV coordinate issue and not, for example, vertices moving or a depth-buffer problem?

- You don't have any hardware extras turned on in the Catalyst utility, particularly relating to anti-aliasing?

- You are confident this has nothing to do with the reflection mapping you seem to be doing?

Edit: and have you tried turning down vsync and matching the frame rate in catalyst if it lets you?

You are confident the matrix gm_Matrices[MATRIX_WORLD_VIEW_PROJECTION] is updated every frame correctly (no silly error in the camera class)?

Yes, the matrix is updated every frame.

I just spotted this "bug" after upgrading from my Nvidia GPU to the new Radeon RX 480.

(Intel GPUs like the IntelHD4000 also don't have this problem.)

- You confident this is a UV coordinate issue and not for example vertices moving or a depth buffer problem

I rendered a wireframe over the block mesh to see if the vertices also start moving when the values in the fragment shader start to lose precision.

The vertices don't seem to have this issue. (At least they aren't spazzing out like the colors on the blocks.)

I can place the block pretty much at the center of the world coordinate system (where such errors shouldn't happen, as the precision is high enough) and still experience this issue.

It seems as if the interpolation of the values (while they are passed from the vertex to the fragment shader) is either done at lower precision on AMD hardware, or the variables are stored in fewer bits than on Nvidia/Intel hardware, causing them to get "truncated" and lose precision that way.
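That hypothesis lines up with the numbers, at least as a rough sanity check (whether the interpolators really run at reduced precision is an assumption here; this is just an illustration). At a UV around 122, a half-precision value can only change in steps of 1/16 of a repeat, which would be clearly visible, while full fp32 steps are far below a texel:

```python
import numpy as np

uv = 122.0  # the repeat count where the artifacts start to show

# Smallest representable step at uv = 122 in half vs. single precision.
step16 = float(np.spacing(np.float16(uv)))   # 0.0625  -> 1/16 of a tile
step32 = float(np.spacing(np.float32(uv)))   # ~7.6e-6 -> far below a texel

print(step16, step32)
```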

- You don't have any hardware extras turned on in the Catalyst utility, particularly relating to anti-aliasing?

I have everything set to the default settings.

But i already tried to change all available settings (including AA options) before i posted this issue here to see if one of the options was causing this behaviour.

So far, nothing fixed it. I can't get rid of it. (Tested it on my RX 480, a friend's RX 480 and an older Radeon 7xxx series GPU. All had the exact same behaviour.)

Meanwhile my laptop with an Intel HD and my old Nvidia GPU don't have that problem.

- You are confident this has nothing to do with the reflection mapping you seem to be doing?

Those reflections are just the same geometry mirrored on the z-axis, with a semi-transparent floor mesh rendered over it (to give the illusion of reflectivity.)

It's definitely not the cause.

/Edit: Shouldn't be writing so late (nearly morning)... the amount of typos ...

This topic is closed to new replies.
