Explain this Modulo operator weirdness?

1 comment, last by BFG 7 years, 1 month ago
Hi,
I am working in Unity on shader code that converts between a memory address and a coordinate in a uniform grid. To do this I use the modulo operator, but I'm seeing odd behaviour that I cannot explain.
Below is a visualisation of the grid. It simply draws a point at each grid point. The location of each vertex is computed from its offset into the fixed-size uniform grid, i.e. the grid cell is computed from the vertex shader instance ID, and this is in turn converted into NDCs and rendered.
I start with the naive implementation:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint  layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = id % layersize;
    cell.y = floor(layeroffset / numcells.x);
    cell.x = layeroffset % numcells.x;
    return cell;
}
And see the following visual artefacts:
[attachment=35344:modulo_1.PNG]
I discover that this is due to the modulo operator. If I replace it with my own modulo operation:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint  layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = id - (cell.z * layersize);
    cell.y = floor(layeroffset / numcells.x);
    cell.x = layeroffset - (cell.y * numcells.x);
    return cell;
}
The artefact disappears:
[attachment=35345:modulo_3.PNG]
I debug one of the errant vertices in the previous shader with RenderDoc, and find that the modulo is implemented using frc rather than a true integer modulo op, leaving small fractional components that work their way into the coordinate calculations:
[attachment=35346:modulo_2.PNG]
So I try again:

uint3 GetFieldCell(uint id, float3 numcells)
{
    uint3 cell;
    uint  layersize = numcells.x * numcells.y;
    cell.z = floor(id / layersize);
    uint layeroffset = floor(id % layersize);
    cell.y = floor(layeroffset / numcells.x);
    cell.x = floor(layeroffset % numcells.x);
    return cell;
}
And it wor...!
Oh...
...That's unexpected:
[attachment=35347:modulo_4.PNG]
Can anyone explain this behaviour?
Is it small remainders from multiplying the frc result by the 'integer', as I suspect? If not, what else? If so, why does surrounding the result with floor() not help? (It's not optimised away; I've checked it in the debugger...)
Sj
I'm not very familiar with math in shaders, but I did notice that there are WAY more artifacts than just the ones you've highlighted:

You can see how some of the planes bunch together - groups of 2 or 3, then a gap - across the entire field.

The only thing I know about % (for CPU programming, anyway) is that if the values involved are negative, you can get a NEGATIVE remainder *closer to zero* instead of a positive remainder "closer to" the negative value. This can sometimes screw up algorithms if you're not exclusively operating on unsigned numbers.

The types you're using in the shader are uints, but the debugger is displaying floating point values. Perhaps the value is occasionally SLIGHTLY on the wrong side of a whole number and causes this type of behavior?


(BTW, I voted you up since you asked your question in a very complete and thorough manner, which makes me happy after seeing countless other posts recently asking for help with essentially no code or info from which to even speculate. Other viewers: please use this as an example of how to ask for help!)

    cell.x = layeroffset % numcells.x;

In HLSL the modulus operator gives different results for floats and integers. Try casting numcells.x in the above line to a uint.

See: https://msdn.microsoft.com/en-us/library/windows/desktop/bb509631(v=vs.85).aspx#Additive_and_Multiplicative_Operators

This topic is closed to new replies.
