Intel Atom / GMA 3650, HLSL object coords to screen fails



I'm pulling my hair out over one specific piece of hardware. It's an Acer Iconia W3-810 tablet with Windows 8, an Intel Atom Z2760, and a GMA "GPU".

The newest driver pack has been installed.

I'm developing with DirectX 9 and HLSL. Some objects are rendered through the fixed DirectX 9 pipeline, but objects that require advanced coloring are rendered with HLSL vertex and pixel shaders. The problem is the coordinate conversion in HLSL from object space to screen coordinates. I suspect it's some kind of floating-point accuracy issue.

In the vertex shader, this works on all other computers:

vout.Pos_ps = mul(position, WorldViewProjection);   //WorldViewProjection has all transformations

[attachment=18188:amd.jpg]

It was taken with an AMD GPU; everything is OK there.

But compare it to the Atom's screenshot:

[attachment=18189:spectrogram_iconia.jpg]

All the other objects are in place, but not the one I render with HLSL: the colorful spectrogram surface.

If I convert the coordinates on the CPU side with the WorldViewProjection matrix, and don't convert them at all on the GPU side, it renders OK on the Atom too. But the matrix multiplication has to be done as follows:

Vector3 TransformCoordinate( Vector3 coord, Matrix transform )
{
    Vector4 vector;

    vector.X = (((coord.X * transform.M11) + (coord.Y * transform.M21)) + (coord.Z * transform.M31)) + transform.M41;
    vector.Y = (((coord.X * transform.M12) + (coord.Y * transform.M22)) + (coord.Z * transform.M32)) + transform.M42;
    vector.Z = (((coord.X * transform.M13) + (coord.Y * transform.M23)) + (coord.Z * transform.M33)) + transform.M43;
    vector.W = 1.0f / ((((coord.X * transform.M14) + (coord.Y * transform.M24)) + (coord.Z * transform.M34)) + transform.M44);

    Vector3 v3 = new Vector3( vector.X * vector.W, vector.Y * vector.W, vector.Z * vector.W );
    return v3;
}
which is in fact equivalent to SlimDX's Vector3.TransformCoordinate method.
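For reference, the math that function implements can be sketched in plain Python (illustrative only; `transform_coordinate` and `m_example` are made-up names, not SlimDX API):

```python
def transform_coordinate(coord, m):
    """Row-vector convention, as in SlimDX/D3D9: v' = [x y z 1] * m,
    then divide x, y, z by w (the perspective divide).
    m is a 4x4 matrix given as a list of four rows."""
    x, y, z = coord
    v = [x * m[0][c] + y * m[1][c] + z * m[2][c] + m[3][c] for c in range(4)]
    return [v[0] / v[3], v[1] / v[3], v[2] / v[3]]

# A toy "projection": w comes out equal to the input z (m[2][3] = 1,
# m[3][3] = 0), so x and y get divided by depth, which is the essence
# of a perspective transform.
m_example = [
    [1.0, 0.0, 0.0, 0.0],
    [0.0, 1.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
    [0.0, 0.0, 0.0, 0.0],
]

print(transform_coordinate((2.0, 4.0, 2.0), m_example))  # [1.0, 2.0, 1.0]
```

The component-by-component C# above computes exactly this: three dot products for x, y, z, a fourth for w, then the divide.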

Then I tried implementing a similar coordinate conversion in HLSL:

vout.Pos_ps = TransformCoord(position);

float4 TransformCoord(float4 pos)
{
    float4 tr;

    tr.x = (((pos.x * WorldViewProjection._11) + (pos.y * WorldViewProjection._21)) + (pos.z * WorldViewProjection._31)) + WorldViewProjection._41;
    tr.y = (((pos.x * WorldViewProjection._12) + (pos.y * WorldViewProjection._22)) + (pos.z * WorldViewProjection._32)) + WorldViewProjection._42;
    tr.z = (((pos.x * WorldViewProjection._13) + (pos.y * WorldViewProjection._23)) + (pos.z * WorldViewProjection._33)) + WorldViewProjection._43;
    tr.w = 1.0f / ((((pos.x * WorldViewProjection._14) + (pos.y * WorldViewProjection._24)) + (pos.z * WorldViewProjection._34)) + WorldViewProjection._44);

    return float4(tr.x * tr.w, tr.y * tr.w, tr.z * tr.w, 1.0f);
}

Well, it works fine on other computers, but not on the Atom. The result is even worse than with the mul(vector, matrix) I used originally: the transformed geometry typically collapses into a pin-sized, badly warped blob in the center of the screen.

I really don't want to move all coordinate conversions to the CPU side; that would be a massive task, since we have so many different data visualizations implemented.

What am I missing? Is there any way to improve floating-point accuracy on this machine? Should I forward this case to Intel?
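As a toy illustration of the kind of single-precision loss being suspected here (hypothetical numbers, not a claim about what this driver actually does), the small terms of a dot product can vanish entirely when an intermediate value gets large:

```python
import struct

def f32(x):
    # Round a Python double to the nearest IEEE-754 single-precision value,
    # roughly simulating a GPU that computes in 32-bit floats.
    return struct.unpack('f', struct.pack('f', x))[0]

big = 16777216.0  # 2**24, the edge of float32's 24-bit significand

# In double precision the small term survives:
r64 = (big + 1.0) - big             # 1.0

# In single precision it is rounded away entirely:
r32 = f32(f32(big + 1.0) - big)     # 0.0
print(r64, r32)
```

If the WorldViewProjection matrix mixes very large and very small magnitudes (big translations, tiny scales), a GPU evaluating the dot products at reduced precision can lose exactly the bits that matter.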


If you run your code with the reference rasterizer, does it render correctly? It could be that you're performing some operation that most drivers allow to fail gracefully, but the Intel driver is stricter in its implementation.

If you think the transformation is the issue, you should be able to try some other sample code on that GPU and see whether there's a similar problem. Do the DirectX SDK samples run OK on that hardware? Also, have you tried using PIX or the Graphics Debugger to figure out what is going on inside your shaders?


It renders correctly with the reference rasterizer; the HLSL transformations all work there. But not with the hardware driver.

I'm contacting Intel.


Have you tried creating the device with D3DCREATE_SOFTWARE_VERTEXPROCESSING instead of D3DCREATE_HARDWARE_VERTEXPROCESSING? I remember some Intel cards having problems with the latter.


Have you tried creating the device with D3DCREATE_SOFTWARE_VERTEXPROCESSING instead of D3DCREATE_HARDWARE_VERTEXPROCESSING? I remember some Intel cards having problems with the latter.

This, since early Atom CPUs had old IGPs...

It renders correctly with the reference rasterizer. HLSL transformations all work correctly. But not with hardware driver.

I'm contacting Intel.

Also try disabling the driver optimizations in the Intel graphics control panel.

Edited by Alessio1989


A quick search says this isn't really an Intel GPU at all; it's a PowerVR SGX545.

What are the actual values in the WorldViewProjection matrix?

Have you tried passing w through as normal out of the vertex shader, like this?

float4 TransformCoord(float4 pos)
{
    float4 tr;

    tr.x = (((pos.x * WorldViewProjection._11) + (pos.y * WorldViewProjection._21)) + (pos.z * WorldViewProjection._31)) + WorldViewProjection._41;
    tr.y = (((pos.x * WorldViewProjection._12) + (pos.y * WorldViewProjection._22)) + (pos.z * WorldViewProjection._32)) + WorldViewProjection._42;
    tr.z = (((pos.x * WorldViewProjection._13) + (pos.y * WorldViewProjection._23)) + (pos.z * WorldViewProjection._33)) + WorldViewProjection._43;
    tr.w = (((pos.x * WorldViewProjection._14) + (pos.y * WorldViewProjection._24)) + (pos.z * WorldViewProjection._34)) + WorldViewProjection._44;

    return tr;
}
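One reason passing the real w through matters: the rasterizer clips and interpolates using clip-space w, and a manual divide that then forces w = 1.0 defeats both. A toy sketch of the clipping side (hypothetical numbers, and `inside_clip_volume` is an illustrative helper, not a D3D API):

```python
def inside_clip_volume(x, y, z, w):
    # Direct3D clip-space test, applied BEFORE the perspective divide:
    # -w <= x <= w, -w <= y <= w, 0 <= z <= w
    return -w <= x <= w and -w <= y <= w and 0.0 <= z <= w

clip = (1.0, 0.0, -1.0, -2.0)    # a vertex behind the camera (w < 0)
print(inside_clip_volume(*clip))  # False: correctly rejected

# Pre-dividing in the vertex shader and outputting w = 1.0 instead:
x, y, z, w = clip
pre = (x / w, y / w, z / w, 1.0)  # (-0.5, -0.0, 0.5, 1.0)
print(inside_clip_volume(*pre))   # True: sneaks past the clipper
```

A vertex behind the camera that should be clipped instead projects to a plausible-looking on-screen position, which is one way pre-divided geometry ends up warped.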


Have you tried creating the device with D3DCREATE_SOFTWARE_VERTEXPROCESSING instead of D3DCREATE_HARDWARE_VERTEXPROCESSING? I remember some Intel cards having problems with the latter.

This, since early Atom CPUs had old IGPs...

It renders correctly with the reference rasterizer; the HLSL transformations all work there. But not with the hardware driver.

I'm contacting Intel.

Also try disabling the driver optimizations in the Intel graphics control panel.

Thanks for trying to help me.

Setting software vertex processing instead of hardware vertex processing doesn't help.

This Intel graphics control panel doesn't allow changing any settings; it just shows information. It's clearly a stripped-down version of the Intel graphics control panel found in laptops.

[attachment=18245:intel_settings.png]

A quick search says this isn't really an Intel GPU at all. It's a Power VR SGX545.

GPU-Z shows "Intel(R) Graphics Media Accelerator" as the name of the GPU. It may be PowerVR inside, but it's an Intel GMA to DirectX and to me.

[attachment=18244:GPUZ.png]
