Intel Atom / GMA 3650, HLSL object coords to screen fails


6 replies to this topic

#1 ArcticHammer   Members   -  Reputation: 123


Posted 03 October 2013 - 09:31 AM

I'm pulling my hair out over one specific piece of hardware: an Acer Iconia W3-810 tablet with Windows 8, an Intel Atom Z2760, and a GMA "GPU".

 

The newest driver pack has been installed.

 

I'm developing with DirectX 9 and HLSL. Some objects I render with the fixed-function DirectX 9 pipeline, but objects that require advanced coloring are rendered with HLSL vertex and pixel shaders. The problem is the coordinate conversion in HLSL from object space to screen coordinates. I suspect this is some kind of floating-point accuracy issue.

 

In the vertex shader, this works on all other computers:

vout.Pos_ps = mul(position, WorldViewProjection);   // WorldViewProjection contains all transformations

 

[Screenshot: amd.jpg]

 

This one was taken with an AMD GPU; everything is OK.

 

But compare it to the Atom's screenshot:

[Screenshot: spectrogram_iconia.jpg]

 

All the other objects are in place, but not the one I render with HLSL: the colorful spectrogram surface.

 

 

If I convert the coordinates on the CPU side with the WorldViewProjection matrix, and don't convert them at all on the GPU side, it renders OK on the Atom too. But the matrix multiplication has to be done as follows:

 

Vector3 TransformCoordinate(Vector3 coord, Matrix transform)
{
    Vector4 vector;

    vector.X = (((coord.X * transform.M11) + (coord.Y * transform.M21)) + (coord.Z * transform.M31)) + transform.M41;
    vector.Y = (((coord.X * transform.M12) + (coord.Y * transform.M22)) + (coord.Z * transform.M32)) + transform.M42;
    vector.Z = (((coord.X * transform.M13) + (coord.Y * transform.M23)) + (coord.Z * transform.M33)) + transform.M43;
    vector.W = 1.0f / ((((coord.X * transform.M14) + (coord.Y * transform.M24)) + (coord.Z * transform.M34)) + transform.M44);

    return new Vector3(vector.X * vector.W, vector.Y * vector.W, vector.Z * vector.W);
}
This is essentially the same as SlimDX's Vector3.TransformCoordinate method.
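As a side note, that row-vector math can be sanity-checked outside the app with a small standalone sketch. The Python below is not from the original code; the matrix is a made-up toy projection, used only to show the multiply-then-divide-by-w pattern:

```python
# Python sketch of the C# TransformCoordinate above.
# Row-vector D3D convention: v' = v * M, so m[row][col] matches transform.Mrc.

def transform_coordinate(coord, m):
    """Treat coord as (x, y, z, 1), multiply by the 4x4 matrix,
    then divide x, y, z by the resulting w (the perspective divide)."""
    x, y, z = coord
    tx = x * m[0][0] + y * m[1][0] + z * m[2][0] + m[3][0]
    ty = x * m[0][1] + y * m[1][1] + z * m[2][1] + m[3][1]
    tz = x * m[0][2] + y * m[1][2] + z * m[2][2] + m[3][2]
    w  = x * m[0][3] + y * m[1][3] + z * m[2][3] + m[3][3]
    return (tx / w, ty / w, tz / w)

# Made-up toy projection-like matrix: the output w ends up equal to input z.
M = [[1.0, 0.0, 0.0, 0.0],
     [0.0, 1.0, 0.0, 0.0],
     [0.0, 0.0, 1.0, 1.0],
     [0.0, 0.0, -1.0, 0.0]]

print(transform_coordinate((2.0, 3.0, 4.0), M))  # -> (0.5, 0.75, 0.75)
```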

 

Then I tried to implement a similar coordinate conversion in HLSL:

 

vout.Pos_ps = TransformCoord(position);



float4 TransformCoord(float4 pos)
{
    float4 tr;

    tr.x = (((pos.x * WorldViewProjection._11) + (pos.y * WorldViewProjection._21)) + (pos.z * WorldViewProjection._31)) + WorldViewProjection._41;
    tr.y = (((pos.x * WorldViewProjection._12) + (pos.y * WorldViewProjection._22)) + (pos.z * WorldViewProjection._32)) + WorldViewProjection._42;
    tr.z = (((pos.x * WorldViewProjection._13) + (pos.y * WorldViewProjection._23)) + (pos.z * WorldViewProjection._33)) + WorldViewProjection._43;
    tr.w = 1.0f / ((((pos.x * WorldViewProjection._14) + (pos.y * WorldViewProjection._24)) + (pos.z * WorldViewProjection._34)) + WorldViewProjection._44);

    return float4(tr.x * tr.w, tr.y * tr.w, tr.z * tr.w, 1.0f);
}

Well, it works fine on other computers, but not on the Atom. The result is even worse than with the mul(vector, matrix) I used originally: the transformed geometry typically collapses to a needle-tip-sized blob in the center of the screen, badly warped.

 

I really don't want to move all coordinate conversions to the CPU side; that would be a massive task, as we have so many different data visualizations implemented.

 

What am I missing? Is there any way to improve floating-point accuracy on this machine? Should I forward this case to Intel?

 

 




#2 Jason Z   Crossbones+   -  Reputation: 5062


Posted 03 October 2013 - 04:25 PM

If you run your code with the reference rasterizer, does it render correctly?  It could be that you are performing some operation that is allowed to fail gracefully by most drivers, but the Intel one is more strict in its implementation.

 

If you think the transformation is the issue, you should be able to try out some other sample code on that GPU and see if there is a similar issue.  Are the DirectX SDK samples running ok on that hardware?  Also, have you tried to use PIX/Graphics Debugger to figure out what is going on inside your shaders?



#3 ArcticHammer   Members   -  Reputation: 123


Posted 04 October 2013 - 02:35 AM

It renders correctly with the reference rasterizer; all the HLSL transformations work. But not with the hardware driver.

 

I'm contacting Intel.



#4 Mona2000   Members   -  Reputation: 602


Posted 05 October 2013 - 04:28 AM

Have you tried creating the device with D3DCREATE_SOFTWARE_VERTEXPROCESSING instead of D3DCREATE_HARDWARE_VERTEXPROCESSING? I remember some Intel cards having problems with the latter.



#5 Alessio1989   Members   -  Reputation: 2003


Posted 05 October 2013 - 05:39 AM

Mona2000 said: "Have you tried creating the device with D3DCREATE_SOFTWARE_VERTEXPROCESSING instead of D3DCREATE_HARDWARE_VERTEXPROCESSING? I remember some Intel cards having problems with the latter."

 

This, since early Atom CPUs had old IGPs...

 

ArcticHammer said: "It renders correctly with the reference rasterizer. HLSL transformations all work correctly. But not with hardware driver. I'm contacting Intel."

Also try disabling the driver optimizations in the Intel graphics control panel.


Edited by Alessio1989, 05 October 2013 - 05:44 AM.

"Software does not run in a magical fairy aether powered by the fevered dreams of CS PhDs"


#6 Adam_42   Crossbones+   -  Reputation: 2506


Posted 05 October 2013 - 12:03 PM

A quick search says this isn't really an Intel GPU at all: it's a PowerVR SGX545.

 

What are the actual values in the WorldViewProjection matrix?

 

Have you tried passing w through as normal out of the vertex shader like this?

float4 TransformCoord(float4 pos)
{
    float4 tr;

    tr.x = (((pos.x * WorldViewProjection._11) + (pos.y * WorldViewProjection._21)) + (pos.z * WorldViewProjection._31)) + WorldViewProjection._41;
    tr.y = (((pos.x * WorldViewProjection._12) + (pos.y * WorldViewProjection._22)) + (pos.z * WorldViewProjection._32)) + WorldViewProjection._42;
    tr.z = (((pos.x * WorldViewProjection._13) + (pos.y * WorldViewProjection._23)) + (pos.z * WorldViewProjection._33)) + WorldViewProjection._43;
    tr.w = (((pos.x * WorldViewProjection._14) + (pos.y * WorldViewProjection._24)) + (pos.z * WorldViewProjection._34)) + WorldViewProjection._44;

    return tr;
}
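To illustrate the difference between the two vertex-shader variants, here is a hedged sketch with made-up numbers (not the actual matrix from the thread). The rasterizer divides the vertex-shader output by its own w, and that same w also drives perspective-correct attribute interpolation and near-plane clipping; dividing early and outputting w = 1 gives the same position for an unclipped vertex but throws that information away:

```python
# Toy model of the fixed-function divide the rasterizer applies to the
# vertex-shader output position. All numbers are made up for illustration.

def rasterizer_divide(clip):
    x, y, z, w = clip
    return (x / w, y / w, z / w)

clip = (2.0, 4.0, 8.0, 4.0)  # tr before any divide

# Variant A (pass-through, as suggested above): output w unchanged.
a = rasterizer_divide(clip)

# Variant B (the original TransformCoord): divide in the shader, output w = 1.
x, y, z, w = clip
b = rasterizer_divide((x / w, y / w, z / w, 1.0))

print(a, b)  # both are (0.5, 1.0, 2.0)
# Same final position, but variant B has discarded the true w, so
# perspective-correct interpolation and clipping no longer have it to
# work with; that is one plausible source of the warping on some drivers.
```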


#7 ArcticHammer   Members   -  Reputation: 123


Posted 06 October 2013 - 10:03 AM

 

[Quoting the suggestions above from Mona2000 and Alessio1989.]

 

 

Thanks for trying to help me. 

 

Setting software vertex processing instead of hardware vertex processing doesn't help. 

 

This Intel graphics control panel doesn't allow changing any settings; it just shows info. It's clearly a stripped-down version of the Intel graphics control panel found in laptops.

[Screenshot: intel_settings.png]

 

 

Adam_42 said: "A quick search says this isn't really an Intel GPU at all. It's a PowerVR SGX545."

 

 

GPU-Z shows "Intel® Graphics Media Accelerator" as the name of the GPU. It may be PowerVR inside, but it's an Intel GMA to DirectX and to me.

[Screenshot: GPUZ.png]








