

Member Since 19 Sep 2009
Offline Last Active Aug 20 2014 07:23 AM

Topics I've Started

Windows RT with DirectX 9?

04 December 2013 - 08:12 AM

Does Windows RT support DirectX 9.0c (SM3) applications? We are programming scientific graphics with SharpDX, and converting our existing mammoth engine to DX10 or 11 is not an option.



Intel Atom / GMA 3650, HLSL object coords to screen fails

03 October 2013 - 09:31 AM

I'm pulling my hair out over one specific piece of hardware: an Acer Iconia W3-810 tablet with Windows 8, an Intel Atom Z2760, and a GMA 3650 "GPU".

The newest driver pack has been installed.


I'm developing with DirectX 9 and HLSL. Some objects I render through the fixed-function DirectX 9 pipeline, but objects requiring advanced coloring are rendered with HLSL vertex and pixel shaders. The problem is the coordinate conversion in HLSL from object space to screen coordinates. I suspect this is some kind of floating-point accuracy issue.


In the vertex shader, this works on all other computers:

vout.Pos_ps = mul(position, WorldViewProjection);   //WorldViewProjection has all transformations 


[Attached screenshot: amd.jpg]


That screenshot was taken with an AMD GPU; everything renders correctly.


But compare it to the Atom's screenshot:

[Attached screenshot: spectrogram_iconia.jpg]


All other objects are in place except the one I render with HLSL: the colorful spectrogram surface.



If I convert the coordinates on the CPU side with the WorldViewProjection matrix, and skip the conversion on the GPU side entirely, it renders correctly on the Atom too. But the matrix multiplication has to be done as follows:


Vector3 TransformCoordinate(Vector3 coord, Matrix transform)
{
    Vector4 vector;

    vector.X = (coord.X * transform.M11) + (coord.Y * transform.M21) + (coord.Z * transform.M31) + transform.M41;
    vector.Y = (coord.X * transform.M12) + (coord.Y * transform.M22) + (coord.Z * transform.M32) + transform.M42;
    vector.Z = (coord.X * transform.M13) + (coord.Y * transform.M23) + (coord.Z * transform.M33) + transform.M43;
    // Reciprocal of w, used for the perspective divide below
    vector.W = 1.0f / ((coord.X * transform.M14) + (coord.Y * transform.M24) + (coord.Z * transform.M34) + transform.M44);

    return new Vector3(vector.X * vector.W, vector.Y * vector.W, vector.Z * vector.W);
}
which is essentially the same as SlimDX's Vector3.TransformCoordinate method.


Then I tried to implement the same coordinate conversion in HLSL:


vout.Pos_ps = TransformCoord(position);

float4 TransformCoord(float4 pos)
{
    float4 tr;

    tr.x = (pos.x * WorldViewProjection._11) + (pos.y * WorldViewProjection._21) + (pos.z * WorldViewProjection._31) + WorldViewProjection._41;
    tr.y = (pos.x * WorldViewProjection._12) + (pos.y * WorldViewProjection._22) + (pos.z * WorldViewProjection._32) + WorldViewProjection._42;
    tr.z = (pos.x * WorldViewProjection._13) + (pos.y * WorldViewProjection._23) + (pos.z * WorldViewProjection._33) + WorldViewProjection._43;
    // Reciprocal of w; multiplied in below to perform the perspective divide
    tr.w = 1.0f / ((pos.x * WorldViewProjection._14) + (pos.y * WorldViewProjection._24) + (pos.z * WorldViewProjection._34) + WorldViewProjection._44);

    return float4(tr.x * tr.w, tr.y * tr.w, tr.z * tr.w, 1.0f);
}

Well, it works fine on other computers, but not on the Atom. The result is even worse than with the original mul(vector, matrix): the transformed coordinates collapse into a needle-sized blob in the center of the screen, badly warped.


I really don't want to move all coordinate conversions to the CPU side; that would be a massive task, as we have so many different data visualizations implemented.


What am I missing? Is there any way to improve floating-point accuracy on this machine? Should I forward this case to Intel?



D3DXMatrixLookAtLH exception after Windows 7 update

06 December 2012 - 02:28 PM

Dear all,

We have a 3D software component written using SlimDX January 2012 (.NET 4). It's a DirectX 9.0c app.

One of our users has now reported a problem after a Windows update, on a Windows 7 machine. The software was working correctly, but after a Windows update run about two weeks ago, the following exception is thrown:

System.Runtime.InteropServices.SEHException (0x80004005): External component has thrown an exception.
   at SlimDX.Matrix.LookAtLH(Vector3 eye, Vector3 target, Vector3 up)

The Windows updates were uninstalled, but that didn't fix the problem. The SlimDX runtimes for x86 and x64 were both removed and reinstalled; that didn't fix anything either.

The newest GPU drivers were also installed for the AMD FireGL V7900.

I think you are going to propose reinstalling Windows, but we certainly want to avoid that!

Any clue, anybody? I'm clueless :-(

[SlimDX] Reusing texture as render target, NVidia problem

02 October 2012 - 04:47 PM

I'm rendering into the same texture incrementally. Each refresh round, I add some details to the texture.

The texture is set as the render target. At the end of the round, I finally display the texture on the screen.

With ATI/AMD adapters, it works just fine.

With Nvidia (Quadro FX370 and GT9500, for example), I can only render to the texture once. When I try to render into it again the next round, it won't draw any additional graphics; the texture just remains the same.

If I recreate the texture, I can render into it again, but only once. For Nvidia, I have added special logic that first creates a new texture, renders the old texture into it, and then draws the new graphics on top. This works, but causes significant extra overhead.

I'm wondering if anybody has wrestled with the same issue?

BTW, there are no errors in the DirectX debug output; the DirectX runtime is in debug mode and working.

Thanks in advance for any help...

[SlimDX] Flexible Vertex Format (FVF) usage with nVidia and Intel?

06 April 2012 - 05:12 AM

Hi everyone,

We are successfully using a compact vertex structure on ATI (AMD) Radeon GPUs; it works on all models about 5 years old or newer. We want to minimize the amount of data written to the GPU to maximize performance; we are rendering 2D graphics with D3D9. The same vertex structure doesn't work with any Nvidia or Intel GPU in our app.

We can successfully use the following struct on ATI (AMD):


struct CompactVertex
{
    public float X;
    public float Y;
    public int Color;

    public static VertexFormat Format = VertexFormat.None;
    public const int StrideSize = 12;

    public static VertexElement[] VertexElements =
    {
        new VertexElement(0, 0, DeclarationType.Float2,
                          DeclarationMethod.Default,
                          DeclarationUsage.Position, 0),
        new VertexElement(0, 8, DeclarationType.Color,
                          DeclarationMethod.Default,
                          DeclarationUsage.Color, 0),
        VertexElement.VertexDeclarationEnd
    };
}


And for Nvidia and Intel, we have to use a larger one:


struct LargeVertex
{
    public float X;
    public float Y;
    public float Z;
    public float RHW;
    public int Color;

    public static VertexFormat Format = VertexFormat.PositionRhw | VertexFormat.Diffuse;
    public const int StrideSize = 20;

    public static VertexElement[] VertexElements =
    {
        new VertexElement(0, 0, DeclarationType.Float4,
                          DeclarationMethod.Default,
                          DeclarationUsage.PositionTransformed, 0),
        new VertexElement(0, 16, DeclarationType.Color,
                          DeclarationMethod.Default,
                          DeclarationUsage.Color, 0),
        VertexElement.VertexDeclarationEnd
    };
}


We have tried to detect the compatible type by checking the device caps:

bool compactSupported = (caps.FVFCaps & VertexFormatCaps.DoNotStripElements) == 0;

ATI/AMD has the flag cleared (0); Intel and Nvidia have it set (1).

We also tried to force the compact format into use on Nvidia and Intel, but they fail to render it.

It's very hard for me to believe that Nvidia wouldn't support this kind of compact vertex type.

So, can you tell us what we are doing wrong, and how to get a compact vertex type (x, y, color) working on Nvidia?

Thanks for any help...