IrYoKu1

Member Since 01 Sep 2007
Offline Last Active Oct 23 2013 12:59 PM

Posts I've Made

In Topic: Getting GPU vendor ID

05 September 2013 - 12:23 PM

I think in general it should be repeatable, but on laptops with multiple GPUs the ordering is hacked by the driver and seems to be bogus (after creating a device, the adapter order changes).

 

I think what is happening under the hood is that the driver exposes the selected GPU as the first adapter; but somehow, after device creation, this hacked order is reverted (probably to the real one, putting the NVIDIA GPU first).

 

If I enumerate the adapters, I always get only one; it seems the driver again hides the second one (in this particular case). I was looking into doing special things depending on the vendor, as some optimizations that improve performance on one vendor make things worse on others.
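For reference, this is a minimal sketch of the kind of vendor check I have in mind, done with plain DXGI before any device is created (the PrintAdapterVendors name and the printf reporting are just for illustration):

#include <dxgi.h>
#include <cstdio>
#pragma comment(lib, "dxgi.lib")

void PrintAdapterVendors()
{
    IDXGIFactory* factory = nullptr;
    if (FAILED(CreateDXGIFactory(__uuidof(IDXGIFactory), (void**)&factory)))
        return;

    IDXGIAdapter* adapter = nullptr;
    for (UINT i = 0; factory->EnumAdapters(i, &adapter) != DXGI_ERROR_NOT_FOUND; ++i)
    {
        DXGI_ADAPTER_DESC desc;
        adapter->GetDesc(&desc);

        // Well-known PCI vendor IDs: 0x10DE = NVIDIA, 0x1002 = AMD, 0x8086 = Intel.
        printf("Adapter %u: vendor 0x%04X (%ls)\n", i, desc.VendorId, desc.Description);

        adapter->Release();
    }
    factory->Release();
}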


In Topic: Getting GPU vendor ID

04 September 2013 - 12:36 PM

It seems you are right; in fact I think the way this is implemented is bogus.

 

I tried to get the first adapter available:

factory->EnumAdapters(0, &adapter)

Hoping it would always yield the GPU actually being used. Doing it right at the beginning of the app worked, but after DXUT initialization it was failing. I tracked down the problem and found this:

factory->EnumAdapters(0, &adapter); // Returns the adapter for the Intel card
D3D10CreateDevice(...); // Seems to be the first device created in the app, when DXUT creates dummy devices to test compatibility
factory->EnumAdapters(0, &adapter); // Returns the adapter for the NVIDIA card

So, doing it as I wrote in my first post (getting the adapter from the device) is not reliable, and doing it this other way (directly from the first adapter ordinal) is also bogus. Unless it is done before device creation, you cannot rely on what you get from EnumAdapters.
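For reference, this is roughly what I meant in my first post by "getting the adapter from the device" (just a sketch, error handling omitted; the GetDeviceVendorId name is only for this example):

#include <d3d10.h>
#include <dxgi.h>

UINT GetDeviceVendorId(ID3D10Device* device)
{
    IDXGIDevice* dxgiDevice = nullptr;
    device->QueryInterface(__uuidof(IDXGIDevice), (void**)&dxgiDevice);

    IDXGIAdapter* adapter = nullptr;
    dxgiDevice->GetAdapter(&adapter);  // adapter the device was created on

    DXGI_ADAPTER_DESC desc;
    adapter->GetDesc(&desc);           // desc.VendorId is the PCI vendor ID

    adapter->Release();
    dxgiDevice->Release();
    return desc.VendorId;
}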


In Topic: Getting GPU vendor ID

04 September 2013 - 07:52 AM

The problem is: how can I tell which adapter is the one currently being used when enumerating with EnumAdapters?

 

The code I posted came from this page:

http://msdn.microsoft.com/en-us/library/windows/desktop/bb174526(v=vs.85).aspx

 

I wonder if it's a bug.


In Topic: DXUT and Windows 8, application hanging on first run

27 August 2013 - 09:00 AM

So, it's certainly not a Windows 8 problem, but likely Windows 8 + something else, as only Megamoscha could repro it. I was thinking it was an AMD issue, but that seems not to be the case. The weird thing is that this is not specific to my applications; it also happens with all the DX SDK samples that use DXUT.

 

I'm not sure it could be the shader compilation: the whole system becomes very unresponsive in my case (to the point that I just have to sit and wait for it to finish), and CPU utilization is about 16% (a single thread). I think it's something going on in the GPU. From what I debugged last week, it seems to hang in an EnumDisplayDevices loop.
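For reference, this is the kind of loop I'm talking about (a plain Win32 sketch, not the actual DXUT code; the ListDisplayDevices name is just for this example):

#include <windows.h>

void ListDisplayDevices()
{
    DISPLAY_DEVICE dd = {};
    dd.cb = sizeof(dd);
    for (DWORD i = 0; EnumDisplayDevices(nullptr, i, &dd, 0); ++i)
    {
        // dd.DeviceName / dd.DeviceString identify the display adapter;
        // the hang I saw happened while DXUT was walking an enumeration like this.
        OutputDebugString(dd.DeviceString);
        OutputDebugString(TEXT("\n"));
        dd.cb = sizeof(dd); // cb must be set before each call
    }
}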

 

Many thanks for the help!


In Topic: Calculate bitangent in vertex vs pixel shader

25 May 2013 - 07:26 AM

Thanks for the reply!

 

My question was motivated by this document:

http://page.mi.fu-berlin.de/block/htw-lehre/wise2012_2013/bel_und_rend/skripte/mikkelsen2008.pdf

 

There is a lot of useful information in there regarding normal mapping; it's amazing how it can break in so many ways I could not even imagine.

 

I discovered that xNormal uses unnormalized vectors for the tangent basis (normal, tangent and bitangent), and only uses the cross product in the pixel shader when a certain option is turned on. But I don't know how much of a visual difference doing the cross product in one place or the other can make.

 

MJP, do you usually normalize the tangent basis vectors in the pixel shader (before transforming the normal from the normal map)? I used to, but now I'm wondering about it as well. It requires some instructions to do so, and it seems it will only match normal maps generated by some software but not others (xNormal being an important example).
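Just to be concrete about the difference I'm asking about, this is roughly the reconstruction I have in mind, where n is the tangent-space normal sampled from the normal map and T, B, N are the interpolated basis vectors (my notation; I'm not claiming this is exactly what xNormal does internally):

normal_world = normalize( n.x * T + n.y * B + n.z * N )                                  // interpolated, unnormalized basis
normal_world = normalize( n.x * normalize(T) + n.y * normalize(B) + n.z * normalize(N) ) // basis renormalized per pixel

with B = cross(N, T) * handedness when the bitangent is rebuilt in the pixel shader instead of being interpolated. Since interpolation doesn't preserve length, the two versions give slightly different results, and which one matches the baked map depends on the convention the baker used.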

