

Member Since 25 Jul 2008
Offline Last Active Oct 19 2015 02:17 AM

Posts I've Made

In Topic: "Xbox One actually has two graphical queues"

21 September 2015 - 05:30 AM

Thanks for the quick answer!


And how does that work API-wise?
Can the GPU/driver deduce which draw calls are not dependent on previous ones, or is there an API that lets the developer specify that?

In Topic: Commands Recording

20 May 2014 - 04:58 AM

D3DPRESENTFLAG_LOCKABLE_BACKBUFFER is not used at device creation.
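For context, that flag only comes into play if it is passed in the present parameters at device creation. A sketch of where it would go (field values here are illustrative, not taken from the application being debugged):

```cpp
// The backbuffer is only lockable if the device was created with this flag.
D3DPRESENT_PARAMETERS pp = {};
pp.Windowed         = TRUE;
pp.SwapEffect       = D3DSWAPEFFECT_DISCARD;
pp.BackBufferFormat = D3DFMT_UNKNOWN;
pp.Flags            = D3DPRESENTFLAG_LOCKABLE_BACKBUFFER; // not set in our case
// ...then pass &pp to IDirect3D9::CreateDevice...
```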

In Topic: Commands Recording

20 May 2014 - 01:28 AM

Sorry, I forgot to mention: the return value is S_OK.


When I switch to the DirectX debug runtime, it no longer happens, which leaves me even more puzzled.


When I use the DirectX release runtime, the call stack is:


Recorder.dll!IDirect3DSurface9_Recorder::LockRect(_D3DLOCKED_RECT * pLockedRect, const tagRECT * pRect, unsigned long Flags) Line 495	C++
D3DX9_43.dll!D3DXTex::CLockSurface::Lock(struct D3DX_BLT *,struct IDirect3DSurface9 *,struct tagPALETTEENTRY const *,struct tagRECT const *,unsigned long,unsigned long)	Unknown
D3DX9_43.dll!_D3DXLoadSurfaceFromMemory@40()	Unknown
D3DX9_43.dll!_D3DXLoadVolumeFromResourceW@36()	Unknown
D3DX9_43.dll!_D3DXCreateTextureFromFileInMemoryEx@60()	Unknown

In Topic: Measuring Latency [Solved]

08 January 2013 - 10:37 PM

If anyone is interested, I solved the problem of measuring latency.

I had to move outside of the scope of user-mode DirectX.


The solution was injecting D3DKMT_SIGNALSYNCHRONIZATIONOBJECT2 calls after presents.



It's off-topic for this forum, so I won't go deep into the description.

In Topic: Measuring Latency [Solved]

02 January 2013 - 02:04 AM

Thanks for your replies =)


Hodgman, as you said - it changes performance and behavior drastically, so it's not acceptable.

The point is to measure the current latency, and with what you suggest, not only is performance altered, the latency itself changes drastically, since I no longer allow queuing of frames.


Sharing the options I can think of:


1. Using GPU timestamp queries and somehow learning how to match a CPU timestamp to a GPU timestamp (tricky...).

2. Polling event queries on a dedicated thread, at around 1000 GetData calls per minute.
I must check how much it eats from the core it runs on... hopefully not too much, since it's not full busy-polling.

3. Probably the best method remains waiting on low-level events, the same way GPUView does.


BTW - this code will not run in my own application; it is injected into other applications. But any solution that works in my own application without altering the original latency/performance is acceptable as well :)