

Member Since 25 Jul 2008
Offline Last Active Sep 28 2014 06:52 AM

Posts I've Made

In Topic: Commands Recording

20 May 2014 - 04:58 AM

D3DPRESENTFLAG_LOCKABLE_BACKBUFFER is not used at device creation.

In Topic: Commands Recording

20 May 2014 - 01:28 AM

Sorry, I forgot to mention: the return value is S_OK.


When I switch to the DirectX debug runtime, it no longer happens, which leaves me even more puzzled.


When I use the DirectX release runtime, the call stack is:


Recorder.dll!IDirect3DSurface9_Recorder::LockRect(_D3DLOCKED_RECT * pLockedRect, const tagRECT * pRect, unsigned long Flags) Line 495	C++
D3DX9_43.dll!D3DXTex::CLockSurface::Lock(struct D3DX_BLT *,struct IDirect3DSurface9 *,struct tagPALETTEENTRY const *,struct tagRECT const *,unsigned long,unsigned long)	Unknown
D3DX9_43.dll!_D3DXLoadSurfaceFromMemory@40()	Unknown
D3DX9_43.dll!_D3DXLoadVolumeFromResourceW@36()	Unknown
D3DX9_43.dll!_D3DXCreateTextureFromFileInMemoryEx@60()	Unknown
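The topmost frame (`IDirect3DSurface9_Recorder::LockRect`) suggests a proxy object that wraps the real D3D9 surface, records each call, and forwards it to the wrapped interface. A minimal sketch of that wrapping pattern in plain C++, using a hypothetical stand-in interface rather than the real `IDirect3DSurface9` (all names here are illustrative, not the actual Recorder.dll code):

```cpp
#include <string>
#include <vector>

// Hypothetical minimal interface standing in for IDirect3DSurface9.
struct ISurface {
    virtual long LockRect(void** ppBits, unsigned long flags) = 0;
    virtual long UnlockRect() = 0;
    virtual ~ISurface() = default;
};

// Stand-in for the runtime's real surface implementation.
struct RealSurface : ISurface {
    char pixels[64] = {};
    long LockRect(void** ppBits, unsigned long) override {
        *ppBits = pixels;
        return 0; // S_OK
    }
    long UnlockRect() override { return 0; }
};

// Recorder proxy: logs each call, then forwards to the wrapped surface.
// The caller (e.g. D3DX loading a texture) sees only the ISurface interface,
// so it hits the recorder without knowing it exists.
struct SurfaceRecorder : ISurface {
    ISurface* inner;
    std::vector<std::string>* log;
    SurfaceRecorder(ISurface* s, std::vector<std::string>* l)
        : inner(s), log(l) {}
    long LockRect(void** ppBits, unsigned long flags) override {
        log->push_back("LockRect flags=" + std::to_string(flags));
        return inner->LockRect(ppBits, flags);
    }
    long UnlockRect() override {
        log->push_back("UnlockRect");
        return inner->UnlockRect();
    }
};
```

In a setup like this, a D3DX call such as `D3DXLoadSurfaceFromMemory` locks the surface through the recorder first, which is consistent with the call stack above.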

In Topic: Measuring Latency [Solved]

08 January 2013 - 10:37 PM

If anyone is interested, I solved the problem of measuring latency.

I had to move outside the scope of user-mode DirectX.


The solution was injecting "D3DKMT_SIGNALSYNCHRONIZATIONOBJECT2" after presents.



It sounds off topic for this forum, so I won't go deep into the description.

In Topic: Measuring Latency [Solved]

02 January 2013 - 02:04 AM

Thanks for your replies =)


Hodgman, as you said, it changes performance and behavior drastically, so it's not acceptable.

The point is to measure the current latency, and with what you suggest, not only is performance altered, the latency itself changes drastically, since I no longer allow queuing of frames.


Sharing the options I can think of:


1. Using GPU timestamp queries and somehow learning how to match a CPU timestamp to a GPU timestamp (tricky...)

2. Polling event queries on a dedicated thread, at around 1000 GetData calls per minute.
I must check how much it eats from the core it runs on... hopefully not too much, since it's not fully busy polling.
3. Probably the best method remains waiting on low-level events, the same way GPUView does.
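For option 1, the tricky part is converting GPU timestamp ticks into CPU clock time. A hedged sketch of the arithmetic, assuming a single calibration pair (a CPU time and a GPU tick captured as close together as possible) and the tick frequency a D3D9 `D3DQUERYTYPE_TIMESTAMPFREQ` query reports; the function names are hypothetical and the sketch ignores clock drift between calibration and measurement:

```cpp
#include <cstdint>

// Convert a GPU timestamp (ticks) to an estimated CPU time in microseconds,
// given one calibration pair (gpuRefTick observed at cpuRefMicros) and the
// GPU timestamp frequency in ticks per second.
double gpuTickToCpuMicros(uint64_t gpuTick,
                          uint64_t gpuRefTick, double cpuRefMicros,
                          uint64_t gpuFreqHz) {
    // Elapsed GPU time since the calibration point, in seconds.
    double elapsedSec =
        (double)(int64_t)(gpuTick - gpuRefTick) / (double)gpuFreqHz;
    return cpuRefMicros + elapsedSec * 1e6;
}

// Latency estimate: the GPU finished the frame at gpuEndTick, and the CPU
// submitted it at cpuSubmitMicros. Latency is the estimated CPU time of
// GPU completion minus the CPU submit time.
double latencyMicros(uint64_t gpuEndTick,
                     uint64_t gpuRefTick, double cpuRefMicros,
                     uint64_t gpuFreqHz, double cpuSubmitMicros) {
    return gpuTickToCpuMicros(gpuEndTick, gpuRefTick, cpuRefMicros, gpuFreqHz)
         - cpuSubmitMicros;
}
```

For example, with a 1 MHz tick frequency, a calibration pair of (tick 0, 0 µs), a frame submitted at 10,000 µs, and a GPU end timestamp of 50,000 ticks, the estimated latency comes out at roughly 40,000 µs. In practice the calibration pair itself is noisy, which is exactly why the author calls this option tricky.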


BTW, this code will not run in my own application; it is injected into other applications. But any solution that works in my own application without altering the original latency/performance is acceptable as well :)

In Topic: Measuring Latency [Solved]

01 January 2013 - 10:39 PM

Thank you MJP!

I need to do it programmatically; it's not some "post-mortem" analysis of an application.


But I can take the following from your idea:

1. What I want to do is obviously possible: if GPUView can do it, so should I =)

2. Maybe I can reverse-engineer GPUView a bit, but my guess is that it uses kernel code, which I don't want to do until I make sure it's not solvable in user code.


Any more ideas?