I posted a link to MJP's blog post and said I used this method to measure my timings. I forgot that this is actually not entirely true. There are two ways to read back the query results:
first: wait in a while loop until the query is finished (this syncs CPU and GPU)
second: do not wait; instead collect the results in a later frame (this does not sync)
For my measurements I used the second one. Now I tried syncing, which slowed down the frame rate, but the timings are much closer now, which I find quite interesting.
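Roughly, the two readback variants look like this (a simplified sketch, not the exact code from my application; ctx, FrameQueries, QUERY_LATENCY etc. are placeholder names, and the queries are assumed to be created and issued as in MJP's post):

```cpp
#include <d3d11.h>
#include <cstdint>

struct FrameQueries
{
    ID3D11Query* disjoint;   // D3D11_QUERY_TIMESTAMP_DISJOINT
    ID3D11Query* tsStart;    // D3D11_QUERY_TIMESTAMP
    ID3D11Query* tsEnd;      // D3D11_QUERY_TIMESTAMP
};

// Variant 1: spin until the results are available (synchronizes CPU and GPU).
double ReadTimingBlocking(ID3D11DeviceContext* ctx, const FrameQueries& q)
{
    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj = {};
    while (ctx->GetData(q.disjoint, &dj, sizeof(dj), 0) == S_FALSE) { /* busy wait */ }
    if (dj.Disjoint)
        return -1.0;                                   // clock changed, discard this sample

    UINT64 t0 = 0, t1 = 0;
    while (ctx->GetData(q.tsStart, &t0, sizeof(t0), 0) == S_FALSE) { }
    while (ctx->GetData(q.tsEnd,   &t1, sizeof(t1), 0) == S_FALSE) { }
    return double(t1 - t0) / double(dj.Frequency) * 1000.0;   // milliseconds
}

// Variant 2: no waiting. Keep a small ring of query sets; at the start of a
// frame, read back the slot that was written QUERY_LATENCY frames ago (it is
// normally finished by then) before issuing new queries into the same slot.
static const uint64_t QUERY_LATENCY = 4;

bool ReadTimingDeferred(ID3D11DeviceContext* ctx,
                        FrameQueries ring[QUERY_LATENCY],
                        uint64_t frameIndex,
                        double* outMs)
{
    if (frameIndex < QUERY_LATENCY)
        return false;                                  // nothing old enough to read yet

    const FrameQueries& q = ring[frameIndex % QUERY_LATENCY];

    D3D11_QUERY_DATA_TIMESTAMP_DISJOINT dj = {};
    if (ctx->GetData(q.disjoint, &dj, sizeof(dj), 0) != S_OK || dj.Disjoint)
        return false;                                  // not ready yet or invalid sample

    UINT64 t0 = 0, t1 = 0;
    if (ctx->GetData(q.tsStart, &t0, sizeof(t0), 0) != S_OK) return false;
    if (ctx->GetData(q.tsEnd,   &t1, sizeof(t1), 0) != S_OK) return false;

    *outMs = double(t1 - t0) / double(dj.Frequency) * 1000.0;
    return true;
}
```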
Thanks for your replies! First of all, I'll try to explain what I'm doing ;) In general: it's an augmented reality application, so there are several steps to perform:
1. Render the background layer (at the moment the model of a room seen from the inside, to simulate the camera). The camera does not move, so the cost should be constant.
2. Generate shadow maps of the room (step above) and of the model (see next step).
3. Render the virtual objects. These are changed for the performance measurements (I restart the application to change the model); the models themselves don't move.
4. Render the virtual shadows; for this the room geometry and the shadow maps are used. This should take a fixed time.
5. Blend the results of steps 1, 3 and 4 together.
These steps are always the same (so I always issue the same calls from the CPU side).
I don't use branches; all ifs are replaced by step() and lerp() (forcing the GPU to always evaluate both paths?), and the number of texture fetches per pixel is constant.
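To make clear what I mean by replacing ifs with step() and lerp(), this is the pattern (written here in C++ just for illustration; the actual shaders use the HLSL intrinsics):

```cpp
#include <cstdio>

static float step_(float edge, float x) { return x >= edge ? 1.0f : 0.0f; } // like HLSL step()
static float lerp_(float a, float b, float t) { return a + t * (b - a); }   // like HLSL lerp()

// Instead of:  result = (x >= threshold) ? lit : shadowed;
// both inputs are computed every time and blended with a 0/1 mask, so the
// per-pixel cost does not depend on the condition.
float SelectBranchless(float shadowed, float lit, float x, float threshold)
{
    float mask = step_(threshold, x);      // 0 if x < threshold, 1 otherwise
    return lerp_(shadowed, lit, mask);     // picks one of the two precomputed values
}

int main()
{
    std::printf("%f\n", SelectBranchless(0.25f, 1.0f, 0.7f, 0.5f));  // prints 1.000000
    return 0;
}
```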
To measure the timings I use queries: http://mynameismjp.wordpress.com/2011/10/13/profiling-in-dx11-with-queries/ There could be a problem here because this timer is not really high precision, but the fast GPU behaves as expected while the slow one doesn't.
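For completeness, the queries are created and issued around each step roughly like this (again a simplified sketch following the pattern from the linked post, not the exact application code; the disjoint query also supplies the tick frequency used to convert the timestamps to milliseconds):

```cpp
#include <d3d11.h>

void CreateTimingQueries(ID3D11Device* device,
                         ID3D11Query** disjoint,
                         ID3D11Query** tsStart,
                         ID3D11Query** tsEnd)
{
    D3D11_QUERY_DESC djDesc = { D3D11_QUERY_TIMESTAMP_DISJOINT, 0 };
    D3D11_QUERY_DESC tsDesc = { D3D11_QUERY_TIMESTAMP, 0 };
    device->CreateQuery(&djDesc, disjoint);
    device->CreateQuery(&tsDesc, tsStart);
    device->CreateQuery(&tsDesc, tsEnd);
}

void ProfileStep(ID3D11DeviceContext* ctx,
                 ID3D11Query* disjoint, ID3D11Query* tsStart, ID3D11Query* tsEnd)
{
    ctx->Begin(disjoint);    // the disjoint query brackets the timestamps and
                             // reports whether the clock frequency changed
    ctx->End(tsStart);       // timestamp queries only use End(), never Begin()
    // ... draw calls of the step being measured ...
    ctx->End(tsEnd);
    ctx->End(disjoint);
}
```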
So, now to the measured values: I compare the timings of the five steps across runs with three different models (primitive counts: 5000, 30000 and 50000).
On the GeForce, the timings of steps 1, 4 and 5 are equal across the three models, and the timings of steps 2 and 3 grow with the primitive count. Over time the measurements change very little, e.g. by 1/100 ms.
On the AMD chip, steps 2 and 3 also grow with the primitive count, but steps 1, 4 and 5 vary by +/- 1 ms.
I think Hodgman's explanation points in the right direction. The AMD chip is a mobile chip without VRAM, so texture fetches are very expensive (on cache misses). Is it possible that the driver uses different schemes to predict which texture data blocks are needed, so that this results in a different hit/miss ratio?
Yes, of course. The question is how to address them. From what I read, it could be:
- the WPF system not being able to free the memory because of too-fast image invalidation (makes no sense, the GTX 560 is faster with the same VRAM and has no problems)
- the application code; the sample is very simple, I don't think there are problems there, and there are also no errors in the DX debug log
- the graphics driver... hmpf
- ???
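(For reference, the DX debug output comes from the debug layer, which is enabled at device creation; a simplified sketch, not the exact code from my application:)

```cpp
#include <d3d11.h>

HRESULT CreateDebugDevice(ID3D11Device** device, ID3D11DeviceContext** ctx)
{
    UINT flags = D3D11_CREATE_DEVICE_DEBUG;          // enables the debug layer
    D3D_FEATURE_LEVEL obtained;

    return D3D11CreateDevice(nullptr,                // default adapter
                             D3D_DRIVER_TYPE_HARDWARE,
                             nullptr,                // no software rasterizer module
                             flags,
                             nullptr, 0,             // default feature levels
                             D3D11_SDK_VERSION,
                             device,
                             &obtained,
                             ctx);
}
```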