My main concern is these negative numbers appearing.. what have I done wrong?
This often happens when the operation completes so quickly that the time calculation underflows, possibly because of the clock monotonicity issues highlighted by previous posters. But your timing code is unusual, with a lot of manipulation going on, so there could be a bug in it. Do the negative values ever come up for long operations (ones that take a measurable time to execute, say on the order of a second)?
Monotonicity?
They do usually appear with ops that run for, say, 1-3 seconds. I agree it's a bit convoluted: essentially each threadable function holds a Timer. A description is composed, then timer.start() is called just before the operation being timed, and timer.stop() immediately after it. Since these functions call clock_gettime() right at the start and stop respectively (as in the code way above), there shouldn't be too much overhead.
My gut tells me that maybe the timespec's 'long tv_sec' isn't able to hold unsigned numbers as large as I require.
I dropped the RAW from MONOTONIC, and my negative values disappeared.. wtf? Here's some sample output from the test I just ran (things are neater in the output file). Look at NanoSecs compared to Secs and my confusion is evident. This is from the non-threaded, sequential-function-call test.
PS I cut down the array size so I can test quicker - waiting 2 hours to see your timing data get smooshed is just way beyond uncool. lol
===============================================================
:: Test: 18
:: Desc: Bubble Sort, in range 0-25000
:: Mins: 0
:: Secs: 8
:: NanoSecs: 518027650
===============================================================
:: Test: 19
:: Desc: Bubble Sort, in range 25000-50000
:: Mins: 0
:: Secs: 18
:: NanoSecs: 326836672
===============================================================
:: Test: 20
:: Desc: Bubble Sort, in range 50000-75000
:: Mins: 0
:: Secs: 27
:: NanoSecs: 18446744073391663619
===============================================================
:: Test: 21
:: Desc: Bubble Sort, in range 75000-100000
:: Mins: 0
:: Secs: 37
:: NanoSecs: 18446744073256571601
===============================================================
:: Test: 22
:: Desc: Sequential function-call Benchmarking test.
:: 01:07:25 Jan 3 2013
:: Testing integer array of size = 100000 @ 6MB.
:: Test Executed Successfully.
:: Mins: 1.55
:: Secs: 93
:: NanoSecs: 48852266
===============================================================
or from the threaded test:
===============================================================
:: Test: 17
:: Desc: Duplicate Search, in range 0-25000
:: 1 duplicates were found.
:: Mins: 0
:: Secs: 2
:: NanoSecs: 18446744072997939107
===============================================================
:: Test: 18
:: Desc: Bubble Sort, in range 0-25000
:: Mins: 0
:: Secs: 12
:: NanoSecs: 3231151
===============================================================
How can the nanoseconds field be enormous for the 2-second run, but tiny for the 12-second one?
I would use nanoseconds as the unit of time, in a 64-bit integer. It's much harder to make mistakes that way.
Like a uint64_t? How would that work? Also makes me wonder - is my #define BILLION signed or unsigned?