Memory Pool Benchmark

6 comments, last by vesoljc 19 years, 6 months ago
Hi all, I'd like to know if someone could post a _good_ way to benchmark (code or otherwise) a memory pool system. I've written my own, but I have no solid proof of whether it is faster than the traditional new/delete (I hope so!!). Also, what kind of time decrease am I looking for, for it to count as a good memory pool system? Thank you. --- Millet Florian
for n iterations
    mark time
    allocate m objects
    mark time
    delete m objects
    mark time

(with m <= max pool elements)

Just sum the time deltas and average over n iterations.

imho :)

[edit]
well, you could also add a third time delta, i.e. time an operation on each allocated object/piece of memory.
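
In C++, the base loop (without the third delta) might look something like this minimal sketch; Pool, alloc() and free() are hypothetical stand-ins for whatever interface your pool actually exposes:

#include <chrono>
#include <cstddef>
#include <vector>

template <typename Pool>
void benchmark(Pool& pool, std::size_t n, std::size_t m)
{
    using clock = std::chrono::steady_clock;
    clock::duration allocTotal{0}, freeTotal{0};
    std::vector<void*> ptrs(m);

    for (std::size_t i = 0; i < n; ++i)
    {
        auto t0 = clock::now();                // mark time
        for (std::size_t j = 0; j < m; ++j)    // allocate m objects
            ptrs[j] = pool.alloc();
        auto t1 = clock::now();                // mark time
        for (std::size_t j = 0; j < m; ++j)    // delete m objects
            pool.free(ptrs[j]);
        auto t2 = clock::now();                // mark time

        allocTotal += t1 - t0;                 // sum the deltas...
        freeTotal  += t2 - t1;
    }
    // ...and report allocTotal / n and freeTotal / n as the averages
}

Run the same loop against plain new/delete with the same object size to get a baseline to compare against.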
Abnormal behaviour of abnormal brain makes me normal...www.zootfly.com
Actually, that's a horrible benchmark.

1) No fragmentation testing at all.
2) Allocations are rarely contiguous like that, same for deallocations
3) You're not actually using the memory at all. You must access the memory, otherwise it will end up being optimized out (any decent compiler will eliminate such a no-op).
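
A rough sketch that addresses points 2 and 3: interleave allocations and frees in random order, and read/write every block so the compiler can't throw the work away. As before, Pool, alloc() and free() are assumed names, not a real API:

#include <cstddef>
#include <cstdint>
#include <cstdlib>
#include <vector>

template <typename Pool>
std::uint8_t churn(Pool& pool, std::size_t ops, std::size_t blockSize)
{
    std::vector<void*> live;
    std::uint8_t sink = 0;                       // keeps the reads alive

    for (std::size_t i = 0; i < ops; ++i)
    {
        if (live.empty() || (std::rand() & 1))   // ~50/50 alloc vs. free
        {
            void* p = pool.alloc();
            auto* bytes = static_cast<std::uint8_t*>(p);
            for (std::size_t b = 0; b < blockSize; ++b)
                bytes[b] = static_cast<std::uint8_t>(b);  // touch the memory
            live.push_back(p);
        }
        else
        {
            std::size_t idx = std::rand() % live.size(); // free a random block
            sink ^= *static_cast<std::uint8_t*>(live[idx]);
            pool.free(live[idx]);
            live[idx] = live.back();
            live.pop_back();
        }
    }
    for (void* p : live)                         // release the survivors
        pool.free(p);
    return sink;  // returned so the accesses can't be optimized into a NOP
}

Time a whole churn() call for the pool and again for a plain new/delete wrapper; the random interleaving also gives fragmentation a chance to show up.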

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

a more complex bench can be built out of that basic loop as a "node"
n and m can also be 1 ;)
Abnormal behaviour of abnormal brain makes me normal...www.zootfly.com
Quote:Original post by Washu
2) Allocations are rarely contiguous like that, same for deallocations


heh, bad design
Abnormal behaviour of abnormal brain makes me normal...www.zootfly.com
Quote:Original post by vesoljc
Quote:Original post by Washu
2) Allocations are rarely contiguous like that, same for deallocations


heh, bad design

Not really, it's called the real world. Things don't always happen in a nice linear order like that, especially as applications become larger and larger.

In time the project grows, the ignorance of its devs it shows, with many a convoluted function, it plunges into deep compunction, the price of failure is high, Washu's mirth is nigh.

I would suggest:
Use an existing benchmark for some library that allows you to choose which allocator to use.
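
For example (a sketch only; FakePool, g_pool, and the member names are assumptions standing in for your real pool), a minimal C++ allocator adapter lets an off-the-shelf container benchmark exercise the pool:

#include <cstddef>
#include <list>

// Stand-in pool so the example is self-contained; swap in your real pool.
struct FakePool {
    void* alloc(std::size_t n) { return ::operator new(n); }
    void  free(void* p)        { ::operator delete(p); }
};
FakePool g_pool;

// Minimal allocator that routes all requests through the pool.
template <typename T>
struct PoolAllocator {
    using value_type = T;
    PoolAllocator() = default;
    template <typename U> PoolAllocator(const PoolAllocator<U>&) {}
    T* allocate(std::size_t n)         { return static_cast<T*>(g_pool.alloc(n * sizeof(T))); }
    void deallocate(T* p, std::size_t) { g_pool.free(p); }
};
template <typename T, typename U>
bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

int main() {
    // Every node allocation in this list now goes through g_pool,
    // so any existing container benchmark exercises the pool.
    std::list<int, PoolAllocator<int>> xs;
    for (int i = 0; i < 1000; ++i) xs.push_back(i);
}

That way the workload comes from code you didn't hand-craft, which sidesteps the "contiguous allocations" objection above.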
@washu
;)

@florian
the time decrease should be visible while allocating objects. As for how much, I'd say that size does matter.
Abnormal behaviour of abnormal brain makes me normal...www.zootfly.com

This topic is closed to new replies.
