strange performance patterns

I have a Python script that does some calculations, and I am trying to figure out how fast it is. I invoke it from the command line, and it prints out the time taken after completing.

However, there is a mysterious pattern to the timings. When I first run it, it starts out relatively fast. However, subsequent runs become slower and slower. If I leave the computer idle for a while, it becomes fast again. I can't really think of any explanation for this. What is going on?
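For reference, the timing is done essentially like this (a trimmed-down sketch; the real calculations are omitted):

[source lang='python']
import time

def main():
    pass  # the actual calculations go here

if __name__ == '__main__':
    start = time.clock()  # on Windows, time.clock() is a high-resolution wall-clock timer in Python 2.x
    main()
    print 'elapsed: %.3f seconds' % (time.clock() - start)
[/source]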
I trust exceptions about as far as I can throw them.
What is the difference in time? Have you run your code through a profiler? (Have a look at cProfile: http://docs.python.org/library/profile.html.)

What OS?
What version of Python?
Can you give a basic description of what your code is doing?

However, there is a mysterious pattern to the timings. When I first run it, it starts out relatively fast. However, subsequent runs become slower and slower. If I leave the computer idle for a while, it becomes fast again. I can't really think of any explanation for this. What is going on?

Does your script use vast amounts of memory? We sometimes observe this kind of behaviour with scripts that consume 8GB+ of memory (and thus overflow into virtual memory) - it seems the OS often takes a while to release all that memory before the next invocation.

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

My OS is Windows 7, and I'm using Python 2.6.

As far as memory goes, according to the task manager the python.exe process peaks around 6 MB, so I doubt that's the problem.
I trust exceptions about as far as I can throw them.

[source lang='python']
import cProfile

# profile the whole run and print per-function timings to stdout
cProfile.run('main()')
[/source]
If the timing of the overall script with a simple timer is inconsistent, how would in-depth profiling help?
I trust exceptions about as far as I can throw them.

If the timing of the overall script with a simple timer is inconsistent, how would in-depth profiling help?

You profile several runs, making sure to profile both a quick and a slow run. Compare where the majority of time is spent in each, and you may just have your culprit...
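For example (a rough sketch; the .prof filenames are just placeholders, and 'slow.prof' would come from a separate slow invocation):

[source lang='python']
import cProfile, pstats

# dump the stats to a file instead of printing them; do this once during
# a fast run and once during a slow run
cProfile.run('main()', 'fast.prof')

# afterwards, compare where the time went in each run
for name in ('fast.prof', 'slow.prof'):
    print '===', name, '==='
    pstats.Stats(name).sort_stats('cumulative').print_stats(10)  # top 10 entries
[/source]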

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

While profiling, I noticed that a large amount of time is spent finding the set bits in an integer. Does anyone know the most efficient way to do this? Here's what I have right now. What would be ideal is a C implementation, but there doesn't appear to be one in the standard library, and I don't want to require other libraries.

[source lang='python']
def toList(bs):
    assert bs >= 0  # negative numbers have infinitely many bits set
    # bin() yields e.g. '0b101'; after reversal the '0b' prefix sits at the
    # end, where neither character equals '1', so it is skipped harmlessly
    return [i for i, b in enumerate(reversed(bin(bs))) if b == '1']
[/source]
I trust exceptions about as far as I can throw them.
Read all of BitManipulation. In particular, you might be able to build some variation of Kernighan/Knuth's bitCount() method, to iterate bits instead.

Your current solution uses a string conversion, a reversal, an enumeration and a list comprehension all together - that can't be very fast.
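Something along these lines, for instance (an untested sketch of the kind of variation I mean - whether it actually beats your comprehension depends on how sparse the set bits are):

[source lang='python']
def toList(bs):
    # variation on the Kernighan/Knuth trick: clear one set bit per pass,
    # so the loop runs once per *set* bit rather than once per bit
    assert bs >= 0
    out = []
    while bs:
        lsb = bs & -bs                 # isolate the lowest set bit (a power of two)
        out.append(len(bin(lsb)) - 3)  # its index: bin(8) == '0b1000' has length 6
        bs ^= lsb                      # clear that bit
    return out
[/source]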

Tristam MacDonald. Ex-BigTech Software Engineer. Future farmer. [https://trist.am]

That page looks out of date and inaccurate. Most of the methods involving manual loops are very slow in Python. (This is actually what I tried before I came up with the faster version posted above).
I trust exceptions about as far as I can throw them.

