I was trying to get parallel Python to work and noticed that if I run two Python scripts simultaneously – say, in two different terminals – they both use the same core, so I get no speedup from multiprocessing or parallel Python. After some searching, I found out that in some circumstances importing numpy causes Python to pin all computation to a single core. This is a CPU affinity issue, and apparently it only happens with certain combinations of numpy and BLAS libraries – other packages may trigger it as well.
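A quick way to see the symptom for yourself (a minimal sketch of my own, not required for the fix below): time a CPU-bound function serially and then through a multiprocessing.Pool. If the process has been pinned to one core, the pool gives essentially no speedup.

import time
from multiprocessing import Pool

def burn(n):
    # Pure-Python busy loop, so the work is CPU-bound.
    total = 0
    for i in range(n):
        total += i * i
    return total

if __name__ == "__main__":
    jobs = [5 * 10**6] * 4

    t0 = time.time()
    for n in jobs:
        burn(n)
    print("serial : %.2fs" % (time.time() - t0))

    t0 = time.time()
    with Pool(4) as pool:
        pool.map(burn, jobs)
    print("4 procs: %.2fs" % (time.time() - t0))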
There’s a package called affinity (Linux only, as far as I know) that lets you get and set CPU affinity. Download it, run python setup.py install, and then run this in Python or IPython:
In [1]: import affinity
In [2]: affinity.get_process_affinity_mask(0)
Out[2]: 63
This is good: 63 is a bitmask corresponding to binary 111111 – one bit per core, meaning all 6 cores are available to Python. Now, running the same check after importing numpy, I get:
In [4]: import numpy as np
In [5]: affinity.get_process_affinity_mask(0)
Out[5]: 1
So now only one core is available to Python. The solution is simply to reset the CPU affinity after importing numpy, for instance:
import numpy as np
import affinity
import multiprocessing

# Re-enable all cores: 2**N - 1 is a bitmask with the lowest N bits set
# (e.g. 2**6 - 1 == 63 == 0b111111 on a 6-core machine).
affinity.set_process_affinity_mask(0, 2**multiprocessing.cpu_count() - 1)
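On newer Pythons (3.3+, Linux only), the standard library offers the same functionality via os.sched_getaffinity and os.sched_setaffinity, so the same fix can be written without the affinity package – a sketch:

import os
import multiprocessing
import numpy as np  # importing numpy is what may shrink the affinity mask

# pid 0 means "the current process"; allow it to run on every core again.
os.sched_setaffinity(0, range(multiprocessing.cpu_count()))
print(os.sched_getaffinity(0))  # e.g. {0, 1, 2, 3, 4, 5} on a 6-core box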
2 responses to “Python refuses to use multiple cores – solution”
I wrote a StackOverflow answer on this problem a while back: http://stackoverflow.com/a/15641148/1461210. The usual culprit seems to be OpenBLAS – you can easily disable its annoying affinity-resetting behavior by setting the environment variable `OPENBLAS_MAIN_FREE=1`.
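For instance (assuming your numpy is linked against an OpenBLAS build that honors this variable), set it before numpy is imported – either in the shell with export OPENBLAS_MAIN_FREE=1, or at the top of the script:

import os
os.environ["OPENBLAS_MAIN_FREE"] = "1"  # must be set before importing numpy
import numpy as np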
Thanks a lot – this helps a lot. I was looking for a solution for a while.