I am trying to work with large 2D arrays in Python, but it’s very slow. For example:

```python
import time

import numpy

start = time.time()
result = numpy.empty([5000, 5000])
for i in range(5000):
    for j in range(5000):
        result[i, j] = (i * j) % 10
end = time.time()
print(end - start)  # 8.8 s
```
The same program in Java is much faster:

```java
long start = System.currentTimeMillis();
int[][] result = new int[5000][5000];
for (int i = 0; i < 5000; i++) {
    for (int j = 0; j < 5000; j++) {
        result[i][j] = (i * j) % 10;
    }
}
long end = System.currentTimeMillis();
System.out.println(end - start); // 121 ms
```
Is it because Python is an interpreted language? Is there any way to improve it? And why is Python so popular for working with matrices, artificial intelligence, etc.?
Answers
Python is very popular for AI for many reasons:

- Easy to prototype
- Lots of ML libraries and a big community
- Can use the GPU to do massively parallel computation on tensors, with CUDA for example

For your problem, try using native Python lists (you are using NumPy, which is probably heavier).
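The native-list approach suggested above could be sketched with a nested list comprehension (note that in practice this is usually not faster than proper NumPy usage; `n` is reduced from the question's 5000 so the sketch runs quickly):

```python
# Build the table with a nested list comprehension instead of a NumPy array.
n = 1000
result = [[(i * j) % 10 for j in range(n)] for i in range(n)]
```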
You aren’t actually using the power of NumPy – you’re performing your loops manually at Python level. This is roughly analogous to wondering why everyone uses cars if it takes so much longer to walk to the store when you’re dragging a car behind you.
Use native NumPy operations to push your work into C-level loops. This will go much faster.
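For instance, one vectorized formulation (a sketch; `np.multiply.outer` is my choice among several equivalent ways to write it):

```python
import numpy as np

n = 5000
# np.multiply.outer forms the full n x n product table in one C-level call;
# the modulo then also runs element-wise in C.
result = np.multiply.outer(np.arange(n), np.arange(n)) % 10
```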
Read to the end to see how NumPy can outperform your Java code by 5x.
`numpy`’s strength lies in vectorized computations. Your Python code relies on interpreted loops, and interpreted loops tend to be slow. I rewrote your Python code as a vectorized computation and that immediately sped it up by a factor of ~16.

Computing `% 10` in place instead of creating a new array speeds things up by another 20%.

edit 1: Doing the computations in 32 bits instead of 64 (to match your Java code) basically matches the performance of Java (h/t to @user2357112 for pointing this out).
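Put together, the vectorized, in-place, 32-bit version might look like this (a sketch; the exact variable names are my own):

```python
import numpy as np

n = 5000
# 32-bit operands to match Java's int, as suggested in edit 1.
i = np.arange(n, dtype=np.int32).reshape(-1, 1)  # column of row indices
j = np.arange(n, dtype=np.int32)                 # row of column indices
result = i * j   # broadcasting builds the full n x n product grid in C
result %= 10     # in-place modulo avoids allocating a second large array
```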
edit 2: And with a little bit of work we can make this code about 5x faster than your Java implementation (here `ne` refers to the `numexpr` module).

edit 3: Please make sure to also take a look at the answer given by @max9111.
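The `numexpr` variant mentioned in edit 2 could be sketched like this (an assumption about the shape of the code, not the answer's exact version):

```python
import numexpr as ne
import numpy as np

n = 5000
i = np.arange(n, dtype=np.int32).reshape(-1, 1)
j = np.arange(n, dtype=np.int32)
# numexpr compiles the expression string and evaluates it chunk by chunk,
# in parallel, broadcasting i against j.
result = ne.evaluate("(i * j) % 10")
```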
> Is there any way to improve it?

See the time performance difference between the loop version and a vectorized computation, where `np.indices` represents the indices of a grid.

> why Python is so popular for working with matrices, artificial intelligence, …
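The `np.indices` formulation might look like this (a sketch; the `int32` dtype is my addition to keep memory down):

```python
import numpy as np

n = 5000
# np.indices returns two n x n arrays holding the row index and the
# column index of every grid position.
i, j = np.indices((n, n), dtype=np.int32)
result = (i * j) % 10
```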
NumPy routines are implemented in C (which remains one of the fastest languages, if not the fastest) and use densely packed arrays. Related topic: https://stackoverflow.com/a/8385658/3185459

You might also mention Pandas, a popular and powerful library for data analysis and data science. It is preferred and chosen by many specialists for its flexible data representation, concise syntax, extensive set of features, and efficient handling of large datasets.
Another option besides the examples @user2357112 and @NPE already showed would be to use Numba (a JIT compiler). Pure interpreted Python loops are very slow and should be avoided where performance matters.

Example

Timings

Timings dropped in half when replacing the NumPy array with a native two-dimensional array.