I have ML code (e.g. NumPy, SciPy, LightGBM, PyTorch) deployed with Docker. I am using Python with Poetry, installing packages with pip.
What should I do to use MKL and MKL-DNN? I know that the most standard way is to use Anaconda, but I cannot (large business without a commercial Anaconda license).
Will `pip install mkl` suffice?
How do I install MKL-DNN so that PyTorch will use it?
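For reference, you can check at runtime whether a given PyTorch build was compiled with MKL and MKL-DNN (oneDNN) support via the `torch.backends` API; a minimal sketch:

```python
# Check whether the installed PyTorch build has MKL and MKL-DNN (oneDNN)
# support. torch.backends.mkl / torch.backends.mkldnn are public PyTorch APIs.
try:
    import torch
    print("MKL available:    ", torch.backends.mkl.is_available())
    print("MKL-DNN available:", torch.backends.mkldnn.is_available())
except ImportError:
    # PyTorch is not installed in this environment
    print("PyTorch not installed")
```

If both flags are `True`, the build already links MKL and will dispatch to oneDNN kernels where applicable.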
2 Answers
No, it will not; see the relevant section in the NumPy install docs:
So you will need to build NumPy from source.
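As a sanity check, you can see which BLAS/LAPACK library your NumPy build is linked against with `numpy.show_config()` (PyPI wheels typically report OpenBLAS, not MKL). A small sketch:

```python
import io
import contextlib

import numpy as np

# numpy.show_config() prints the BLAS/LAPACK build configuration to stdout;
# capture it so it can be inspected programmatically.
buf = io.StringIO()
with contextlib.redirect_stdout(buf):
    np.show_config()
config = buf.getvalue().lower()

# Look for well-known backend names in the configuration dump.
for backend in ("mkl", "openblas", "blis", "accelerate"):
    if backend in config:
        print(f"NumPy appears to be linked against: {backend}")
        break
else:
    print("Could not identify the BLAS backend from show_config() output")
```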
Have you considered using Miniforge or Miniconda? IANAL, but I am fairly certain that only the Ana-/Miniconda distributions and the anaconda channel are off-limits for large-scale commercial use; the conda-forge channel can still be used free of charge. You should be able to set up all the requirements you mentioned from conda-forge. At the very least, you would probably have an easier time than compiling PyTorch from source.
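As a sketch of such a setup (the environment name `mkl-env` and the Python version are arbitrary choices; `libblas=*=*mkl` is the conda-forge mechanism for selecting MKL as the BLAS implementation):

```shell
# Create an environment from the conda-forge channel only (no anaconda
# channel), with MKL selected as the BLAS backend for NumPy/SciPy.
conda create -n mkl-env -c conda-forge --override-channels \
    python=3.10 "libblas=*=*mkl" numpy scipy lightgbm pytorch
conda activate mkl-env
```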
I tried to add MKL to my Docker container (Debian-based) following the Intel documentation: I failed.
However, there is a OneAPI Docker image that comes with NumPy (1.21, which is eight months old) and MKL as the default BLAS.
Here is what NumPy reports on my machine (a laptop with an i7-10875H):
However, when I tried with Anaconda and a basic Docker image, to my surprise the Anaconda virtual env used CBLAS and my Docker image used OpenBLAS.
I did not run benchmarks, but since the MKL implementation uses every instruction-set extension except AVX512_ICL, I would expect it to be faster.
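If you do want numbers, a minimal matrix-multiplication timing run in each environment makes the BLAS difference visible; the matrix size 2000 below is an arbitrary choice:

```python
import time

import numpy as np

# Time a large matmul: it is dominated by the BLAS dgemm routine, so the
# result mostly reflects the linked BLAS library (MKL vs OpenBLAS).
rng = np.random.default_rng(0)
a = rng.standard_normal((2000, 2000))
b = rng.standard_normal((2000, 2000))

a @ b  # warm-up, so one-time setup costs are not measured
start = time.perf_counter()
a @ b
elapsed = time.perf_counter() - start
print(f"2000x2000 matmul: {elapsed:.3f} s")
```

Run the same script inside each container/environment and compare the timings directly.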
Anaconda
I was also surprised to find, when testing my Anaconda environment, that its BLAS is not MKL: my `base` environment uses OpenBLAS. My Docker image based on the `python` image also uses OpenBLAS.
Dockerfile: