Can Apache make use of CPU AVX instructions?

I have replaced my old home server (i3-6100, 51 W TDP, 3.7 GHz, SSE4.1, SSE4.2, AVX2) with a thin client (Celeron J4105, 10 W TDP, 1.5 GHz base / 2.5 GHz turbo, SSE4.2). Can Apache make use of CPU AVX instructions?
2 Answers
Glibc automatically uses AVX/AVX2, if available, for `memcpy`, `memcmp`, `strlen`, and the like, which is nice for small to medium-length strings hot in L1d or L2 cache (e.g. maybe twice as fast for strings of 100 B to 128 KiB). For shorter strings, startup and cleanup overhead are a significant fraction of the total cost. Hopefully Apache doesn't spend a lot of time looping over strings.
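To make that concrete, here is a minimal sketch (buffer size and contents are arbitrary): plain libc calls are all it takes, since glibc resolves them at load time (via IFUNC) to AVX2 variants on CPUs that have them.

```c
/* Plain libc calls; glibc's dynamic linker resolves memcpy/strlen to
 * AVX2 implementations at load time on AVX2-capable CPUs (IFUNC
 * dispatch), so no source changes or special flags are needed. */
#include <stdio.h>
#include <string.h>

int main(void) {
    static char src[4096], dst[4096];   /* small enough to stay hot in L1d/L2 */
    memset(src, 'x', sizeof src - 1);   /* last byte stays '\0' from static init */

    memcpy(dst, src, sizeof src);       /* dispatches to a SIMD memcpy */
    printf("len = %zu\n", strlen(dst)); /* dispatches to a SIMD strlen */
    return 0;
}
```

Stepping into the call under gdb on AVX2 hardware typically shows a glibc-internal name like `__memmove_avx_unaligned_erms`.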
There might possibly be some auto-vectorized loops inside Apache itself if you compile with `-O3 -march=native`, but that's unlikely. I doubt there's anything in Apache that would be worth manually dispatching based on CPUID (except for libc functions), so you probably won't find any AVX instructions in the Apache binary on your i3 server if you check with a disassembler, unless it was specifically compiled for that machine or for AVX-capable machines. If the whole binary was compiled with AVX enabled, even scalar FP math would use instructions like `vmovsd`/`vucomisd` instead of `movsd`/`ucomisd`, so if you see any `movsd` it wasn't compiled that way. See How to check if compiled code uses SSE and AVX instructions? and note the difference between SIMD (packed) and scalar instructions.
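As a quick way to see the difference yourself, here is a hypothetical loop of the kind GCC will auto-vectorize (the function is made up, not from Apache):

```c
/* Compile twice and diff the disassembly:
 *   gcc -O3 -ffast-math -march=x86-64 -S sum.c   ->  movups/addps   (SSE)
 *   gcc -O3 -ffast-math -march=native -S sum.c   ->  vmovups/vaddps (AVX, on AVX CPUs)
 * -ffast-math lets GCC reorder the FP additions so the reduction can
 * vectorize.  Scalar leftovers show up as VEX vmovss/vaddss vs. legacy
 * movss/addss: the same tell-tale as vmovsd vs. movsd above. */
float sum(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++)
        s += a[i];
    return s;
}
```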
One interesting feature of AVX that's relevant for multithreaded programs: Intel recently documented that the AVX feature flag implies 16-byte aligned load/store is guaranteed atomic. (And I think AMD is planning to do the same if they haven't already, since it's also true in practice on their CPUs.) Previously the only support for 16-byte lock-free atomics was via `lock cmpxchg16b`, meaning that a pure load cost as much as an RMW. GCC-compiled code can take advantage of this via libatomic, including via updates to a shared libatomic which dispatches to more efficient load/store functions on CPUs with AVX. So anyway, cheaper lock-free atomics for objects the size of two pointers in 64-bit mode. Not a game-changer for code that doesn't spend a ton of time communicating between threads. And it doesn't help the kernel, because you can't take advantage of it with `-mgeneral-regs-only`; 16-byte load/store require an XMM reg, unless `cmpxchg16b` without a `lock` prefix counts. But that could do a non-atomic RMW if the compare succeeds, so it's unusable.
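A hedged sketch of that two-pointer-sized case, using GCC's `__int128` and the generic `__atomic` builtins (the names `load_pair`/`store_pair` are mine):

```c
/* Build with:  gcc -O2 pair.c -latomic
 * Without -mcx16 or AVX baked in at compile time, these calls go
 * through libatomic, which can dispatch at runtime to a plain 16-byte
 * vmovdqa load/store on CPUs whose AVX feature flag guarantees that's
 * atomic, instead of a lock cmpxchg16b RMW. */
static unsigned __int128 shared;   /* __int128 is naturally 16-byte aligned */

unsigned __int128 load_pair(void) {
    unsigned __int128 v;
    /* A pure load: cheap with AVX, as expensive as an RMW without it. */
    __atomic_load(&shared, &v, __ATOMIC_ACQUIRE);
    return v;
}

void store_pair(unsigned __int128 v) {
    __atomic_store(&shared, &v, __ATOMIC_RELEASE);
}
```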
Probably more relevant is that AVX2 support comes with faster memcpy inside the kernel, for `copy_to_user` (from the pagecache) for `read` system calls. `rep movsb` can work in 32-byte chunks internally in microcode, vs. 16-byte chunks on CPUs whose load/store data paths are only 16 bytes wide. (AVX can be implemented on CPUs with 16-byte load/store paths, like Zen 1 and Ivy Bridge, but your i3 with AVX2 has 32-byte datapaths between execution units and L1d cache: https://www.realworldtech.com/haswell-cpu/5/)
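For context, this is the path an ordinary `read()` loop hits; nothing CPU-specific appears in user code (the file path and buffer size below are made up):

```c
/* A read() loop serving file data: each syscall ends in the kernel's
 * copy_to_user, implemented with rep movsb, which microcode can run in
 * wider internal chunks when the CPU's datapaths allow it. */
#include <fcntl.h>
#include <unistd.h>

int main(void) {
    char buf[1 << 16];                               /* 64 KiB per read() */
    int fd = open("/srv/www/index.html", O_RDONLY);  /* hypothetical path */
    if (fd < 0)
        return 1;

    ssize_t n;
    while ((n = read(fd, buf, sizeof buf)) > 0)
        ;                                            /* drain the pagecache copy */
    close(fd);
    return 0;
}
```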
AVX2 can help with some OpenSSL stuff, but probably nothing important for web serving.
Usually you'll be using AES for encryption, and both CPUs have AES-NI. AVX + AES does enable working on 32 bytes per instruction instead of 16, but IIRC that has to be on two separate blocks in parallel, not twice as fast on a single AES stream. Still, Apache + OpenSSL might manage to take advantage of this.
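As a rough illustration of where that happens, here is a minimal AES-256-GCM encryption through OpenSSL's EVP interface (the function name is mine, and GCM tag retrieval is omitted for brevity); OpenSSL detects AES-NI at runtime by itself:

```c
/* Minimal AES-256-GCM via OpenSSL EVP.  Build: gcc aes.c -lcrypto
 * OpenSSL probes the CPU at startup and routes this through AES-NI
 * (and wider vector paths where the hardware supports them). */
#include <openssl/evp.h>

int encrypt_buf(const unsigned char key[32], const unsigned char iv[12],
                const unsigned char *in, int inlen, unsigned char *out) {
    EVP_CIPHER_CTX *ctx = EVP_CIPHER_CTX_new();
    int len, total;

    EVP_EncryptInit_ex(ctx, EVP_aes_256_gcm(), NULL, key, iv);
    EVP_EncryptUpdate(ctx, out, &len, in, inlen);   /* bulk AES-NI work */
    total = len;
    EVP_EncryptFinal_ex(ctx, out + total, &len);
    total += len;

    EVP_CIPHER_CTX_free(ctx);
    return total;    /* ciphertext length; tag handling omitted */
}
```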
There’s also a possible speedup for MD5 or SHA512 using AVX2, if I recall correctly.
For SHA-1 and SHA-256, the new CPU has SHA-NI (new in Goldmont and Ice Lake; the J4105 is Goldmont Plus, but the old CPU is Skylake, so it didn't have SHA-NI and had to do the hashing manually with SIMD). There is no VEX encoding of `SHA1RNDS4 xmm` or the SHA-256 acceleration instructions, let alone one which uses 256-bit vectors to go faster. If you use SHA-512 for anything, that will go somewhat faster with AVX2 than with SSE4.2, all else being equal. (And of course a Skylake would run the same asm faster clock-for-clock, with a wider front-end that's more robust against bottlenecks and more throughput in the back-end. See https://agner.org/optimize/ and https://uops.info/ to compare your old Skylake against your "new" Goldmont Plus. I put "new" in quotes because it launched at the end of 2017, only a couple of years after your Skylake.)
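For completeness, hashing through OpenSSL's EVP interface looks like the sketch below; OpenSSL probes CPUID at startup and routes SHA-256 to SHA-NI where present (Goldmont and later), or to its SSE/AVX SIMD code on CPUs like Skylake that lack it:

```c
/* SHA-256 of a short message via OpenSSL EVP.  Build: gcc sha.c -lcrypto
 * The same source runs on both CPUs; the implementation is chosen at
 * runtime based on CPU features. */
#include <openssl/evp.h>
#include <stdio.h>

int main(void) {
    unsigned char md[EVP_MAX_MD_SIZE];
    unsigned int mdlen;
    const char msg[] = "hello";

    EVP_Digest(msg, sizeof msg - 1, md, &mdlen, EVP_sha256(), NULL);

    for (unsigned int i = 0; i < mdlen; i++)
        printf("%02x", md[i]);
    putchar('\n');
    return 0;
}
```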
Intel didn't have AVX support in their low-power cores until Gracemont, the E-cores in Alder Lake. IDK if/when they're planning a stand-alone low-power chip with only Gracemont cores to replace Tremont, or whether they might include AVX in it.
Out of curiosity, and since I have a Gentoo Linux system where I can simply compile Apache with `-O3 -march=native`, I tried looking at the disassembly to see whether AVX vector instructions are generated at all. This gives the following stats:
So there is definitely some use. However, this doesn't prove that these instructions are ever executed, or that they are effective at improving performance compared to the same code compiled for SSE2 or without automatic vectorization.
I find it somewhat curious to see instructions such as `vpgatherdd` in use; that's not something I would expect a compiler to use on its own. I should also note that this is GCC 11.3.1 on an i7-11800H (Tiger Lake), so this uses AVX-512, not just AVX1 or AVX2. As noted by Peter, the more likely candidate for effective usage is in the libc. I might add that OpenSSL will also make use of AVX if available.
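For reference, an indexed-load loop of roughly this shape is what can make GCC emit `vpgatherdd` when vectorizing (a made-up example, not from Apache's source):

```c
/* GCC at -O3 -march=native may vectorize the indirect loads with a
 * gather instruction on AVX2/AVX-512 targets; whether it actually does
 * depends on the target's cost model (try: gcc -O3 -march=native -S). */
void gather(int *restrict out, const int *restrict table,
            const int *restrict idx, int n) {
    for (int i = 0; i < n; i++)
        out[i] = table[idx[i]];   /* idx-driven load: gather candidate */
}
```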