
I am trying to measure at which workload sizes pthreads become useful. So far, I have found that a workload needs to take around ~3 ms before pthreads contribute positively to overall progress (on an Alder Lake test system).

Is this the correct order of magnitude?

The benchmarking output is below:

BM_dispatch<dispatch>/16/process_time/real_time                1.37 ms         1.37 ms          513
BM_dispatch<dispatch>/32/process_time/real_time                2.75 ms         2.75 ms          252
BM_dispatch<dispatch>/48/process_time/real_time                4.15 ms         4.15 ms          169
BM_dispatch<dispatch>/64/process_time/real_time                5.52 ms         5.52 ms          126
BM_dispatch<dispatch>/80/process_time/real_time                6.89 ms         6.89 ms          101
BM_dispatch<dispatch>/96/process_time/real_time                8.26 ms         8.26 ms           84
BM_dispatch<dispatch>/112/process_time/real_time               9.62 ms         9.62 ms           72

BM_dispatch<dispatch_pthread>/16/process_time/real_time        2.16 ms         4.18 ms          359
BM_dispatch<dispatch_pthread>/32/process_time/real_time        3.76 ms         7.38 ms          200
BM_dispatch<dispatch_pthread>/48/process_time/real_time        3.67 ms         7.18 ms          150
BM_dispatch<dispatch_pthread>/64/process_time/real_time        4.30 ms         8.44 ms          163
BM_dispatch<dispatch_pthread>/80/process_time/real_time        4.38 ms         8.60 ms          176
BM_dispatch<dispatch_pthread>/96/process_time/real_time        4.93 ms         9.69 ms          146
BM_dispatch<dispatch_pthread>/112/process_time/real_time       5.31 ms         10.5 ms          126

I benchmark two functions, dispatch and dispatch_pthread, under different workload sizes. Both functions do the same total work, but dispatch_pthread divides the work between two threads. When the single-threaded runtime is around ~1 ms, pthreads are not beneficial. Around a workload of ~8 ms, two pthreads approach twice the speed of a single thread.

The full program is below:

#include <pthread.h>
#include <benchmark/benchmark.h>
#include <algorithm>
#include <cstddef>
#include <random>
#include <vector>

void find_max(const float* in, size_t eles, float* out) {
    float max{0};
    for (size_t i = 0; i < eles; ++i) {
        if (in[i] > max) max = in[i];
    }
    *out = max;
}

void dispatch(const float* inp, size_t rows, size_t cols, float* out) {
    for (size_t row = 0; row < rows; row++) {
        find_max(inp + row * cols, cols, out + row);
    }
}

struct pthreadpool_context {
    const float* inp;
    size_t rows;
    size_t cols;
    float* out;
};

void* work(void* ctx) {
    const pthreadpool_context* context = (pthreadpool_context*)ctx;
    dispatch(context->inp, context->rows, context->cols, context->out);
    return NULL;
}

void dispatch_pthread(const float* inp, size_t rows, size_t cols, float* out) {
    pthread_t thread1, thread2;
    size_t rows_per_thread = rows / 2;
    const pthreadpool_context context1 = {inp, rows_per_thread, cols, out};
    const pthreadpool_context context2 = {inp + rows_per_thread * cols,
                                          rows_per_thread, cols,
                                          out + rows_per_thread};
    pthread_create(&thread1, NULL, work, (void*)&context1);
    pthread_create(&thread2, NULL, work, (void*)&context2);
    pthread_join(thread1, NULL);
    pthread_join(thread2, NULL);
}


template <auto F>
void BM_dispatch(benchmark::State& state) {
    std::random_device rnd_device;
    std::mt19937 mersenne_engine{rnd_device()};
    std::normal_distribution<float> dist{0, 1};
    auto gen = [&]() { return dist(mersenne_engine); };
    const size_t cols = 1024 * state.range(0);
    constexpr size_t rows = 100;
    std::vector<float> inp(rows * cols);
    std::generate(inp.begin(), inp.end(), gen);
    std::vector<float> out(rows);
    for (auto _ : state) {
        F(inp.data(), rows, cols, out.data());
    }
}

BENCHMARK(BM_dispatch<dispatch>)
    ->DenseRange(16, 112, 16)
    ->MeasureProcessCPUTime()
    ->UseRealTime()
    ->Unit(benchmark::kMillisecond);
BENCHMARK(BM_dispatch<dispatch_pthread>)
    ->DenseRange(16, 112, 16)
    ->MeasureProcessCPUTime()
    ->UseRealTime()
    ->Unit(benchmark::kMillisecond);
BENCHMARK_MAIN();

The program is compiled with -O2 optimization using GCC 13.2.0 on Ubuntu 22.04 with kernel 5.15.
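
A build command along these lines reproduces the setup (the source file name and the explicit benchmark/pthread link flags are chosen here for illustration):

g++ -O2 -std=c++17 bench.cpp -lbenchmark -lpthread -o bench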

2 Answers


  1. Chosen as BEST ANSWER

    Thanks for all the comments and answers. The main problem with the original code is that thread creation happens inside the benchmark timing.

    To provide a comprehensive answer, the benchmark results with a threadpool follow.

    I created a thread pool along the lines of Thread pooling in C++11. With two threads, the latency of the entire operation is indeed roughly halved compared to the single-threaded code, even for workloads measured in hundreds of microseconds.
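
    A minimal sketch of the kind of pool used is below (class name and details are illustrative, not the exact code from the linked answer): two workers are created once and then wait on a condition variable for tasks, so no thread is created inside the timed region.

    #include <condition_variable>
    #include <cstddef>
    #include <functional>
    #include <mutex>
    #include <queue>
    #include <thread>
    #include <vector>

    class ThreadPool {
    public:
        explicit ThreadPool(std::size_t n) {
            // Workers are created once, up front, and live for the pool's lifetime.
            for (std::size_t i = 0; i < n; ++i)
                workers_.emplace_back([this] { run(); });
        }
        ~ThreadPool() {
            {
                std::lock_guard<std::mutex> lock(m_);
                stop_ = true;
            }
            cv_.notify_all();
            for (auto& t : workers_) t.join();
        }
        void submit(std::function<void()> task) {
            {
                std::lock_guard<std::mutex> lock(m_);
                tasks_.push(std::move(task));
            }
            cv_.notify_one();
        }

    private:
        void run() {
            for (;;) {
                std::function<void()> task;
                {
                    std::unique_lock<std::mutex> lock(m_);
                    cv_.wait(lock, [this] { return stop_ || !tasks_.empty(); });
                    if (stop_ && tasks_.empty()) return;
                    task = std::move(tasks_.front());
                    tasks_.pop();
                }
                task();  // run outside the lock so both workers can proceed in parallel
            }
        }
        std::vector<std::thread> workers_;
        std::queue<std::function<void()>> tasks_;
        std::mutex m_;
        std::condition_variable cv_;
        bool stop_ = false;
    };

    Completion of the submitted tasks is tracked by the caller (for example with an atomic counter, as in the benchmark sketch further down), so each iteration waits for both halves to finish.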

    The full timings are:

    -------------------------------------------------------------------------------------------
    Benchmark                                                 Time             CPU   Iterations
    -------------------------------------------------------------------------------------------
    BM_dispatch<dispatch>/4/process_time/real_time        0.341 ms        0.341 ms         2062
    BM_dispatch<dispatch>/8/process_time/real_time        0.691 ms        0.691 ms         1007
    BM_dispatch<dispatch>/12/process_time/real_time        1.04 ms         1.04 ms          672
    BM_dispatch<dispatch>/16/process_time/real_time        1.39 ms         1.39 ms          500
    BM_dispatch<dispatch>/20/process_time/real_time        1.75 ms         1.75 ms          399
    BM_dispatch<dispatch>/24/process_time/real_time        2.11 ms         2.11 ms          331
    BM_dispatch<dispatch>/28/process_time/real_time        2.48 ms         2.48 ms          284
    BM_dispatch<dispatch>/32/process_time/real_time        2.84 ms         2.84 ms          246
    
    BM_dispatch_threadpool/4/process_time/real_time       0.176 ms        0.353 ms         3933
    BM_dispatch_threadpool/8/process_time/real_time       0.356 ms        0.717 ms         1947
    BM_dispatch_threadpool/12/process_time/real_time      0.522 ms         1.05 ms         1308
    BM_dispatch_threadpool/16/process_time/real_time      0.713 ms         1.43 ms         1000
    BM_dispatch_threadpool/20/process_time/real_time      0.880 ms         1.77 ms          795
    BM_dispatch_threadpool/24/process_time/real_time       1.04 ms         2.10 ms          650
    BM_dispatch_threadpool/28/process_time/real_time       1.22 ms         2.45 ms          577
    BM_dispatch_threadpool/32/process_time/real_time       1.40 ms         2.82 ms          504
    

    BM_dispatch<dispatch> measures the single-threaded code as above. BM_dispatch_threadpool measures latencies when the work is handed to an already created thread pool with two threads.
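
    For reference, the benchmark wrapper looks roughly like this (a sketch combining the pool above with the data setup from the question; the busy-wait on an atomic counter is one way to wait for both halves, not necessarily the only one):

    #include <atomic>

    void BM_dispatch_threadpool(benchmark::State& state) {
        std::random_device rnd_device;
        std::mt19937 mersenne_engine{rnd_device()};
        std::normal_distribution<float> dist{0, 1};
        auto gen = [&]() { return dist(mersenne_engine); };
        const size_t cols = 1024 * state.range(0);
        constexpr size_t rows = 100;
        std::vector<float> inp(rows * cols);
        std::generate(inp.begin(), inp.end(), gen);
        std::vector<float> out(rows);

        ThreadPool pool(2);  // the two worker threads exist before timing starts
        const size_t half = rows / 2;
        for (auto _ : state) {
            std::atomic<int> pending{2};
            pool.submit([&] {
                dispatch(inp.data(), half, cols, out.data());
                pending.fetch_sub(1, std::memory_order_release);
            });
            pool.submit([&] {
                dispatch(inp.data() + half * cols, half, cols, out.data() + half);
                pending.fetch_sub(1, std::memory_order_release);
            });
            // Wait until both halves are done before ending the iteration.
            while (pending.load(std::memory_order_acquire) != 0) {
            }
        }
    }

    BENCHMARK(BM_dispatch_threadpool)
        ->DenseRange(4, 32, 4)
        ->MeasureProcessCPUTime()
        ->UseRealTime()
        ->Unit(benchmark::kMillisecond);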


  2. The pthread implementation of glibc under Linux maintains a pool of stacks. This pool is empty at process startup. When threads are created and then terminated, their stacks are not freed but kept in the pool to be reused by subsequent threads. The default stack size is 8 MB of virtual memory and the pool size is 40 MB, so by default up to 5 stacks can be cached for reuse.

    As a consequence, creating the first two threads is slower because their stacks have not been allocated yet. Subsequent creations are faster because the stacks are picked from the pool. But thread creation is not only a stack allocation; it also involves many other things, such as TLS initialization and guard-page setup (for stack-overflow detection)…

    Hence, it is better to move thread creation outside of the benchmark measurement. That is to say, as noted in the comments, the worker threads should be created first.
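
    To get a feeling for that fixed cost, one can time a bare create/join pair on its own; a small stand-alone sketch (not taken from the question's benchmark) could look like this:

    #include <pthread.h>

    #include <chrono>
    #include <cstdio>

    static void* noop(void*) { return NULL; }

    // Measures pthread_create + pthread_join for a thread that does no work,
    // i.e. the pure creation/teardown overhead paid on every call of
    // dispatch_pthread.
    int main() {
        constexpr int iterations = 1000;
        auto start = std::chrono::steady_clock::now();
        for (int i = 0; i < iterations; ++i) {
            pthread_t t;
            pthread_create(&t, NULL, noop, NULL);
            pthread_join(t, NULL);
        }
        auto stop = std::chrono::steady_clock::now();
        std::chrono::duration<double, std::micro> per_thread =
            (stop - start) / iterations;
        std::printf("create+join: %.1f us per thread\n", per_thread.count());
        return 0;
    }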

    Here is the pthread code snippet concerning the allocation of the stacks from the internal pool:

    /* Maximum size in kB of cache.  */
    static size_t stack_cache_maxsize = 40 * 1024 * 1024; /* 40MiBi by default.  */
    [...]
    /* Get a stack frame from the cache.  We have to match by size since
       some blocks might be too small or far too large.  */
    static struct pthread *
    get_cached_stack (size_t *sizep, void **memp)
    {
      size_t size = *sizep;
      struct pthread *result = NULL;
      list_t *entry;
    
      lll_lock (stack_cache_lock, LLL_PRIVATE);
    
      /* Search the cache for a matching entry.  We search for the
         smallest stack which has at least the required size.  Note that
         in normal situations the size of all allocated stacks is the
         same.  As the very least there are only a few different sizes.
         Therefore this loop will exit early most of the time with an
         exact match.  */
      list_for_each (entry, &stack_cache)
        {
          struct pthread *curr;
    [...]
    