
In the C++ code below, a segmentation fault occurs before the first line of main() is executed.
This happens even though no objects need to be constructed before entering main(), and it does not happen if I remove a (large) variable definition at the second line of main().

I assume the segmentation fault occurs because of the size of the variable being defined. My question is why does this occur before the prior line is executed?

Based on the compilation options selected and on the debug output, it does not appear to be caused by instruction reordering by the optimizer.
Is the size of the (array) variable being defined blowing the stack / causing the segfault?
It would seem so, since using a smaller array (e.g. 15 elements) does not cause a segmentation fault and the expected output appears on stdout.

#include <array>
#include <iostream>
#include <vector>

using namespace std;

namespace {
using indexes_t = vector<unsigned int>;
using my_uint_t = unsigned long long int;

constexpr my_uint_t ITEMS{ 52 };
constexpr my_uint_t CHOICES{ 5 };
static_assert(CHOICES <= ITEMS, "CHOICES must be <= ITEMS");

constexpr my_uint_t combinations(const my_uint_t n, my_uint_t r)
{
    if (r > n - r)
        r = n - r;

    my_uint_t rval{ 1 };

    for (my_uint_t i{ 1 }; i <= r; ++i) {
        rval *= n - r + i;
        rval /= i;
    }

    return rval;
}

using hand_map_t = array<indexes_t, combinations(ITEMS, CHOICES)>;

class dynamic_loop_functor_t {
private:
    // std::array of C(52,5) = 2,598,960 (initially) empty vector<unsigned int>
    hand_map_t hand_map;
};
}

int main()
{
    cout << "Starting main()..." << endl
         << std::flush;

    // "Starting main()..." is not printed if and only if the line below is included.
    dynamic_loop_functor_t dlf;

    // The same result occurs with either of these alternatives:
    // array<indexes_t, 2598960> hand_map;
    // indexes_t hand_map[2598960];
}
  • OS: CentOS Linux release 7.9.2009 (Core)
  • Compiler: g++ (GCC) 7.3.1 20180303 (Red Hat 7.3.1-5)
  • Compile command:
g++ -std=c++14 -Wall -Wpedantic -Og -g -o create_hand_map create_hand_map.cpp

No errors or warnings are generated at compile time.

Static analysis:

A static analysis via cppcheck produces no unexpected results.
Using --check-config as suggested in the command output below yields only: "Please note: Cppcheck does not need standard library headers to get proper results."

$ cppcheck --enable=all create_hand_map.cpp
create_hand_map.cpp:136:27: style: Unused variable: dlf [unusedVariable]
   dynamic_loop_functor_t dlf;
                          ^
nofile:0:0: information: Cppcheck cannot find all the include files (use --check-config for details) [missingIncludeSystem]

Attempted debug with GDB:

$ gdb ./create_hand_map
GNU gdb (GDB) Red Hat Enterprise Linux 8.0.1-36.el7
<snip>
This GDB was configured as "x86_64-redhat-linux-gnu".
<snip>
Reading symbols from ./create_hand_map...done.
(gdb) run
Starting program: ./create_hand_map

Program received signal SIGSEGV, Segmentation fault.
0x0000000000400894 in std::operator<< <std::char_traits<char> > (__s=0x4009c0 "Starting main()...",
    __out=...) at /opt/rh/devtoolset-7/root/usr/include/c++/7/ostream:561
561             __ostream_insert(__out, __s,
(gdb) bt
#0  0x0000000000400894 in std::operator<< <std::char_traits<char> > (
    __s=0x4009c0 "Starting main()...", __out=...)
    at /opt/rh/devtoolset-7/root/usr/include/c++/7/ostream:561
#1  main () at create_hand_map.cpp:133
(gdb)


Answers


  1. This is definitely a stack overflow. sizeof(dynamic_loop_functor_t) is roughly 62 MB (2,598,960 vectors of 24 bytes each on a typical 64-bit implementation), while the default stack size limit on most Linux distributions is only 8 MiB. So the crash is not surprising.

    The remaining question is, why does the debugger identify the crash as coming from inside std::operator<<? The actual segfault results from the CPU exception raised by the first instruction to access an address beyond the stack limit. The debugger only gets the address of the faulting instruction, and has to use the debug information provided by the compiler to associate it with a particular line of source code.

    The results of this process are not always intuitive. There is not always a clear correspondence between instructions and source lines, especially when the optimizer may reorder instructions or combine code coming from different lines. Also, there are many cases where a bug or problem with one source line can cause a fault in another section of code that is otherwise innocent. So the source line shown by the debugger should always be taken with a grain of salt.

    In this case, what happened is as follows.

    • The compiler determines the total amount of stack space needed by all local variables, and allocates it by subtracting this number from the stack pointer at the very beginning of the function, in the prologue. This is more efficient than doing a separate allocation for each local variable at the point of its declaration. (Note that constructors, if any, are not called until the point in the code where the variable’s declaration actually appears.)

      The prologue code is typically not associated with any particular line of source code, or maybe with the line containing the function’s opening {. But in any case, subtracting from the stack pointer is a pure register operation; it does not access memory and therefore cannot cause a segfault by itself. Nonetheless, the stack pointer is now pointing outside the area mapped for the stack, so the next attempt to access memory near the stack pointer will segfault.

    • The next few instructions of main execute the cout << "Starting main". This is conceptually a call to the overloaded operator<< from the standard library; but in GCC’s libstdc++, the operator<< is a very short function that simply calls an internal helper function named __ostream_insert. Since it is so short, the compiler decides to inline operator<< into main, and so main actually contains a call to __ostream_insert. This is the instruction that faults: the x86 call instruction pushes a return address to the stack, and the stack pointer, as noted, is out of bounds.

      Now the instructions that set up arguments and call __ostream_insert are marked by the debug info as corresponding to the source of operator<<, in the <ostream> header file – even though those instructions have been inlined into main. Hence your debugger shows the crash as having occurred "inside" operator<<.

      Had the compiler not inlined operator<< (e.g. if you compile without optimization), then main would have contained an actual call to operator<<, and this call is what would have crashed. In that case the traceback would have pointed to the cout << "Starting main" line in main itself – misleading in a different way.


    Note that you can have GCC warn you about functions that use a large amount of stack with the options -Wstack-usage=NNN or -Wframe-larger-than=NNN. These are not enabled by -Wall, but could be useful to add to your build, especially if you expect to use large local objects. With either of them and a reasonable value for NNN (say 4000000), I get a warning on your main function.

  2. You must raise the stack size limit before putting the huge object on the stack.

    In Linux you can achieve that by calling setrlimit() from main(). From then on you can invoke functions with huge stack objects. E.g.:

    #include <stdlib.h>       /* EXIT_SUCCESS */
    #include <sys/resource.h> /* getrlimit, setrlimit */
    
    struct huge_t { /* something really huge lives here */ };
    
    int worker (void) {
        struct huge_t huge;
        /* do something with huge */
        return EXIT_SUCCESS;
    }
    
    int main (void) {
        struct rlimit rlim;
        /* fetch the current limits first so rlim_max stays valid */
        getrlimit (RLIMIT_STACK, &rlim);
        rlim.rlim_cur = sizeof (struct huge_t) + 1048576;
        setrlimit (RLIMIT_STACK, &rlim);
        return worker ();
    }
    

    Because local objects are allocated on the stack before you have a chance to call setrlimit(), the huge object must be in worker().
