
The problem is from CS:APP3e, Problem 2.82.
I learned that when x = INT_MIN, -x is still INT_MIN (in two's complement, TMin is its own additive inverse), but

#include <stdio.h>
#include <limits.h>

int main() {
    int x = INT_MIN, y = -3;
    printf("%d\n", (x < y) == (-x > -y));
    return 0;
}

on my machine (Linux 6.2.0-34-generic, x86_64, gcc 11.4.0 (Ubuntu 11.4.0-1ubuntu1~22.04), GNU ld (GNU Binutils for Ubuntu) 2.38), this gives the output 1.
Why does that happen?

I compiled it with a plain gcc -o invocation, and also with gcc -O0; the output is 1 both ways.

2 Answers


  1. On your platform, like many others (any two's-complement platform without trap representations, which is most modern platforms), negating INT_MIN is undefined behavior: the mathematical result, INT_MAX + 1, is not representable in an int. Compilers are allowed to assume undefined behavior never occurs, and to behave however they like, including in nonsensical ways, should it occur. As a result, gcc's optimizer (which still operates at a minimal level regardless of optimization settings) can exclude INT_MIN as a possibility when analyzing (x<y)==(-x>-y), rewrite -x > -y as y > x, and conclude that the expression is a tautology, substituting in 1 without performing any runtime comparisons. And in fact it performs none: it merely loads the constant 1 for printing.
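
    One way to confirm that the negation itself is the problem is to let the compiler check it at run time. A minimal sketch, assuming gcc or clang (both support the -fsanitize=undefined flag):

        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            /* volatile keeps the compiler from folding the negation
               away at compile time, so the runtime check actually fires */
            volatile int x = INT_MIN;
            int neg = -x;   /* UBSan reports the negation overflow here */
            printf("%d\n", neg);
            return 0;
        }

    Compile with gcc -fsanitize=undefined and run it; the sanitizer reports that the negation of -2147483648 cannot be represented in type 'int'.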

  2. The actual constants INT_MIN and INT_MAX may be a bit confusing. The C standard (C17 5.2.4.2.1) only gives minimum required magnitudes: INT_MIN -32767 and INT_MAX +32767, meaning these are the utter minimum ranges that must be supported, corresponding to a 16 bit int. For simplicity, all examples below will assume a 16 bit int (a 32 bit int on two's-complement systems of course uses 2^31 - 1 and -2^31, that is 2147483647 and -2147483648).

    These values were picked for historical reasons. In addition to the industry-standard two's complement, C also supports two exotic signedness formats: one's complement and signed magnitude. In those two we have a negative zero and/or a trap representation, giving a possible value range of -32767 to 32767.

    But the vast majority of all computers use two's complement, and there INT_MIN becomes -32768 on a 16 bit system. This is fine with the standard, since the standard only requires INT_MIN to be -32767 or lower. And in two's complement, INT_MAX is still 32767.
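
    The asymmetry is easy to demonstrate by printing the limits. A trivial sketch (on a typical 32 bit two's-complement system this prints -2147483648 and 2147483647):

        #include <limits.h>
        #include <stdio.h>

        int main(void) {
            /* on two's complement, the magnitude of INT_MIN
               exceeds INT_MAX by exactly one */
            printf("INT_MIN = %d\n", INT_MIN);
            printf("INT_MAX = %d\n", INT_MAX);
            return 0;
        }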

    Therefore on a two's complement system, we cannot do int x = INT_MIN; x = -x;, since the result 32768 is larger than INT_MAX (32767) and cannot be represented in an int. That is a signed integer overflow, which is undefined behavior: anything can happen, including strange and nonsensical code generation by the compiler.
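
    If code must negate an int that might hold INT_MIN, the only portable option is to check first. A minimal sketch (safe_negate is a hypothetical helper name, not something from the standard library):

        #include <limits.h>
        #include <stdbool.h>

        /* returns false instead of invoking undefined behavior
           when v is INT_MIN, whose negation is not representable */
        bool safe_negate(int v, int *result) {
            if (v == INT_MIN)
                return false;
            *result = -v;
            return true;
        }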

    In the upcoming C23 standard, support for exotic signedness formats will finally get removed from C. And then INT_MIN will likely become -32768 in the standard as well.
