#include <stdio.h>

int main() {
    double a = 92233720368547758071;
    printf("value=%lf\n", a);
    int i;
    char *b = &a;
    for (i = 0; i < 8; i++) {
       printf("%d byte (%p)value: <%d>n", i + 1, b,*b);
       b++;
    }
    return 0;
}

(compiler details):
gcc (Ubuntu 13.2.0-23ubuntu4) 13.2.0

The above code generates warnings when compiling because (92233720368547758071) is too large to store in a double, and I used a char pointer to address the double variable because I want to inspect all 8 bytes.

(Warnings):

test.c: In function ‘main’:
test.c:6:12: warning: integer constant is too large for its type
    6 | double a = 92233720368547758071;
      |            ^~~~~~~~~~~~~~~~~~~~
test.c:8:17: warning: initialization of ‘char *’ from incompatible pointer type ‘double *’ [-Wincompatible-pointer-types]
    8 | int i;char *b = &a;

Output:

value=18446744073709551616.000000
1 byte (0x7ffda3f173f8)value: <0>
2 byte (0x7ffda3f173f9)value: <0>
3 byte (0x7ffda3f173fa)value: <0>
4 byte (0x7ffda3f173fb)value: <0>
5 byte (0x7ffda3f173fc)value: <0>
6 byte (0x7ffda3f173fd)value: <0>
7 byte (0x7ffda3f173fe)value: <-16>
8 byte (0x7ffda3f173ff)value: <67>

The IEEE 754 standard is used to represent floating-point numbers in memory. The double data type uses the double-precision encoding format.

The format contains:

  1. Sign-bit: (1-bit)

  2. Exponent: (11-bit)

  3. Mantissa: (52-bit)

Altogether, it represents a floating-point number in 64 bits.
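
To see these fields for a concrete value, here is a minimal sketch (my own addition, assuming the usual approach of copying the raw bits into a uint64_t with memcpy):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

/* Print the sign, exponent and mantissa fields of a double. */
static void dump_fields(double d) {
    uint64_t bits;
    memcpy(&bits, &d, sizeof bits);                       /* raw 64 bits of the double */
    unsigned sign     = (unsigned)(bits >> 63);           /* 1 bit   */
    unsigned exponent = (unsigned)(bits >> 52) & 0x7FF;   /* 11 bits */
    uint64_t mantissa = bits & 0xFFFFFFFFFFFFFULL;        /* 52 bits */
    printf("sign=%u exponent=%u (unbiased %d) mantissa=0x%013llx\n",
           sign, exponent, (int)exponent - 1023, (unsigned long long)mantissa);
}

int main(void) {
    dump_fields(92233720368547758071.0);  /* note the .0: a double literal */
    dump_fields(18446744073709551616.0);  /* 2^64 */
    return 0;
}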

(92233720368547758071) conversion:

Normalised number (after rounding to the nearest value representable in 52 mantissa bits): [+]1.[0100000000000000000000000000000000000000000000000000]*2^[66]

double precision bias: 1023

Exponent: 1023+66=1089

1) sign-bit: (0)
2) Exponent: (10001000001)
3) Mantissa|Precision: (0100000000000000000000000000000000000000000000000000)

92233720368547758071 ->

01000100 00010100 00000000 00000000  00000000 00000000 00000000 00000000
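
A quick way to check this bit pattern (a sketch I added; 0x4414000000000000 is simply the 64 bits above written in hexadecimal):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double a = 92233720368547758071.0;  /* double literal, stored as 1.25 * 2^66 */
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);
    printf("%016llx\n", (unsigned long long)bits);  /* expect 4414000000000000 */
    return 0;
}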

(18446744073709551616) conversion:

Normalised number:[+]1.[0000000000000000000000000000000000000000000000000000]*2^[64]

double precision bias: 1023

Exponent: 1023+64=1087

1) sign-bit: (0) 
2) Exponent: (10000111111)
3) Mantissa|Precision: (0000000000000000000000000000000000000000000000000000) 

18446744073709551616 ->

01000011 11110000 00000000 00000000 00000000 00000000 00000000 00000000
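
The same check for 2^64 (again a sketch; 0x43f0000000000000 is the 64-bit pattern above in hexadecimal):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void) {
    double a = 18446744073709551616.0;  /* 2^64 as a double literal */
    uint64_t bits;
    memcpy(&bits, &a, sizeof bits);
    printf("%016llx\n", (unsigned long long)bits);  /* expect 43f0000000000000 */
    return 0;
}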

My Linux system stores data in memory in little-endian byte order. Decoding the data stored in each byte accordingly, we get the same result.

(System Information):

PRETTY_NAME="Ubuntu 24.04 LTS"
NAME="Ubuntu"
VERSION_ID="24.04"
VERSION="24.04 LTS (Noble Numbat)"
VERSION_CODENAME=noble
Hostname=HP-Laptop-15s-fq5xxx
1st byte value (0) which in binary -> (00000000)
2nd byte value (0) which in binary -> (00000000)
3rd byte value (0) which in binary -> (00000000)
4th byte value (0) which in binary -> (00000000)
5th byte value (0) which in binary -> (00000000)
6th byte value (0) which in binary -> (00000000)
7th byte value (-16) which in binary -> (11110000)
8th byte value (67) which in binary -> (01000011)
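
The -16 in the 7th byte is just the signed char view of the byte 0xF0. Here is a small sketch (my own variant) that dumps the same bytes through an unsigned char pointer, so every byte prints as a value between 0 and 255:

#include <stdio.h>

int main(void) {
    double a = 18446744073709551616.0;       /* the value that is actually stored */
    unsigned char *b = (unsigned char *)&a;  /* byte-wise view of the double */
    for (int i = 0; i < (int)sizeof a; i++)
        printf("byte %d: 0x%02x\n", i + 1, (unsigned)b[i]);
    /* On a little-endian machine this prints 00 six times, then f0, then 43. */
    return 0;
}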

My question here is: how did the number 92233720368547758071 get converted so that it is stored as 18446744073709551616?

What happened here? How does 18446744073709551616 end up in memory instead of 92233720368547758071?

2 Answers


  1. Chosen as BEST ANSWER

    This warning message shows that the compiler treated the constant as an integer:

    test.c: In function ‘main’:
    test.c:6:12: warning: integer constant is too large for its type
        6 | double a = 92233720368547758071;
          |            ^~~~~~~~~~~~~~~~~~~~
    test.c:8:17: warning: initialization of ‘char *’ from incompatible pointer type ‘double *’ [-Wincompatible-pointer-types]
        8 | int i;char *b = &a;
    

    So what happened behind the scenes? The compiler first converts the number (92233720368547758071) to binary using the normal integer encoding. The binary value of the number under normal integer encoding is:

    92233720368547758071->1001111111111111111111111111111111111111111111111111111111111110111(67-bit)
    

    Only 64 bits can be stored, so the first 3 bits get truncated.

    The 64-bit integer that results from the truncation is, in binary:

    1111111111111111111111111111111111111111111111111111111111110111 (64-bit)
    
    

    According to the IEEE 754 standard, the above integer value (18446744073709551607) cannot be represented exactly in a double, so it is rounded to the nearest representable value and encoded as:

    18446744073709551607 -> 0 10000111111 0000000000000000000000000000000000000000000000000000

    1) Sign-bit: (0) (1-bit)
    2) Exponent: (10000111111) (11-bit)
    3) Mantissa: (0000000000000000000000000000000000000000000000000000) (52-bit)

    When this binary value is interpreted as a double, it reads back as 18446744073709551616.
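
    A short C check of this rounding step (a sketch; on the usual round-to-nearest platforms the integer-to-double conversion produces exactly 2^64):

    #include <stdio.h>

    int main(void) {
        unsigned long long truncated = 18446744073709551607ULL;  /* 2^64 - 9 */
        double d = (double)truncated;       /* rounds to the nearest double, 2^64 */
        printf("truncated = %llu\n", truncated);
        printf("as double = %.1f\n", d);    /* prints 18446744073709551616.0 */
        return 0;
    }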
    

    If you want to verify this calculation:

    IEEE 754 converter, binary converter


  2. The number 92233720368547758071 can definitely fit inside a double, as a double (IEEE 754 float64) can hold values up to around 1.8e+308.

    The reason 92233720368547758071 became 18446744073709551616 is that 92233720368547758071 is an integer literal; it is too large for any 64-bit integer type, and 18446744073709551616 (2^64) is the closest double to the 64-bit value the constant was truncated to.

    To make a double literal, use a ., e.g. 92233720368547758071.0.


    Note: 92233720368547758071.0 would actually be stored as 92233720368547758080.0.

    The gap between representable values grows as their magnitude grows; once the magnitude crosses 2^53, the gap between consecutive representable doubles becomes larger than 1.0. Notice that integers of up to 53 bits can be represented exactly, since they fit in the 52-bit mantissa plus the implicit leading 1 bit.
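
    A small sketch illustrating both points (my own example; the printed digits assume the usual IEEE 754 binary64 double):

    #include <stdio.h>

    int main(void) {
        double a = 92233720368547758071.0;  /* double literal */
        printf("%.1f\n", a);                /* prints 92233720368547758080.0 */

        /* Above 2^53 the gap between consecutive doubles exceeds 1.0: */
        double big = 9007199254740992.0;    /* 2^53 */
        printf("%d\n", big + 1.0 == big);   /* prints 1: the added 1.0 is lost */
        return 0;
    }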
