I am currently learning about the stack layout of frames on x86 using GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1). I ran into some behaviour that I cannot explain when playing around with IEEE754 double-precision values. Currently I have the following values on my stack:

(gdb) x/2wx 0xffffca74
0xffffca74:     0x9ba5e354      0x400920c4

That is, the 8 bytes starting at 0xffffca74 contain an IEEE754 double-precision value, 0x400920C49BA5E354 (hex) == 3.141 (decimal). Now I tried to print the floating-point value in gdb and got the following output:

(gdb) x/f 0xffffca74
0xffffca74:     -2.74438676e-22
(gdb) x/f 0xffffca74
0xffffca74:     -2.74438676e-22
(gdb) x/b 0xffffca74
0xffffca74:     84
(gdb) x/f 0xffffca74
0xffffca74:     3.141

So at first, GDB considered the value at 0xffffca74 to be an IEEE754 single-precision number. But after printing a single byte at that location and running the command again, it suddenly interprets it correctly as a double-precision number. How does it do that? Does it have some sort of automatic type recognition?

I’ve tried finding some information about this in the documentation, but unfortunately it says nothing about this behaviour. I would only have expected it to print the correct result when explicitly querying a giant word.
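
For reference, both readings can be reproduced outside of gdb with a small C program. This is only an illustration (it assumes a little-endian machine, as on x86, and uses memcpy for the reinterpretation):

#include <stdio.h>
#include <stdint.h>
#include <string.h>

int main(void)
{
    /* All 8 bytes starting at 0xffffca74, assembled little-endian. */
    uint64_t bits64 = 0x400920c49ba5e354ULL;
    double d;
    memcpy(&d, &bits64, sizeof d);   /* reinterpret as a 64-bit double */
    printf("as double: %g\n", d);    /* prints 3.141 */

    /* Only the 4 bytes at 0xffffca74, i.e. the low word. */
    uint32_t bits32 = 0x9ba5e354u;
    float f;
    memcpy(&f, &bits32, sizeof f);   /* reinterpret as a 32-bit float */
    printf("as float:  %g\n", f);    /* prints -2.74439e-22 */
    return 0;
}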

2 Answers


  1. (gdb) help x
    Examine memory: x/FMT ADDRESS.
    ADDRESS is an expression for the memory address to examine.
    FMT is a repeat count followed by a format letter and a size letter.
    Format letters are o(octal), x(hex), d(decimal), u(unsigned decimal),
      t(binary), f(float), a(address), i(instruction), c(char), s(string)
      and z(hex, zero padded on the left).
    Size letters are b(byte), h(halfword), w(word), g(giant, 8 bytes).
    The specified number of objects of the specified size are printed
    according to the format.  If a negative number is specified, memory is
    examined backward from the address.
    
    Defaults for format and size letters are those previously used.
    Default count is 1.  Default address is following last thing printed
    with this command or "print".
    

    The problem is that GDB defaults to w (word) width, and that it remembers the last width and reuses it (though not consistently, as shown below).

    Consider:

    (gdb) p d
    $1 = 3.1415000000000002
    
    (gdb) x/wf &d
    0x7fffffffd858: -4.09600019
    (gdb) x/f &d
    0x7fffffffd858: -4.09600019
    
    (gdb) x/gf &d
    0x7fffffffd858: 3.1415000000000002
    (gdb) x/f &d
    0x7fffffffd858: 3.1415000000000002
    

    But why did this work after x/b?

    (gdb) x/b &d
    0x7fffffffd858: 111
    (gdb) x/f &d
    0x7fffffffd858: 3.1415000000000002
    

    My guess is that somewhere in GDB there is logic that says: the current width is 1, but that makes no sense for a float, so let’s guess that the user wanted a double.

  2. GDB’s x command is documented as taking a size (b, h, w, g) and format (o, x, d, u, t, f, a, i, c, s, z) argument. They are "sticky" from one x command to the next; if you omit a size or format, the last one specified is reused.

    Not all combinations of size and format make sense. In particular, ‘f’ is only supported for the ‘w’ and ‘g’ sizes. In the GDB function printcmd.c:decode_format, if the current size is anything other than ‘w’ or ‘g’, it is changed to ‘g’.

    (gdb) x/wx 0x555555558010
    0x555555558010:    0x9ba5e354
    (gdb) x/f 0x555555558010
    0x555555558010:    -2.74438676e-22  # 32-bit float
    (gdb) x/bx 0x555555558010
    0x555555558010:    0x54
    (gdb) x/x 0x555555558010
    0x555555558010:    0x54
    (gdb) x/f 0x555555558010
    0x555555558010:    3.141   # 64-bit float
    (gdb) x/x 0x555555558010
    0x555555558010:    0x400920c49ba5e354 # default size was changed to 'g'
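
    As a rough sketch (simplified for illustration, not GDB’s verbatim source), the fix-up in decode_format amounts to:

    #include <stdio.h>

    /* Simplified sketch of the size fix-up for the 'f' format in
       printcmd.c:decode_format: floats can only be printed at word
       ('w') or giant ('g') size, so any other remembered size is
       promoted to 'g'.  The promoted size then becomes the new
       sticky default, which is why the later x/x prints 8 bytes. */
    static char fix_float_size(char size)
    {
        if (size != 'w' && size != 'g')
            return 'g';           /* e.g. the sticky 'b' from x/bx */
        return size;              /* 'w' and 'g' are kept as-is */
    }

    int main(void)
    {
        printf("%c %c %c\n",
               fix_float_size('b'),    /* g */
               fix_float_size('w'),    /* w */
               fix_float_size('g'));   /* g */
        return 0;
    }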
    