I am currently learning about the stack layout of frames on x86 using GNU gdb (Ubuntu 9.2-0ubuntu1~20.04.1). While playing around with IEEE754 double-precision values I ran into some behaviour that I cannot explain. Currently I have the following values on my stack.
(gdb) x/2wx 0xffffca74
0xffffca74: 0x9ba5e354 0x400920c4
I.e. the two words at 0xffffca74 and 0xffffca78 together hold an IEEE754 double-precision value, 0x400920C49BA5E354 in hex, which is 3.141 in decimal. Now I tried to print the floating-point value in GDB and got the following outputs.
(gdb) x/f 0xffffca74
0xffffca74: -2.74438676e-22
(gdb) x/f 0xffffca74
0xffffca74: -2.74438676e-22
(gdb) x/b 0xffffca74
0xffffca74: 84
(gdb) x/f 0xffffca74
0xffffca74: 3.141
So at first, GDB interpreted the value at 0xffffca74 as an IEEE754 single-precision number. But after printing a single byte at that location and running the same command again, it suddenly interprets it correctly as a double-precision number. How does it do that? Does it have some sort of automatic type recognition?
I’ve tried to find information about this in the documentation, but unfortunately found nothing on this behaviour. I would only have expected it to print the correct result when explicitly querying a giant word.
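For reference, both interpretations can be reproduced outside of GDB. The following is a minimal C sketch (assuming a little-endian host such as x86) that reinterprets the same bit patterns as a single-precision and a double-precision value:

#include <stdint.h>
#include <stdio.h>
#include <string.h>

int main(void)
{
    /* The two words gdb shows at 0xffffca74; on little-endian x86 the
       low word 0x9ba5e354 sits at the lower address. */
    uint32_t low_word = 0x9ba5e354;
    uint64_t all_bits = 0x400920c49ba5e354ULL;

    float  f;   /* only the low 4 bytes, decoded as IEEE754 single precision */
    double d;   /* all 8 bytes, decoded as IEEE754 double precision */
    memcpy(&f, &low_word, sizeof f);
    memcpy(&d, &all_bits, sizeof d);

    printf("single precision (w size): %g\n", f);
    printf("double precision (g size): %g\n", d);
    return 0;
}

This should print roughly -2.74439e-22 for the single-precision view and 3.141 for the double-precision view, matching the two different outputs GDB produced above.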
2 Answers
The problem is that GDB defaults to w (word) width, and that it also remembers the last width and reuses it (but not consistently, see below). Consider the session in the question: the initial x/2wx command sets (and leaves) the width at w, so the following x/f reuses that width and decodes only the low word 0x9ba5e354 as a single-precision float.
But why did this work after x/b? My guess is that somewhere in GDB there is logic that says: the current width is 1, but that makes no sense for a float, so let's guess that the user wanted a double.

GDB's x command is documented as taking a size (b, h, w, g) and a format (o, x, d, u, t, f, a, i, c, s, z) argument. They are "sticky" from one x command to the next; if you omit the size or format, the last one specified is reused.
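As a rough mental model of that stickiness, here is a small C illustration; the names are made up for this answer and it is not GDB's actual source:

/* Illustration only -- made-up names, not GDB code. */
static char last_size   = 'w';   /* b, h, w or g; word is the initial default */
static char last_format = 'x';

/* Called for each "x/nfu addr": letters that were omitted fall back to the
   remembered ones, and whatever is used this time is remembered again. */
static void decode_examine_spec(char size, char format)
{
    if (size == 0)
        size = last_size;
    if (format == 0)
        format = last_format;
    last_size   = size;
    last_format = format;
    /* ... go on to examine memory using size and format ... */
}

So after x/2wx the remembered size is w, and a later plain x/f inherits it.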
Not all combinations of size and format make sense. In particular, 'f' is only supported for the 'w' and 'g' sizes. In the GDB function printcmd.c:decode_format, if the 'f' format is requested and the current size is not 'w' or 'g', it is changed to 'g'.
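Extending the sketch above, the fallback described here would look roughly like this (again an illustration with made-up names, not the actual decode_format code):

/* Illustration only -- not GDB's real decode_format(). */
static char fix_size_for_float(char size, char format)
{
    /* 'f' only works with 4- or 8-byte units, so a remembered size such as
       the 'b' left over from a previous "x/b" is replaced by 'g'. */
    if (format == 'f' && size != 'w' && size != 'g')
        size = 'g';
    return size;
}

That is what happened in the question: the x/b command left the remembered size at b, and the next x/f forced it to g, so the full 8 bytes were finally decoded as the expected double.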