When using an int-to-float implicit conversion, it fails with printf()
#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0, s, 0);
    return 0;
}
When compiled with gcc -g scale.c -o scale, it outputs garbage:
./scale
10.000000 10.000000 10.000000 -5486124068793688683255936251187209270074392635932332070112001988456197381759672947165175699536362793613284725337872111744958183862744647903224103718245670299614498700710006264535590197791934024641512541262359795191593953928908168990292758500391456212260452596575509589842140073806143686060649302051520512.000000
If I explicitly cast the integer to float, or use 0.0 (which is a double), it works as designed.
#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %f\n", s, 0.0, s, 0.0);
    return 0;
}
When compiled with gcc -g scale.c -o scale, it produces the expected output:
./scale
10.000000 0.000000 10.000000 0.000000
What is happening? I'm using gcc (Debian 10.2.1-6) 10.2.1 20210110, if that's important.
2 Answers
The conversion specifier f expects an object of type double. Usually sizeof( double ) is equal to 8 while sizeof( int ) is equal to 4, and moreover integers and doubles have different internal representations. Using an incorrect conversion specifier results in undefined behavior.
From the C Standard (7.21.6.1 The fprintf function):

"If a conversion specification is invalid, the behavior is undefined. If any argument is not the correct type for the corresponding conversion specification, the behavior is undefined."
As for objects of type float, due to the default argument promotions they are converted to type double. From the C Standard (6.5.2.2 Function calls):

"The ellipsis notation in a function prototype declarator causes argument type conversion to stop after the last declared parameter. The default argument promotions are performed on trailing arguments."
So these calls of printf

printf( "%f", s );

and

printf( "%f", ( double )s );

are equivalent relative to the result.
Note that some programmers, to output doubles, use the length modifier l in the conversion specification, as for example %lf. However, with printf that length modifier has no effect and should be removed.

A variadic function needs some kind of information on the number and types of the arguments it was passed. In the case of printf, this information is in the first parameter, the format string. If there is a "%f" in the format string, printf will expect a double argument (any float argument has already been promoted to double), and will retrieve it using va_arg.

But the compiler has no idea about this. It does not take the format string into account; all it sees are the actual types of the arguments, in this case an int.

So an int was put in, and a double taken out. This is undefined behavior.

The output you observed happens because your platform passes floating-point values differently than integer values: the integer 0 got put in one place, but va_arg read from another. It read, in fact, the second actual float argument the first time, and then whatever happened to be in the place where a third floating-point argument would have been passed had you provided one.