
When using an implicit int-to-float conversion, it fails with printf()

#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %fn", s, 0, s, 0); 
    return 0;
}

When compiled with gcc -g scale.c -o scale, it outputs garbage:

./scale
10.000000 10.000000 10.000000 -5486124068793688683255936251187209270074392635932332070112001988456197381759672947165175699536362793613284725337872111744958183862744647903224103718245670299614498700710006264535590197791934024641512541262359795191593953928908168990292758500391456212260452596575509589842140073806143686060649302051520512.000000

If I explicitly cast the integer to float, or use 0.0 (which is a double), it works as expected.

#include <stdio.h>

int main(int argc, char **argv) {
    float s = 10.0;
    printf("%f %f %f %fn", s, 0.0, s, 0.0); 
    return 0;
}

When compiled with gcc -g scale.c -o scale, it produces the expected output:

./scale
10.000000 0.000000 10.000000 0.000000

What is happening?

I’m using gcc (Debian 10.2.1-6) 10.2.1 20210110 if that’s important.

2 Answers


  1. The conversion specifier f expects an argument of type double. Usually sizeof( double ) is 8 while sizeof( int ) is 4, and integers and doubles also have different internal representations.

    Using an incorrect conversion specifier results in undefined behavior.

    From the C Standard (7.21.6.1 The fprintf function)

    9 If a conversion specification is invalid, the behavior is
    undefined.275) If any argument is not the correct type for the
    corresponding conversion specification, the behavior is undefined.
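
    One way to make the question's first call well defined, as a minimal sketch: either pass arguments whose (promoted) types are double, or change the conversion specifier to match the int that is actually passed.

    #include <stdio.h>

    int main(void) {
        float s = 10.0f;

        /* all four arguments are (promoted to) double, matching %f */
        printf("%f %f %f %f\n", s, 0.0, s, (double)0);

        /* or keep the int arguments and use %d for them */
        printf("%f %d %f %d\n", s, 0, s, 0);

        return 0;
    }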

    As for arguments of type float, they are converted to double due to the default argument promotions.

    From the C Standard (6.5.2.2 Function calls)

    6 If the expression that denotes the called function has a type that
    does not include a prototype, the integer promotions are performed on
    each argument, and arguments that have type float are promoted to
    double. These are called the default argument promotions.

    So these calls of printf

    printf("%f %f %f %fn", s, 0.0, s, 0.0); 
    

    and

    printf("%f %f %f %fn", s, 0.0f, s, 0.0f); 
    

    produce the same result.

    Note that some programmers use the length modifier l in the conversion specification to output doubles, writing for example %lf. However, for printf the length modifier l has no effect on f and can be removed.
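
    For instance, in a small sketch like the one below, both calls print the same thing, because for printf the l length modifier has no effect on the f conversion specifier:

    #include <stdio.h>

    int main(void) {
        double d = 10.0;
        printf("%f\n", d);   /* prints 10.000000 */
        printf("%lf\n", d);  /* prints 10.000000 as well; the l is redundant here */
        return 0;
    }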

  2. A variadic function needs some kind of information about the number and types of the arguments it was passed. In the case of printf, this information is in the first parameter, the format string.

    If there is a "%f" in the format string, printf will expect a double argument (a float would have been promoted to double by the caller) and will retrieve it using va_arg.

    But the compiler has no idea about this. It does not take the format string into account — all it sees is the actual type of the argument, in this case an int.

    So an int was put in, and a double was taken out. That is undefined behavior.

    The behavior you observed occurs because your platform passes floating-point values differently from integer values (typically in different registers): the integer 0 was put in one place, but va_arg read from another, picking up, in fact, the second float argument you actually passed, and after that whatever happened to be in the places where further floating-point arguments would have been put had you passed them.
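
    As a rough illustration of the mechanism (using a hypothetical sum_doubles function, not anything from the question): the callee can only trust the count and types it is told, and reading an argument with va_arg using the wrong type is just as undefined as the %f/int mismatch above.

    #include <stdarg.h>
    #include <stdio.h>

    /* Hypothetical helper: sums `count` variadic arguments, all of which must
       arrive as double (floats are promoted to double by the caller). */
    static double sum_doubles(int count, ...) {
        va_list ap;
        va_start(ap, count);

        double total = 0.0;
        for (int i = 0; i < count; i++) {
            total += va_arg(ap, double); /* undefined if the caller passed an int here */
        }

        va_end(ap);
        return total;
    }

    int main(void) {
        float s = 10.0f;
        printf("%f\n", sum_doubles(2, s, 0.0)); /* fine: float is promoted to double */
        /* sum_doubles(2, s, 0) would be undefined, just like printf("%f", 0) */
        return 0;
    }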
