
I’ve read 13.1.3 Fixed-Point Types (Exact Value) – DECIMAL, NUMERIC, but it says nothing about this topic.

As far as I have tested, I can INSERT a string literal such as '123.45' into a DECIMAL(5, 2) column, but does this insertion lose precision?

Similarly, does SELECTing a DECIMAL column as a string in any programming language lose accuracy?

Finally, this is what I most want to ask: in which section of the official documentation are these behaviors explained? I want a rigorous understanding of the topic.


Answers


  1. https://dev.mysql.com/doc/refman/8.0/en/precision-math-decimal-characteristics.html says:

    The SQL standard requires that the precision of NUMERIC(M,D) be exactly M digits. For DECIMAL(M,D), the standard requires a precision of at least M digits but permits more. In MySQL, DECIMAL(M,D) and NUMERIC(M,D) are the same, and both have a precision of exactly M digits.

    For a full explanation of the internal format of DECIMAL values, see the file strings/decimal.c in a MySQL source distribution. The format is explained (with an example) in the decimal2bin() function.
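    As a rough analogy in Python (not MySQL itself), the `decimal` module parses a string literal digit by digit with no binary rounding, which mirrors why inserting '123.45' into a DECIMAL(5,2) column loses nothing, while a detour through a binary float would:

    ```python
    from decimal import Decimal, ROUND_HALF_UP

    s = "123.45"

    d = Decimal(s)    # parsed digit-by-digit: exactly 123.45
    f = float(s)      # nearest IEEE 754 double, not exactly 123.45

    print(Decimal(f))        # reveals the float's true binary value
    print(Decimal(f) == d)   # False: the float detour lost exactness

    # MySQL rounds excess fractional digits on insert; Decimal.quantize
    # with ROUND_HALF_UP behaves similarly for positive values.
    print(Decimal("1.005").quantize(Decimal("0.01"), rounding=ROUND_HALF_UP))
    ```

    The same reasoning covers SELECTing a DECIMAL as a string: the server formats the stored digits directly, so nothing is lost as long as no binary float type sits in between.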

    https://dev.mysql.com/doc/refman/8.0/en/problems-with-float.html has an example that shows the effect of FLOAT imprecision, and that DECIMAL will not have the same issue.

    Many programming languages use IEEE 754 format for a floating point numeric type. These all suffer the same imprecision issue.
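    The contrast shows up in any language that offers both binary floats and a decimal type; here is a small Python illustration (the float behavior is the IEEE 754 double issue, not anything MySQL-specific):

    ```python
    from decimal import Decimal

    # IEEE 754 doubles cannot represent 0.1, 0.2, or 0.3 exactly:
    print(0.1 + 0.2 == 0.3)        # False
    print(f"{0.1 + 0.2:.17f}")     # 0.30000000000000004

    # A decimal type carries the digits exactly, like MySQL's DECIMAL:
    print(Decimal("0.1") + Decimal("0.2") == Decimal("0.3"))   # True
    ```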

    You may want to read What Every Computer Scientist Should Know About Floating-Point Arithmetic. This is not part of the MySQL manual, but it applies to many programming languages.

  2. MySQL’s DECIMAL(m,n) is an exact numeric data type defined by its precision (total number of digits, m) and scale (number of digits to the right of the decimal point, n).

    A decimal number consists of a whole-number part and a fractional part.

    First determine the maximum number of whole-number digits (x) your data column must hold, then the number of fractional (rounding) digits (y) you need.

    So m and n should be defined as:

         m → (x+y) ≡ total number of digits (whole-number digits plus decimal digits)
    and
         n → y ≡ total number of decimal digits
    

    The relation between m and n is:

       m >= n and m >= 1
       (m-n) = maximum number of digits to the left of the decimal point (max whole-number digits)
    
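    Putting that together, picking m and n from the largest value you need to store can be sketched in plain Python (`decimal_params` is a hypothetical helper for illustration, not a MySQL function):

    ```python
    from decimal import Decimal

    def decimal_params(max_whole_digits: int, fraction_digits: int) -> tuple[int, int]:
        """Return (m, n) for DECIMAL(m, n): m = x + y total digits, n = y."""
        m = max_whole_digits + fraction_digits
        n = fraction_digits
        assert m >= n and m >= 1
        return m, n

    m, n = decimal_params(3, 2)          # e.g. prices up to 999.99 -> DECIMAL(5, 2)
    largest = Decimal(10) ** (m - n) - Decimal(10) ** -n
    print(m, n, largest)                 # 5 2 999.99
    ```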

    The official documentation explains this in detail:
    DECIMAL Data Type Characteristics

    I hope this resolves your doubts.
