CERT

Integer types in C have both a size and a precision. The size indicates the number of bytes used by an object and can be retrieved for any object or type using the sizeof operator.  The precision of an integer type is the number of bits it uses to represent values, excluding any sign and padding bits.

Padding bits contribute to the integer's size, but not to its precision. Consequently, inferring the precision of an integer type from its size may result in too large a value, which can then lead to incorrect assumptions about the numeric range of these types.  Programmers should use correct integer precisions in their code, and in particular, should not use the sizeof operator to compute the precision of an integer type on architectures that use padding bits or in strictly conforming (that is, portable) programs.

Noncompliant Code Example

This noncompliant code example illustrates a function that produces 2 raised to the power of the function argument. To prevent undefined behavior in compliance with INT34-C. Do not shift an expression by a negative number of bits or by greater than or equal to the number of bits that exist in the operand, the function ensures that the argument is less than the number of bits used to store a value of type unsigned int.
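The original listing is not reproduced here; a minimal sketch of such a function, matching the CHAR_BIT-based bound described above (the return value of 0 on error is an assumption for illustration), might look like:

```c
#include <limits.h>  /* CHAR_BIT */

unsigned int pow2(unsigned int exp) {
  /* Guards against shifting by the number of *stored* bits --
     sizeof(unsigned int) * CHAR_BIT -- rather than by the precision. */
  if (exp >= sizeof(unsigned int) * CHAR_BIT) {
    return 0; /* assumed error sentinel, for illustration */
  }
  return 1U << exp;
}
```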

However, if this code runs on a platform where unsigned int has one or more padding bits, it can still result in values for exp that are too large. For example, on a platform that stores unsigned int in 64 bits, but uses only 48 bits to represent the value, a left shift of 56 bits would result in undefined behavior.

Compliant Solution

This compliant solution uses a popcount() function, which counts the number of bits set on any unsigned integer, allowing this code to determine the precision of any integer type, signed or unsigned.
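A sketch of such a popcount()-based precision computation (names follow the text; the exact original listing is not reproduced here):

```c
#include <stddef.h>
#include <stdint.h>

/* Counts the bits set in num.  Passing a type's maximum value
   (e.g. UINT_MAX) yields that type's precision, because the
   maximum value has every value bit set and no padding bits. */
size_t popcount(uintmax_t num) {
  size_t precision = 0;
  while (num != 0) {
    if (num % 2 == 1) {
      precision++;
    }
    num >>= 1;
  }
  return precision;
}
#define PRECISION(umax_value) popcount(umax_value)
```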

Implementations can replace the PRECISION() macro with a type-generic macro that returns an integer constant expression that is the precision of the specified type for that implementation. This return value can then be used anywhere an integer constant expression can be used, such as in a static assertion. (See DCL03-C. Use a static assertion to test the value of a constant expression.) The following type generic macro, for example, might be used for a specific implementation targeting the IA-32 architecture:
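On such an implementation the macro might take the following shape (the specific constants are an assumption for a typical IA-32 ABI: 32-bit int and long, 64-bit long long, no padding bits):

```c
/* Assumed IA-32 widths: each branch returns an integer constant
   expression, so the macro is usable in a static assertion. */
#define PRECISION(value) _Generic(value, \
  unsigned char      : 8,  \
  unsigned short     : 16, \
  unsigned int       : 32, \
  unsigned long      : 32, \
  unsigned long long : 64, \
  signed char        : 7,  \
  signed short       : 15, \
  signed int         : 31, \
  signed long        : 31, \
  signed long long   : 63)
```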

The revised version of the pow2() function uses the PRECISION() macro to determine the precision of the unsigned type:
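Under the same assumptions, the revised function might be sketched as follows (the popcount() helper is repeated here for self-containedness; the 0 error return is again an assumed sentinel):

```c
#include <limits.h>  /* UINT_MAX */
#include <stddef.h>
#include <stdint.h>

size_t popcount(uintmax_t num) {
  size_t precision = 0;
  while (num != 0) {
    if (num % 2 == 1) {
      precision++;
    }
    num >>= 1;
  }
  return precision;
}
#define PRECISION(umax_value) popcount(umax_value)

unsigned int pow2(unsigned int exp) {
  /* Compare against the precision, not sizeof * CHAR_BIT. */
  if (exp >= PRECISION(UINT_MAX)) {
    return 0; /* assumed error sentinel, for illustration */
  }
  return 1U << exp;
}
```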

Implementation Details

Some platforms, such as the Cray Linux Environment (CLE; supported on Cray XT CNL compute nodes), provide a _popcnt instruction that can substitute for the popcount() function.
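Similarly, on GCC and Clang a compiler builtin can stand in for a hand-written popcount() (a sketch; __builtin_popcountll is compiler-specific, not standard C):

```c
/* Non-portable sketch: relies on the GCC/Clang builtin, which the
   compiler typically lowers to a single popcount instruction where
   the target supports one. */
#define PRECISION(umax_value) ((unsigned)__builtin_popcountll(umax_value))
```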

Risk Assessment

Mistaking an integer's size for its precision can permit invalid precision arguments to operations such as bitwise shifts, resulting in undefined behavior.

 

Rule     Severity  Likelihood  Remediation Cost  Priority  Level
INT35-C  Low       Unlikely    Medium            P2        L3

Automated Detection

Tool       Version  Checker                                                                                    Description
PRQA QA-C  9.2      1820, 1821, 1822, 1823, 1824, 1840, 1841, 1842, 1843, 1844, 1850, 1851, 1852, 1853, 1854  Partially implemented

 

Related Guidelines

MITRE CWE    CWE-190, Integer Overflow or Wraparound

 

Bibliography

[Dowd 2006]            Chapter 6, "C Language Issues"
[C99 Rationale 2003]   6.5.7, "Bitwise Shift Operators"

10 Comments

  1. I may be becoming overly enamored with these, but I now think this would be another good application of a type generic macro.

  2. I'm wondering if we need a signed version of this function.  The standard does say this regarding the size:

    For each of the signed integer types, there is a corresponding (but different) unsigned integer type (designated with the keyword unsigned) that uses the same amount of storage (including sign information) and has the same alignment requirements.

    As this rule points out, size is not the same as width.

    I started researching actual architectures, but that hurt my brain.   It seems likely however that there are signed representations that use internal sign bits that may not be used to represent the value.

    So anyway, I’m not sure testing the width of an unsigned type is sufficient to determine the width of the corresponding signed type.

    I guess this might work, provided no one passes a negative number. I started the width at one to count the sign bit:

    /* Returns the number of set bits */
    size_t popcount(intmax_t num) {
      size_t width = 1;
      assert(num > 0);
      while (num != 0) {
        if (num % 2 == 1) {
          width++;
        }
        num >>= 1;
      }
      return width;
    }

    #define WIDTH(max_value) popcount(max_value)

  3. For an unsigned integer type, aren't the following conditions always true?

    UXXX_MAX = 2^PRECISION(UXXX_MAX) - 1

    PRECISION(UXXX_MAX) = log_2 (UXXX_MAX+1)

    So, a simple mapping table would do the job.

     

    1. Yes, for an unsigned integer, that should work (assuming you don't actually use UXXX_MAX + 1, which will always result in 0 due to the overflow). However, it doesn't handle signed integer values (which also have to worry about oddball representations such as sign magnitude, etc). The _Generic example is effectively the simple mapping table solution, and it relies on types instead of values, which is a nice benefit.

      1. Thanks.

        My question came up because of the last CERT Secure Coding newsletter pointing to http://www.open-std.org/jtc1/sc22/wg14/www/docs/n1899.pdf

        At least for unsigned integers, the proposed *_WIDTH constants for limits.h seem to carry redundant information as we already have *_MAX.

        Are these constants just added for convenience? If so, the problem statement in the document is misleading.

        1. I think that you cannot calculate (in standard C) *_WIDTH from *_MAX in such a way that would work from a _Static_assert(), and so using macros solves that issue, even if the values may be redundant or calculable for a particular implementation.

  4. At https://groups.google.com/forum/embed/#!topic/comp.lang.c/NfedEFBFJ0k ("Macro for the precision"), the following formula is given: the precision of an unsigned type u_t can be computed at compile time as IMAX_BITS((u_t) -1).

    This is arguably better than a run-time function; it also has the advantage to be applicable to typedefs (as long as the underlying type is an unsigned integral type); finally, it does not rely on unportable macros.
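For reference, the IMAX_BITS macro circulating in that thread (commonly attributed to Hallvard B. Furuseth on comp.lang.c) is usually quoted in the following form; it computes the number of value bits b of an unsigned maximum value m = 2^b - 1 using only integer division and remainder:

```c
/* Yields b for m = 2^b - 1 (e.g. an unsigned type's *_MAX value).
   An integer constant expression when m is one, so it is usable
   in _Static_assert. */
#define IMAX_BITS(m) ((m)/((m)%255+1) / 255%255*8 + 7-86/((m)%255+12))
```

Because the result is an integer constant expression, it addresses the _Static_assert concern raised earlier in this thread.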

    1. Thank you for sharing this! I've looked it over (as well as the original link), and it seems plausible that it would work. However, I would feel more comfortable if we had a more authoritative, scholarly source than a google group link. Do you know of any other sources that can confirm the math? Also, this solution will violate INT30-C. Ensure that unsigned integer operations do not wrap, will it not?

  5. I am pleased to do so.

    I have unfortunately no further reference (perhaps you could ask the writer of the formula in the google group link in question).

    I successfully tested the formula with gmp for each 2^k-1 with 0 <= k <= 1'000'000, which is a trivial proof that the formula is correct over this range. The formula still holds for k = 30'000'000'000, but not for k = 35'000'000'000, in accordance with the comment. I will try to prove this formula (incl. overflow considerations) over the suitable range, and let you know if I succeed.

  6. I managed to prove this formula. As to the overflows, there are none: only divisions and remainders are involved in the three summands, which are positive and whose sum is b <= m.

    Proof of width formula