Each arithmetic data type has a default precision and a maximum precision, as specified in Table 2-2. The scale factor of fixed-point decimal data can range from 0 through 31 (0 ≤ q ≤ 31).
**Table 2-2. Default and Maximum Precision of Arithmetic Data Types**

| Data Type | Maximum Precision | Default Precision |
|---|---|---|
| Fixed Binary | 31 | 15* |
| Fixed Decimal | 31 | 5 |
| Float Binary | 52 | 23 |
| Float Decimal | 16 | 6 |
\* The `-longint` option changes the default precision of fixed binary from 15 to 31.
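
The following is a minimal sketch of PL/I declarations illustrating how these defaults and limits apply. The variable names are hypothetical, and the commented precisions assume the defaults in Table 2-2 without the `-longint` option.

```pl1
/* Declarations relying on default precision versus explicit precision. */
declare i fixed binary;            /* default precision: fixed binary(15)      */
declare j fixed binary(31);        /* explicit maximum precision               */
declare d fixed decimal;           /* default precision: fixed decimal(5)      */
declare amount fixed decimal(9,2); /* precision 9 with scale factor q = 2      */
declare x float binary;            /* default precision: float binary(23)      */
declare y float decimal(16);       /* explicit maximum precision               */
```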