Commit Graph

25 Commits

Author SHA1 Message Date
Stefan Krah 0271766c88 Use workctx instead of ctx for cosmetic reasons. Also zero-pad the result
in the simple path (not correctly rounded but faster).
2012-05-31 20:49:24 +02:00
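
For illustration, a minimal plain-C sketch of the zero-padding idea (the helper, the buffer layout and the numbers are invented; this is not libmpdec's representation): a result with fewer than prec digits gets trailing zeros appended to its coefficient while the exponent is lowered, so the value is unchanged but the coefficient carries full working precision.

    /* Illustrative only: pad a decimal coefficient with trailing zeros until it
       has 'prec' digits, lowering the exponent so the value stays the same. */
    #include <stdio.h>
    #include <string.h>

    static void zeropad(char *coeff, size_t bufsize, int *exp, int prec)
    {
        int digits = (int)strlen(coeff);
        while (digits < prec && (size_t)digits + 1 < bufsize) {
            coeff[digits++] = '0';   /* append a zero digit ...            */
            (*exp)--;                /* ... and compensate in the exponent */
        }
        coeff[digits] = '\0';
    }

    int main(void)
    {
        char coeff[32] = "2718281828";          /* made-up 10-digit result */
        int exp = -9;
        zeropad(coeff, sizeof coeff, &exp, 16);
        printf("%sE%d\n", coeff, exp);          /* 2718281828000000E-15 */
        return 0;
    }
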
Stefan Krah 4d3e0a695a Improve Underflow handling in the correct-rounding loop. The case for
Underflow to zero hasn't changed: _mpd_qexp() internally uses MIN_EMIN,
so the result would also underflow to zero for all emin > MIN_EMIN.

If digits remain, the informal argument is as follows: Underflow can
occur only once in the last multiplication of the power stage (in the Horner
stage Underflow provably cannot occur, and if Underflow occurred twice in
the power stage, the result would underflow to zero on the second occasion).

Since there is no double rounding during Underflow, the effective work
precision is now 1 <= result->digits < prec. It can be shown by a somewhat
tedious argument that abs(result - e**x) < ulp(result, result->digits).

Therefore the correct-rounding loop now uses ulp(result, result->digits)
to generate the bounds for e**x in case of Underflow.
2012-05-31 20:01:05 +02:00
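
A worked illustration of the ulp-based bound, using the common definition ulp(x, n) = 10**(adjexp(x) - n + 1) and made-up numbers (this restates the argument above; it is not taken from the code): suppose Underflow clips the intermediate result to result = 1234E-103, so result->digits = 4 and adjexp(result) = -100. Then ulp(result, 4) = 1E-103, one unit in the last retained digit, and the bound abs(result - e**x) < 1E-103 puts the true value strictly between 1233E-103 and 1235E-103. The correct-rounding loop accepts the result only if both bounds round to the same prec-digit value; otherwise it increases the working precision and recomputes.
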
Stefan Krah 9a5beece1b Improve comments. 2012-05-31 16:21:34 +02:00
Stefan Krah 5ddbcfc53e Pad the result with zeros just before the final rounding. 2012-05-31 16:00:21 +02:00
Stefan Krah 30c35e8154 Do not clobber existing flags. 2012-05-31 15:09:27 +02:00
Stefan Krah e34a209584 Fix Visual Studio warning. 2012-05-16 20:20:03 +02:00
Stefan Krah 696d10f1bb Changes in _mpd_qexp():
-----------------------

  1) Reduce the number of iterations in the Horner scheme for operands with
     a negative adjusted exponent. Previously the number was overestimated
     quite generously.

  2) The function _mpd_get_exp_iterations() now has an ACL2 proof and
     is rewritten accordingly.

  3) The proof relies on abs(op) > 9 * 10**(-prec-1), so operands without
     that property are now handled by the new function _mpd_qexp_check_one().

  4) The error analysis for the evaluation of the truncated Taylor series
     in Hull & Abrham's paper relies on the fact that the reduced operand
     'r' has fewer than context.prec digits.

     Since the operands may have more than context.prec digits, a new ACL2
     proof covers the case that r.digits > context.prec. To facilitate the
     proof, the Horner step now uses fma instead of rounding twice in
     multiply/add.


Changes in mpd_qexp():
----------------------

  1) Fix a bound in the correct-rounding loop that was too optimistic. In
     practice results were always correctly rounded, because it is unlikely
     that the error in _mpd_qexp() ever reaches the theoretical maximum.
2012-05-16 20:10:21 +02:00
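
The fma point in 4) above can be illustrated with ordinary binary doubles (this only demonstrates the "one rounding per Horner step" idea; it is not libmpdec's decimal arithmetic, and the term count is arbitrary):

    /* Horner evaluation of the truncated Taylor series for exp(r):
       each step computes t = 1 + (r/i)*t with a single rounding via fma(),
       instead of rounding the product and the sum separately. */
    #include <math.h>
    #include <stdio.h>

    static double exp_horner(double r, int n)
    {
        double t = 1.0;
        for (int i = n; i >= 1; i--) {
            t = fma(r / i, t, 1.0);   /* one rounding per step */
        }
        return t;
    }

    int main(void)
    {
        printf("%.17g\n", exp_horner(0.5, 20));   /* truncated series */
        printf("%.17g\n", exp(0.5));              /* libm reference   */
        return 0;
    }

(Link with -lm.)
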
Stefan Krah 9d3a5aeabe Defensive programming: mpd_isspecial(r) already implies mpd_isspecial(q), but
this is more readable.
2012-04-20 21:00:31 +02:00
Stefan Krah 3c23a87e58 The divmod function for large numbers now has an ACL2 proof. Related changes:
  1) Rename _mpd_qbarrett_divmod to _mpd_base_ndivmod: The function is
     only marginally related to either Barrett's algorithm or to the version
     in Hasselstrom's paper.

  2) In places where the proof assumes exact operations, use new versions of
     add/sub/multiply that set NaN/Invalid_operation if this condition is
     not met. According to the proof this cannot happen, so this should be
     regarded as an extra safety net.

  3) Raise Division_impossible for operands with a number of digits greater
     than MPD_MAX_PREC. This facilitates the audit of the function and can
     practically only occur in the 32-bit version under conditions where
     a MemoryError is already imminent.

  4) Use _mpd_qmul() in places where the result can exceed MPD_MAX_PREC in
     a well defined manner.

  5) Test for mpd_isspecial(qq) in a place where the addition of one
     can theoretically trigger a Malloc_error.

  6) Remove redundant code in _mpd_qdivmod().

  7) Add many comments.
2012-04-20 19:59:20 +02:00
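
Point 2) above can be sketched as follows; the helper name exact_mul() and the driver are invented, and the real helpers added by the commit may look different. The idea is to perform the operation at maximum precision and, should it turn out inexact after all, to force the result to NaN with Invalid_operation set, so a violated proof assumption is caught at run time rather than silently rounded.

    /* Sketch of an "exact multiply" safety net (invented name, not the actual
       libmpdec helper): multiply at maximum precision and treat any rounding
       as a hard error. */
    #include <stdint.h>
    #include <stdio.h>
    #include "mpdecimal.h"

    static void exact_mul(mpd_t *result, const mpd_t *a, const mpd_t *b,
                          uint32_t *status)
    {
        mpd_context_t maxctx;
        uint32_t workstatus = 0;

        mpd_maxcontext(&maxctx);
        mpd_qmul(result, a, b, &maxctx, &workstatus);
        if (workstatus & (MPD_Inexact|MPD_Rounded)) {
            /* The proof says this cannot happen: extra safety net. */
            mpd_seterror(result, MPD_Invalid_operation, status);
            return;
        }
        *status |= workstatus;
    }

    int main(void)
    {
        mpd_context_t ctx;
        uint32_t status = 0;
        mpd_maxcontext(&ctx);

        mpd_t *a = mpd_new(&ctx), *b = mpd_new(&ctx), *r = mpd_new(&ctx);
        mpd_set_string(a, "123456789", &ctx);
        mpd_set_string(b, "1000000001", &ctx);

        exact_mul(r, a, b, &status);
        char *s = mpd_to_sci(r, 1);
        printf("%s  status=0x%x\n", s, status);

        mpd_free(s);
        mpd_del(a); mpd_del(b); mpd_del(r);
        return 0;
    }
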
Stefan Krah c51b7fd65b 1) Simplify comment -- one has to read the complete proof (available in ACL2)
in order to understand the algorithm anyway.

2) v->exp == -v->digits may be assumed.

3) Fix comment (v always shares data with a).
2012-04-18 19:27:32 +02:00
Stefan Krah 5d0d2e2b04 Explain the strategy to avoid huge alignment shifts in _mpd_qadd() in detail. 2012-04-18 18:59:56 +02:00
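
The kind of situation meant here, with made-up numbers (the actual scheme is documented in the source comment that this commit adds): at prec 5, computing 1E+100 + 7E-100 by full alignment would need a coefficient of roughly 200 digits, yet only the leading 5 digits plus the fact that something nonzero was discarded can influence the rounded sum 1.0000E+100 and the Inexact/Rounded flags, so the huge intermediate can be avoided.
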
Stefan Krah ed4b21ff4f Cosmetic change: initialize digits to 1 (redundant). 2012-04-18 18:45:22 +02:00
Stefan Krah bc771e9b19 Remove redundant finalization of the result. 2012-04-18 18:25:37 +02:00
Stefan Krah aecaf0b663 Fix comments and whitespace. 2012-04-18 18:08:20 +02:00
Stefan Krah 6369f77d20 Support mythical ones' complement machines. 2012-04-18 17:57:56 +02:00
Stefan Krah 140893cbaa The previous code is correct, but hard to verify: The libmpdec documentation
rightly states that an mpd_t with a coefficient flagged as MPD_CONST_DATA
must not be in the position of the result operand. In this particular case
several assumptions guarantee that a resize will never occur in all possible
code paths, which was the reason for using MPD_CONST_DATA and saving an
instruction by omitting the initialization of tmp.alloc.

For readability, tmp is now flagged as MPD_STATIC_DATA and tmp.alloc
is initialized.
2012-04-18 17:48:34 +02:00
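
The difference can be shown with a small, invented example of a stack-allocated mpd_t (the field order follows the public mpd_t declaration; this is not the code touched by the commit):

    /* Illustration only: an mpd_t whose coefficient lives in a local buffer.
       With MPD_STATIC_DATA a resize is permitted (it may switch the
       coefficient to dynamic memory), so tmp.alloc must hold a valid word
       count.  MPD_CONST_DATA promises that no resize ever happens, which is
       why tmp.alloc could previously be left uninitialized. */
    #include <stdint.h>
    #include <stdio.h>
    #include "mpdecimal.h"

    #define WORDS 4
    static mpd_uint_t tmp_data[WORDS];

    /* field order: flags, exp, digits, len, alloc, data */
    static mpd_t tmp = {MPD_STATIC | MPD_STATIC_DATA, 0, 0, 0, WORDS, tmp_data};

    int main(void)
    {
        mpd_context_t ctx;
        uint32_t status = 0;

        mpd_maxcontext(&ctx);
        mpd_qset_uint(&tmp, 12345, &ctx, &status);  /* fits, no resize needed */

        char *s = mpd_to_sci(&tmp, 1);
        printf("%s  status=0x%x\n", s, status);
        mpd_free(s);
        return 0;
    }
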
Stefan Krah ec766a6179 1) Remove claim of an input invariant that is only true for static mpd_t.
Resizing is used _inside_ libmpdec functions, and it is permitted to
   change x->alloc several times while setting x->len at the end of the
   function. Therefore, for a dynamic mpd_t, x->alloc can _temporarily_ drop
   below x->len. Of course the final result always has x->len <= x->alloc.

   For static mpd_t this cannot happen, since resizing to a smaller
   coefficient is a no-op.

2) Remove micro optimization in mpd_switch_to_dyn(): Previously only the
   valid initialized part of the existing coefficient up to x->len was
   copied to the new dynamic memory area. Now the copy behaves like
   realloc(): the entire old memory area is copied.

   The rationale for this change is that one no longer needs to keep in mind
   the explanation given in 1).
2012-04-10 23:11:54 +02:00
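
A plain-C sketch of the change in 2); switch_to_dyn() and the struct are stand-ins, not libmpdec's mpd_switch_to_dyn(). Copying x->alloc words (the whole old area) behaves like realloc(); copying only x->len words is also correct, but only because of the invariant discussed in 1).

    /* Illustration only: move a coefficient from a static buffer to dynamic
       memory, copying the entire old area as realloc() would. */
    #include <stdint.h>
    #include <stdlib.h>
    #include <string.h>

    struct num {
        uint32_t *data;
        size_t len;     /* words currently in use */
        size_t alloc;   /* words allocated        */
    };

    static int switch_to_dyn(struct num *x, size_t newsize)
    {
        /* assumes newsize >= x->alloc */
        uint32_t *p = malloc(newsize * sizeof *p);
        if (p == NULL) {
            return 0;
        }
        memcpy(p, x->data, x->alloc * sizeof *p);   /* whole old area ...  */
        /* memcpy(p, x->data, x->len * sizeof *p);     ... was: used part  */
        x->data = p;
        x->alloc = newsize;
        return 1;
    }

    int main(void)
    {
        uint32_t buf[2] = {42, 0};
        struct num x = {buf, 1, 2};
        if (switch_to_dyn(&x, 4)) {
            free(x.data);   /* coefficient is now heap-allocated */
        }
        return 0;
    }
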
Stefan Krah 7b544ca08d Fix stale comment. 2012-04-10 23:08:29 +02:00
Stefan Krah f69aef747a Resize the coefficient to MPD_MINALLOC also if the requested size is below
MPD_MINALLOC. Previously the resize was skipped as a micro optimization.
2012-04-09 20:47:57 +02:00
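
In effect the requested size is now clamped from below; a tiny sketch with an invented constant and helper (not the real resize function):

    /* Illustration only: never shrink the coefficient below the minimum
       allocation, even when fewer words are requested. */
    #include <stdio.h>

    #define MINALLOC 2   /* stand-in for MPD_MINALLOC */

    static size_t clamp_alloc(size_t nwords)
    {
        return nwords < MINALLOC ? MINALLOC : nwords;
    }

    int main(void)
    {
        printf("%zu %zu\n", clamp_alloc(1), clamp_alloc(8));   /* 2 8 */
        return 0;
    }
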
Stefan Krah dc36efa368 1) Fix comment.
2) Assert that the source operand is not special. Prevent the resulting
   (harmless) assertion failure by initializing flags before calling
   mpd_qshiftr_inplace.

3) Save a couple of instructions (mpd_zerocoeff already sets digits and len).
   Reorder initialization to match the order in the mpd_t struct.
2012-04-07 15:57:59 +02:00
Stefan Krah 0e41981cd5 Use abort() rather than exit() to appease tools like rpmlint. abort() is used
in libmpdec to prevent undefined behavior if an invalid context is used. This
cannot occur for the _decimal module since user input for the context is
validated.
2012-03-30 14:12:20 +02:00
Stefan Krah c64150bcac Fix formatting after removing tabs. 2012-03-23 16:34:41 +01:00
Stefan Krah cd9e1d0205 Whitespace. 2012-03-23 16:22:05 +01:00
Stefan Krah 7cc5521d40 Whitespace. 2012-03-21 20:21:20 +01:00
Stefan Krah 1919b7e72b Issue #7652: Integrate the decimal floating point libmpdec library to speed
up the decimal module. Performance gains of the new C implementation are
between 12x and 80x, depending on the application.
2012-03-21 18:25:23 +01:00