commons-dev mailing list archives

From "J.Pietschmann" <>
Subject Re: Math.pow usage was: Re: cvs commit: ...
Date Wed, 18 Jun 2003 20:26:11 GMT
Mark R. Diggory wrote:
> (1) Does it seem logical that when working with "n" (or values.length) 
> to use Math.pow(n, x), as positive integers, the risk is actually 
> 'integer overflow' when the array representing the number of cases gets 
> very large, for which the log implementation of Math.pow would help 
> retain greater numerical accuracy?

No. If you cast the base to a double first, there is not much risk of
overflow: double x = n; y = x*x; or y = ((double) n) * ((double) n);
or even y = n * (double) n; (but avoid y = (double)(n*n), where the
multiplication overflows in int before the cast).
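A minimal Java sketch of why the cast placement matters (variable names are illustrative):

```java
public class CastDemo {
    public static void main(String[] args) {
        int n = 1 << 16; // 65536; n*n overflows int, since 2^32 > Integer.MAX_VALUE

        double safe = ((double) n) * ((double) n); // promote first: exact 2^32
        double alsoSafe = n * (double) n;          // n is promoted to double before the multiply
        double broken = (double) (n * n);          // int multiply wraps to 0, then the cast is too late

        System.out.println(safe);   // 4.294967296E9
        System.out.println(broken); // 0.0
    }
}
```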
The double mantissa has IIRC 52 bits, so squaring integers up to
2^26 = 67108864 is exact. OTOH the log is calculated for x = m*2^e with
0.5 <= m < 1 as e*log(2) + log(m), where log(m) can be calculated to
full precision by a process called pseudodivision. However, for 2^26
you'll lose log(26*log(2))/log(2) ~ 4 bits due to the first summand
(the same effect happens for very small bases, like 1E-16). The
exponentiation amplifies the error, although I'm not sure by how much.
Well, with some luck the processor uses its guard bits to cover the
precision loss mentioned above (on i386, the FP registers hold a 64 bit
mantissa with an explicit leading bit, in contrast to the 52 bit
mantissa with an implicit leading bit for double). Strict math and
IEEE 754 conformant hardware will IIRC discard them, though.
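The mantissa limits above can be checked directly; a small sketch (the
2^26 threshold is specifically about exact squares, while plain integers
stay exact up to 2^53):

```java
public class MantissaDemo {
    public static void main(String[] args) {
        // 2^26 squared is 2^52, which still fits the double significand exactly.
        double x = 1L << 26;
        System.out.println((long) (x * x) == (1L << 52)); // true

        // Integers themselves survive conversion to double up to 2^53;
        // 2^53 + 1 is the first one that doesn't.
        double big = (double) ((1L << 53) + 1); // rounds to 2^53
        System.out.println((long) big == (1L << 53)); // true: the +1 was lost
    }
}
```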

> (2) In the opposite case from below, where values[i] are very large, 
> doesn't the log based Math.pow(values[i], 2.0) again work in our favor 
> to reduce overflow? Seems a catch22.

If you are dealing with floating point numbers, your concern is
loss of precision, not overflow. Apart from that, I don't understand
in what sense the log based Math.pow(values[i], 2.0) should be
favorable. If there's precision loss for x*x, there will be at least
the same precision loss for Math.pow(values[i], 2.0), because at least
as many bits will be missing from the mantissa.
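A small sketch of the point: once the base needs more than 26 bits, its
exact square no longer fits in a double's significand, and Math.pow
cannot recover the lost bits either (2^27 + 1 is just an illustrative
value small enough that the exact square still fits in a long):

```java
public class SquarePrecision {
    public static void main(String[] args) {
        long n = (1L << 27) + 1;        // 27-bit integer: exact square needs 55 bits
        long exact = n * n;             // 2^54 + 2^28 + 1, exact in long arithmetic
        double viaMul = (double) n * n; // rounds to 2^54 + 2^28
        double viaPow = Math.pow(n, 2.0);

        System.out.println(exact - (long) viaMul); // 1: one unit lost to rounding
        System.out.println((long) viaPow);         // no better than plain x*x
    }
}
```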

> More than anyone really ever wanted to know about Math.pow's 
> implementation:

This is a pure software solution. With modern processors, hardware
supported solutions are the rule; they are much more performant and
less prone to precision loss, since the guard bits come for free in
hardware instead of being carried expensively in software.

