commons-dev mailing list archives

From "Mark R. Diggory" <mdigg...@latte.harvard.edu>
Subject Re: [math] numerical considerations
Date Wed, 18 Jun 2003 17:53:17 GMT
Phil Steitz wrote:

>>> More nitpicking.  I don't see that multiplying top and bottom by
>>> values.length makes things better.  In fact, it could reduce precision
>>> by inflating the magnitude of the first term before subtracting from it
>>> and dividing it.
>>
>> Hmm, good points. This may be an example of where "consolidating
>> division operations" to limit the amount of division going on does not
>> necessarily lead to a better algorithm. It's general practice to
>> consolidate division operations to produce a more efficient algorithm
>> wherever possible.
>
> These kinds of statements need to be substantiated from a numerical
> analysis standpoint. I would like to suggest that from this point forward
> all assertions about numerical stability or "best practices" in numerical
> computing using J2SE be accompanied by references to definitive sources.
Sorry, that last sentence should have been a question; it was not meant 
to be a statement.

And yes, references would be helpful when substantiating a viewpoint, 
but IMHO this is a mailing list, not a scientific journal. I don't think 
we should be overly restrictive about the format of discussion; my 
feeling is that it might drive newcomers away from joining the community.

It would be wise to move the content of discussions that get repeated, 
and for which we have references, into the "programming/best practices" 
section of the developer documentation. This discussion of consolidating 
division vs. accuracy would be an excellent candidate, as it's come up 
in the past. Perhaps we can drum up some good online references and 
establish some ground rules. I can go back through the archives and 
consolidate some of the past comments.
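For the archives, here is a small illustration of the trade-off under 
discussion. The class and data below are mine, purely hypothetical and 
not commons-math code: the two one-pass formulas are algebraically 
identical, and the second just multiplies numerator and denominator by 
n to save per-element divisions, at the cost of inflating the two large 
terms that get subtracted.

```java
// Hypothetical sketch (not commons-math code): three sample-variance
// implementations.  Multiplying top and bottom by n saves divisions but
// inflates the magnitudes being subtracted, which can cost precision
// when the data's magnitude is large relative to its spread.
public class VarianceDemo {

    // var = (sum(x^2) - (sum(x))^2 / n) / (n - 1)
    public static double onePass(double[] x) {
        double sum = 0.0, sumSq = 0.0;
        for (double v : x) { sum += v; sumSq += v * v; }
        int n = x.length;
        return (sumSq - (sum * sum) / n) / (n - 1);
    }

    // Same formula with top and bottom multiplied by n:
    // var = (n * sum(x^2) - (sum(x))^2) / (n * (n - 1))
    public static double onePassConsolidated(double[] x) {
        double sum = 0.0, sumSq = 0.0;
        for (double v : x) { sum += v; sumSq += v * v; }
        int n = x.length;
        return (n * sumSq - sum * sum) / ((double) n * (n - 1));
    }

    // Two-pass reference: subtract the mean first, so the squared terms
    // stay small and no large-magnitude cancellation occurs.
    public static double twoPass(double[] x) {
        double sum = 0.0;
        for (double v : x) sum += v;
        double mean = sum / x.length;
        double sumSqDev = 0.0;
        for (double v : x) {
            double d = v - mean;
            sumSqDev += d * d;
        }
        return sumSqDev / (x.length - 1);
    }

    public static void main(String[] args) {
        // Large mean, small spread: the hard case for both one-pass forms.
        double[] x = new double[10];
        for (int i = 0; i < x.length; i++) x[i] = 1.0e8 + i;
        double exact = 82.5 / 9.0; // sample variance of the {0..9} offsets
        System.out.println("two-pass:     " + twoPass(x));
        System.out.println("one-pass:     " + onePass(x));
        System.out.println("consolidated: " + onePassConsolidated(x));
        System.out.println("exact:        " + exact);
    }
}
```

On data like this the two-pass result stays essentially exact, while the 
one-pass forms can lose digits to cancellation; how much they lose 
depends on the data's magnitude.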

>> Now I have my doubts that it's proper to do what you've suggested.
>> Yes, it's optimized and will be a faster calculation ("values.length"
>> fewer expensive divisions), but it will be less accurate, as you've
>> suggested. Accuracy should probably be weighted as of greater
>> importance than efficiency in much of our work.
>
> That is a debatable assertion. The best approach is to actually do the
> analysis to determine exactly what the computational cost difference is,
> examine the use cases, and make a decision on what to implement and
> whether or not to give the user a choice.
While this is admirable to attempt, this project is not my full-time 
job; I'm not sure I'd have the time to accomplish such a thorough 
analysis on my own.

And again, true: I probably should have said "Accuracy and efficiency 
should both be weighted with great importance in much of our work."

Computational cost calculations are not my strong point; we could use 
someone with stronger experience in such matters in this discussion. I 
am at least working on numerical stability exploration with the 
certified testing framework I started. I would like to set up some 
non-JUnit tests with the certified value sets to explore accuracy, and 
possibly implement several versions of the variance calculation so that 
accuracy comparisons can be generated and placed into the documentation.
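As a sketch of what those accuracy comparisons might report, here is one 
common way to score a computed value against a certified reference: the 
base-10 log of the relative error, read as "digits of agreement." The 
class name and the numbers in main() are made up for illustration, not 
taken from any certified value set.

```java
// Hypothetical sketch of a non-JUnit accuracy report: score several
// implementations against a reference ("certified") value using the
// log relative error, roughly the count of correct significant digits.
public class AccuracyReport {

    // Digits of agreement between a computed and a certified value.
    public static double correctDigits(double computed, double certified) {
        if (computed == certified) return 15.0; // ~full double precision
        double relErr = Math.abs(computed - certified) / Math.abs(certified);
        return Math.min(15.0, -Math.log10(relErr));
    }

    public static void main(String[] args) {
        double certified = 9.166666666666666;   // made-up reference value
        double[] computed = {9.166666666666668, 9.16667, 9.2};
        String[] label = {"two-pass", "one-pass", "consolidated"};
        for (int i = 0; i < computed.length; i++) {
            System.out.printf("%-12s %5.1f digits%n",
                    label[i], correctDigits(computed[i], certified));
        }
    }
}
```

A table of such digit counts, one row per implementation and one column 
per data set, would slot naturally into the documentation.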

All good stuff,
-M.


---------------------------------------------------------------------
To unsubscribe, e-mail: commons-dev-unsubscribe@jakarta.apache.org
For additional commands, e-mail: commons-dev-help@jakarta.apache.org

