From: Gilles <gil...@harfang.homelinux.org>
Subject: Re: [rng] Usefulness of benchmarks
Date: Sun, 04 Sep 2016 11:06:02 GMT
Hi Artem.

I've just updated the usage examples of Commons Math:
  commit ae8f5f04574be75727f5e948e04bc649cbdbbb3b

A few quick runs seem to produce numbers that are now much
closer to what JMH produces.[1]

What do you think?

I'm now fairly comfortable dropping the "PerfTestUtils" column
from the table in the userguide (i.e. IMHO its results are
sufficiently close to what JMH provides not to raise concern).


Thanks,
Gilles


[1] Although I don't recall having made any code change since
    the previous numbers were produced that could account for
    this difference. :-{


On Sun, 04 Sep 2016 00:17:17 +0200, Gilles wrote:
> On Sun, 4 Sep 2016 00:33:32 +0300, Artem Barger wrote:
>> On Sat, Sep 3, 2016 at 1:36 AM, Gilles 
>> <gilles@harfang.homelinux.org> wrote:
>>
>>> The discrepancy between "PerfTestUtils" and JMH could be a bug (in
>>> "PerfTestUtils" of course!) or ... measuring different use-cases:
>>> Use of several RNGs at the same time vs using a single one; the
>>> latter could allow for more aggressive optimizations.
>>>
>>
>> I'm not really familiar with "PerfTestUtils", while I know that
>> JMH does a great job of avoiding the various pitfalls of building
>> microbenchmarks for measuring performance.
>> Also, it looks a bit suspicious that, when comparing the JDK
>> random generator against itself, PerfTestUtils does not show a
>> ratio of 1.0.
>
> That is easy to explain: the ratio was computed w.r.t. a
> "java.util.Random" object, while "RandomSource.JDK" wraps
> an instance of "java.util.Random".
>
>
>>> Lacking input as to what the benchmarks purport to demonstrate, I'm
>>> about to simply delete the "PerfTestUtils" column.
>>> The result will be a simplified (maybe simplistic) view of the
>>> relative performance of the RNGs in the "single use" use-case.
>>>
>>>
>> I can try to take a look at "PerfTestUtils" to understand the
>> main cause of such a difference.
>
> AFAICS, the main difference is that JMH benchmarks each piece of
> code (marked with "@Benchmark") separately, one after the other,
> while "PerfTestUtils" benchmarks all the codes together.
>
>>> Any comment, objection, explanation, suggestion?
>>> [E.g. set up JMH to benchmark the other use case, or a reason why
>>> this is in fact not necessary.]
>>>
>>
>> We can play with a different number of warm-up rounds in JMH to
>> see whether the results degrade to something similar to what
>> "PerfTestUtils" reports, for example.
>
> I'm pretty sure it's not related to warm-up because the
> "PerfTestUtils" benchmarks were running for much longer
> than the JMH ones.
>
> What would be interesting is to see whether JMH performance
> degrades when multiple RNGs (of different types) have been
> instantiated and run in the same scope.
>
> Regards,
> Gilles
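
P.S. A minimal sketch of that last experiment (again with
illustrative names, not actual project code): instantiate several
generator types in the same scope and call them through the same
interface-typed call site.

  import org.apache.commons.rng.UniformRandomProvider;
  import org.apache.commons.rng.simple.RandomSource;
  import org.openjdk.jmh.annotations.Benchmark;
  import org.openjdk.jmh.annotations.Scope;
  import org.openjdk.jmh.annotations.Setup;
  import org.openjdk.jmh.annotations.State;

  @State(Scope.Benchmark)
  public class MixedRngBenchmark {
      // Several generator types live in the same scope.
      private UniformRandomProvider[] rngs;
      private int next;

      @Setup
      public void setup() {
          rngs = new UniformRandomProvider[] {
              RandomSource.create(RandomSource.JDK),
              RandomSource.create(RandomSource.WELL_19937_C),
              RandomSource.create(RandomSource.MT)
          };
      }

      @Benchmark
      public int nextIntMixed() {
          // Rotating through the generators keeps the call site
          // polymorphic, which can prevent the JIT from inlining
          // a single implementation of "nextInt()".
          next = (next + 1) % rngs.length;
          return rngs[next].nextInt();
      }
  }

If this runs noticeably slower than the per-generator benchmarks,
it would support the idea that the "single use" benchmarks allow
more aggressive optimizations (e.g. inlining at a monomorphic call
site).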


---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@commons.apache.org
For additional commands, e-mail: dev-help@commons.apache.org

