lucene-dev mailing list archives

From "Vivek Narang (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SOLR-10317) Solr Nightly Benchmarks
Date Thu, 01 Jun 2017 23:32:05 GMT

    [ https://issues.apache.org/jira/browse/SOLR-10317?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16033906#comment-16033906
] 

Vivek Narang commented on SOLR-10317:
-------------------------------------

Hello [~ichattopadhyaya], I have added a mechanism that identifies and handles failed test
sessions. Any processes (Solr/ZooKeeper) still running from the last failed session are
located and destroyed, any files/folders created during the failed session are located and
removed, and metric data files are updated with new statistics only when the test session
completes successfully. This mechanism will make the suite more self-sufficient.
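
For illustration, below is a minimal sketch of how such a cleanup routine could look. The
paths, PID-file convention, and class/method names are hypothetical, not taken from the branch:

import java.io.IOException;
import java.nio.file.*;
import java.util.Comparator;
import java.util.stream.Stream;

// Hypothetical sketch of the cleanup steps described above.
public class FailedSessionCleanup {

    // Assumed locations; the actual suite may use different paths.
    private static final Path PID_FILE = Paths.get("benchmark-session.pids");
    private static final Path WORK_DIR = Paths.get("benchmark-work");

    public static void cleanUp() throws IOException, InterruptedException {
        // 1. Destroy any Solr/ZooKeeper processes recorded by the failed session
        //    (assumes a Unix-like host with a kill command on the PATH).
        if (Files.exists(PID_FILE)) {
            for (String pid : Files.readAllLines(PID_FILE)) {
                new ProcessBuilder("kill", "-9", pid.trim()).start().waitFor();
            }
            Files.delete(PID_FILE);
        }
        // 2. Remove files/folders the failed session left behind,
        //    deleting children before their parent directories.
        if (Files.exists(WORK_DIR)) {
            try (Stream<Path> paths = Files.walk(WORK_DIR)) {
                paths.sorted(Comparator.reverseOrder())
                     .forEach(p -> p.toFile().delete());
            }
        }
    }

    // 3. Publish metric data only after a fully successful session. The atomic
    //    move means a half-finished run never corrupts the published statistics.
    public static void publishMetrics(Path tempMetrics, Path metricsFile) throws IOException {
        Files.move(tempMetrics, metricsFile, StandardCopyOption.REPLACE_EXISTING);
    }
}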


Please access the latest code at [https://github.com/viveknarang/lucene-solr/tree/SolrNightlyBenchmarks/dev-tools/SolrNightBenchmarks].
Regards.

> Solr Nightly Benchmarks
> -----------------------
>
>                 Key: SOLR-10317
>                 URL: https://issues.apache.org/jira/browse/SOLR-10317
>             Project: Solr
>          Issue Type: Task
>            Reporter: Ishan Chattopadhyaya
>              Labels: gsoc2017, mentor
>         Attachments: changes-lucene-20160907.json, changes-solr-20160907.json, managed-schema,
Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks.docx, Narang-Vivek-SOLR-10317-Solr-Nightly-Benchmarks-FINAL-PROPOSAL.pdf,
solrconfig.xml
>
>
> Solr needs nightly benchmarks reporting. Similar benchmarks for Lucene can be found here:
https://home.apache.org/~mikemccand/lucenebench/.
> Preferably, we need:
> # A suite of benchmarks that builds Solr from a commit point, starts Solr nodes in both
SolrCloud and standalone modes, and records timing information for operations such as indexing,
querying, faceting, grouping, replication, etc. (see the timing sketch after this description).
> # It should be possible to run them either as an independent suite or as a Jenkins job,
and we should be able to report timings as graphs (Jenkins has some charting plugins).
> # The code should eventually be integrated into the Solr codebase, so that it never goes
out of date.
> There is some prior work / discussion:
> # https://github.com/shalinmangar/solr-perf-tools (Shalin)
> # https://github.com/chatman/solr-upgrade-tests/blob/master/BENCHMARKS.md (Ishan/Vivek)
> # SOLR-2646 & SOLR-9863 (Mark Miller)
> # https://home.apache.org/~mikemccand/lucenebench/ (Mike McCandless)
> # https://github.com/lucidworks/solr-scale-tk (Tim Potter)
> There is support for building, starting, indexing/querying and stopping Solr in some
of the frameworks above. However, the benchmarks they run are very limited. Any of these could
serve as a starting point, or a new framework could be used instead. The motivation is to be able
to cover every functionality of Solr with a corresponding benchmark that is run every night.
> Proposing this as a GSoC 2017 project. I'm willing to mentor, and I'm sure [~shalinmangar]
and [~markrmiller@gmail.com] would help here.
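
For illustration, here is a minimal SolrJ sketch of the kind of timing measurement described
in point #1 above; the endpoint, collection name, document count, and field names are all
assumptions, not part of the proposal:

import org.apache.solr.client.solrj.SolrClient;
import org.apache.solr.client.solrj.SolrQuery;
import org.apache.solr.client.solrj.impl.HttpSolrClient;
import org.apache.solr.client.solrj.response.QueryResponse;
import org.apache.solr.common.SolrInputDocument;

public class TimingSketch {
    public static void main(String[] args) throws Exception {
        // Assumes a node started by the suite is listening on the default port
        // and a collection named "benchmark" already exists.
        try (SolrClient client =
                 new HttpSolrClient.Builder("http://localhost:8983/solr/benchmark").build()) {

            // Time a small indexing run.
            long start = System.nanoTime();
            for (int i = 0; i < 10_000; i++) {
                SolrInputDocument doc = new SolrInputDocument();
                doc.addField("id", Integer.toString(i));
                doc.addField("title_s", "benchmark doc " + i);
                client.add(doc);
            }
            client.commit();
            long indexMillis = (System.nanoTime() - start) / 1_000_000;

            // Time a single query against the freshly indexed documents.
            start = System.nanoTime();
            QueryResponse rsp = client.query(new SolrQuery("title_s:benchmark*"));
            long queryMillis = (System.nanoTime() - start) / 1_000_000;

            System.out.println("indexed 10000 docs in " + indexMillis + " ms; query matched "
                + rsp.getResults().getNumFound() + " docs in " + queryMillis + " ms");
        }
    }
}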



--
This message was sent by Atlassian JIRA
(v6.3.15#6346)

---------------------------------------------------------------------
To unsubscribe, e-mail: dev-unsubscribe@lucene.apache.org
For additional commands, e-mail: dev-help@lucene.apache.org

