metron-dev mailing list archives

From: Ryan Merriman <merrim...@gmail.com>
Subject: Re: [DISCUSS][PROPOSAL] Acceptance Tests
Date: Fri, 03 Mar 2017 15:35:26 GMT
+1, great idea.  At some point our manual testing checklist is going to
grow large enough that we'll need to move to something more automated.
We're probably already there.

I very much agree with Justin's concern.  Building and running
unit/integration tests already takes a long time, and this will add to
it.  My request would be for these tests to be organized and granular
enough that I can focus on the tests that exercise the feature I'm
working on, with the full suite used to find regressions that may not be
obvious.  We should also be careful about when we add this to our Travis
job; I think optimizing our build and current tests would be a
prerequisite for that.
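
Just to make that concrete, here is a sketch of what I have in mind.  It
assumes a pytest-based harness, which is purely my assumption and not part
of the proposal; the point is only that tagging tests by component lets a
PR author run the slice that matters to them and save the full pass for
regression hunting:

    # Illustration only: assumes a pytest-style harness, which is not decided.
    # Each acceptance test is tagged with the component it exercises so it can
    # be selected from the command line.
    import pytest

    @pytest.mark.enrichment
    def test_geo_enrichment_is_applied_to_squid_events():
        ...  # hypothetical test body

    @pytest.mark.profiler
    def test_profile_counts_events_per_destination_address():
        ...  # hypothetical test body

    # Run just the enrichment slice while iterating on an enrichment PR:
    #     pytest -m enrichment
    # Run the whole suite before merge to catch regressions:
    #     pytest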

Do we want to enforce a passing rate of 100% for every PR?  I think
everyone would agree that this is ideal, but are there cases where it
might not be practical?
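
One option for those cases, again on the same assumed pytest-style harness,
would be to mark a known, ticketed failure as an expected failure so the
run stays green for unrelated PRs while the gap stays visible in the
report:

    import pytest

    # Hypothetical example: a test we know fails on single-node Vagrant for an
    # environmental reason.  The JIRA number is a placeholder.
    @pytest.mark.xfail(reason="METRON-XXXX: flaky on single-node Vagrant",
                       strict=False)
    def test_pcap_query_returns_matching_packets():
        ...  # hypothetical test body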

On Fri, Mar 3, 2017 at 7:44 AM, Casey Stella <cestella@gmail.com> wrote:

> That's a very good point.  I'm hoping that this can take the place of some
> of the more rigorous manual testing scripts that we have, so it's less time
> at the keyboard for reviewers.
>
> On Fri, Mar 3, 2017 at 8:41 AM, Justin Leet <justinjleet@gmail.com> wrote:
>
> > +1 to both.  Having this would especially ease a lot of testing that hits
> > multiple areas (which there is a fair amount of, given that we're building
> > pretty quickly).
> >
> > I do want to point out that adding this type of thing makes the speed of
> > our builds and tests more important, because they already take up a good
> > amount of time.  There are obviously tickets to optimize these things, but
> > I would like to make sure we don't pile too much on to every testing cycle
> > before a PR.  Having said that, I think the testing proposed is absolutely
> > valuable enough to go forward with.
> >
> > Justin
> >
> > On Fri, Mar 3, 2017 at 8:33 AM, Casey Stella <cestella@gmail.com> wrote:
> >
> > > I also propose, once this is done, that we modify the developer bylaws and
> > > the github PR script to ensure that PR authors:
> > >
> > >    - Update the acceptance tests where appropriate
> > >    - Run the tests as a smoketest
> > >
> > >
> > >
> > > On Fri, Mar 3, 2017 at 8:21 AM, Casey Stella <cestella@gmail.com> wrote:
> > >
> > > > Hi All,
> > > >
> > > > After doing METRON-744, where I had to walk through a manual test of
> > > > every place that Stellar touched, it occurred to me that we should
> > > > script this.  It also occurred to me that some scripts that are run by
> > > > the PR author to ensure no regressions and, eventually maybe, even run
> > > > on an INFRA instance of Jenkins would give all of us some peace of
> > > > mind.
> > > >
> > > > I am certain that this, along with a couple other manual tests from
> > > > other PRs, could form the basis of a really great regression
> > > > acceptance-test suite and I'd like to propose that we do that, as a
> > > > community.
> > > >
> > > > What I'd like to see from such a suite is something with the following
> > > > characteristics:
> > > >
> > > >    - Can be run on any Metron cluster, including but not limited to
> > > >       - Vagrant
> > > >       - AWS
> > > >       - An existing deployment
> > > >    - Can be *deployed* from ansible, but must be able to be deployed
> > > >      manually
> > > >       - With instructions in the readme
> > > >    - Tests should be idempotent and independent
> > > >       - Tear down what you set up
> > > >
> > > > I think between the Stellar REPL and the fundamental scriptability of
> > > > the Hadoop services, we can accomplish these tests with a combination
> > > > of shell scripts and python.
> > > >
> > > > I propose we break this into the following parts:
> > > >
> > > >    - Acceptance Testing Framework with a small smoketest
> > > >    - Baseline Metron Test
> > > >       - Send squid data through the squid topology
> > > >       - Add a threat triage alert
> > > >       - Ensure it gets through to the other side with alerts preserved
> > > >    - + Enrichment
> > > >       - Add an enrichment in the enrichment pipeline to the above
> > > >    - + Profiler
> > > >       - Add a profile with a tick of 1 minute to count per destination
> > > >         address
> > > >    - Base PCap test
> > > >       - Something like the manual test for METRON-743
> > > >         (https://github.com/apache/incubator-metron/pull/467#issue-210285324)
> > > >
> > > > Thoughts?
> > > >
> > > >
> > > > Best,
> > > >
> > > > Casey
> > > >
> > >
> >
>
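
To make the baseline test and the "tear down what you set up" requirement
above concrete, here is a rough sketch of one check in python.  Everything
in it is an assumption rather than part of the proposal: the Kafka and
Elasticsearch hosts, the topic and index names, the sample squid line, and
the triage score field name would all need to match a real cluster, and a
real test would push the parser/triage configs first (e.g. from the
Stellar REPL) instead of assuming they are in place.

    # Rough sketch only.  Assumes kafka-python and elasticsearch-py are
    # installed and that the squid parser, enrichment, and indexing topologies
    # are already running.  Hosts, topic, and index pattern are placeholders.
    import json
    import time
    import uuid

    from elasticsearch import Elasticsearch
    from kafka import KafkaProducer

    KAFKA = "node1:6667"            # placeholder broker
    ES = "http://node1:9200"        # placeholder Elasticsearch
    TOPIC = "squid"                 # placeholder parser topic
    INDEX = "squid_index*"          # placeholder index pattern

    def test_squid_event_is_indexed_with_triage_score():
        # Unique marker embedded in the URL so teardown can find exactly the
        # documents this test created (assumes the url field is analyzed so
        # the marker is searchable).
        marker = uuid.uuid4().hex
        squid_line = (
            "1475022887.362    161 127.0.0.1 TCP_MISS/200 103701 GET "
            "http://%s.example.com/ - DIRECT/95.163.121.204 text/html" % marker
        )

        producer = KafkaProducer(bootstrap_servers=KAFKA)
        producer.send(TOPIC, squid_line.encode("utf-8"))
        producer.flush()

        es = Elasticsearch([ES])
        query = {"query": {"query_string": {"query": marker}}}
        hits = []
        for _ in range(30):         # indexing is asynchronous, so poll
            hits = es.search(index=INDEX, body=query)["hits"]["hits"]
            if hits:
                break
            time.sleep(10)

        assert hits, "squid event never showed up in the index"
        # Field name is an assumption; verify against a real enriched message.
        assert "threat:triage:score" in json.dumps(hits[0]["_source"])

        # Tear down what we set up: delete only the documents we created.
        es.delete_by_query(index=INDEX, body=query)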
