qpid-dev mailing list archives

From Rob Godfrey <rob.j.godf...@gmail.com>
Subject Re: Masters Thesis on False Positives in Test Failures
Date Sat, 09 Apr 2016 15:57:25 GMT
Hi Kevin,

On 8 April 2016 at 13:27, Kevin van den Bekerom <k.vandenbekerom@sig.eu> wrote:

> Dear Developers of the Apache Qpid project,
> My name is Kevin van den Bekerom and I am currently doing my Master's
> research on the topic of false alarms in test code. I would like to ask for
> the Qpid development team's input in categorizing test code bugs.
> My research is based on a recent paper by Arash et al. (
> http://salt.ece.ubc.ca/publications/docs/icsme15.pdf). They conducted an
> empirical study, categorizing "test code bugs" in Apache software projects,
> e.g. semantic, flaky, environmental, etc. A "test code bug" is a failing
> test, where the System Under Test is correct, but the test code is
> incorrect. To identify test code bugs they looked at issues in JIRA, and
> checked if the fixing commit was only in the test code. Only fixed issues
> were counted and categorised.
> My goal is to replicate their results using a different approach, i.e. asking
> developers who were involved in the issue and/or fix how they would
> categorise it. For the Qpid project they counted 152 test code bugs.
> Insight into false positives can therefore be very relevant for your
> project. Note that the authors only sampled a number of identified test
> code bugs for individual inspection.
> I would like to ask the Qpid team’s participation in categorizing the
> various test bugs. I will provide a list of JIRA IDs which are identified
> as test code bugs and an initial list of categories (assertion fault,
> obsolete assertion, test dependency, etc.), with short explanations, to aid
> in the categorisation process. I believe the developers who worked on an
> issue are the ones most capable of categorizing it. Please let me know if
> this project looks interesting to you and if you are willing to help me out.
> interesting to you and if you are willing to help me out.
The paper you reference seems to look only at Java code; are you
similarly restricting your research, or looking across all languages? I
ask mainly because Qpid supports many different components written across a
number of different languages (and the developers are somewhat disjoint
sets). I'm certainly willing to see if I can find some time to look at
JIRAs you list that affect the Java client/broker components, but I
wouldn't be able to offer any opinion on the C++ code (for example).

> As a next step I will look for common patterns in the identified test code
> bugs, and my aim is to extend static source code analysis techniques so that
> they are also suited to finding test code bugs. I am of course very happy to share my
> findings with the team.
Historically the Java system test codebase has had a large number of
"flaky" tests...  This is partly due to how failure is detected when you
are testing an asynchronous messaging system.  If the test is to check that
under the set of test conditions "a message is delivered" then failure
(i.e. no message is delivered) can only be established by setting a timeout
and saying "if no message has been delivered in X seconds then consider it
a failure"... and on a slow (contended) CI machine, assumptions about a
reasonable timeout value may be invalid.
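To make the failure mode concrete, the timeout pattern above can be sketched in plain Java. This is a hypothetical, minimal illustration (not actual Qpid test code); a BlockingQueue stands in for the broker, and the class and method names are invented for the example:

```java
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

public class TimeoutTestSketch {
    // Simulated asynchronous delivery: the "broker" hands messages to this queue.
    static final BlockingQueue<String> inbox = new LinkedBlockingQueue<>();

    // The only way to assert "no message arrives" is to wait out a timeout:
    // poll() returns null if nothing is delivered within the window.
    static String receive(long timeoutMillis) throws InterruptedException {
        return inbox.poll(timeoutMillis, TimeUnit.MILLISECONDS);
    }

    public static void main(String[] args) throws InterruptedException {
        // Deliver a message asynchronously after a short delay,
        // as a broker under load might.
        new Thread(() -> {
            try {
                Thread.sleep(100);
                inbox.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        // A generous timeout passes here; but if the delivery delay on a
        // contended CI machine ever exceeds the chosen timeout, receive()
        // returns null and the test fails even though the system is correct.
        String msg = receive(5000);
        System.out.println(msg == null ? "FAIL: no message" : "PASS: " + msg);
    }
}
```

The flakiness is entirely in the relationship between the delivery delay and the chosen timeout, which is why no fixed timeout value is safe on arbitrarily slow hardware.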


> Hope to hear from you!
> With kind regards,
> --
> *Kevin van den Bekerom* | Intern
> +31 6 21 33 93 85 | kvandenbekerom@sig.eu
> Software Improvement Group | www.sig.eu
