james-server-dev mailing list archives

From "Bernd Fondermann" <bernd.fonderm...@googlemail.com>
Subject Re: A failing or slow test is a good test too
Date Tue, 14 Nov 2006 12:52:18 GMT
On 11/14/06, Joachim Draeger <jd@joachim-draeger.de> wrote:
> Am Dienstag, den 14.11.2006, 09:31 +0100 schrieb Bernd Fondermann:
> > > > > IMO a failing test is as valuable as a passing one. Maybe even more
> > > > > because it reminds us to do something.
> > > >
> > > > > I don't think that it is an indicator of quality to always have
> > > > > 100% of tests passing.
> > > >
> > > > My unit-testing 101 says: test runs have a binary result. It is either
> > > > green or red. If red, you have to fix the test. If you do nothing, the
> > > > test will soon be accompanied by a second failing test, and nobody
> > > > checks for failing tests anymore.
> > >
> > > I do not propose doing nothing when a test changes from green to red. I
> > > propose committing tests that fail because you don't have a solution
> > > yet.
> > > I can't see the advantage in accepting only passing tests.
> >
> > Because that's the fundamental unit test paradigm. The whole red/green
> > thing is built on this.
> ... which presupposes small iterations: write a test, bring it to
> pass?

generally (but not exclusively), yes. unit tests (in the narrow sense
of the word) have a limited scope, and so does the amount of work I
myself am able to do at the same time.

if somebody considers writing a whole bunch of unit tests first and
only moving to implementation afterwards, then this is not the
scenario I have in mind and I'd have to think about that first ;-)

> BTW: An argument against having failing tests came to my mind:
> Tests are aging because code and requirements evolve. Passing tests
> will begin to fail and will be fixed to reflect the new requirements.
> A failing test may get outdated without anybody noticing.


> > > Do you refer to psychological aspects? I consider the message "don't
> > > start working on a test, it could fail" worse than "there is one
> > > failing test, nobody would mind a second...".
> >
> > If somebody starts working on tests, they _will_ fail at the
> > beginning. But walking away from a failing test without cleaning up
> > is not polite.
> But it's okay to commit code that has only 70% of the required
> functionality. Maybe enough to start integration.
> Why not commit tests that show what is missing? I think that is quite
> polite.

I consider this not only polite but a really good job. Someone works
intensively on a problem (like you with IMAP) and knows he doesn't
have enough time to finish because of other work for some weeks. So he
preserves his knowledge in working (test) code. An excellent means of
communication for him and others.

> It's probably more transparent than a bunch of TODOs.
> Okay, you may say that tests should be inverted... I'm still not
> convinced about the benefits, except that they circumvent limitations
> in current tools.

It is a limitation (or, one could say, an _intention_) of the JUnit
design. The result of running a test suite is binary. There is no
result like "some tests failed, but only unimportant ones" or "all
tests succeed except this ugly old one nobody knows ever succeeded".
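That binary collapse can be sketched in a few lines of plain Java. This is a hand-rolled stand-in, not the real JUnit runner, and all names here are invented for illustration: the runner only remembers whether *any* test failed, so there is simply no slot for "failed, but unimportant".

```java
import java.util.List;

public class BinaryResultSketch {

    interface Test {
        void run(); // a test signals failure by throwing AssertionError
    }

    static boolean runSuite(List<Test> tests) {
        boolean green = true;
        for (Test t : tests) {
            try {
                t.run();
            } catch (AssertionError e) {
                green = false; // one red test makes the whole suite red
            }
        }
        return green;
    }

    public static void main(String[] args) {
        List<Test> suite = List.of(
                () -> { /* passes */ },
                () -> { throw new AssertionError("old known failure"); }
        );
        // the one old failure turns the whole suite red
        System.out.println(runSuite(suite) ? "GREEN" : "RED"); // prints RED
    }
}
```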

> > > The developer working on the fix runs the tests again and again, maybe
> > > using a debugger. I think this should be done on the original test, not
> > > on an inverted one.
> >
> > Agreed. But you are talking about work in progress on a working copy.
> This requires reinverting the inverted test which is IMO error-prone.

:-) of course tests can have bugs, too. this is the downside of
testing by means of a computer program. in fact, many tests are
difficult to understand, which possibly highlights a problem with the
tested unit.
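For concreteness, an "inverted" test might look like the following sketch. render() and its bug are made up for illustration, not James code: the inverted assertion keeps the suite green while the bug is open, and starts failing the moment the bug is fixed, which is exactly the reinversion step that can go wrong.

```java
public class InvertedTestSketch {

    static String render(String s) {
        return s; // known bug: should return s.toUpperCase()
    }

    // original expectation, kept for the day the bug is fixed:
    //   expect render("imap") to equal "IMAP"
    static void testRenderStillBroken() {
        if (render("imap").equals("IMAP")) {
            throw new AssertionError("bug fixed -- re-invert this test!");
        }
    }

    public static void main(String[] args) {
        testRenderStillBroken(); // passes while the bug is still present
        System.out.println("suite green, known bug still documented");
    }
}
```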

> > > Unit tests can be part of the definition. They should not be changed in
> > > any way as long as they are valid.
> >
> > Not agreed. This is a too dogmatic point of view.
> Maybe. But I noticed that they help me a lot as a "definition" in my
> part-time OS developer job. What does the code I started writing last
> week do?

OK, I said something like this above. sorry for repeating you.

> > > > Other tests, of course, for example integration or compliance tests,
> > > > could take much more time.
> > >
> > > Right. When I started to test which IMAP commands work in the current
> > > implementation, I wrote integration tests. These have proven to be very
> > > useful.
> > > If the problem arises that they are too slow for every-day use, we
> > > should separate them, but not change them.
> >
> > integration tests don't belong in a unit test suite.
> > by definition, in Java, unit tests test (more or less) a single class
> > ("the unit").
> What do you mean by that statement related to current James development?
> Separating integration from unit tests? Don't use junit for integration
> tests?

Simply: be aware of the difference. I did not want to suggest changing
the way others write their tests. In this paragraph I was just arguing
from a more academic standpoint.

I learned from a book about unit testing that developers sometimes
tend to test "too much", for example an underlying DB or other things
the unit directly interacts with. This is a grey area and there is no
rule of thumb. When in doubt, I prefer testing "too much".
But sometimes I write no tests at all, for example for Postage :-(
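One common way to keep a test on the unit itself is to replace the collaborator with an in-memory stub. A small sketch, with invented names (UserStore, greet() are not James APIs): the test then exercises only the unit's logic, not the DB behind it.

```java
public class StubSketch {

    interface UserStore {
        boolean exists(String name);
    }

    // the unit under test: its logic, not the store, is what we verify
    static String greet(UserStore store, String name) {
        return store.exists(name) ? "hello " + name : "unknown user";
    }

    public static void main(String[] args) {
        // in-memory stub standing in for the DB-backed implementation
        UserStore stub = n -> n.equals("joachim");
        System.out.println(greet(stub, "joachim")); // hello joachim
        System.out.println(greet(stub, "bernd"));   // unknown user
    }
}
```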


