openoffice-dev mailing list archives

From janI <j...@apache.org>
Subject Re: Example of spreadsheet formula testing
Date Fri, 16 Aug 2013 20:47:09 GMT
On 16 August 2013 22:14, Rob Weir <robweir@apache.org> wrote:

> On Fri, Aug 16, 2013 at 3:51 PM, janI <jani@apache.org> wrote:
> > On 16 August 2013 21:37, Regina Henschel <rb.henschel@t-online.de>
> wrote:
> >
> >> Hi Rob,
> >>
> >> Rob Weir schrieb:
> >>
> >>> Moving this topic to its own thread.
> >>>
> >>> It should be possible to code a very thorough set of test cases in a
> >>> spreadsheet, without using macros or anything fancy.  Just careful
> >>> reading of the ODF 1.2 specification and simple spreadsheet logic.
> >>>
> >>>
> >> Following the spec is not enough. For example, if the accuracy decreases
> >> from 14 digits to 7 digits, that is not covered by the spec.
> >>
> >> <skip test case example description>
> >>
> >>  If we used an approach like this on the other spreadsheet functions,
> >>> we could have a semi-automated test suite that would practically
> >>> guarantee that Calc is free of calculation errors.  Once we've
> >>> written the test cases, a modest upfront investment
> >>>
> >>
> >> "modest"? One function a day and you need more than a year.
> >>
> >
> > I think that's a bit over the top; you can do quite a lot with an
> > editor :-)
> >
> > And don't forget: if it really takes that long, how long does it then
> > take to test it manually? No less, I assume.
> >
> > So it's a win-win situation: the first person who tests a function
> > writes the first macros, and so on.
> >
> >
> >>
> >> , it will benefit
> >>
> >>> us with every release we do.  Heck, it would benefit LibreOffice,
> >>> Gnumeric, Calligra as well, maybe even Microsoft and Google, though
> >>> they might already have such test cases defined internally.
> >>>
> >>
> >> I see a problem in how such a test suite is made available, and how
> >> the results for a specific release are collected.
> >>
> >> The problem with the current test cases is that I do not know where
> >> they are, how to use them, or how to generate new ones. It is a
> >> closed book, only for insiders.
> >>
> >
> > That's a matter of documentation. I did not know where build.pl was
> > stored or how the build system worked before I invested the time.
> > Everything is a closed book, only for insiders, until you know it.
> >
> >
> >>
> >>
> >>> Anyone interested in helping with this kind of test case development?
> >>>
> >>
> >> Some files already exist in Bugzilla. I used to make test documents
> >> when working on functions. I think they can be extended to work in a
> >> way that a simple look at them will reveal errors. But I have no ready
> >> collection on my PC, and most will already have been deleted from my
> >> PC in the meantime.
> >>
> >
> > ODF as such must also have test suites to test the specification.
> >
>
> There are test cases, but they are incomplete.  And I think everything
> must be hand-verified.
>
> A short story of what can go wrong.  When the OOXML standard was
> written the authors included an example calculation for each function,
> to show what the expected result was.  They also gave a text
> description and in many cases an equation in mathematical notation.
>
> This all looked great.  But when I looked closely I found that the
> test cases were just copy & paste from Excel output.  And in some
> cases the result given was not what the mathematical equation said.
> They were not in synch.  So it was a case of "never go to sea with two
> compasses", because you then are lost if they disagree.
>
> So if we do this we really need to generate the expected values from
> the specification itself, by hand, or using some other trusted tool,
> like an HP financial calculator.  If you look at the details of my
> YEARFRAC test sheet, you will see that is what I did.  As teachers tell
> their students: "show your work", not just the final result.
>
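The "generate the expected values from the specification itself" idea can be made concrete with a tiny independent implementation. As a hedged sketch (the function and arguments are my own illustrative pick; ODF 1.2 OpenFormula defines EFFECT(rate; payments) as (1 + rate/payments)^payments - 1, but double-check the spec text before relying on it):

```python
# Derive an expected value straight from the spec formula, not from any
# spreadsheet's output, so the test sheet has an independent "compass".
def effect(nominal_rate: float, payments_per_year: int) -> float:
    """Effective annual rate, computed directly from the spec formula."""
    return (1 + nominal_rate / payments_per_year) ** payments_per_year - 1

# A test sheet would then compare Calc's =EFFECT(0.05;12) against this
# independently derived value, within a stated tolerance.
expected = effect(0.05, 12)   # roughly 0.0511619
```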

I see your point, and I am probably too full-blooded a programmer, used to
specs like those from the W3C: nice BNF formulas and, at the same time, a
really big test suite. It used to be a selling point to say that the HTML
part of our software had passed the W3C test suites.

But I agree: as you write, we need to be careful. Or, to make things
simple (to get us started), assume our current Calc is OK and, just to be
sure, compare with e.g. Excel.
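Comparing against an Excel baseline could be as simple as diffing two exported result files within a tolerance. A minimal sketch, assuming both tools export a CSV with the computed result in the last column (file layout and names are my assumption):

```python
import csv

def compare_results(calc_csv: str, baseline_csv: str, tol: float = 1e-10):
    """Yield (row_number, calc_value, baseline_value) for rows that differ
    by more than tol; identical rows produce nothing."""
    with open(calc_csv, newline="") as a, open(baseline_csv, newline="") as b:
        rows = zip(csv.reader(a), csv.reader(b))
        for i, (row_a, row_b) in enumerate(rows, start=1):
            x, y = float(row_a[-1]), float(row_b[-1])
            if abs(x - y) > tol:
                yield i, x, y
```

The tolerance matters here: as Regina notes above, a drop from 14 to 7 significant digits is exactly the kind of regression an exact-equality comparison would either always or never catch.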


> In any case, I like the idea of a focus here, for two reasons:
>
> 1) It lends itself well to automation
>
> 2) From a user perspective a spreadsheet can do almost anything else
> wrong, but it must not do calculations wrong.  This is the core trust
> of a spreadsheet application.  So it makes sense to really nail this
> area.
>

Once we have that nailed, it is more or less exactly the same (but much
more complicated) to test Writer.
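Your point 1 could start very small: if each test sheet computes its own
status column (e.g. =IF(ABS(actual-expected)<1E-10;"PASS";"FAIL")), then a
headless run that converts the sheet to CSV leaves a file a few lines of
script can scan. A hedged sketch; the column layout and test names are
invented for illustration:

```python
def report_regressions(rows):
    """Return the names of test rows whose status column is not PASS.
    Each row is a (test_name, status) pair taken from the exported CSV."""
    return [name for name, status in rows if status.strip() != "PASS"]

# rows would normally come from csv.reader() over the converted sheet;
# this sample stands in for that output.
sample = [("YEARFRAC basis 0", "PASS"), ("EFFECT 0.05;12", "FAIL")]
failures = report_regressions(sample)
```

Run regularly, an empty failure list means no regression since the sheet
was written; anything else is a candidate bug report.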

rgds
jan I.


>
>
> >
> >>
> >> One problem is that comparisons with constants have to be written in
> >> a way that is independent of the locale. Eike once corrected one of
> >> my test spreadsheets that way.
> >>
> >
> > We can simply assume, for a start (that would already be huge), that
> > automated testing runs in the en-US environment; if there are problems
> > elsewhere, they are very likely in the translation.
> >
> >
> >>
> >>
> >>> Any ideas on how to fully automate this?  ODF 1.2 is very strict, so
> >>> we're not starting from a perfect score.  But we should find an easy
> >>> way to report on regressions.
> >>>
> >>
> >> If you want to automate this, you will need to develop a framework.
> >> But automation is not the total solution. Testing can be a way to
> >> bring users into the community. And tests have to cover different
> >> languages and scripts. I remember errors reported to LibreOffice where
> >> a time calculation was wrong only in specific locales. To extend a
> >> testing framework to cover this would be very expensive.
> >>
> >
> > I simply disagree. I come from a company where every bug was turned
> > into at least one test case in the framework; that was not at all
> > expensive, merely giving the developer a slightly different job. Using
> > this philosophy we guarantee that a bug never reoccurs, and that we
> > test more and more with every regression run.
> >
> > Testing AOO is actually much simpler than testing live systems, where
> > the history changes the behavior of the system. Apart from very few
> > test cases, we can simply restart from scratch every time.
> >
> > We do need a test framework, but that is mostly a matter of
> > documentation and then using what we have: in the first instance
> > macros, later perhaps scripting through the extension framework. With
> > macros alone we would already come a long way.
> >
> >> Do not misunderstand me: I like the idea of collecting test cases.
> >>
> >
> > I may have misunderstood what you wrote, but I know we all have the
> > same goal: high quality.
> >
> > rgds
> > jan I
> >
> >
> >> Kind regards
> >> Regina
> >>
> >>
> >>
> >>
> >>
> >> ---------------------------------------------------------------------
> >> To unsubscribe, e-mail: dev-unsubscribe@openoffice.apache.org
> >> For additional commands, e-mail: dev-help@openoffice.apache.org
> >>
> >>
>
