incubator-kato-spec mailing list archives

From Nicholas Sterling <Nicholas.Sterl...@Sun.COM>
Subject Re: Kato API javadoc
Date Wed, 08 Apr 2009 03:11:50 GMT

Hmmm.  I had it all wrong, then, thinking that 99% of the time our 
assumptions would be valid and we would skate through without an 
exception.  It sounds like in practice there is very little chance that 
a program that assumes the best would not die somewhere along the way, 
at least if it rummages around in the heap.  Thanks for straightening me 
out, Daniel.


Daniel Julin wrote:
> I would like to put in a good word in favor of checked exceptions.
>
> One key thing to keep in mind is that minor errors are actually quite common
> and often unavoidable when analyzing a dump. For example:
> * Some data items may appear corrupted or inconsistent because they were
> caught "in flight" at the instant when the dump was generated
> * Some data items may not be present, depending on the circumstances, the
> method used to obtain the dump and the implementation of a particular
> reader. For example, in some cases dumps from Linux do not contain full
> information about all the native threads, because the Linux kernel itself
> does not put it in the dump.
>
> From our experiences with the DumpAnalyzer tool over the past couple of
> years, it's not unusual to have dozens or even hundreds of small errors
> like that while walking through a typical dump, and they do not generally
> degrade the overall usefulness of the analysis. In the DTFJ API, these
> result in a DataUnavailable or CorruptDataException. The DumpAnalyzer tool
> catches the exception right at the point where it happens, prints a simple
> "[<unavailable>]" or "[<corrupted>]" tag at the appropriate point in the
> output, and moves on with the rest of the dump analysis.
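The narrow-scope handling Daniel describes can be sketched as below. The exception names mirror the ones mentioned in the post (DataUnavailable, CorruptDataException), but the surrounding types and method names are stand-ins for illustration, not the actual Kato/DTFJ API.

```java
// Stand-in checked exceptions, in the spirit of the ones named above.
class DataUnavailable extends Exception {}
class CorruptDataException extends Exception {}

// Hypothetical reader interface: any accessor can fail per-item.
interface ThreadInfo {
    String getName() throws DataUnavailable, CorruptDataException;
}

public class DumpWalker {
    // Render one field, substituting a tag when the datum is missing or
    // corrupted, so the rest of the dump analysis can continue.
    static String nameOf(ThreadInfo t) {
        try {
            return t.getName();
        } catch (DataUnavailable e) {
            return "[<unavailable>]";
        } catch (CorruptDataException e) {
            return "[<corrupted>]";
        }
    }

    public static void main(String[] args) {
        ThreadInfo ok = () -> "main";
        ThreadInfo gone = () -> { throw new DataUnavailable(); };
        System.out.println(nameOf(ok));    // main
        System.out.println(nameOf(gone));  // [<unavailable>]
    }
}
```

The catch is placed right at the point of use, so a single bad data item degrades one line of output rather than the whole analysis.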
>
> Having these reported through checked exceptions obviously makes it much
> easier to write a tool that carefully checks for all such potential errors
> and handles them in as narrow a scope as possible. The price is indeed that
> we have to have try/catch blocks all over the place. It's annoying, but it
> does not really make the code any more complicated than if we had to have
> an explicit test for an invalid value everywhere. And, if our code
> compiles, we can be sure that we did not forget to put a check for an error
> condition somewhere. Unchecked exceptions, of course, do not give us that
> guarantee.
>
> I recognize that, conversely, this makes life somewhat more complicated for
> the sustaining engineer who wants to write a quick simple program for some
> special analysis, and who does not want to spend a lot of time writing lots
> of error handling code. But all these exceptions from the Kato API are
> subclasses of DTFJException. So, he/she can simply declare every method in
> his/her program with "throws DTFJException", and have a single global
> try/catch block at the top level of the program to catch any DTFJException
> and abort the program gracefully. Is that really too onerous?
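The "quick program" pattern Daniel suggests might look like the sketch below: every method simply declares `throws DTFJException`, and one top-level catch aborts gracefully. DTFJException is the superclass named in the post; the method names and message text here are illustrative.

```java
// Stand-in for the common superclass of the API's checked exceptions.
class DTFJException extends Exception {
    DTFJException(String msg) { super(msg); }
}

public class QuickAnalysis {
    // Ad-hoc analysis code: assumes the best and lets any DTFJException
    // (DataUnavailable, CorruptDataException, ...) propagate upward.
    static void inspectThread(String name) throws DTFJException {
        // Hypothetical failure while rummaging around in the dump.
        throw new DTFJException("no such thread: " + name);
    }

    public static void main(String[] args) {
        try {
            inspectThread("X");
        } catch (DTFJException e) {
            // Single global handler: report and abort gracefully.
            System.err.println("analysis aborted: " + e.getMessage());
        }
    }
}
```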
> -- Daniel --
> Nicholas.Sterling@Sun.COM wrote on 2009-04-06 06:11:35 PM:
>> I personally prefer unchecked exceptions, Steve.  Let me see if I can
>> articulate why.
>>
>> If a sustaining engineer is working on a problem with a customer, s/he
>> may write a little program using this API to rummage around in each of
>> the stream of core files they will get as they try various experiments.
>> The sustaining engineer thinks s/he knows the situation: that an image
>> contains a single process, that it is a Java program, that a particular
>> thread exists, that a particular class is being used, etc.  Sure, they
>> could codify their assumptions by writing explicit checks, but it would
>> be more straightforward to just write the code saying "give me thread
>> X."  They will be sorely tempted to do that; it's easier, and such code
>> will be easier to grok when passed around.
>>
>> Unfortunately, if they do that and the API returns special values for
>> exceptional conditions, then if one of their assumptions *is* violated
>> their analyzer is going to blow up in some strange way, perhaps
>> downstream a bit from the violated assumption, and now they're debugging
>> their analyzer instead of the customer's problem.
>>
>> If, however, unchecked exceptions are used, then the sustaining engineer
>> could just assume everything they expect to be there actually is there,
>> and the API will tell them if they are wrong -- clearly, and precisely
>> at the location of the violated assumption.  Aha!  There is no thread
>> X.  99% of the time their assumptions will be correct, so let them just
>> assume what they want to.  And the 1% of the time they are wrong, it
>> isn't a brain-teaser in its own right.
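The contrast Nicholas draws between the two failure modes can be sketched as follows. The thread table and exception class here are hypothetical, not the actual Kato API: with a special return value the failure surfaces downstream, wherever the value is finally used, while an unchecked exception fails precisely at the violated assumption and names it.

```java
import java.util.Map;

public class AdHoc {
    static class NoSuchThreadException extends RuntimeException {
        NoSuchThreadException(String name) { super("no thread named " + name); }
    }

    // Hypothetical stand-in for the dump's thread table.
    static final Map<String, String> THREADS = Map.of("main", "RUNNABLE");

    // Special-value style: a missing thread comes back as null, and the
    // breakage shows up later, wherever the null is finally dereferenced.
    static String stateOrNull(String name) { return THREADS.get(name); }

    // Unchecked-exception style: a wrong assumption fails right here,
    // naming the thread that was expected.
    static String state(String name) {
        String s = THREADS.get(name);
        if (s == null) throw new NoSuchThreadException(name);
        return s;
    }

    public static void main(String[] args) {
        System.out.println(state("main"));          // RUNNABLE
        try {
            state("worker-7");
        } catch (NoSuchThreadException e) {
            System.out.println(e.getMessage());     // no thread named worker-7
        }
    }
}
```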
>>
>> It is because I am hoping that the API will prove useful for such ad-hoc
>> analysis that I prefer not to encumber the API by requiring explicit
>> tests or exception-handling.
>>
>> Of course, people writing serious generic diagnostic tools can't assume
>> anything; their tools will have to deal with all possibilities,
>> everything the API could return.  Regardless of whether we use special
>> values or exceptions, they'll have to code for the uncommon case.
>> Honestly, I think *checked* exceptions would be best for them, because
>> they couldn't forget to catch an exception.  But checked exceptions
>> place constraints on the API's evolution, and again, they make life
>> harder for the sustaining engineer who thinks s/he knows the situation
>> (by requiring explicit handling for each exception).
>>
>> Just my 2 cents.
>> Nicholas
