freemarker-dev mailing list archives

From Daniel Dekany <>
Subject Re: [FM3] improve “null” handling
Date Sat, 04 Mar 2017 22:26:04 GMT
Saturday, March 4, 2017, 7:19:09 PM, Pedro M. Zamboni wrote:

>> In FTL2 if you give something a default value (like `foo.y!0`), then you potentially
unwittingly hide mistakes in the name (there was never a "y"). But because `foo` is not just
a `Map`, we could do better. We know that `y` is not a valid name. So we could throw an exception
even for `foo.y!0`.
> Well, if you always throw an exception when a value is absent, I don’t
> think it’s that bad,

Not sure what you mean by absent. With the FM2 logic we always throw an
exception when something is null or undefined (these two aren't
differentiated in FM2), except if it's covered by an `!` or `??` or
such. What I'm considering is that if something is undefined, we might
as well throw an exception regardless of whether it's covered by an `!`,
`??`, etc. (What counts as undefined, though, depends on the
ObjectWrapper or on the MOP implementation.)
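
To make the proposed rule concrete, here's a minimal Python sketch (all names here are invented for illustration; this models the idea, not FreeMarker's actual wrapper API): a bean-backed hash distinguishes "value exists but is null" from "no such name at all", and a default only covers the former.

```python
# Hypothetical model of the proposed FTL3 rule, not FreeMarker's real API:
# a bean-backed hash distinguishes "null value" from "no such property".

class UndefinedError(Exception):
    """Raised when a name does not exist on the wrapped object at all."""

class BeanHash:
    def __init__(self, **props):
        self._props = props              # property name -> value (may be None)

    def get(self, name):
        if name not in self._props:      # like a bean with no getY() method
            raise UndefinedError(f"No such property: {name!r}")
        return self._props[name]         # may legitimately be None

def with_default(hash_, name, default):
    """Models `hash.name!default`: covers null, but NOT an undefined name."""
    value = hash_.get(name)              # still raises for an undefined name
    return default if value is None else value

foo = BeanHash(x=None)
assert with_default(foo, "x", 0) == 0    # null is covered by the default
try:
    with_default(foo, "y", 0)            # typo: no "y" -> error anyway
    assert False
except UndefinedError:
    pass
```

With this distinction, `foo.y!0` can still catch the typo while `foo.x!0` keeps working as a default.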

> I just wouldn’t want to end up with a `null` and
> an `undefined` value like in Javascript.

Definitely not like in JavaScript... that undefined is quite confusing.

> It *could* be a little confusing since generally nulls are not
> tolerated in Freemarker,

That wouldn't change. Undefined is tolerated even less. See the
example of `foo.x`: `${foo.x}` where x is null is still an error,
but `${foo.x!}` isn't, just as in FM2. However, `${foo.y!}` is an
error if `foo` is a bean without a getY() method.

> but I *think* people would learn the differences soon enough for it
> to not be a big problem.
>> My guess is that Ceylon goes too far there. It's very unlikely that someone is able
to comprehend something written in Ceylon, yet things like `var` or `val` would make reading
code harder for them (surely you already know what those mean). So writing `variable` and
`value` hardly has any practical value. Yes, it's consistent, I get that. But certainly many
will dislike or even be annoyed by `variable` and `value`. So to me it doesn't look like the
right balance. I prefer it if the *very* common things have a short syntax; after all, you will
very quickly learn them if you do any real work with the language.
> Well, as it turns out, `value` is only two keystrokes away from `val`.
> What is believed is that the time spent designing the structure of
> your module, and actually writing its logic will take a much greater
> amount of time than actually occasionally typing those two characters
> throughout your module, so the time lost typing out “`value`” is
> insignificant to the overall development time of a module. As a
> consequence, you get much more elegant‐looking code that reads much
> more nicely.
> And even then, `value` still comes as a much shorter way to declare
> values. Instead of writing `ArrayList<Map<String, Integer>>`, you can
> just write `value`.

There's no question (to me) that Ceylon beats poor old Java when it
comes to design and other technical merits.

> We rarely ever use `variable` in Ceylon, so it’s okay for it to be a
> little bit more verbose.

People's brains function differently. To me, `val`/`var` is easier to
spot visually. You don't actually read keywords after all; very
quickly they basically become ideograms. And then, shorter
ideograms are easier to recognize. Anyway, I'm not the kind of person
who rejects a language because of such details. But I know many are
put off by such things, while I guess almost nobody would have a
problem with `var`/`val`.

>> So it's not just an assignment as in Java (where `foo = bar` does an assignment and
is also an expression with the value of `bar`). […] Ah, so the assignment is part of the
`exists` syntax...
> Yeah, sorry if I didn’t make that clear. The assignment is part of the
> `exists` syntax, which in turn is part of the `if` syntax.
>> […] or just allow assignments as the operands of a top-level && in the
case of #if exclusively... which is kind of a hack […].
> Gavin (the creator of Ceylon) said that he did consider using `&&`
> instead of `,` for separating expressions in a condition list. The
> problem he faced was that the assignment operator should bind closer
> than the separator; however, `&&` binds closer than `=`. I’m not sure
> if this is a problem in Freemarker, since the `=` is part of the
> `assign` directive syntax (so `&&` could be made to bind more loosely
> than `=` in an `if`), but *at least I* would be weirded out by `<#if
> exists foo = bar && baz>`.

`&&` would bind closer than assignment `=`, so you'd have to use:

  <#if (val x = a.b.x)?? && (val y = a.b.y)??>

In practice, even without `&&` you'd have to use parentheses, just as
you have to in Java most of the time, such as in:

  while ((bytesRead = != -1) { ... }

>> […] In FTL3 I plan to replace #assign/#local/#global with #var/#val and #set.
> To be honest, I think you should only have two directives. One to mean
> “set” and one to mean “declare locally”. No differentiation would be
> made for constants/variables. I think the perfect directive names to
> use are `set` and `let`.

As for differentiating var from val, maybe that's overly sophisticated
for a template language indeed.

As for `var` vs. `let`: because we have `function` and `macro` (or... as
far as we have those), `var` might be the more logical choice. OTOH,
for block-scope declarations modern JavaScript uses `let`, while it
also has `var`, but with a different meaning. So considering the
influence of JavaScript, and to avoid confusion because of that, `let`
could be the winner.

>> Note that "behave slightly differently" in practice often just means pushing the
null requirement on your operand expression.
> I don’t understand. It seems to me that most expressions will ignore
> their null policy when evaluating their operands by asking for
> non‐null.
> For example, consider this expression: ``. It seems to me that
> `.bar` will evaluate `foo` by asking for non‐null regardless of
> whether it has a `!` appended after it or not.

It's an implementation detail, but no, it doesn't. But that's only
because if there will be an error, then we want it to explode as deep in
the syntax tree as possible, so that we can give a better error
message.
> Only if it’s asked to *really* not return null, it will pass that
> restriction down the evaluation chain, but for the other two types
> of evaluation, it’d evaluate its “subordinate” expression the same
> way.
>> Again, this is just the implementation. It's not how you explain the language rules.
> If I understand correctly from the rest of your message, then this
> would be explained like this:
> There are two types of null: a “good null” and a “bad null”. Most
> expressions (like `${}`, `.foo`, etc) can’t tolerate bad nulls but are
> okay with good nulls.

Yes. (I wonder what the terminology should be. "bad null" and "good
null" sound a bit too informal.)

> Some expressions (arithmetic operations, and
> occasionally others) can’t tolerate either type of null;

Yes. (Sometimes there's just no obvious way to continue, like in the
case of `1/thisIsNull!`.)

> a few expressions (`!` and `!:`) can tolerate both types of null.

Yes. Though for `exp!` it's perhaps better to say that it changes a
"bad null" to a "good null".
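
For what it's worth, the three cases above can be modeled in a few lines of Python (a sketch of the proposed semantics only, with invented names):

```python
# Hedged model of the "good null"/"bad null" distinction: `exp!` turns a
# bad null into a good one; `${...}` tolerates only good nulls; arithmetic
# tolerates neither. Not FreeMarker's real implementation.

BAD_NULL = object()     # a null nobody has explicitly allowed yet
GOOD_NULL = object()    # a null covered by `!`

def bang(value):
    """Models `exp!`: a bad null becomes a good null."""
    return GOOD_NULL if value is BAD_NULL else value

def interpolate(value):
    """Models `${exp}`: good null prints as empty, bad null explodes."""
    if value is BAD_NULL:
        raise ValueError("null/missing value in ${...}")
    return "" if value is GOOD_NULL else str(value)

def divide(a, b):
    """Models `a / b`: no obvious way to continue with any kind of null."""
    if a in (BAD_NULL, GOOD_NULL) or b in (BAD_NULL, GOOD_NULL):
        raise ValueError("null operand in arithmetic")
    return a / b

assert interpolate(bang(BAD_NULL)) == ""   # ${exp!} is fine
try:
    interpolate(BAD_NULL)                  # ${exp} explodes
    assert False
except ValueError:
    pass
try:
    divide(1, bang(BAD_NULL))              # 1/thisIsNull! still explodes
    assert False
except ValueError:
    pass
```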

> Okay, then. That addresses my main concern with this approach: that it
> would be hard to understand. More advanced users could investigate how
> Freemarker actually implements this feature under the covers (more
> attentive users could be curious upon seeing better error messages than they
> would expect), but the average user wouldn’t have to think about it
> too much.
>> […] [N]amespaces accessed with colon (like in XML) are better than those accessed
with dot (as in FTL2) […].
> I agree.
> -----
> About the whole built‐ins thing (using `?`): the more I think about
> it, the more it feels to me like it shouldn’t be a thing. What is so
> good about writing `foo?bar` compared to `bar(foo)`? I think the
> argument would be for writing `foo?bar` instead of *`core:bar(foo)`*,
> but I think it’d be a better solution overall to simply have a
> different syntax for language variables (like `#uppercase("hello")`
> instead of `.uppercase("hello")` or `core:uppercase("hello")` or
> `"hello"?uppercase`).
> I’m not completely sure about that, though. People might be too used
> to their postfix function (“method”) call to be able to give it up.
> For the sake of understandability, I’ll keep using `foo?bar` to mean a
> similar thing to today for the rest of this message.

While tradition is an important factor, FM2's `foo?bar` syntax serves
multiple purposes:

- The obvious one is avoiding name clashes with the other variables.
  As you said, #upperCase("hello") or c:upperCase("hello") could be
  another solution for that.

- It allows you to keep a What How order. The What is usually what's
  interesting; the How is less interesting. Like, in ${title?upperCase}
  the important thing is that you print the title. It's a secondary
  thing that you want it to be shown in upper case.

- Most of the built-ins have no parameters, except the LHO itself, so
  the syntax allows avoiding the parentheses. Because formatting
  inserted values is a goal of a template language, ${foo?bar} vs.
  ${foo?bar()} matters. (Some template languages even avoid the {}-s.
  How terse it is to insert and format is important.)

- Chaining (pipelining) transformations is much more readable.
  Compare `#c(#b(#a(x)))` and `x?a?b?c`. Note the ordering.

If we ignore tradition, I would probably go for `"hello".#upperCase`,
as it shows clearly that this is basically an extension method, or
maybe `"hello"#upperCase`. But I don't think ignoring the tradition
would be wise, as we want to use the FreeMarker "trademark".
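
As a toy illustration of the ordering point from the chaining bullet above (Python, made-up function names): nested calls are applied a, b, c but read c, b, a, while a pipeline reads in application order.

```python
# Toy sketch: nested calls read inside-out, a pipeline reads left to right.
# The functions stand in for chained transforms like x?a?b?c.

def a(s): return s.strip()
def b(s): return s.upper()
def c(s): return s + "!"

def pipe(value, *fns):
    """Apply fns left to right, like x?a?b?c in FTL."""
    for fn in fns:
        value = fn(value)
    return value

text = "  hello "
# Same result, but only the pipeline reads in the order it applies:
assert c(b(a(text))) == pipe(text, a, b, c) == "HELLO!"
```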

>> The null bypassing thing addresses a quite common problem.
> What I said is that we *would* have “null bypassing”, but for every
> function and every parameter. I was surprised that you couldn’t store
> nulls in variables (and parameters) in Freemarker 2, but that doesn’t
> mean I don’t like it.
> The rules would be simple:
> For regular users: a function call throws if any argument is a bad
> null, and returns a good null without executing the function if any
> argument is a good null.

Some functions may want to return non-null for a "good null" input
though. So while some functions work like that, a hard and fast rule
like that won't fit others. The hardest cases where such rules can fall
apart are with calling Java API-s from the template (where by "Java
API" I mean any application- or framework-specific API implemented in
Java). Perhaps the cleanest problematic case is that you receive some
variable from the data model (or as the return value from a Java API
call), and then pass it to a Java API call. So then, you just naively
write `foo.someApi(someVar)`, and because you just pass through that
variable, you certainly won't remember writing
`foo.someApi(someVar!)`, because it's not really a templating
construct. You just have some variable from the world outside the
template, you don't care what it is, you just pass it to a method
that's also outside the template. Anyway, the most important part:
what if `someApi` actually supports a null argument, and would return
something non-null for it? We can't even tell that, because it's a
Java API, and Java has no (widely accepted) way of declaring how null
arguments are treated. Clearly, it would be rather counterintuitive
for the user if we skipped calling that method and returned a "good
null", or if we denied calling it because of a bad null argument.
Regardless of whether the argument is a good null or a bad null, it
seems we just must call that method, and see what's going to happen.
Maybe it will throw an NPE or IllegalArgumentException back at us, in
which case we can't show an error message as helpful as usual (we can
still show which arguments were null, in the "tips" section), but we
still behave as expected.
> For advanced users: a function call returns null if one of its
> parameters is null, but it evaluates its parameter expressions by
> asking for non‐null.
>> […] So let's say you are naive and write
>> ${x!'N/A'?transformLikeThis?transformLikeThat}. First, after some
>> hair loss you realize that you got precedence problem there, and
>> after you have fixed that you end up with
>> ${(x!'N/A')?transformLikeThis?transformLikeThat}. […]
> That’s another thing that gets fixed by preferring the `#builtin`
> syntax. Instead of writing
> `(x!:"N/A")?transformLikeThis?transformLikeThat`, one would write
> `transformLikeThat(transformLikeThis(x!:"N/A"))`.

Yes, but at what price... Losing the whole postfix call syntax for a
corner case.

>> […] Like, x is a number or date, so you format it with the
>> transform, but 'N/A' is not a number or date, it's just substitute
>> for the whole interpolated value. So you want to put it at the end,
>> like this: ${x?transformLikeThis?transformLikeThat!'N/A'}. […]
> Well, with my approach you’d be able to do the same thing. Suppose we
> keep the `?` syntax (as opposed to the `#` syntax I suggested), you’d
> do it like this:
> ```
> ${x!?transformLikeThis?transformLikeThat!:"N/A"}
> ```

You'd have to add that extra `!` though. It's not only an extra keystroke,
but it kind of kicks you out of the flow. Think of how the user adds
the new pieces:

1. What to show: `x`
2. Yeah, but how to show it (formatting): `x?foo`
3. But what if it's missing: `x?foo!'default'`

Now if at 3. he had to go *back* with the cursor to add another `!` at
the beginning, that's annoying, I believe. I might sound too picky
here, but these things people tend to write into the ${...} should be
concise and flow nicely, because that's what a template language does,
80% of the time.

>> […] We are shooting for the ${What How OrElseWhat} order for aesthetic reasons
too. […]
> Right, that’s the main concern I have with the `#` syntax I proposed:
> it’s regular and nice, but it might not look as good.
>> […] For the user point of view, ?upperCase etc, allows null, while
>> ${} doesn't. If a null touches it, it explodes. But you want FM to
>> shut up, so you apply a `!` on the null to tell FreeMarker not to
>> freak out because of that null. […]
> So, what you are saying is that, from the point of view of a regular
> user, function call expressions whose function is a “null bypassers”
> return “bad nulls” if one of its arguments is a good null, and `${}`
> only handles good nulls.

No, what I'm saying is that if f(x) is a null bypasser, then if x is a
"good null", it returns a "good null", and if x is a "bad null", it
returns a "bad null". Because it bypasses that null unspoiled.

In the case of a non-null-bypasser f(x), if x is a null (good or bad...
whichever it accepts) and it returns null, we just don't know why that
is. Is it the same null as the argument, or a completely new null
coming from some other source? Did it return null because x was null at
all, or is it just a coincidence? You just can't know.
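
A small Python model of the bypasser distinction (invented names; just a sketch): a bypasser returns the very same null object, so its good/bad property is traceable, while a non-bypasser that returns null gives no such guarantee.

```python
# Sketch of "bypassing the null unspoiled": a null bypasser returns the
# very same null object it received, so its good/bad property survives;
# a non-bypasser that happens to return null yields a fresh null whose
# origin you can't trace. All names are invented for illustration.

class Null:
    def __init__(self, kind):
        self.kind = kind    # "good" or "bad"

def upper_case(value):
    """A null bypasser: passes any null through untouched."""
    if isinstance(value, Null):
        return value        # the SAME null object, kind preserved
    return value.upper()

def lookup(mapping, key):
    """Not a bypasser: may return a brand-new null of its own."""
    result = mapping.get(key)
    return Null("bad") if result is None else result

bad = Null("bad")
good = Null("good")
assert upper_case(bad) is bad and upper_case(bad).kind == "bad"
assert upper_case(good) is good and upper_case(good).kind == "good"
# lookup returning null tells you nothing about why it is null:
assert lookup({}, "x").kind == "bad"
```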

>> From the implementation perspective, `${}` does handle null, but it
>> asks for a non-null value […]. In
>> `${maybeMissing?upperCase?trim?upperCase}` (no `!` anywhere), it's
>> the `maybeMissing` that will explode because it obeys the "don't
>> dare to return null" command. […]
> So, from the point of view of an advanced user, call expressions
> whose function is a “null bypasser” would pass their “nullability”
> down to their argument expressions.

Actually, not from his point of view, but from the point of view of a
contributor who works on something close to that part of the core.
Even for an advanced user, a null bypasser just returns the *same*
null as the argument. So the null keeps its good/bad property.

> This approach has an important flaw: the call expressions would have
> to know if the function is a null bypasser.

It has to, yes.

> This wouldn’t be possible if the user did something like this:
> ```
> <#assign x = fun> <#-- or let/var/val/whatever -->
> ${x(thisIsNull)}
> ```
> The call expression (the parentheses) wouldn’t be able to know if `x`
> is a null bypasser or not, so it wouldn’t know if it should call
> `thisIsNull` by allowing null or not.

It would know it. `x(thisIsNull)` is no different from
`fun(thisIsNull)`. `x` or `fun` is evaluated to a function object.
Then, as a separate step, we invoke it with `(thisIsNull)`. The null
policy of the parameters is part of the function object.
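
Sketched in Python (hypothetical names; a model of the design, not FreeMarker code): the null policy travels with the function object, so the call site can consult it at invocation time no matter which variable the function was assigned to.

```python
# Sketch: the parameter null policy lives on the function object itself,
# so <#assign x = fun> changes nothing; the call expression evaluates
# x to the object and reads the policy from there. Names are invented.

class TemplateFunction:
    def __init__(self, impl, accepts_null):
        self.impl = impl
        self.accepts_null = accepts_null   # the policy lives HERE

def invoke(fn_obj, arg):
    """Models `fnExpr(argExpr)`: policy is read from the evaluated object."""
    if arg is None and not fn_obj.accepts_null:
        raise ValueError("null argument not accepted")
    return fn_obj.impl(arg)

fun = TemplateFunction(lambda v: "null!" if v is None else v,
                       accepts_null=True)
x = fun                         # <#assign x = fun>: same object, same policy
assert invoke(x, None) == "null!"   # x(thisIsNull) == fun(thisIsNull)

strict = TemplateFunction(lambda v: v, accepts_null=False)
y = strict
try:
    invoke(y, None)
    assert False
except ValueError:
    pass
```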

> However, if, instead, all functions were null bypassers (like I
> suggested), it’d always evaluate its argument expressions by asking
> for non‐nulls. (Unless they were asked to really not return nulls, in
> which case they’d evaluate their argument expressions by asking them
> to really not return null).
> That is, in regular user terms, functions would accept good nulls (and
> return a good null back while not executing their bodies), but
> wouldn’t accept bad nulls.
>> `!:` […] looks less cute. […]
> Well, I think it looks adorable!

It's one more "magic" character, I mean...

> But seriously now, I’ve written this part of the message in a
> different day than the rest (to be honest, I’ve written this message
> over the span of a couple days, but that’s unimportant), and I’ve
> recently had an idea: what if `:` was its own operator?
> In regular user terms, it’d explode on bad nulls, but it’d accept good
> nulls and return the right‐hand‐side in that case.
> In advanced user terms, it’d evaluate its left‐hand‐side by asking for
> non‐null, but if null is returned anyways, it’d evaluate its
> right‐hand‐side.
> It wouldn’t always need a `!` before it. For example, consider this:
> ```
> ${maybeNull!?foo?bar:"xxx"}
> <#-- or ${#bar(#foo(maybeNull!)):"xxx"} -->
> ${maybeNull!"xxx"}
> ```
> Now, whenever `maybeNull` is null, `"xxx"` will be shown. But
> whenever, for example, `.bar` is null, it will throw.

That's a good idea, and I especially like that now we don't suppress
the null problem at `.bar`. I wish we had `?` instead of `!`, and then
we'd not only have managed to "generalize" `?.`, but also the `?:`
operator. (Such a shame that some 14 years ago the roles of `?` and
`!` were selected the other way around...)
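
A Python model of the proposed `:` operator (invented names; a sketch of the semantics only): it substitutes its right side for a good null but still explodes on a bad null, so the null problem at `.bar` isn't suppressed.

```python
# Hedged model of the proposed standalone `:` operator, as in
# `${maybeNull!?foo?bar:"xxx"}`: `!` makes a good null, transforms bypass
# it, and `:` substitutes good nulls only, still exploding on bad ones.

GOOD_NULL = object()
BAD_NULL = object()

def bang(value):                  # models `exp!`
    return GOOD_NULL if value is BAD_NULL else value

def colon(value, default):        # models the proposed `exp:default`
    if value is BAD_NULL:
        raise ValueError("bad null reached `:`")
    return default if value is GOOD_NULL else value

def foo(value):                   # a null-bypassing transform
    return value if value in (GOOD_NULL, BAD_NULL) else value + "-foo"

def bar_returning_null(value):    # `bar` "is null" here: yields a bad null
    return BAD_NULL

# maybeNull is missing: `maybeNull!` makes a good null, `:` substitutes.
assert colon(foo(bang(BAD_NULL)), "xxx") == "xxx"
# But a bad null arising later (at bar) still throws:
try:
    colon(bar_returning_null(foo(bang("v"))), "xxx")
    assert False
except ValueError:
    pass
```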

But... it has some problems, because of which I think we'd better resist
the temptation. At least in the primary (<#>-ish) language. I was here
earlier BTW (not in this thread, but months ago and alone). I'm not sure
the syntax/semantics were the same, but I wanted to prevent suppressing
the null at `.bar`, and that resulted in an extra symbol to be used
after the `!` (just like here, the `:`), and the resulting problems
were the same:

- I wanted to use `:` for the namespace prefix separator... now it's
  taken. What to do? I can use something like `|`, but it's less ideal.
- It's not how you did it in FM2 (breaks tradition... possible but hurts)
- It's one more symbol for specifying a default, which is meant to be
  a basic templating operation.
- It's yet again something that's kind of difficult to grasp for the
  average user. I mean, users will keep writing ${x:'-'}, which
  *never* works. So we can catch it during parsing and tell in the
  error message why it's wrong, but still.

> -----
> By the way, sorry for not responding sooner.

 Daniel Dekany
