spark-user mailing list archives

From Zsolt Tóth <toth.zsolt....@gmail.com>
Subject Re: Delegation Token renewal in yarn-cluster
Date Thu, 03 Nov 2016 21:02:07 GMT
Yes, I did change dfs.namenode.delegation.key.update-interval
and dfs.namenode.delegation.token.renew-interval to 15 minutes, and the
max-lifetime to 30 minutes. In this case the application (without Spark
having the keytab) did not fail after 15 minutes, only after 30. Is it
possible that the ResourceManager somehow automatically renews the
delegation tokens for my application?
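For reference, the shortened-TTL test setup described above corresponds to
hdfs-site.xml entries like the following (values are in milliseconds; the
property names are the standard HDFS ones, the values mirror the test
described above):

```xml
<!-- hdfs-site.xml: shortened delegation token lifetimes for testing -->
<property>
  <name>dfs.namenode.delegation.key.update-interval</name>
  <value>900000</value><!-- 15 minutes -->
</property>
<property>
  <name>dfs.namenode.delegation.token.renew-interval</name>
  <value>900000</value><!-- 15 minutes -->
</property>
<property>
  <name>dfs.namenode.delegation.token.max-lifetime</name>
  <value>1800000</value><!-- 30 minutes -->
</property>
```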

2016-11-03 21:34 GMT+01:00 Marcelo Vanzin <vanzin@cloudera.com>:

> Sounds like your test was set up incorrectly. The default TTL for
> tokens is 7 days. Did you change that in the HDFS config?
>
> The issue definitely exists and people definitely have run into it. So
> if you're not hitting it, it's most definitely an issue with your test
> configuration.
>
> On Thu, Nov 3, 2016 at 7:22 AM, Zsolt Tóth <toth.zsolt.bme@gmail.com>
> wrote:
> > Hi,
> >
> > I ran some tests regarding Spark's delegation token renewal mechanism.
> > As I see it, the concept here is simple: if I give my keytab file and
> > client principal to Spark, it starts a token renewal thread and renews
> > the namenode delegation tokens after some time. This works fine.
> >
> > Then I tried to run a long application (with an HDFS operation at the
> > end) without providing the keytab/principal to Spark, and I expected it
> > to fail after the token expired. It turned out that this is not the
> > case: the application finished successfully without any delegation
> > token renewal by Spark.
> >
> > My question is: how is that possible? Shouldn't saveAsTextFile() fail
> > after the namenode delegation token has expired?
> >
> > Regards,
> > Zsolt
>
>
>
> --
> Marcelo
>
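The renewal mechanism discussed in this thread is enabled at submit time
via spark-submit's --principal and --keytab flags. A sketch of the two
submission modes being compared (the principal, keytab path, class name,
and jar name are placeholders, not from the original thread):

```shell
# With keytab/principal: Spark starts its delegation token renewal
# thread, so a long-running job survives token expiry.
# Principal, paths, and class name below are placeholders.
spark-submit \
  --master yarn --deploy-mode cluster \
  --principal user@EXAMPLE.COM \
  --keytab /path/to/user.keytab \
  --class com.example.LongRunningJob myapp.jar

# Without them: the job relies only on the tokens obtained at
# submission, and is expected to fail once the token max-lifetime
# passes (the behavior being tested in this thread).
spark-submit \
  --master yarn --deploy-mode cluster \
  --class com.example.LongRunningJob myapp.jar
```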
