spark-dev mailing list archives

From Felix Cheung <felixcheun...@hotmail.com>
Subject Re: [DISCUSS][K8S] Local dependencies with Kubernetes
Date Sun, 07 Oct 2018 21:26:32 GMT
Jars and libraries only accessible locally at the driver seem fairly limited? Don’t you want the same on all executors?



________________________________
From: Yinan Li <liyinan926@gmail.com>
Sent: Friday, October 5, 2018 11:25 AM
To: Stavros Kontopoulos
Cc: rvesse@dotnetrdf.org; dev
Subject: Re: [DISCUSS][K8S] Local dependencies with Kubernetes

> Just to be clear: in client mode things work right? (Although I'm not
really familiar with how client mode works in k8s - never tried it.)

If the driver runs on the submission client machine, yes, it should just work. If the driver
runs in a pod, however, it faces the same problem as in cluster mode.
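
A minimal sketch of the difference, using Spark's SparkLauncher API (the API server URL, jar path, and main class below are hypothetical):

    import org.apache.spark.launcher.SparkLauncher

    // Client mode: the driver runs in the submitting JVM, so a jar path
    // on the submission machine resolves fine.
    val clientMode = new SparkLauncher()
      .setMaster("k8s://https://kube-apiserver:6443") // hypothetical API server
      .setDeployMode("client")
      .setAppResource("/home/me/app.jar")             // local to this machine
      .setMainClass("com.example.Main")               // hypothetical

    // Cluster mode: the driver runs in a pod, where the same path does
    // not exist; this is the problem being discussed.
    val clusterMode = new SparkLauncher()
      .setMaster("k8s://https://kube-apiserver:6443")
      .setDeployMode("cluster")
      .setAppResource("/home/me/app.jar")             // unreachable from the driver pod
      .setMainClass("com.example.Main")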

Yinan

On Fri, Oct 5, 2018 at 11:06 AM Stavros Kontopoulos <stavros.kontopoulos@lightbend.com>
wrote:
@Marcelo is correct. Mesos does not have anything similar; only YARN does, because of its distributed cache.
I have described most of the above in the JIRA; there are also some other options.

Best,
Stavros

On Fri, Oct 5, 2018 at 8:28 PM, Marcelo Vanzin <vanzin@cloudera.com.invalid>
wrote:
On Fri, Oct 5, 2018 at 7:54 AM Rob Vesse <rvesse@dotnetrdf.org>
wrote:
> Ideally this would all just be handled automatically for users in the way that all other
resource managers do

I think you're giving other resource managers too much credit. In
cluster mode, only YARN really distributes local dependencies, because
YARN has that feature (its distributed cache) and Spark just uses it.
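
For concreteness, a hedged sketch of what "Spark just uses it" looks like from the user's side (the paths and main class are hypothetical): local dependencies are uploaded to the application's staging directory and localized on the driver and every executor node, with no extra work from the user.

    import org.apache.spark.launcher.SparkLauncher

    // Plain local paths: on YARN in cluster mode, Spark uploads these to
    // the staging directory, and the distributed cache localizes them on
    // the driver and executor nodes automatically.
    val yarnApp = new SparkLauncher()
      .setMaster("yarn")
      .setDeployMode("cluster")
      .setAppResource("/local/path/app.jar")
      .addJar("/local/path/dep.jar")
      .setMainClass("com.example.Main") // hypothetical
      .launch()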

Standalone doesn't do it (see SPARK-4160) and I don't remember seeing
anything similar on the Mesos side.

There are things that could be done; e.g. if you have HDFS you could
do a restricted version of what YARN does (upload files to HDFS, and
change the "spark.jars" and "spark.files" URLs to point to HDFS
instead). Or you could turn the submission client into a file server
that the cluster-mode driver downloads files from - although that
requires connectivity from the driver back to the client.
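
A rough sketch of the first option, assuming a hypothetical stageToHdfs helper and staging directory, using the Hadoop FileSystem API:

    import org.apache.hadoop.conf.Configuration
    import org.apache.hadoop.fs.{FileSystem, Path}

    // Upload each local dependency to an HDFS staging directory and
    // return the rewritten URLs that "spark.jars" / "spark.files" would
    // be pointed at before the driver pod is created.
    def stageToHdfs(localPaths: Seq[String], stagingDir: String): Seq[String] = {
      val fs = FileSystem.get(new Configuration())
      localPaths.map { local =>
        val src = new Path(local)
        val dst = new Path(stagingDir, src.getName)
        fs.copyFromLocalFile(src, dst)
        dst.toUri.toString // e.g. hdfs://namenode/staging/dep.jar
      }
    }

And the second option, sketched with the JDK's built-in HTTP server (the port and directory are hypothetical); as noted, this only works if the driver pod can reach the submission client's address:

    import java.net.InetSocketAddress
    import java.nio.file.{Files, Paths}
    import com.sun.net.httpserver.HttpServer

    // Serve files from a local deps directory so the cluster-mode driver
    // can download them from the submission client.
    val server = HttpServer.create(new InetSocketAddress(8080), 0)
    server.createContext("/deps", { exchange =>
      // Use only the file name from the request to avoid path traversal.
      val name = Paths.get(exchange.getRequestURI.getPath).getFileName.toString
      val bytes = Files.readAllBytes(Paths.get("/local/deps", name))
      exchange.sendResponseHeaders(200, bytes.length)
      exchange.getResponseBody.write(bytes)
      exchange.close()
    })
    server.start()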

Neither is great, but better than not having that feature.

Just to be clear: in client mode things work right? (Although I'm not
really familiar with how client mode works in k8s - never tried it.)

--
Marcelo

---------------------------------------------------------------------
To unsubscribe e-mail: dev-unsubscribe@spark.apache.org




