spark-user mailing list archives

From Deenar Toraskar <deenar.toras...@gmail.com>
Subject Re: execute native system commands in Spark
Date Mon, 02 Nov 2015 16:02:18 GMT
You can do the following. Make sure the number of partitions you request is at
least the number of executors on your cluster, so that the command runs on
every executor.

import scala.sys.process._
import org.apache.hadoop.security.UserGroupInformation

// Run `hostname` on the executors and report which user each task runs as.
sc.parallelize(0 to 10)
  .map { _ => (("hostname".!!).trim, UserGroupInformation.getCurrentUser.toString) }
  .collect
  .distinct
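For reference, the scala.sys.process API Patcharee asked about also works
outside Spark, which is handy for testing a command locally before shipping it
to the executors. A minimal sketch (the `run` helper and the use of `echo` are
illustrative, and assume a Unix-like environment):

```scala
import scala.sys.process._

object ProcessDemo {
  // Run a command, returning its exit code and trimmed stdout.
  // ProcessLogger captures output instead of printing it to the console.
  def run(cmd: Seq[String]): (Int, String) = {
    val out = new StringBuilder
    val code = Process(cmd).!(ProcessLogger(line => out.append(line).append('\n')))
    (code, out.toString.trim)
  }

  def main(args: Array[String]): Unit = {
    println(run(Seq("echo", "hello")))
  }
}
```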

Regards
Deenar
*Think Reactive Ltd*
deenar.toraskar@thinkreactive.co.uk
07714140812




On 2 November 2015 at 15:38, Adrian Tanase <atanase@adobe.com> wrote:

> Have you seen .pipe()?
>
>
>
>
> On 11/2/15, 5:36 PM, "patcharee" <Patcharee.Thongtra@uni.no> wrote:
>
> >Hi,
> >
> >Is it possible to execute native system commands (in parallel) Spark,
> >like scala.sys.process ?
> >
> >Best,
> >Patcharee
> >
> >---------------------------------------------------------------------
> >To unsubscribe, e-mail: user-unsubscribe@spark.apache.org
> >For additional commands, e-mail: user-help@spark.apache.org
> >
>
>
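For completeness, the `.pipe()` Adrian mentions streams each partition's
elements through an external command (one process per partition, elements on
stdin, output lines become the new RDD). A minimal sketch, assuming a
SparkContext `sc` and that `tr` is on the executors' PATH:

```scala
// Uppercase each element by piping the partition through `tr`.
val upper = sc.parallelize(Seq("a", "b", "c"), 2)
  .pipe(Seq("tr", "a-z", "A-Z"))
upper.collect.foreach(println)
```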
