spark-user mailing list archives

From aappddeevv <aappdde...@gmail.com>
Subject serial data import from master node without leaving spark
Date Thu, 13 Nov 2014 18:48:42 GMT
I have large files that need to be imported into HDFS for further Spark
processing. Obviously, I can import them using hadoop fs; however, some minor
processing needs to be performed along the way: a few transformations,
stripping the header line, and the like.
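To make that concrete, here is roughly the cleanup I have in mind, sketched
for spark-shell (which provides sc) and assuming the file were already
somewhere textFile could see it; the path is made up:

    // Drop the header line and apply a trivial per-line transformation.
    val raw = sc.textFile("hdfs:///data/input.csv")  // hypothetical path
    val cleaned = raw.mapPartitionsWithIndex { (idx, iter) =>
      if (idx == 0) iter.drop(1) else iter  // the header lives in partition 0
    }.map(_.trim)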

I would like to stay in the Spark environment for this rather than switching
to other tools, either before loading the file into the parallel environment
or after.

sc.textFile() requires the file to be accessible from the worker nodes when
running in cluster mode, which defeats the purpose of importing a serial file
into the parallel world without extra steps.
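The only workaround I can think of is to read the file on the driver myself
and hand the lines to Spark, something like the untested sketch below. The
path and partition count are invented, and everything passes through driver
memory, which I suspect falls over for really large files:

    import scala.io.Source

    // Read the serial file on the master/driver, strip the header there,
    // then scatter the lines across the cluster and write them to HDFS.
    val lines = Source.fromFile("/local/path/bigfile.csv").getLines().drop(1).toArray
    val rdd = sc.parallelize(lines, 64)  // 64 partitions, chosen arbitrarily
    rdd.saveAsTextFile("hdfs:///data/imported")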

After searching, it is still not obvious how to do this. Should I use Spark
Streaming and pretend that I am streaming the data in through the master node
from the file?
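If streaming is the way to go, I imagine something like the following,
entirely hypothetical sketch: the host, port, and header test are all made
up, and I would still have to push the file into the socket myself (e.g.
with nc master-host 9999 < bigfile.csv):

    import org.apache.spark.streaming.{Seconds, StreamingContext}

    // Consume lines pushed through a socket on the master as a DStream,
    // skip header lines, and write each batch out to HDFS.
    val ssc = new StreamingContext(sc, Seconds(10))
    val lines = ssc.socketTextStream("master-host", 9999)
    lines.filter(l => !l.startsWith("id,"))  // crude header skip, illustrative only
         .saveAsTextFiles("hdfs:///data/streamed")
    ssc.start()
    ssc.awaitTermination()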





