spark-user mailing list archives

From Oleg Ruchovets <oruchov...@gmail.com>
Subject Re: multiple hdfs folder & files input to PySpark
Date Fri, 15 May 2015 15:45:47 GMT
Hello,
   I used the approach you suggested:
        lines = sc.textFile("/input/lprs/2015_05_15/file4.csv,
/input/lprs/2015_05_14/file3.csv, /input/lprs/2015_05_13/file2.csv,
/input/lprs/2015_05_12/file1.csv")

but it doesn't work for me:

     py4j.protocol.Py4JJavaError: An error occurred while calling o30.partitions.
: org.apache.hadoop.mapred.InvalidInputException: Input path does not exist: hdfs://sdo-hdp-bd-master1.development.c4i:8020/user/hdfs/ /input/lprs/2015_05_14/file3.csv
Input path does not exist: hdfs://sdo-hdp-bd-master1.development.c4i:8020/user/hdfs/ /input/lprs/2015_05_13/file2.csv
Input path does not exist: hdfs://sdo-hdp-bd-master1.development.c4i:8020/user/hdfs/ /input/lprs/2015_05_12/file1.csv
        at org.apache.hadoop.mapred.FileInputFormat.singleThreadedListStatus(FileInputFormat.java:285)
        at org.apache.hadoop.mapred.FileInputFormat.listStatus(FileInputFormat.java:228)
        at org.apache.hadoop.mapred.FileInputFormat.getSplits(FileInputFormat.java:304)
        at org.apache.spark.rdd.HadoopRDD.getPartitions(HadoopRDD.scala:203)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.rdd.MapPartitionsRDD.getPartitions(MapPartitionsRDD.scala:32)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.api.python.PythonRDD.getPartitions(PythonRDD.scala:56)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:219)
        at org.apache.spark.rdd.RDD$$anonfun$partitions$2.apply(RDD.scala:217)
        at scala.Option.getOrElse(Option.scala:120)
        at org.apache.spark.rdd.RDD.partitions(RDD.scala:217)
        at org.apache.spark.api.java.JavaRDDLike$class.partitions(JavaRDDLike.scala:56)
        at org.apache.spark.api.java.JavaRDD.partitions(JavaRDD.scala:32)
        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.lang.reflect.Method.invoke(Method.java:606)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:231)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:379)
        at py4j.Gateway.invoke(Gateway.java:259)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:133)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.GatewayConnection.run(GatewayConnection.java:207)
        at java.lang.Thread.run(Thread.java:745)


Please advise what I am doing wrong.
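
Looking at the resolved paths in the error above, each missing path contains a
space after "/user/hdfs/". I suspect the spaces I put after the commas are the
problem: " /input/..." is taken as a relative path and resolved against my HDFS
home directory. A minimal sketch of what I plan to try instead, joining the
same four files with plain commas and no whitespace:

    from pyspark import SparkContext

    sc = SparkContext(appName="TAD")

    # Absolute paths joined with bare commas -- no whitespace between entries.
    paths = ",".join([
        "/input/lprs/2015_05_15/file4.csv",
        "/input/lprs/2015_05_14/file3.csv",
        "/input/lprs/2015_05_13/file2.csv",
        "/input/lprs/2015_05_12/file1.csv",
    ])
    lines = sc.textFile(paths)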

Thanks
Oleg.

On Wed, May 6, 2015 at 10:23 AM, MEETHU MATHEW <meethu2006@yahoo.co.in>
wrote:

> Hi,
>
> 1. Please try giving the input paths as a comma-separated list inside
> sc.textFile():
> sc.textFile("/path/to/file1,/path/to/file2")
>
>
> Thanks & Regards,
> Meethu M
>
>
>
>   On Tuesday, 5 May 2015 6:30 PM, Oleg Ruchovets <oruchovets@gmail.com>
> wrote:
>
>
> Hi
>    We are using PySpark 1.3, and the input is text files located on HDFS.
>
> file structure
>     <day1>
>                 file1.txt
>                 file2.txt
>     <day2>
>                 file1.txt
>                 file2.txt
>      ...
>
> Questions:
>
>    1) What is the right way to provide multiple files, located in multiple
> folders (on HDFS), as input to a PySpark job?
> Using the textFile method works fine for a single file or folder, but how
> can I do it with multiple folders?
> Is there a way to pass an array or list of files?
>
>    2) What is the meaning of the partition parameter in the textFile method?
>
>   sc = SparkContext(appName="TAD")
>   lines = sc.textFile(<my input>, 1)
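>
> My guess is that the second argument controls the number of partitions of
> the resulting RDD, but I am not sure -- something like this quick check
> (minPartitions is my assumption about the parameter's name):
>
>   # if it is a minimum-partition hint, these two counts should differ:
>   sc.textFile("/input/lprs/2015_05_12/file1.csv", 1).getNumPartitions()
>   sc.textFile("/input/lprs/2015_05_12/file1.csv", 8).getNumPartitions()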
>
> Thanks
> Oleg.
>
>
>
