spark-user mailing list archives

From victor sheng <victorsheng...@gmail.com>
Subject spark1.0 spark sql saveAsParquetFile Error
Date Thu, 05 Jun 2014 02:31:19 GMT
While running the Spark SQL example "Using Parquet" on Spark 1.0.0, I hit an exception.

I save the people SchemaRDD with saveAsParquetFile; the write itself succeeds, but when I read the resulting Parquet file back and call collect(), it throws the exception below. I don't know why; can anyone help me with it?

Exception: 
parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in
file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-2.parquet 
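
For context, the session was set up per the Spark SQL programming guide's example (a sketch of the assumed preceding steps; they are not in the transcript below):

val sqlContext = new org.apache.spark.sql.SQLContext(sc)
import sqlContext._  // implicit RDD-to-SchemaRDD conversion plus the sql() method

case class Person(name: String, age: Int)

// people.txt holds lines like "Michael, 29"
val people = sc.textFile("examples/src/main/resources/people.txt")
               .map(_.split(","))
               .map(p => Person(p(0), p(1).trim.toInt))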

spark-shell transcript:

scala> people.saveAsParquetFile("people.parquet") 
14/06/05 10:01:48 INFO Analyzer: Max iterations (2) reached for batch
MultiInstanceRelations 
14/06/05 10:01:48 INFO Analyzer: Max iterations (2) reached for batch
CaseInsensitiveAttributeReferences 
14/06/05 10:01:48 WARN NativeCodeLoader: Unable to load native-hadoop
library for your platform... using builtin-java classes where applicable 
SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder". 
SLF4J: Defaulting to no-operation (NOP) logger implementation 
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further
details. 
14/06/05 10:01:48 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Add exchange 
14/06/05 10:01:48 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Prepare Expressions 
14/06/05 10:01:49 WARN LoadSnappy: Snappy native library not loaded 
14/06/05 10:01:49 INFO FileInputFormat: Total input paths to process : 1 
14/06/05 10:01:49 INFO SparkContext: Starting job: runJob at
ParquetTableOperations.scala:242 
14/06/05 10:01:49 INFO DAGScheduler: Got job 0 (runJob at
ParquetTableOperations.scala:242) with 2 output partitions
(allowLocal=false) 
14/06/05 10:01:49 INFO DAGScheduler: Final stage: Stage 0(runJob at
ParquetTableOperations.scala:242) 
14/06/05 10:01:49 INFO DAGScheduler: Parents of final stage: List() 
14/06/05 10:01:49 INFO DAGScheduler: Missing parents: List() 
14/06/05 10:01:49 INFO DAGScheduler: Submitting Stage 0 (MapPartitionsRDD[4]
at mapPartitions at basicOperators.scala:174), which has no missing parents 
14/06/05 10:01:49 INFO DAGScheduler: Submitting 2 missing tasks from Stage 0
(MapPartitionsRDD[4] at mapPartitions at basicOperators.scala:174) 
14/06/05 10:01:49 INFO TaskSchedulerImpl: Adding task set 0.0 with 2 tasks 
14/06/05 10:01:49 INFO TaskSetManager: Starting task 0.0:0 as TID 0 on
executor localhost: localhost (PROCESS_LOCAL) 
14/06/05 10:01:49 INFO TaskSetManager: Serialized task 0.0:0 as 6343 bytes
in 3 ms 
14/06/05 10:01:49 INFO TaskSetManager: Starting task 0.0:1 as TID 1 on
executor localhost: localhost (PROCESS_LOCAL) 
14/06/05 10:01:49 INFO TaskSetManager: Serialized task 0.0:1 as 6343 bytes
in 1 ms 
14/06/05 10:01:49 INFO Executor: Running task ID 0 
14/06/05 10:01:49 INFO Executor: Running task ID 1 
14/06/05 10:01:49 INFO BlockManager: Found block broadcast_0 locally 
14/06/05 10:01:49 INFO BlockManager: Found block broadcast_0 locally 
14/06/05 10:01:49 INFO HadoopRDD: Input split:
file:/app/hadoop/shengli/spark-1.0.0/examples/src/main/resources/people.txt:0+16 
14/06/05 10:01:49 INFO HadoopRDD: Input split:
file:/app/hadoop/shengli/spark-1.0.0/examples/src/main/resources/people.txt:16+16 
14/06/05 10:01:49 INFO CodecConfig: Compression: GZIP 
14/06/05 10:01:49 INFO CodecConfig: Compression: GZIP 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet block size to 134217728 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet block size to 134217728 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet page size to 1048576 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet page size to 1048576 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet dictionary page size to
1048576 
14/06/05 10:01:49 INFO ParquetOutputFormat: Dictionary is on 
14/06/05 10:01:49 INFO ParquetOutputFormat: Validation is off 
14/06/05 10:01:49 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0 
14/06/05 10:01:49 INFO ParquetOutputFormat: Parquet dictionary page size to
1048576 
14/06/05 10:01:49 INFO ParquetOutputFormat: Dictionary is on 
14/06/05 10:01:49 INFO ParquetOutputFormat: Validation is off 
14/06/05 10:01:49 INFO ParquetOutputFormat: Writer version is: PARQUET_1_0 
14/06/05 10:01:49 INFO CodecPool: Got brand-new compressor 
14/06/05 10:01:49 INFO CodecPool: Got brand-new compressor 
14/06/05 10:01:49 INFO InternalParquetRecordWriter: Flushing mem store to
file. allocated memory: 31,457,298 
14/06/05 10:01:49 INFO InternalParquetRecordWriter: Flushing mem store to
file. allocated memory: 31,457,319 
14/06/05 10:01:49 INFO ColumnChunkPageWriteStore: written 59B for [name]
BINARY: 2 values, 25B raw, 42B comp, 1 pages, encodings: [PLAIN, BIT_PACKED,
RLE] 
14/06/05 10:01:49 INFO ColumnChunkPageWriteStore: written 51B for [name]
BINARY: 1 values, 16B raw, 34B comp, 1 pages, encodings: [PLAIN, BIT_PACKED,
RLE] 
14/06/05 10:01:49 INFO ColumnChunkPageWriteStore: written 48B for [age]
INT32: 2 values, 14B raw, 31B comp, 1 pages, encodings: [PLAIN, BIT_PACKED,
RLE] 
14/06/05 10:01:49 INFO ColumnChunkPageWriteStore: written 45B for [age]
INT32: 1 values, 10B raw, 28B comp, 1 pages, encodings: [PLAIN, BIT_PACKED,
RLE] 
14/06/05 10:01:50 INFO FileOutputCommitter: Saved output of task
'attempt_201406051001_0006_r_000000_0' to
file:/app/hadoop/shengli/spark-1.0.0/people.parquet 
14/06/05 10:01:50 INFO FileOutputCommitter: Saved output of task
'attempt_201406051001_0006_r_000001_1' to
file:/app/hadoop/shengli/spark-1.0.0/people.parquet 
14/06/05 10:01:50 INFO Executor: Serialized size of result for 0 is 596 
14/06/05 10:01:50 INFO Executor: Serialized size of result for 1 is 596 
14/06/05 10:01:50 INFO Executor: Sending result for 1 directly to driver 
14/06/05 10:01:50 INFO Executor: Sending result for 0 directly to driver 
14/06/05 10:01:50 INFO Executor: Finished task ID 1 
14/06/05 10:01:50 INFO Executor: Finished task ID 0 
14/06/05 10:01:50 INFO DAGScheduler: Completed ResultTask(0, 1) 
14/06/05 10:01:50 INFO TaskSetManager: Finished TID 1 in 611 ms on localhost
(progress: 1/2) 
14/06/05 10:01:50 INFO DAGScheduler: Completed ResultTask(0, 0) 
14/06/05 10:01:50 INFO TaskSetManager: Finished TID 0 in 649 ms on localhost
(progress: 2/2) 
14/06/05 10:01:50 INFO DAGScheduler: Stage 0 (runJob at
ParquetTableOperations.scala:242) finished in 0.682 s 
14/06/05 10:01:50 INFO TaskSchedulerImpl: Removed TaskSet 0.0, whose tasks
have all completed, from pool 
14/06/05 10:01:50 INFO SparkContext: Job finished: runJob at
ParquetTableOperations.scala:242, took 0.965559 s 

scala> val parquetFile = sqlContext.parquetFile("people.parquet") 
14/06/05 10:01:55 INFO Analyzer: Max iterations (2) reached for batch
MultiInstanceRelations 
14/06/05 10:01:55 INFO Analyzer: Max iterations (2) reached for batch
CaseInsensitiveAttributeReferences 
14/06/05 10:01:55 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Add exchange 
14/06/05 10:01:55 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Prepare Expressions 
parquetFile: org.apache.spark.sql.SchemaRDD = 
SchemaRDD[7] at RDD at SchemaRDD.scala:98 
== Query Plan == 
ParquetTableScan [name#8,age#9], (ParquetRelation people.parquet), None 

scala> parquetFile.registerAsTable("parquetFile") 

scala> val teenagers = sql("SELECT name FROM parquetFile WHERE age >= 13 AND
age <= 19") 
14/06/05 10:02:03 INFO Analyzer: Max iterations (2) reached for batch
MultiInstanceRelations 
14/06/05 10:02:03 INFO Analyzer: Max iterations (2) reached for batch
CaseInsensitiveAttributeReferences 
14/06/05 10:02:03 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Add exchange 
14/06/05 10:02:03 INFO SQLContext$$anon$1: Max iterations (2) reached for
batch Prepare Expressions 
14/06/05 10:02:03 INFO MemoryStore: ensureFreeSpace(75559) called with
curMem=72202, maxMem=308713881 
14/06/05 10:02:03 INFO MemoryStore: Block broadcast_1 stored as values to
memory (estimated size 73.8 KB, free 294.3 MB) 
teenagers: org.apache.spark.sql.SchemaRDD = 
SchemaRDD[8] at RDD at SchemaRDD.scala:98 
== Query Plan == 
Project [name#8:0] 
 Filter ((age#9:1 >= 13) && (age#9:1 <= 19)) 
  ParquetTableScan [name#8,age#9], (ParquetRelation people.parquet), None 

scala> teenagers.collect().foreach(println) 
14/06/05 10:02:07 INFO FileInputFormat: Total input paths to process : 2 
14/06/05 10:02:07 INFO ParquetInputFormat: Total input paths to process : 2 
14/06/05 10:02:07 INFO ParquetFileReader: reading summary file:
file:/app/hadoop/shengli/spark-1.0.0/people.parquet/_metadata 
14/06/05 10:02:07 INFO SparkContext: Starting job: collect at <console>:20 
14/06/05 10:02:07 INFO DAGScheduler: Got job 1 (collect at <console>:20)
with 2 output partitions (allowLocal=false) 
14/06/05 10:02:07 INFO DAGScheduler: Final stage: Stage 1(collect at
<console>:20) 
14/06/05 10:02:07 INFO DAGScheduler: Parents of final stage: List() 
14/06/05 10:02:07 INFO DAGScheduler: Missing parents: List() 
14/06/05 10:02:07 INFO DAGScheduler: Submitting Stage 1 (SchemaRDD[8] at RDD
at SchemaRDD.scala:98 
== Query Plan == 
Project [name#8:0] 
 Filter ((age#9:1 >= 13) && (age#9:1 <= 19)) 
  ParquetTableScan [name#8,age#9], (ParquetRelation people.parquet), None),
which has no missing parents 
14/06/05 10:02:07 INFO DAGScheduler: Submitting 2 missing tasks from Stage 1
(SchemaRDD[8] at RDD at SchemaRDD.scala:98 
== Query Plan == 
Project [name#8:0] 
 Filter ((age#9:1 >= 13) && (age#9:1 <= 19)) 
  ParquetTableScan [name#8,age#9], (ParquetRelation people.parquet), None) 
14/06/05 10:02:07 INFO TaskSchedulerImpl: Adding task set 1.0 with 2 tasks 
14/06/05 10:02:07 INFO TaskSetManager: Starting task 1.0:0 as TID 2 on
executor localhost: localhost (PROCESS_LOCAL) 
14/06/05 10:02:07 INFO TaskSetManager: Serialized task 1.0:0 as 3017 bytes
in 1 ms 
14/06/05 10:02:07 INFO TaskSetManager: Starting task 1.0:1 as TID 3 on
executor localhost: localhost (PROCESS_LOCAL) 
14/06/05 10:02:07 INFO TaskSetManager: Serialized task 1.0:1 as 3017 bytes
in 1 ms 
14/06/05 10:02:07 INFO Executor: Running task ID 3 
14/06/05 10:02:07 INFO Executor: Running task ID 2 
14/06/05 10:02:07 INFO BlockManager: Found block broadcast_1 locally 
14/06/05 10:02:07 INFO BlockManager: Found block broadcast_1 locally 
14/06/05 10:02:07 INFO NewHadoopRDD: Input split: ParquetInputSplit{part:
file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-2.parquet start:
0 length: 96 hosts: [] blocks: 1 requestedSchema: same as file fileSchema:
message root { 
  optional binary name; 
  optional int32 age; 
} 
 extraMetadata: {} readSupportMetadata: {}} 
14/06/05 10:02:07 INFO NewHadoopRDD: Input split: ParquetInputSplit{part:
file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-1.parquet start:
0 length: 107 hosts: [] blocks: 1 requestedSchema: same as file fileSchema:
message root { 
  optional binary name; 
  optional int32 age; 
} 
 extraMetadata: {} readSupportMetadata: {}} 
14/06/05 10:02:07 WARN ParquetRecordReader: Can not initialize counter due
to context is not a instance of TaskInputOutputContext, but is
org.apache.hadoop.mapreduce.TaskAttemptContext 
14/06/05 10:02:07 WARN ParquetRecordReader: Can not initialize counter due
to context is not a instance of TaskInputOutputContext, but is
org.apache.hadoop.mapreduce.TaskAttemptContext 
14/06/05 10:02:07 INFO InternalParquetRecordReader: RecordReader initialized
will read a total of 1 records. 
14/06/05 10:02:07 INFO InternalParquetRecordReader: RecordReader initialized
will read a total of 2 records. 
14/06/05 10:02:07 INFO InternalParquetRecordReader: at row 0. reading next
block 
14/06/05 10:02:07 INFO InternalParquetRecordReader: at row 0. reading next
block 
14/06/05 10:02:07 INFO CodecPool: Got brand-new decompressor 
14/06/05 10:02:07 INFO CodecPool: Got brand-new decompressor 
14/06/05 10:02:07 INFO InternalParquetRecordReader: block read in memory in
6 ms. row count = 2 
14/06/05 10:02:07 INFO InternalParquetRecordReader: block read in memory in
6 ms. row count = 1 
14/06/05 10:02:07 ERROR Executor: Exception in task ID 2
parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-2.parquet
        at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:177)
        at parquet.hadoop.ParquetRecordReader.nextKeyValue(ParquetRecordReader.java:130)
        at org.apache.spark.rdd.NewHadoopRDD$$anon$1.hasNext(NewHadoopRDD.scala:122)
        at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
        at scala.collection.Iterator$class.foreach(Iterator.scala:727)
        at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
        at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
        at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:47)
        at scala.collection.TraversableOnce$class.to(TraversableOnce.scala:273)
        at scala.collection.AbstractIterator.to(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toBuffer(TraversableOnce.scala:265)
        at scala.collection.AbstractIterator.toBuffer(Iterator.scala:1157)
        at scala.collection.TraversableOnce$class.toArray(TraversableOnce.scala:252)
        at scala.collection.AbstractIterator.toArray(Iterator.scala:1157)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.rdd.RDD$$anonfun$15.apply(RDD.scala:717)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
        at org.apache.spark.SparkContext$$anonfun$runJob$4.apply(SparkContext.scala:1080)
        at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:111)
        at org.apache.spark.scheduler.Task.run(Task.scala:51)
        at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:187)
        at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
        at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
        at java.lang.Thread.run(Thread.java:619)
Caused by: java.lang.NullPointerException
        at parquet.hadoop.CodecFactory$BytesDecompressor.decompress(CodecFactory.java:56)
        at parquet.hadoop.ColumnChunkPageReadStore$ColumnChunkPageReader.readPage(ColumnChunkPageReadStore.java:78)
        at parquet.column.impl.ColumnReaderImpl.readPage(ColumnReaderImpl.java:500)
        at parquet.column.impl.ColumnReaderImpl.checkRead(ColumnReaderImpl.java:493)
        at parquet.column.impl.ColumnReaderImpl.consume(ColumnReaderImpl.java:546)
        at parquet.column.impl.ColumnReaderImpl.<init>(ColumnReaderImpl.java:339)
        at parquet.column.impl.ColumnReadStoreImpl.newMemColumnReader(ColumnReadStoreImpl.java:63)
        at parquet.column.impl.ColumnReadStoreImpl.getColumnReader(ColumnReadStoreImpl.java:58)
        at parquet.io.RecordReaderImplementation.<init>(RecordReaderImplementation.java:265)
        at parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:60)
        at parquet.io.MessageColumnIO.getRecordReader(MessageColumnIO.java:74)
        at parquet.hadoop.InternalParquetRecordReader.checkRead(InternalParquetRecordReader.java:110)
        at parquet.hadoop.InternalParquetRecordReader.nextKeyValue(InternalParquetRecordReader.java:172)
        ... 28 more
14/06/05 10:02:07 ERROR Executor: Exception in task ID 3
parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-1.parquet
        [identical stack trace and NullPointerException cause as for task ID 2 above, elided]
14/06/05 10:02:07 WARN TaskSetManager: Lost TID 2 (task 1.0:0)
14/06/05 10:02:07 WARN TaskSetManager: Loss was due to parquet.io.ParquetDecodingException
parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-2.parquet
        [same stack trace as above, elided]
14/06/05 10:02:07 ERROR TaskSetManager: Task 1.0:0 failed 1 times; aborting
job 
14/06/05 10:02:07 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks
have all completed, from pool 
14/06/05 10:02:07 INFO DAGScheduler: Failed to run collect at <console>:20 
14/06/05 10:02:07 WARN TaskSetManager: Loss was due to parquet.io.ParquetDecodingException
parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-1.parquet
        [same stack trace as above, elided]
14/06/05 10:02:07 INFO TaskSchedulerImpl: Removed TaskSet 1.0, whose tasks
have all completed, from pool 
14/06/05 10:02:07 INFO TaskSchedulerImpl: Cancelling stage 1 
org.apache.spark.SparkException: Job aborted due to stage failure: Task 1.0:0 failed 1 times, most recent failure: Exception failure in TID 2 on host localhost: parquet.io.ParquetDecodingException: Can not read value at 0 in block -1 in file file:/app/hadoop/shengli/spark-1.0.0/people.parquet/part-r-2.parquet
        [same stack trace as above, elided]
Driver stacktrace:
        at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$failJobAndIndependentStages(DAGScheduler.scala:1033)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1017)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$abortStage$1.apply(DAGScheduler.scala:1015)
        at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
        at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
        at org.apache.spark.scheduler.DAGScheduler.abortStage(DAGScheduler.scala:1015)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGScheduler$$anonfun$handleTaskSetFailed$1.apply(DAGScheduler.scala:633)
        at scala.Option.foreach(Option.scala:236)
        at org.apache.spark.scheduler.DAGScheduler.handleTaskSetFailed(DAGScheduler.scala:633)
        at org.apache.spark.scheduler.DAGSchedulerEventProcessActor$$anonfun$receive$2.applyOrElse(DAGScheduler.scala:1207)
        at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
        at akka.actor.ActorCell.invoke(ActorCell.scala:456)
        at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
        at akka.dispatch.Mailbox.run(Mailbox.scala:219)
        at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
        at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
        at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
        at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
        at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

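In case it helps narrow things down: the root cause is a NullPointerException inside the GZIP decompression path (CodecFactory$BytesDecompressor.decompress), and the write-side log above shows "CodecConfig: Compression: GZIP". One experiment I could run is to write the file uncompressed and read it back; if that works, the codec path is implicated. A sketch (untested; it assumes the Spark SQL Parquet writer honors the Hadoop property "parquet.compression" from sc.hadoopConfiguration, and the output path is just for illustration):

// Untested diagnostic: bypass GZIP compression on write.
// "parquet.compression" is the property that parquet-hadoop's CodecConfig
// reads; valid values include UNCOMPRESSED, GZIP, SNAPPY, LZO.
sc.hadoopConfiguration.set("parquet.compression", "UNCOMPRESSED")
people.saveAsParquetFile("people_uncompressed.parquet")  // hypothetical path

// Re-read and collect: does the NPE still occur without a codec in play?
val check = sqlContext.parquetFile("people_uncompressed.parquet")
check.collect().foreach(println)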

--
View this message in context: http://apache-spark-user-list.1001560.n3.nabble.com/spark1-0-spark-sql-saveAsParquetFile-Error-tp7006.html
Sent from the Apache Spark User List mailing list archive at Nabble.com.
