spark-issues mailing list archives

From "Kaihui Gao (Jira)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-27292) Spark Job Fails with Unknown Error writing to S3 from AWS EMR
Date Sun, 08 Dec 2019 02:33:00 GMT

    [ https://issues.apache.org/jira/browse/SPARK-27292?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16990688#comment-16990688 ]

Kaihui Gao commented on SPARK-27292:
------------------------------------

Hi [~padmakarm9], I am facing a similar issue to yours.

Have you fixed the issue?

Thank you very much!
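In case it helps while we debug: the log shows two symptoms, AsyncEventQueue dropping events and an S3 400 RequestTimeout during the write. Below is a minimal, *hypothetical* spark-submit sketch of settings that are sometimes tried for these symptoms. It is not a confirmed fix for this ticket; `your_job.py` is a placeholder, the values are illustrative, and the stack trace above goes through EMRFS (`com.amazon.ws.emr.hadoop.fs`), so the `fs.s3a.*` properties only apply if the job uses `s3a://` paths instead.

```shell
# Hedged mitigation sketch, not a confirmed fix for SPARK-27292.
# - Raise the listener-bus queue capacity (default 10000) to reduce
#   the "Dropped N events from appStatus" warnings.
# - Loosen S3A client timeouts/retries for the RequestTimeout (HTTP 400),
#   relevant only when writing via s3a:// rather than EMRFS.
spark-submit \
  --conf spark.scheduler.listenerbus.eventqueue.capacity=20000 \
  --conf spark.hadoop.fs.s3a.connection.timeout=200000 \
  --conf spark.hadoop.fs.s3a.attempts.maximum=20 \
  your_job.py   # placeholder for the actual application
```

Writing the final output directly to S3 (rather than renaming from an HDFS/S3 staging path) with an S3-aware committer is another commonly discussed direction, but whether it applies here depends on the job's output committer setup.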


> Spark Job Fails with Unknown Error writing to S3 from AWS EMR
> -------------------------------------------------------------
>
>                 Key: SPARK-27292
>                 URL: https://issues.apache.org/jira/browse/SPARK-27292
>             Project: Spark
>          Issue Type: Question
>          Components: Input/Output
>    Affects Versions: 2.3.2
>            Reporter: Olalekan Elesin
>            Priority: Major
>
> I am currently experiencing issues writing data to S3 from my Spark job running on AWS EMR.
> The job writes to a staging path in S3, e.g. {{.spark-random-alphanumeric}}, after which it fails with this error:
> {code:java}
> 19/03/26 10:54:07 WARN AsyncEventQueue: Dropped 196300 events from appStatus since Tue Mar 26 10:52:05 UTC 2019.
> 19/03/26 10:55:07 WARN AsyncEventQueue: Dropped 211186 events from appStatus since Tue Mar 26 10:54:07 UTC 2019.
> 19/03/26 11:37:09 WARN DataStreamer: Exception for BP-312054361-10.41.97.71-1553586781241:blk_1073742995_2172
> java.io.EOFException: Unexpected EOF while trying to read response from server
> 	at org.apache.hadoop.hdfs.protocolPB.PBHelperClient.vintPrefixed(PBHelperClient.java:402)
> 	at org.apache.hadoop.hdfs.protocol.datatransfer.PipelineAck.readFields(PipelineAck.java:213)
> 	at org.apache.hadoop.hdfs.DataStreamer$ResponseProcessor.run(DataStreamer.java:1073)
> 19/03/26 11:37:09 WARN DataStreamer: Error Recovery for BP-312054361-10.41.97.71-1553586781241:blk_1073742995_2172 in pipeline [DatanodeInfoWithStorage[10.41.121.135:50010,DS-cba2a850-fa30-4933-af2a-05b40b58fdb5,DISK], DatanodeInfoWithStorage[10.41.71.181:50010,DS-c90a1d87-b40a-4928-a709-1aef027db65a,DISK]]: datanode 0(DatanodeInfoWithStorage[10.41.121.135:50010,DS-cba2a850-fa30-4933-af2a-05b40b58fdb5,DISK]) is bad.
> 19/03/26 11:50:34 WARN AsyncEventQueue: Dropped 157572 events from appStatus since Tue Mar 26 10:55:07 UTC 2019.
> 19/03/26 11:51:34 WARN AsyncEventQueue: Dropped 785 events from appStatus since Tue Mar 26 11:50:34 UTC 2019.
> 19/03/26 11:52:34 WARN AsyncEventQueue: Dropped 656 events from appStatus since Tue Mar 26 11:51:34 UTC 2019.
> 19/03/26 11:53:35 WARN AsyncEventQueue: Dropped 1335 events from appStatus since Tue Mar 26 11:52:34 UTC 2019.
> 19/03/26 11:54:35 WARN AsyncEventQueue: Dropped 1087 events from appStatus since Tue Mar 26 11:53:35 UTC 2019.
> ...
> 19/03/26 13:39:39 WARN TaskSetManager: Lost task 33302.0 in stage 1444.0 (TID 1324427, ip-10-41-122-224.eu-west-1.compute.internal, executor 18): org.apache.spark.SparkException: Task failed while writing rows.
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$.org$apache$spark$sql$execution$datasources$FileFormatWriter$$executeTask(FileFormatWriter.scala:254)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:169)
> 	at org.apache.spark.sql.execution.datasources.FileFormatWriter$$anonfun$write$1.apply(FileFormatWriter.scala:168)
> 	at org.apache.spark.scheduler.ResultTask.runTask(ResultTask.scala:90)
> 	at org.apache.spark.scheduler.Task.run(Task.scala:121)
> 	at org.apache.spark.executor.Executor$TaskRunner$$anonfun$10.apply(Executor.scala:402)
> 	at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1360)
> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:408)
> 	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
> 	at java.lang.Thread.run(Thread.java:748)
> Caused by: com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Your socket connection to the server was not read from or written to within the timeout period. Idle connections will be closed. (Service: Amazon S3; Status Code: 400; Error Code: RequestTimeout; Request ID: 4E2E351899CDFB89; S3 Extended Request ID: iQhU4xTloYk9aTvO2FmDXk03M1pYCRQl539bG6PqEOeZrtw4KeAGRZDek9RugJywREfPmAC99FE=), S3 Extended Request ID: iQhU4xTloYk9aTvO2FmDXk03M1pYCRQl539bG6PqEOeZrtw4KeAGRZDek9RugJywREfPmAC99FE=
> 	at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.handleErrorResponse(AmazonHttpClient.java:1658)
> 	at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeOneRequest(AmazonHttpClient.java:1322)
> 	at com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.http.AmazonHttpClient$RequestExecutor.executeHelper(AmazonHttpClient.java:1072)
> {code}
> I don't understand this error and would appreciate some help.
> Thanks in advance.



--
This message was sent by Atlassian Jira
(v8.3.4#803005)

---------------------------------------------------------------------
To unsubscribe, e-mail: issues-unsubscribe@spark.apache.org
For additional commands, e-mail: issues-help@spark.apache.org

