spark-user mailing list archives

From Akhil Das <ak...@sigmoidanalytics.com>
Subject Re: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
Date Tue, 14 Oct 2014 09:03:42 GMT
Yes, it is the SPARK_DRIVER_MEMORY option, set in the spark-env.sh file across
the cluster. You can also check out
<https://issues.apache.org/jira/browse/SPARK-1264> regarding these variables.
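
For reference, a minimal sketch of the corresponding spark-env.sh line; 4g is
an illustrative value here, not a recommendation for this cluster:

# in $SPARK_HOME/conf/spark-env.sh on every node
export SPARK_DRIVER_MEMORY=4g   # heap size for the driver JVM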

Thanks
Best Regards

On Tue, Oct 14, 2014 at 2:04 PM, chepoo <swallow_pulm@163.com> wrote:

>
> spark-shell works fine, and the configuration takes effect:
>
>
> val something = sc.parallelize(1 to 10000000).collect().filter(_<1000)
> result:
> 14/10/14 16:18:18 INFO scheduler.TaskSetManager: Finished TID 27 in 1293
> ms on Hadoop.Master (progress: 48/48)
> 14/10/14 16:18:18 INFO scheduler.TaskSchedulerImpl: Removed TaskSet 0.0,
> whose tasks have all completed, from pool
> 14/10/14 16:18:18 INFO scheduler.DAGScheduler: Stage 0 (collect at
> <console>:12) finished in 1.440 s
> 14/10/14 16:18:18 INFO spark.SparkContext: Job finished: collect at
> <console>:12, took 1.623173676 s
> something: Array[Int] = Array(1,…………
>
>
> spark-submit works fine too:
> spark-submit --class org.apache.spark.examples.SparkPi --master
> spark://recommend1:7077 --executor-memory 4G --total-executor-cores 4
> /root/spark-examples_2.10-1.0.1.jar 1000
> result:
> 14/10/14 16:25:41 INFO scheduler.DAGScheduler: Stage 0 (reduce at
> SparkPi.scala:35) finished in 9.311 s
> 14/10/14 16:25:41 INFO spark.SparkContext: Job finished: reduce at
> SparkPi.scala:35, took 9.670565214 s
> Pi is roughly 3.14138392
>
>
> application logs:
>
> Is "try increasing the driver memory" the SPARK_DRIVER_MEMORY option in the
> spark-env.sh file?
>
>
> On Oct 14, 2014, at 14:52, Akhil Das <akhil@sigmoidanalytics.com> wrote:
>
> Looks like a configuration issue. Can you fire up a spark-shell
> ($SPARK_HOME/bin/spark-shell --master spark://recommend1:7077) and try
> the following?
>
> scala> val something = sc.parallelize(1 to 10000000).collect().filter(_ < 1000)
>
> If that works fine, try the word count example
> <https://spark.apache.org/examples.html> (a rough sketch is quoted below) to
> make sure it is not a configuration issue.
> Also try increasing the driver memory.
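>
> A rough word-count sketch along the lines of that examples page; the HDFS
> paths are hypothetical placeholders:
>
> scala> val textFile = sc.textFile("hdfs://Hadoop.Master:9000/some/input.txt")  // hypothetical input path
> scala> val counts = textFile.flatMap(line => line.split(" ")).map(word => (word, 1)).reduceByKey(_ + _)  // per-word counts
> scala> counts.saveAsTextFile("hdfs://Hadoop.Master:9000/some/output")  // hypothetical output path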
>
> Thanks
> Best Regards
>
> On Tue, Oct 14, 2014 at 8:04 AM, chepoo <swallow_pulm@163.com> wrote:
>
>> Hi all,
>> The data size of my task is about 2 GB. I run spark-itemsimilarity on a
>> Spark cluster (standalone mode); the configuration is as
>> follows:
>> <PastedGraphic-1.tiff>
>>
>> and my run command is: mahout spark-itemsimilarity -i
>> /rec/input/ss/archive/rec_ss_input.har -o /rec/ss/output -os
>> -ma spark://recommend1:7077 -sem 15g -f1 purchase -f2 view -ic 2 -fc 1
>>
>> The task finished, but produced the following error messages:
>>
>> -------------------------------------------------------------------------------
>> 14/10/13 20:34:19 WARN storage.BlockManagerMasterActor: Removing
>> BlockManager BlockManagerId(1, Hadoop.Slave2, 43516, 0) with no recent
>> heart beats: 75197ms exceeds 45000ms
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Registering block
>> manager Hadoop.Slave2:43516 with 8.6 GB RAM
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_48_2 in memory
>> on Hadoop.Slave2:43516 (size: 48.3 MB, free: 8.6 GB)
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_4_0 in memory
>> on Hadoop.Slave2:43516 (size: 355.8 MB, free: 8.2 GB)
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_4_1 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 8.2 GB)
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_4_2 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 8.2 GB)
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_26_3 in memory
>> on Hadoop.Slave2:43516 (size: 554.2 MB, free: 7.7 GB)
>> 14/10/13 20:34:53 INFO storage.BlockManagerInfo: Added rdd_4_3 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 7.7 GB)
>> 14/10/13 20:38:19 WARN storage.BlockManagerMasterActor: Removing
>> BlockManager BlockManagerId(1, Hadoop.Slave2, 43516, 0) with no recent
>> heart beats: 95864ms exceeds 45000ms
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Registering block
>> manager Hadoop.Slave2:43516 with 8.6 GB RAM
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_48_2 in memory
>> on Hadoop.Slave2:43516 (size: 48.3 MB, free: 8.6 GB)
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_4_0 in memory
>> on Hadoop.Slave2:43516 (size: 355.8 MB, free: 8.2 GB)
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_4_1 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 8.2 GB)
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_4_2 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 8.2 GB)
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_26_3 in memory
>> on Hadoop.Slave2:43516 (size: 554.2 MB, free: 7.7 GB)
>> 14/10/13 20:38:23 INFO storage.BlockManagerInfo: Added rdd_4_3 in memory
>> on Hadoop.Slave2:43516 (size: 112.0 B, free: 7.7 GB)
>> 14/10/13 20:55:19 WARN storage.BlockManagerMasterActor: Removing
>> BlockManager BlockManagerId(1, Hadoop.Slave2, 43516, 0) with no recent
>> heart beats: 84679ms exceeds 45000ms
>> 14/10/13 21:01:33 INFO storage.BlockManagerInfo: Registering block
>> manager Hadoop.Slave2:43516 with 8.6 GB RAM
>> 14/10/13 21:02:19 WARN storage.BlockManagerMasterActor: Removing
>> BlockManager BlockManagerId(1, Hadoop.Slave2, 43516, 0) with no recent
>> heart beats: 46389ms exceeds 45000ms
>> 14/10/13 21:06:18 INFO storage.BlockManagerInfo: Registering block
>> manager Hadoop.Slave2:43516 with 8.6 GB RAM
>> 14/10/13 21:07:19 WARN storage.BlockManagerMasterActor: Removing
>> BlockManager BlockManagerId(1, Hadoop.Slave2, 43516, 0) with no recent
>> heart beats: 61127ms exceeds 45000ms
>> 14/10/13 21:44:01 WARN scheduler.TaskSetManager: Lost TID 45 (task 13.0:1)
>> 14/10/13 21:44:01 WARN scheduler.TaskSetManager: Loss was due to
>> java.io.IOException
>> java.io.IOException: Filesystem closed
>> at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> at
>> org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
>> at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
>> at java.io.DataInputStream.read(DataInputStream.java:83)
>> at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>> at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>> at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>> at
>> org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:209)
>> at
>> org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
>> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
>> at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
>> at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>> at
>> org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>> at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> at
>> scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> at
>> scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>> at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>> at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>> at
>> org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> at
>> java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> at java.lang.Thread.run(Thread.java:662)
>>
>> -------------------------------------------------------------------------------
>> I tried adding the option "spark.storage.blockManagerSlaveTimeoutMs 200000"
>> to the spark-defaults.conf file, but it has no effect (the setting does not
>> appear under the "Environment" tab of the http://<driver>:4040 page).
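>>
>> (For reference, the same property can also be set programmatically when a
>> SparkContext is created; below is a minimal Scala sketch, in which the app
>> name is a hypothetical placeholder.)
>>
>> import org.apache.spark.{SparkConf, SparkContext}
>>
>> val conf = new SparkConf()
>>   .setAppName("itemsimilarity-test")                          // hypothetical app name
>>   .setMaster("spark://recommend1:7077")
>>   .set("spark.storage.blockManagerSlaveTimeoutMs", "200000")  // raise the 45s heartbeat timeout
>> val sc = new SparkContext(conf)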
>>
>> The following is part of the log information:
>>
>> -------------------------------------------------------------------------------
>>
>> 14/10/13 20:34:36 INFO BlockManager: BlockManager re-registering with master
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Trying to register BlockManager
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Registered BlockManager
>> 14/10/13 20:34:36 INFO BlockManager: Reporting 13 blocks to the master.
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_48_2
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_4_0
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_4_1
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_4_2
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_26_3
>> 14/10/13 20:34:36 INFO BlockManagerMaster: Updated info of block rdd_4_3
>> 14/10/13 20:37:00 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 1 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 20:38:06 INFO BlockManager: BlockManager re-registering with master
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Trying to register BlockManager
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Registered BlockManager
>> 14/10/13 20:38:06 INFO BlockManager: Reporting 13 blocks to the master.
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_48_2
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_4_0
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_4_1
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_4_2
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_26_3
>> 14/10/13 20:38:06 INFO BlockManagerMaster: Updated info of block rdd_4_3
>> 14/10/13 20:50:56 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 1 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 20:53:05 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 1 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 20:54:10 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 2 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.sendHeartBeat(BlockManagerMaster.scala:51)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:113)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 20:56:33 INFO BlockManager: BlockManager re-registering with master
>> 14/10/13 20:56:33 INFO BlockManagerMaster: Trying to register BlockManager
>> 14/10/13 20:58:55 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 1 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:216)
>> 	at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:57)
>> 	at org.apache.spark.storage.BlockManager.reregister(BlockManager.scala:193)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:114)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:03:38 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 2 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:216)
>> 	at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:57)
>> 	at org.apache.spark.storage.BlockManager.reregister(BlockManager.scala:193)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:114)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:08:23 WARN BlockManagerMaster: Error sending message to BlockManagerMaster in 3 attempts
>> java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:216)
>> 	at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:57)
>> 	at org.apache.spark.storage.BlockManager.reregister(BlockManager.scala:193)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:114)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:17:59 ERROR ExecutorUncaughtExceptionHandler: Uncaught exception in thread Thread[Connection manager future execution context-5,5,main]
>> org.apache.spark.SparkException: Error sending message to BlockManagerMaster [message = RegisterBlockManager(BlockManagerId(1, Hadoop.Slave2, 43516, 0),9261023232,Actor[akka://spark/user/BlockManagerActor1#-125267976])]
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:251)
>> 	at org.apache.spark.storage.BlockManagerMaster.tell(BlockManagerMaster.scala:216)
>> 	at org.apache.spark.storage.BlockManagerMaster.registerBlockManager(BlockManagerMaster.scala:57)
>> 	at org.apache.spark.storage.BlockManager.reregister(BlockManager.scala:193)
>> 	at org.apache.spark.storage.BlockManager.org$apache$spark$storage$BlockManager$$heartBeat(BlockManager.scala:114)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1$$anonfun$apply$mcV$sp$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at org.apache.spark.util.Utils$.tryOrExit(Utils.scala:790)
>> 	at org.apache.spark.storage.BlockManager$$anonfun$initialize$1.apply$mcV$sp(BlockManager.scala:158)
>> 	at akka.actor.Scheduler$$anon$9.run(Scheduler.scala:80)
>> 	at akka.actor.LightArrayRevolverScheduler$$anon$3$$anon$2.run(Scheduler.scala:241)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> Caused by: java.util.concurrent.TimeoutException: Futures timed out after [30 seconds]
>> 	at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:219)
>> 	at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:223)
>> 	at scala.concurrent.Await$$anonfun$result$1.apply(package.scala:107)
>> 	at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
>> 	at scala.concurrent.Await$.result(package.scala:107)
>> 	at org.apache.spark.storage.BlockManagerMaster.askDriverWithReply(BlockManagerMaster.scala:237)
>> 	... 12 more
>> 14/10/13 21:43:29 WARN HadoopRDD: Exception in RecordReader.close()
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:619)
>> 	at java.io.FilterInputStream.close(FilterInputStream.java:155)
>> 	at org.apache.hadoop.util.LineReader.close(LineReader.java:150)
>> 	at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:244)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.close(HadoopRDD.scala:211)
>> 	at org.apache.spark.util.NextIterator.closeIfNeeded(NextIterator.scala:63)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1$$anonfun$1.apply$mcV$sp(HadoopRDD.scala:196)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>> 	at org.apache.spark.TaskContext.executeOnCompleteCallbacks(TaskContext.scala:63)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:204)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 WARN HadoopRDD: Exception in RecordReader.close()
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:619)
>> 	at java.io.FilterInputStream.close(FilterInputStream.java:155)
>> 	at org.apache.hadoop.util.LineReader.close(LineReader.java:150)
>> 	at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:244)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.close(HadoopRDD.scala:211)
>> 	at org.apache.spark.util.NextIterator.closeIfNeeded(NextIterator.scala:63)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1$$anonfun$1.apply$mcV$sp(HadoopRDD.scala:196)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>> 	at org.apache.spark.TaskContext.executeOnCompleteCallbacks(TaskContext.scala:63)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:204)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 WARN HadoopRDD: Exception in RecordReader.close()
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.close(DFSInputStream.java:619)
>> 	at java.io.FilterInputStream.close(FilterInputStream.java:155)
>> 	at org.apache.hadoop.util.LineReader.close(LineReader.java:150)
>> 	at org.apache.hadoop.mapred.LineRecordReader.close(LineRecordReader.java:244)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.close(HadoopRDD.scala:211)
>> 	at org.apache.spark.util.NextIterator.closeIfNeeded(NextIterator.scala:63)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1$$anonfun$1.apply$mcV$sp(HadoopRDD.scala:196)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at org.apache.spark.TaskContext$$anonfun$executeOnCompleteCallbacks$1.apply(TaskContext.scala:63)
>> 	at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
>> 	at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
>> 	at org.apache.spark.TaskContext.executeOnCompleteCallbacks(TaskContext.scala:63)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:204)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 ERROR Executor: Exception in task ID 47
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
>> 	at java.io.DataInputStream.read(DataInputStream.java:83)
>> 	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>> 	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>> 	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:209)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
>> 	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>> 	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> 	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>> 	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 ERROR Executor: Exception in task ID 44
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
>> 	at java.io.DataInputStream.read(DataInputStream.java:83)
>> 	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>> 	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>> 	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:209)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
>> 	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>> 	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> 	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>> 	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 ERROR Executor: Exception in task ID 45
>> java.io.IOException: Filesystem closed
>> 	at org.apache.hadoop.hdfs.DFSClient.checkOpen(DFSClient.java:707)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.readWithStrategy(DFSInputStream.java:776)
>> 	at org.apache.hadoop.hdfs.DFSInputStream.read(DFSInputStream.java:837)
>> 	at java.io.DataInputStream.read(DataInputStream.java:83)
>> 	at org.apache.hadoop.util.LineReader.fillBuffer(LineReader.java:180)
>> 	at org.apache.hadoop.util.LineReader.readDefaultLine(LineReader.java:216)
>> 	at org.apache.hadoop.util.LineReader.readLine(LineReader.java:174)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:209)
>> 	at org.apache.hadoop.mapred.LineRecordReader.next(LineRecordReader.java:47)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:201)
>> 	at org.apache.spark.rdd.HadoopRDD$$anon$1.getNext(HadoopRDD.scala:184)
>> 	at org.apache.spark.util.NextIterator.hasNext(NextIterator.scala:71)
>> 	at org.apache.spark.InterruptibleIterator.hasNext(InterruptibleIterator.scala:39)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$$anon$14.hasNext(Iterator.scala:388)
>> 	at scala.collection.Iterator$$anon$11.hasNext(Iterator.scala:327)
>> 	at scala.collection.Iterator$class.foreach(Iterator.scala:727)
>> 	at scala.collection.AbstractIterator.foreach(Iterator.scala:1157)
>> 	at scala.collection.generic.Growable$class.$plus$plus$eq(Growable.scala:48)
>> 	at scala.collection.mutable.ArrayBuffer.$plus$plus$eq(ArrayBuffer.scala:103)
>> 	at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:107)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:227)
>> 	at org.apache.spark.rdd.MappedRDD.compute(MappedRDD.scala:31)
>> 	at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:262)
>> 	at org.apache.spark.rdd.RDD.iterator(RDD.scala:229)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:158)
>> 	at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:99)
>> 	at org.apache.spark.scheduler.Task.run(Task.scala:51)
>> 	at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:183)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:886)
>> 	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:908)
>> 	at java.lang.Thread.run(Thread.java:662)
>> 14/10/13 21:43:43 INFO CoarseGrainedExecutorBackend: Got assigned task 48
>> 14/10/13 21:43:43 INFO Executor: Running task ID 48
>> 14/10/13 21:43:43 INFO CoarseGrainedExecutorBackend: Got assigned task 49
>> 14/10/13 21:43:43 INFO Executor: Running task ID 49
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_3 locally
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_3 locally
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_5 locally
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_5 locally
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_4 locally
>> 14/10/13 21:43:43 INFO BlockManager: Found block broadcast_4 locally
>> 14/10/13 21:43:44 INFO CacheManager: Partition rdd_26_0 not found, computing it
>> 14/10/13 21:43:44 INFO HadoopRDD: Input split: hdfs://Hadoop.Master:9000/rec/input/ss/archive/rec_ss_input.har/part-0:0+536870912
>> 14/10/13 21:43:44 INFO CacheManager: Partition rdd_26_1 not found, computing it
>> 14/10/13 21:43:44 INFO HadoopRDD: Input split: hdfs://Hadoop.Master:9000/rec/input/ss/archive/rec_ss_input.har/part-0:536870912+536870912
>>
>>
>> -------------------------------------------------------------------------------
>>
>> Thanks,
>> chepoo
>>
>>
