spark-user mailing list archives

From <spark....@yahoo.com.INVALID>
Subject Re: spark job is not running on yarn cluster mode
Date Tue, 17 May 2016 13:59:29 GMT
Hey Ayan, I am sorry, I posted the wrong log file. Could you please check the logs below?
log1:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/14/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/17 18:40:47 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
16/05/17 18:40:47 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/17 18:40:48 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1463479181441_0004_000002
16/05/17 18:40:48 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 18:40:48 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 18:40:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 18:40:48 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
16/05/17 18:40:48 INFO yarn.ApplicationMaster: Waiting for spark context initialization
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: found keyword== userTwitterToken=9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 1
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 2
16/05/17 18:40:48 INFO yarn.ApplicationMaster: Waiting for spark context initialization ... 
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = nokia,samsung,iphone,blackberry
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = All
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = mo
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = en
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = retweet
16/05/17 18:40:48 INFO spark.SparkTweetStreamingHDFSLoad: Twitter Token...........[Ljava.lang.String;@3ee5e48d
16/05/17 18:40:48 INFO spark.SparkContext: Running Spark version 1.5.2
16/05/17 18:40:48 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
        
16/05/17 18:40:48 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 18:40:48 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 18:40:48 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 18:40:48 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 18:40:48 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 18:40:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/05/17 18:40:49 INFO Remoting: Starting remoting
16/05/17 18:40:49 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.16.28.194:53411]
16/05/17 18:40:49 INFO util.Utils: Successfully started service 'sparkDriver' on port 53411.
16/05/17 18:40:49 INFO spark.SparkEnv: Registering MapOutputTracker
16/05/17 18:40:49 INFO spark.SparkEnv: Registering BlockManagerMaster
16/05/17 18:40:49 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0004/blockmgr-10b5e01d-18e1-4bf5-8645-8b351db27a5a
16/05/17 18:40:49 INFO storage.MemoryStore: MemoryStore started with capacity 2.4 GB
16/05/17 18:40:49 INFO spark.HttpFileServer: HTTP File server directory is /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0004/spark-74b8b507-32cb-41fb-aecd-b687429648ac/httpd-c680799e-bb40-4389-b7ff-9bdc48a9694d
16/05/17 18:40:49 INFO spark.HttpServer: Starting HTTP Server
16/05/17 18:40:49 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/17 18:40:49 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:33252
16/05/17 18:40:49 INFO util.Utils: Successfully started service 'HTTP file server' on port 33252.
16/05/17 18:40:49 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/05/17 18:40:49 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/17 18:40:54 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/17 18:40:54 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:59690
16/05/17 18:40:54 INFO util.Utils: Successfully started service 'SparkUI' on port 59690.
16/05/17 18:40:54 INFO ui.SparkUI: Started SparkUI at http://172.16.28.194:59690
16/05/17 18:40:54 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
16/05/17 18:40:54 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/05/17 18:40:54 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 49472.
16/05/17 18:40:54 INFO netty.NettyBlockTransferService: Server created on 49472
16/05/17 18:40:54 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/05/17 18:40:54 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.16.28.194:49472 with 2.4 GB RAM, BlockManagerId(driver, 172.16.28.194, 49472)
16/05/17 18:40:54 INFO storage.BlockManagerMaster: Registered BlockManager
16/05/17 18:40:54 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#252649983])
16/05/17 18:40:54 INFO client.RMProxy: Connecting to ResourceManager at namenode/172.16.28.190:8030
16/05/17 18:40:54 INFO yarn.YarnRMClient: Registering the ApplicationMaster
16/05/17 18:40:55 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/05/17 18:40:55 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/05/17 18:40:55 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/05/17 18:40:55 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/05/17 18:40:55 INFO impl.AMRMClientImpl: Received new token for : node3:45827
16/05/17 18:40:55 INFO yarn.YarnAllocator: Launching container container_1463479181441_0004_02_000002 for on host node3
16/05/17 18:40:55 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@172.16.28.194:53411/user/CoarseGrainedScheduler,  executorHostname: node3
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Starting Executor Container
16/05/17 18:40:55 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/05/17 18:40:55 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Preparing Local resources
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0004/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar" } size: 216515519 timestamp: 1463490486766 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0004/spark-assembly-1.5.2-hadoop2.6.0.jar" } size: 183993445 timestamp: 1463490463713 type: FILE visibility: PRIVATE)
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://node3:8042/node/containerlogs/container_1463479181441_0004_02_000002/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1463479181441_0004
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183993445,216515519
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_JAVA_OPTS -> -Dspark.driver.port=53411
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1463490463713,1463490486766
    SPARK_LOG_URL_STDOUT -> http://node3:8042/node/containerlogs/container_1463479181441_0004_02_000002/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0004/spark-assembly-1.5.2-hadoop2.6.0.jar#__spark__.jar,hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0004/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-Dspark.driver.port=53411' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=53411' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.16.28.194:53411/user/CoarseGrainedScheduler --executor-id 1 --hostname node3 --cores 1 --app-id application_1463479181441_0004 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
16/05/17 18:40:55 INFO impl.ContainerManagementProtocolProxy: Opening proxy : node3:45827
16/05/17 18:40:55 INFO impl.AMRMClientImpl: Received new token for : node4:58299
16/05/17 18:40:55 INFO yarn.YarnAllocator: Launching container container_1463479181441_0004_02_000003 for on host node4
16/05/17 18:40:55 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@172.16.28.194:53411/user/CoarseGrainedScheduler,  executorHostname: node4
16/05/17 18:40:55 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Starting Executor Container
16/05/17 18:40:55 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Preparing Local resources
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0004/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar" } size: 216515519 timestamp: 1463490486766 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0004/spark-assembly-1.5.2-hadoop2.6.0.jar" } size: 183993445 timestamp: 1463490463713 type: FILE visibility: PRIVATE)
16/05/17 18:40:55 INFO yarn.ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://node4:8042/node/containerlogs/container_1463479181441_0004_02_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1463479181441_0004
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183993445,216515519
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_JAVA_OPTS -> -Dspark.driver.port=53411
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1463490463713,1463490486766
    SPARK_LOG_URL_STDOUT -> http://node4:8042/node/containerlogs/container_1463479181441_0004_02_000003/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0004/spark-assembly-1.5.2-hadoop2.6.0.jar#__spark__.jar,hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0004/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-Dspark.driver.port=53411' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=53411' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.16.28.194:53411/user/CoarseGrainedScheduler --executor-id 2 --hostname node4 --cores 1 --app-id application_1463479181441_0004 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
16/05/17 18:40:55 INFO impl.ContainerManagementProtocolProxy: Opening proxy : node4:58299
16/05/17 18:40:57 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node3:40418
16/05/17 18:40:57 INFO cluster.YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@node3:57322/user/Executor#-1353850614]) with ID 1
16/05/17 18:40:57 INFO storage.BlockManagerMasterEndpoint: Registering block manager node3:53989 with 530.0 MB RAM, BlockManagerId(1, node3, 53989)
16/05/17 18:41:24 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after waiting maxRegisteredResourcesWaitingTime: 30000(ms)
16/05/17 18:41:24 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/05/17 18:41:24 INFO spark.SparkTweetStreamingHDFSLoad: dayOfTheWeek .........[Ljava.lang.String;@42c6ef6d
16/05/17 18:41:24 INFO rate.PIDRateEstimator: Created PIDRateEstimator with proportional = 1.0, integral = 0.2, derivative = 0.0, min rate = 100.0
16/05/17 18:41:24 INFO spark.SparkTweetStreamingHDFSLoad: Terminate DAte............Tue May 17 00:00:00 IST 2016
16/05/17 18:41:24 INFO spark.SparkTweetStreamingHDFSLoad: outputURI--------------hdfs://namenode:54310/spark/TweetData/twitterRawDataTest
16/05/17 18:41:24 INFO spark.SparkTweetStreamingHDFSLoad: outputURI--------------hdfs://namenode:54310/spark/TweetData/twitterSeggDataTest
16/05/17 18:41:25 INFO spark.SparkContext: Starting job: start at SparkTweetStreamingHDFSLoad.java:1743
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Registering RDD 1 (start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Got job 0 (start at SparkTweetStreamingHDFSLoad.java:1743) with 20 output partitions
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Final stage: ResultStage 1(start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[1] at start at SparkTweetStreamingHDFSLoad.java:1743), which has no missing parents
16/05/17 18:41:25 INFO storage.MemoryStore: ensureFreeSpace(2736) called with curMem=0, maxMem=2577200578
16/05/17 18:41:25 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.7 KB, free 2.4 GB)
16/05/17 18:41:25 INFO storage.MemoryStore: ensureFreeSpace(1655) called with curMem=2736, maxMem=2577200578
16/05/17 18:41:25 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1655.0 B, free 2.4 GB)
16/05/17 18:41:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.28.194:49472 (size: 1655.0 B, free: 2.4 GB)
16/05/17 18:41:25 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/05/17 18:41:25 INFO scheduler.DAGScheduler: Submitting 50 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 18:41:25 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 50 tasks
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node3:53989 (size: 1655.0 B, free: 530.0 MB)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 606 ms on node3 (1/50)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 76 ms on node3 (2/50)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 45 ms on node3 (3/50)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 33 ms on node3 (4/50)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 35 ms on node3 (5/50)
16/05/17 18:41:25 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 28 ms on node3 (6/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 39 ms on node3 (7/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 30 ms on node3 (8/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 29 ms on node3 (9/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 34 ms on node3 (10/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 31 ms on node3 (11/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 28 ms on node3 (12/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 13.0 in stage 0.0 (TID 13, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 26 ms on node3 (13/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 14.0 in stage 0.0 (TID 14, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 13.0 in stage 0.0 (TID 13) in 24 ms on node3 (14/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 14.0 in stage 0.0 (TID 14) in 31 ms on node3 (15/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 16.0 in stage 0.0 (TID 16, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 15.0 in stage 0.0 (TID 15) in 26 ms on node3 (16/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 17.0 in stage 0.0 (TID 17, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 16.0 in stage 0.0 (TID 16) in 26 ms on node3 (17/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 17.0 in stage 0.0 (TID 17) in 44 ms on node3 (18/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 18.0 in stage 0.0 (TID 18, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 19.0 in stage 0.0 (TID 19, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 18.0 in stage 0.0 (TID 18) in 27 ms on node3 (19/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 20.0 in stage 0.0 (TID 20, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 19.0 in stage 0.0 (TID 19) in 32 ms on node3 (20/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 21.0 in stage 0.0 (TID 21, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 20.0 in stage 0.0 (TID 20) in 37 ms on node3 (21/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 22.0 in stage 0.0 (TID 22, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 21.0 in stage 0.0 (TID 21) in 32 ms on node3 (22/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 23.0 in stage 0.0 (TID 23, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 22.0 in stage 0.0 (TID 22) in 31 ms on node3 (23/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 24.0 in stage 0.0 (TID 24, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 23.0 in stage 0.0 (TID 23) in 45 ms on node3 (24/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 25.0 in stage 0.0 (TID 25, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 24.0 in stage 0.0 (TID 24) in 32 ms on node3 (25/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 25.0 in stage 0.0 (TID 25) in 34 ms on node3 (26/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 26.0 in stage 0.0 (TID 26, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 24 ms on node3 (27/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 27.0 in stage 0.0 (TID 27, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 28.0 in stage 0.0 (TID 28, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 27.0 in stage 0.0 (TID 27) in 25 ms on node3 (28/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 29.0 in stage 0.0 (TID 29, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 28.0 in stage 0.0 (TID 28) in 40 ms on node3 (29/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 29.0 in stage 0.0 (TID 29) in 27 ms on node3 (30/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 30.0 in stage 0.0 (TID 30, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 31.0 in stage 0.0 (TID 31, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 30.0 in stage 0.0 (TID 30) in 40 ms on node3 (31/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 32.0 in stage 0.0 (TID 32, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 31.0 in stage 0.0 (TID 31) in 32 ms on node3 (32/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 33.0 in stage 0.0 (TID 33, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 32.0 in stage 0.0 (TID 32) in 28 ms on node3 (33/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 34.0 in stage 0.0 (TID 34, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 33.0 in stage 0.0 (TID 33) in 37 ms on node3 (34/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 35.0 in stage 0.0 (TID 35, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 34.0 in stage 0.0 (TID 34) in 32 ms on node3 (35/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 36.0 in stage 0.0 (TID 36, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 35.0 in stage 0.0 (TID 35) in 29 ms on node3 (36/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 37.0 in stage 0.0 (TID 37, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 36.0 in stage 0.0 (TID 36) in 31 ms on node3 (37/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 38.0 in stage 0.0 (TID 38, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 27 ms on node3 (38/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 39.0 in stage 0.0 (TID 39, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 38.0 in stage 0.0 (TID 38) in 29 ms on node3 (39/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 40.0 in stage 0.0 (TID 40, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 39.0 in stage 0.0 (TID 39) in 31 ms on node3 (40/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 41.0 in stage 0.0 (TID 41, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 40.0 in stage 0.0 (TID 40) in 27 ms on node3 (41/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 42.0 in stage 0.0 (TID 42, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 41.0 in stage 0.0 (TID 41) in 26 ms on node3 (42/50)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Starting task 43.0 in stage 0.0 (TID 43, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:26 INFO scheduler.TaskSetManager: Finished task 42.0 in stage 0.0 (TID 42) in 42 ms on node3 (43/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 44.0 in stage 0.0 (TID 44, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 43.0 in stage 0.0 (TID 43) in 42 ms on node3 (44/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 45.0 in stage 0.0 (TID 45, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 44.0 in stage 0.0 (TID 44) in 29 ms on node3 (45/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 46.0 in stage 0.0 (TID 46, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 45.0 in stage 0.0 (TID 45) in 34 ms on node3 (46/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 47.0 in stage 0.0 (TID 47, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 46.0 in stage 0.0 (TID 46) in 45 ms on node3 (47/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 48.0 in stage 0.0 (TID 48, node3, PROCESS_LOCAL, 1962 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 47.0 in stage 0.0 (TID 47) in 33 ms on node3 (48/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 49.0 in stage 0.0 (TID 49, node3, PROCESS_LOCAL, 1929 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 48.0 in stage 0.0 (TID 48) in 23 ms on node3 (49/50)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 49.0 in stage 0.0 (TID 49) in 42 ms on node3 (50/50)
16/05/17 18:41:27 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 1.956 s
16/05/17 18:41:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/05/17 18:41:27 INFO scheduler.DAGScheduler: looking for newly runnable stages
16/05/17 18:41:27 INFO scheduler.DAGScheduler: running: Set()
16/05/17 18:41:27 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)
16/05/17 18:41:27 INFO scheduler.DAGScheduler: failed: Set()
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Missing parents for ResultStage 1: List()
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[2] at start at SparkTweetStreamingHDFSLoad.java:1743), which is now runnable
16/05/17 18:41:27 INFO storage.MemoryStore: ensureFreeSpace(2344) called with curMem=4391, maxMem=2577200578
16/05/17 18:41:27 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.3 KB, free 2.4 GB)
16/05/17 18:41:27 INFO storage.MemoryStore: ensureFreeSpace(1400) called with curMem=6735, maxMem=2577200578
16/05/17 18:41:27 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1400.0 B, free 2.4 GB)
16/05/17 18:41:27 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.16.28.194:49472 (size: 1400.0 B, free: 2.4 GB)
16/05/17 18:41:27 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Submitting 20 missing tasks from ResultStage 1 (ShuffledRDD[2] at start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 18:41:27 INFO cluster.YarnClusterScheduler: Adding task set 1.0 with 20 tasks
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 50, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on node3:53989 (size: 1400.0 B, free: 530.0 MB)
16/05/17 18:41:27 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to node3:57322
16/05/17 18:41:27 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 254 bytes
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 1.0 (TID 51, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 50) in 85 ms on node3 (1/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 1.0 (TID 52, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 1.0 (TID 51) in 26 ms on node3 (2/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 1.0 (TID 53, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 1.0 (TID 52) in 22 ms on node3 (3/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 1.0 (TID 54, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 1.0 (TID 53) in 18 ms on node3 (4/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 1.0 (TID 55, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 1.0 (TID 54) in 23 ms on node3 (5/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 1.0 (TID 56, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 1.0 (TID 55) in 20 ms on node3 (6/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 1.0 (TID 57, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 1.0 (TID 56) in 18 ms on node3 (7/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 1.0 (TID 58, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 1.0 (TID 57) in 24 ms on node3 (8/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 1.0 (TID 59, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 1.0 (TID 58) in 17 ms on node3 (9/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 10.0 in stage 1.0 (TID 60, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 1.0 (TID 59) in 16 ms on node3 (10/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 11.0 in stage 1.0 (TID 61, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 10.0 in stage 1.0 (TID 60) in 30 ms on node3 (11/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 12.0 in stage 1.0 (TID 62, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 11.0 in stage 1.0 (TID 61) in 15 ms on node3 (12/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 13.0 in stage 1.0 (TID 63, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 12.0 in stage 1.0 (TID 62) in 17 ms on node3 (13/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 14.0 in stage 1.0 (TID 64, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 13.0 in stage 1.0 (TID 63) in 13 ms on node3 (14/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 15.0 in stage 1.0 (TID 65, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 14.0 in stage 1.0 (TID 64) in 22 ms on node3 (15/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 16.0 in stage 1.0 (TID 66, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 15.0 in stage 1.0 (TID 65) in 16 ms on node3 (16/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 17.0 in stage 1.0 (TID 67, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 16.0 in stage 1.0 (TID 66) in 26 ms on node3 (17/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 18.0 in stage 1.0 (TID 68, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 17.0 in stage 1.0 (TID 67) in 27 ms on node3 (18/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 19.0 in stage 1.0 (TID 69, node3, PROCESS_LOCAL, 1901 bytes)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 18.0 in stage 1.0 (TID 68) in 27 ms on node3 (19/20)
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Finished task 19.0 in stage 1.0 (TID 69) in 20 ms on node3 (20/20)
16/05/17 18:41:27 INFO cluster.YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/05/17 18:41:27 INFO scheduler.DAGScheduler: ResultStage 1 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 0.433 s
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Job 0 finished: start at SparkTweetStreamingHDFSLoad.java:1743, took 2.646352 s
16/05/17 18:41:27 INFO scheduler.ReceiverTracker: Starting 1 receivers
16/05/17 18:41:27 INFO scheduler.ReceiverTracker: ReceiverTracker started
16/05/17 18:41:27 INFO dstream.ForEachDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO dstream.FilteredDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO dstream.MappedDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Initialized and validated org.apache.spark.streaming.twitter.TwitterInputDStream@dc3f01b
16/05/17 18:41:27 INFO dstream.MappedDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.MappedDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.MappedDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@1046509
16/05/17 18:41:27 INFO dstream.FilteredDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.FilteredDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.FilteredDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@1bbed24f
16/05/17 18:41:27 INFO dstream.ForEachDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.ForEachDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.ForEachDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@5cab052f
16/05/17 18:41:27 INFO dstream.ForEachDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO dstream.FilteredDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO dstream.MappedDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: metadataCleanupDelay = -1
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO twitter.TwitterInputDStream: Initialized and validated org.apache.spark.streaming.twitter.TwitterInputDStream@dc3f01b
16/05/17 18:41:27 INFO dstream.MappedDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.MappedDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.MappedDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@4e7a7589
16/05/17 18:41:27 INFO dstream.FilteredDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.FilteredDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.FilteredDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@6c747d62
16/05/17 18:41:27 INFO dstream.ForEachDStream: Slide time = 60000 ms
16/05/17 18:41:27 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 18:41:27 INFO dstream.ForEachDStream: Checkpoint interval = null
16/05/17 18:41:27 INFO dstream.ForEachDStream: Remember duration = 60000 ms
16/05/17 18:41:27 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@1b5f609
16/05/17 18:41:27 INFO util.RecurringTimer: Started timer for JobGenerator at time 1463490720000
16/05/17 18:41:27 INFO scheduler.JobGenerator: Started JobGenerator at 1463490720000 ms
16/05/17 18:41:27 INFO scheduler.JobScheduler: Started JobScheduler
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Got job 1 (start at SparkTweetStreamingHDFSLoad.java:1743) with 1 output partitions
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Final stage: ResultStage 2(start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Missing parents: List()
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:556), which has no missing parents
16/05/17 18:41:27 INFO streaming.StreamingContext: StreamingContext started
16/05/17 18:41:27 INFO scheduler.ReceiverTracker: Receiver 0 started
16/05/17 18:41:27 INFO storage.MemoryStore: ensureFreeSpace(62448) called with curMem=8135, maxMem=2577200578
16/05/17 18:41:27 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 61.0 KB, free 2.4 GB)
16/05/17 18:41:27 INFO storage.MemoryStore: ensureFreeSpace(21085) called with curMem=70583, maxMem=2577200578
16/05/17 18:41:27 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 20.6 KB, free 2.4 GB)
16/05/17 18:41:27 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.16.28.194:49472 (size: 20.6 KB, free: 2.4 GB)
16/05/17 18:41:27 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
16/05/17 18:41:27 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:556)
16/05/17 18:41:27 INFO cluster.YarnClusterScheduler: Adding task set 2.0 with 1 tasks
16/05/17 18:41:27 INFO impl.StdSchedulerFactory: Using default implementation for ThreadExecutor
16/05/17 18:41:27 INFO simpl.SimpleThreadPool: Job execution threads will use class loader of thread: Driver
16/05/17 18:41:27 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, node3, NODE_LOCAL, 3094 bytes)
16/05/17 18:41:27 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on node3:53989 (size: 20.6 KB, free: 530.0 MB)
16/05/17 18:41:27 INFO core.SchedulerSignalerImpl: Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
16/05/17 18:41:27 INFO core.QuartzScheduler: Quartz Scheduler v.1.8.6 created.
16/05/17 18:41:27 INFO simpl.RAMJobStore: RAMJobStore initialized.
16/05/17 18:41:27 INFO core.QuartzScheduler: Scheduler meta-data: Quartz Scheduler (v1.8.6) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
  Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

16/05/17 18:41:27 INFO impl.StdSchedulerFactory: Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
16/05/17 18:41:27 INFO impl.StdSchedulerFactory: Quartz scheduler version: 1.8.6
16/05/17 18:41:27 INFO core.QuartzScheduler: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
16/05/17 18:41:27 INFO spark.SparkTweetStreamingHDFSLoad: END {}TwitterTweets
16/05/17 18:41:27 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
16/05/17 18:41:27 INFO streaming.StreamingContext: Invoking stop(stopGracefully=false) from shutdown hook
16/05/17 18:41:27 INFO scheduler.ReceiverTracker: Sent stop signal to all 1 receivers
16/05/17 18:41:28 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 70) in 555 ms on node3 (1/1)
16/05/17 18:41:28 INFO cluster.YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool 
16/05/17 18:41:28 INFO scheduler.DAGScheduler: ResultStage 2 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 0.556 s
16/05/17 18:41:28 INFO scheduler.ReceiverTracker: All of the receivers have deregistered successfully
16/05/17 18:41:28 INFO scheduler.ReceiverTracker: ReceiverTracker stopped
16/05/17 18:41:28 INFO scheduler.JobGenerator: Stopping JobGenerator immediately
16/05/17 18:41:28 INFO util.RecurringTimer: Stopped timer for JobGenerator after time -1
16/05/17 18:41:28 INFO scheduler.JobGenerator: Stopped JobGenerator
16/05/17 18:41:28 INFO scheduler.JobScheduler: Stopped JobScheduler
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/batch,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/streaming,null}
16/05/17 18:41:28 INFO streaming.StreamingContext: StreamingContext stopped successfully
16/05/17 18:41:28 INFO spark.SparkContext: Invoking stop() from shutdown hook
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/batch/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/05/17 18:41:28 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/05/17 18:41:28 INFO ui.SparkUI: Stopped Spark web UI at http://172.16.28.194:59690
16/05/17 18:41:28 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/05/17 18:41:28 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
16/05/17 18:41:28 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
16/05/17 18:41:28 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node3:57322
16/05/17 18:41:28 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/17 18:41:28 INFO storage.MemoryStore: MemoryStore cleared
16/05/17 18:41:28 INFO storage.BlockManager: BlockManager stopped
16/05/17 18:41:28 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/05/17 18:41:28 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/17 18:41:28 INFO spark.SparkContext: Successfully stopped SparkContext
16/05/17 18:41:28 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
16/05/17 18:41:28 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/17 18:41:28 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/17 18:41:28 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
16/05/17 18:41:28 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remoting shut down.
16/05/17 18:41:28 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1463479181441_0004
16/05/17 18:41:28 INFO util.ShutdownHookManager: Shutdown hook called
16/05/17 18:41:28 INFO util.ShutdownHookManager: Deleting directory /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0004/spark-74b8b507-32cb-41fb-aecd-b687429648ac
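(A side note on the SPARK_JAVA_OPTS warning near the top of log1: as the warning itself suggests, the driver port can be passed through spark-submit instead of the environment variable. A minimal sketch, assuming Spark 1.5.x and that 53411 is really the port we want:

    # one-off, on the command line
    ./bin/spark-submit --conf spark.driver.port=53411 ...

    # or as a default for every application, in conf/spark-defaults.conf
    spark.driver.port    53411
)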



log2:
SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/15/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/17 16:17:41 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
16/05/17 16:17:42 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/17 16:17:42 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1463479181441_0003_000001
16/05/17 16:17:43 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:43 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:43 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 16:17:44 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
16/05/17 16:17:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: found keyword== userTwitterToken=9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:44 INFO yarn.ApplicationMaster: Waiting for spark context initialization ... 
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 1
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 2
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = nokia,samsung,iphone,blackberry
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = All
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = mo
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = en
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = retweet
16/05/17 16:17:44 INFO spark.SparkTweetStreamingHDFSLoad: Twitter Token...........[Ljava.lang.String;@4c4d744f
16/05/17 16:17:44 INFO spark.SparkContext: Running Spark version 1.5.2
16/05/17 16:17:44 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
        
16/05/17 16:17:44 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 16:17:44 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 16:17:44 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:44 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:44 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 16:17:45 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/05/17 16:17:45 INFO Remoting: Starting remoting
16/05/17 16:17:45 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.16.28.195:53411]
16/05/17 16:17:45 INFO util.Utils: Successfully started service 'sparkDriver' on port 53411.
16/05/17 16:17:45 INFO spark.SparkEnv: Registering MapOutputTracker
16/05/17 16:17:45 INFO spark.SparkEnv: Registering BlockManagerMaster
16/05/17 16:17:45 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/blockmgr-9192371b-3a09-4a70-984c-bb0bacb8bad9
16/05/17 16:17:45 INFO storage.MemoryStore: MemoryStore started with capacity 1966.1 MB
16/05/17 16:17:45 INFO spark.HttpFileServer: HTTP File server directory is /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/spark-ee3ab1c5-6b04-4b32-846b-bec378fa4c1d/httpd-d79feae4-6316-4816-8b14-ffcbeaec3e07
16/05/17 16:17:45 INFO spark.HttpServer: Starting HTTP Server
16/05/17 16:17:45 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/17 16:17:45 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:36667
16/05/17 16:17:45 INFO util.Utils: Successfully started service 'HTTP file server' on port 36667.
16/05/17 16:17:45 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/05/17 16:17:45 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/17 16:17:47 ERROR yarn.ApplicationMaster: RECEIVED SIGNAL 15: SIGTERM
16/05/17 16:17:47 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0, (reason: Shutdown hook called before final status was reported.)
16/05/17 16:17:47 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED (diag message: Shutdown hook called before final status was reported.)
16/05/17 16:17:47 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1463479181441_0003
16/05/17 16:17:47 INFO storage.DiskBlockManager: Shutdown hook called
16/05/17 16:17:47 INFO util.ShutdownHookManager: Shutdown hook called
16/05/17 16:17:47 INFO util.ShutdownHookManager: Deleting directory /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/spark-ee3ab1c5-6b04-4b32-846b-bec378fa4c1d 

On Tuesday, May 17, 2016 7:24 PM, ayan guha <guha.ayan@gmail.com> wrote:
it says:

java.io.FileNotFoundException: File does not exist: hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar

so it looks like the jar is missing from the location you are running the program from.
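As a quick check (the jar name and HDFS path below are just copied from your log, and the main class is a placeholder -- substitute your own), verify the jar actually exists on the machine you submit from, and let spark-submit re-stage it:

  # confirm the application jar is present where you run spark-submit
  ls -lh SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar

  # while an attempt is still running you can also confirm the staged copy in HDFS
  # (.sparkStaging is cleaned up once the application finishes, so this only works mid-run)
  hdfs dfs -ls hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/

  # resubmit in cluster mode; spark-submit uploads the jar into .sparkStaging itself
  spark-submit --master yarn --deploy-mode cluster \
    --class your.package.SparkTweetStreamingHDFSLoad \
    SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar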
On Tue, May 17, 2016 at 10:38 PM, <spark.raj@yahoo.com.invalid> wrote:

Hi friends,
I am running a Spark streaming job in yarn-cluster mode, but it is failing. It works fine in yarn-client mode, and the spark-examples also run fine in yarn-cluster mode. Below is the log for the Spark streaming job in yarn-cluster mode. Can anyone help me with this?
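(For context, the only difference between the two runs is the deploy mode; the commands below are a sketch with placeholder class and jar names, not my exact invocation.)

  # yarn-client mode (works): the driver runs on the submitting machine,
  # so the application jar is read locally
  spark-submit --master yarn --deploy-mode client --class your.package.MainClass app.jar

  # yarn-cluster mode (fails): the driver runs inside the YARN ApplicationMaster,
  # so spark-submit must stage the jar into HDFS (.sparkStaging) and YARN must
  # localize it on every node that launches a container
  spark-submit --master yarn --deploy-mode cluster --class your.package.MainClass app.jar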

SLF4J: Class path contains multiple SLF4J bindings.
SLF4J: Found binding in [jar:file:/tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/filecache/15/spark-assembly-1.5.2-hadoop2.6.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/usr/local/hadoop/share/hadoop/common/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
16/05/17 16:17:47 INFO yarn.ApplicationMaster: Registered signal handlers for [TERM, HUP, INT]
16/05/17 16:17:48 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/05/17 16:17:48 INFO yarn.ApplicationMaster: ApplicationAttemptId: appattempt_1463479181441_0003_000002
16/05/17 16:17:49 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Starting the user application in a separate Thread
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Waiting for spark context initialization
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: found keyword== userTwitterToken=9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 9ACWejzaHVyxpPDYCHnDsO98U 01safwuyLO8B8S94v5i0p90SzxEPZqUUmCaDkYOj1FKN1dXKZC 702828259411521536-PNoSkM8xNIvuEVvoQ9Pj8fj7D8CkYp1 OntoQStrmwrztnzi1MSlM56sKc23bqUCC2WblbDPiiP8P
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 1
16/05/17 16:17:49 INFO yarn.ApplicationMaster: Waiting for spark context initialization ... 
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = 2
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = Tue May 17 00:00:00 IST 2016
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = nokia,samsung,iphone,blackberry
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = All
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = mo
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = en
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: DemoJava called = retweet
16/05/17 16:17:49 INFO spark.SparkTweetStreamingHDFSLoad: Twitter Token...........[Ljava.lang.String;@3ee5e48d
16/05/17 16:17:49 INFO spark.SparkContext: Running Spark version 1.5.2
16/05/17 16:17:49 WARN spark.SparkConf: 
SPARK_JAVA_OPTS was detected (set to '-Dspark.driver.port=53411').
This is deprecated in Spark 1.0+.

Please instead use:
 - ./spark-submit with conf/spark-defaults.conf to set defaults for an application
 - ./spark-submit with --driver-java-options to set -X options for a driver
 - spark.executor.extraJavaOptions to set -X options for executors
 - SPARK_DAEMON_JAVA_OPTS to set java options for standalone daemons (master or worker)
        
16/05/17 16:17:49 WARN spark.SparkConf: Setting 'spark.executor.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 16:17:49 WARN spark.SparkConf: Setting 'spark.driver.extraJavaOptions' to '-Dspark.driver.port=53411' as a work-around.
16/05/17 16:17:49 INFO spark.SecurityManager: Changing view acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: Changing modify acls to: hadoop
16/05/17 16:17:49 INFO spark.SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(hadoop); users with modify permissions: Set(hadoop)
16/05/17 16:17:49 INFO slf4j.Slf4jLogger: Slf4jLogger started
16/05/17 16:17:49 INFO Remoting: Starting remoting
16/05/17 16:17:50 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@172.16.28.195:53411]
16/05/17 16:17:50 INFO util.Utils: Successfully started service 'sparkDriver' on port 53411.
16/05/17 16:17:50 INFO spark.SparkEnv: Registering MapOutputTracker
16/05/17 16:17:50 INFO spark.SparkEnv: Registering BlockManagerMaster
16/05/17 16:17:50 INFO storage.DiskBlockManager: Created local directory at /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/blockmgr-fe61bf50-b650-4db9-989a-11199df6c1ac
16/05/17 16:17:50 INFO storage.MemoryStore: MemoryStore started with capacity 1966.1 MB
16/05/17 16:17:50 INFO spark.HttpFileServer: HTTP File server directory is /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/spark-5b36342a-6212-4cea-80da-b1961cab161c/httpd-20144975-e972-4b5a-8592-be94029cd0eb
16/05/17 16:17:50 INFO spark.HttpServer: Starting HTTP Server
16/05/17 16:17:50 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/17 16:17:50 INFO server.AbstractConnector: Started SocketConnector@0.0.0.0:47195
16/05/17 16:17:50 INFO util.Utils: Successfully started service 'HTTP file server' on port 47195.
16/05/17 16:17:50 INFO spark.SparkEnv: Registering OutputCommitCoordinator
16/05/17 16:17:50 INFO ui.JettyUtils: Adding filter: org.apache.hadoop.yarn.server.webproxy.amfilter.AmIpFilter
16/05/17 16:17:55 INFO server.Server: jetty-8.y.z-SNAPSHOT
16/05/17 16:17:55 INFO server.AbstractConnector: Started SelectChannelConnector@0.0.0.0:59320
16/05/17 16:17:55 INFO util.Utils: Successfully started service 'SparkUI' on port 59320.
16/05/17 16:17:55 INFO ui.SparkUI: Started SparkUI at http://172.16.28.195:59320
16/05/17 16:17:55 INFO cluster.YarnClusterScheduler: Created YarnClusterScheduler
16/05/17 16:17:55 WARN metrics.MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/05/17 16:17:55 INFO util.Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 57488.
16/05/17 16:17:55 INFO netty.NettyBlockTransferService: Server created on 57488
16/05/17 16:17:55 INFO storage.BlockManagerMaster: Trying to register BlockManager
16/05/17 16:17:55 INFO storage.BlockManagerMasterEndpoint: Registering block manager 172.16.28.195:57488 with 1966.1 MB RAM, BlockManagerId(driver, 172.16.28.195, 57488)
16/05/17 16:17:55 INFO storage.BlockManagerMaster: Registered BlockManager
16/05/17 16:17:56 INFO cluster.YarnSchedulerBackend$YarnSchedulerEndpoint: ApplicationMaster registered as AkkaRpcEndpointRef(Actor[akka://sparkDriver/user/YarnAM#-174037885])
16/05/17 16:17:56 INFO client.RMProxy: Connecting to ResourceManager at namenode/172.16.28.190:8030
16/05/17 16:17:56 INFO yarn.YarnRMClient: Registering the ApplicationMaster
16/05/17 16:17:56 INFO yarn.YarnAllocator: Will request 2 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/05/17 16:17:56 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/05/17 16:17:56 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/05/17 16:17:56 INFO yarn.ApplicationMaster: Started progress reporter thread with (heartbeat : 3000, initial allocation : 200) intervals
16/05/17 16:17:56 INFO impl.AMRMClientImpl: Received new token for : node4:58299
16/05/17 16:17:56 INFO yarn.YarnAllocator: Launching container container_1463479181441_0003_02_000002 for on host node4
16/05/17 16:17:56 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler,  executorHostname: node4
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Starting Executor Container
16/05/17 16:17:56 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/05/17 16:17:56 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Preparing Local resources
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar" } size: 216515519 timestamp: 1463481955892 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar" } size: 183993445 timestamp: 1463481933738 type: FILE visibility: PRIVATE)
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://node4:8042/node/containerlogs/container_1463479181441_0003_02_000002/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1463479181441_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183993445,216515519
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_JAVA_OPTS -> -Dspark.driver.port=53411
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1463481933738,1463481955892
    SPARK_LOG_URL_STDOUT -> http://node4:8042/node/containerlogs/container_1463479181441_0003_02_000002/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar#__spark__.jar,hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-Dspark.driver.port=53411' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=53411' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler --executor-id 1 --hostname node4 --cores 1 --app-id application_1463479181441_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
16/05/17 16:17:56 INFO impl.ContainerManagementProtocolProxy: Opening proxy : node4:58299
16/05/17 16:17:56 INFO impl.AMRMClientImpl: Received new token for : node2:52751
16/05/17 16:17:56 INFO yarn.YarnAllocator: Launching container container_1463479181441_0003_02_000003 for on host node2
16/05/17 16:17:56 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler,  executorHostname: node2
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Starting Executor Container
16/05/17 16:17:56 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/05/17 16:17:56 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Preparing Local resources
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar" } size: 216515519 timestamp: 1463481955892 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar" } size: 183993445 timestamp: 1463481933738 type: FILE visibility: PRIVATE)
16/05/17 16:17:56 INFO yarn.ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://node2:8042/node/containerlogs/container_1463479181441_0003_02_000003/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1463479181441_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183993445,216515519
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_JAVA_OPTS -> -Dspark.driver.port=53411
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1463481933738,1463481955892
    SPARK_LOG_URL_STDOUT -> http://node2:8042/node/containerlogs/container_1463479181441_0003_02_000003/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar#__spark__.jar,hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-Dspark.driver.port=53411' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=53411' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler --executor-id 2 --hostname node2 --cores 1 --app-id application_1463479181441_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
16/05/17 16:17:56 INFO impl.ContainerManagementProtocolProxy: Opening proxy : node2:52751
16/05/17 16:17:59 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node4:39430
16/05/17 16:17:59 INFO cluster.YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@node4:50089/user/Executor#1750526367]) with ID 1
16/05/17 16:17:59 INFO storage.BlockManagerMasterEndpoint: Registering block manager node4:47743 with 530.0 MB RAM, BlockManagerId(1, node4, 47743)
16/05/17 16:17:59 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 0 of them.
16/05/17 16:17:59 INFO yarn.YarnAllocator: Completed container container_1463479181441_0003_02_000003 (state: COMPLETE, exit status: -1000)
16/05/17 16:17:59 INFO yarn.YarnAllocator: Container marked as failed: container_1463479181441_0003_02_000003. Exit status: -1000. Diagnostics: File does not exist: hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar
java.io.FileNotFoundException: File does not exist: hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1122)
	at org.apache.hadoop.hdfs.DistributedFileSystem$18.doCall(DistributedFileSystem.java:1114)
	at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)
	at org.apache.hadoop.hdfs.DistributedFileSystem.getFileStatus(DistributedFileSystem.java:1114)
	at org.apache.hadoop.yarn.util.FSDownload.copy(FSDownload.java:251)
	at org.apache.hadoop.yarn.util.FSDownload.access$000(FSDownload.java:61)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:359)
	at org.apache.hadoop.yarn.util.FSDownload$2.run(FSDownload.java:357)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:356)
	at org.apache.hadoop.yarn.util.FSDownload.call(FSDownload.java:60)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
	at java.lang.Thread.run(Thread.java:745)


16/05/17 16:17:59 INFO cluster.YarnClusterSchedulerBackend: Asked to remove non-existent executor 2
16/05/17 16:18:02 INFO yarn.YarnAllocator: Will request 1 executor containers, each with 1 cores and 1408 MB memory including 384 MB overhead
16/05/17 16:18:02 INFO yarn.YarnAllocator: Container request (host: Any, capability: <memory:1408, vCores:1>)
16/05/17 16:18:03 INFO yarn.YarnAllocator: Launching container container_1463479181441_0003_02_000005 for on host node4
16/05/17 16:18:03 INFO yarn.YarnAllocator: Launching ExecutorRunnable. driverUrl: akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler,  executorHostname: node4
16/05/17 16:18:03 INFO yarn.YarnAllocator: Received 1 containers from YARN, launching executors on 1 of them.
16/05/17 16:18:03 INFO yarn.ExecutorRunnable: Starting Executor Container
16/05/17 16:18:03 INFO impl.ContainerManagementProtocolProxy: yarn.client.max-cached-nodemanagers-proxies : 0
16/05/17 16:18:03 INFO yarn.ExecutorRunnable: Setting up ContainerLaunchContext
16/05/17 16:18:03 INFO yarn.ExecutorRunnable: Preparing Local resources
16/05/17 16:18:03 INFO yarn.ExecutorRunnable: Prepared Local resources Map(__app__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar" } size: 216515519 timestamp: 1463481955892 type: FILE visibility: PRIVATE, __spark__.jar -> resource { scheme: "hdfs" host: "namenode" port: 54310 file: "/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar" } size: 183993445 timestamp: 1463481933738 type: FILE visibility: PRIVATE)
16/05/17 16:18:03 INFO yarn.ExecutorRunnable: 
===============================================================================
YARN executor launch context:
  env:
    CLASSPATH -> {{PWD}}<CPS>{{PWD}}/__spark__.jar<CPS>$HADOOP_CONF_DIR<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/*<CPS>$HADOOP_COMMON_HOME/share/hadoop/common/lib/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/*<CPS>$HADOOP_HDFS_HOME/share/hadoop/hdfs/lib/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/*<CPS>$HADOOP_YARN_HOME/share/hadoop/yarn/lib/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/*<CPS>$HADOOP_MAPRED_HOME/share/hadoop/mapreduce/lib/*
    SPARK_LOG_URL_STDERR -> http://node4:8042/node/containerlogs/container_1463479181441_0003_02_000005/hadoop/stderr?start=-4096
    SPARK_YARN_STAGING_DIR -> .sparkStaging/application_1463479181441_0003
    SPARK_YARN_CACHE_FILES_FILE_SIZES -> 183993445,216515519
    SPARK_USER -> hadoop
    SPARK_YARN_CACHE_FILES_VISIBILITIES -> PRIVATE,PRIVATE
    SPARK_YARN_MODE -> true
    SPARK_JAVA_OPTS -> -Dspark.driver.port=53411
    SPARK_YARN_CACHE_FILES_TIME_STAMPS -> 1463481933738,1463481955892
    SPARK_LOG_URL_STDOUT -> http://node4:8042/node/containerlogs/container_1463479181441_0003_02_000005/hadoop/stdout?start=-4096
    SPARK_YARN_CACHE_FILES -> hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/spark-assembly-1.5.2-hadoop2.6.0.jar#__spark__.jar,hdfs://namenode:54310/user/hadoop/.sparkStaging/application_1463479181441_0003/SparkTwittterStreamingJob-0.0.1-SNAPSHOT-jar-with-dependencies.jar#__app__.jar

  command:
    {{JAVA_HOME}}/bin/java -server -XX:OnOutOfMemoryError='kill %p' -Xms1024m -Xmx1024m '-Dspark.driver.port=53411' -Djava.io.tmpdir={{PWD}}/tmp '-Dspark.ui.port=0' '-Dspark.driver.port=53411' -Dspark.yarn.app.container.log.dir=<LOG_DIR> org.apache.spark.executor.CoarseGrainedExecutorBackend --driver-url akka.tcp://sparkDriver@172.16.28.195:53411/user/CoarseGrainedScheduler --executor-id 3 --hostname node4 --cores 1 --app-id application_1463479181441_0003 --user-class-path file:$PWD/__app__.jar 1> <LOG_DIR>/stdout 2> <LOG_DIR>/stderr
===============================================================================
      
16/05/17 16:18:03 INFO impl.ContainerManagementProtocolProxy: Opening proxy : node4:58299
16/05/17 16:18:06 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node4:35884
16/05/17 16:18:06 INFO cluster.YarnClusterSchedulerBackend: Registered executor: AkkaRpcEndpointRef(Actor[akka.tcp://sparkExecutor@node4:46484/user/Executor#-348284167]) with ID 3
16/05/17 16:18:06 INFO cluster.YarnClusterSchedulerBackend: SchedulerBackend is ready for scheduling beginning after reached minRegisteredResourcesRatio: 0.8
16/05/17 16:18:06 INFO cluster.YarnClusterScheduler: YarnClusterScheduler.postStartHook done
16/05/17 16:18:06 INFO storage.BlockManagerMasterEndpoint: Registering block manager node4:58845 with 530.0 MB RAM, BlockManagerId(3, node4, 58845)
16/05/17 16:18:06 INFO spark.SparkTweetStreamingHDFSLoad: dayOfTheWeek .........[Ljava.lang.String;@42c6ef6d
16/05/17 16:18:07 INFO rate.PIDRateEstimator: Created PIDRateEstimator with proportional = 1.0, integral = 0.2, derivative = 0.0, min rate = 100.0
16/05/17 16:18:07 INFO spark.SparkTweetStreamingHDFSLoad: Terminate DAte............Tue May 17 00:00:00 IST 2016
16/05/17 16:18:07 INFO spark.SparkTweetStreamingHDFSLoad: outputURI--------------hdfs://namenode:54310/spark/TweetData/twitterRawDataTest
16/05/17 16:18:07 INFO spark.SparkTweetStreamingHDFSLoad: outputURI--------------hdfs://namenode:54310/spark/TweetData/twitterSeggDataTest
16/05/17 16:18:07 INFO spark.SparkContext: Starting job: start at SparkTweetStreamingHDFSLoad.java:1743
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Registering RDD 1 (start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Got job 0 (start at SparkTweetStreamingHDFSLoad.java:1743) with 20 output partitions
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Final stage: ResultStage 1(start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Parents of final stage: List(ShuffleMapStage 0)
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Missing parents: List(ShuffleMapStage 0)
16/05/17 16:18:07 INFO scheduler.DAGScheduler: Submitting ShuffleMapStage 0 (MapPartitionsRDD[1] at start at SparkTweetStreamingHDFSLoad.java:1743), which has no missing parents
16/05/17 16:18:08 INFO storage.MemoryStore: ensureFreeSpace(2736) called with curMem=0, maxMem=2061647216
16/05/17 16:18:08 INFO storage.MemoryStore: Block broadcast_0 stored as values in memory (estimated size 2.7 KB, free 1966.1 MB)
16/05/17 16:18:08 INFO storage.MemoryStore: ensureFreeSpace(1655) called with curMem=2736, maxMem=2061647216
16/05/17 16:18:08 INFO storage.MemoryStore: Block broadcast_0_piece0 stored as bytes in memory (estimated size 1655.0 B, free 1966.1 MB)
16/05/17 16:18:08 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on 172.16.28.195:57488 (size: 1655.0 B, free: 1966.1 MB)
16/05/17 16:18:08 INFO spark.SparkContext: Created broadcast 0 from broadcast at DAGScheduler.scala:861
16/05/17 16:18:08 INFO scheduler.DAGScheduler: Submitting 50 missing tasks from ShuffleMapStage 0 (MapPartitionsRDD[1] at start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 16:18:08 INFO cluster.YarnClusterScheduler: Adding task set 0.0 with 50 tasks
16/05/17 16:18:08 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 0.0 (TID 0, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:08 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 0.0 (TID 1, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:12 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node4:47743 (size: 1655.0 B, free: 530.0 MB)
16/05/17 16:18:12 INFO storage.BlockManagerInfo: Added broadcast_0_piece0 in memory on node4:58845 (size: 1655.0 B, free: 530.0 MB)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 0.0 (TID 2, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 0.0 (TID 3, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 0.0 (TID 1) in 4243 ms on node4 (1/50)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 0.0 (TID 0) in 4296 ms on node4 (2/50)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 0.0 (TID 4, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 0.0 (TID 5, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 0.0 (TID 2) in 149 ms on node4 (3/50)
16/05/17 16:18:12 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 0.0 (TID 3) in 143 ms on node4 (4/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 0.0 (TID 6, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 0.0 (TID 7, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 0.0 (TID 4) in 109 ms on node4 (5/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 0.0 (TID 5) in 88 ms on node4 (6/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 0.0 (TID 8, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 0.0 (TID 6) in 74 ms on node4 (7/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 0.0 (TID 7) in 75 ms on node4 (8/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 0.0 (TID 9, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 0.0 (TID 8) in 83 ms on node4 (9/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 10.0 in stage 0.0 (TID 10, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 0.0 (TID 9) in 94 ms on node4 (10/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 11.0 in stage 0.0 (TID 11, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 12.0 in stage 0.0 (TID 12, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 10.0 in stage 0.0 (TID 10) in 70 ms on node4 (11/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 13.0 in stage 0.0 (TID 13, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 11.0 in stage 0.0 (TID 11) in 83 ms on node4 (12/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 14.0 in stage 0.0 (TID 14, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 15.0 in stage 0.0 (TID 15, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 14.0 in stage 0.0 (TID 14) in 64 ms on node4 (13/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 16.0 in stage 0.0 (TID 16, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 13.0 in stage 0.0 (TID 13) in 99 ms on node4 (14/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 12.0 in stage 0.0 (TID 12) in 169 ms on node4 (15/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 17.0 in stage 0.0 (TID 17, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 15.0 in stage 0.0 (TID 15) in 79 ms on node4 (16/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 18.0 in stage 0.0 (TID 18, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 16.0 in stage 0.0 (TID 16) in 112 ms on node4 (17/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 19.0 in stage 0.0 (TID 19, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 17.0 in stage 0.0 (TID 17) in 87 ms on node4 (18/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 20.0 in stage 0.0 (TID 20, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 18.0 in stage 0.0 (TID 18) in 73 ms on node4 (19/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 21.0 in stage 0.0 (TID 21, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 19.0 in stage 0.0 (TID 19) in 89 ms on node4 (20/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 22.0 in stage 0.0 (TID 22, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 20.0 in stage 0.0 (TID 20) in 113 ms on node4 (21/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 21.0 in stage 0.0 (TID 21) in 90 ms on node4 (22/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 23.0 in stage 0.0 (TID 23, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 24.0 in stage 0.0 (TID 24, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 22.0 in stage 0.0 (TID 22) in 85 ms on node4 (23/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 23.0 in stage 0.0 (TID 23) in 71 ms on node4 (24/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 25.0 in stage 0.0 (TID 25, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 26.0 in stage 0.0 (TID 26, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 24.0 in stage 0.0 (TID 24) in 79 ms on node4 (25/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 27.0 in stage 0.0 (TID 27, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 25.0 in stage 0.0 (TID 25) in 77 ms on node4 (26/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 28.0 in stage 0.0 (TID 28, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 26.0 in stage 0.0 (TID 26) in 84 ms on node4 (27/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 29.0 in stage 0.0 (TID 29, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 27.0 in stage 0.0 (TID 27) in 81 ms on node4 (28/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 30.0 in stage 0.0 (TID 30, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 28.0 in stage 0.0 (TID 28) in 70 ms on node4 (29/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 31.0 in stage 0.0 (TID 31, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 29.0 in stage 0.0 (TID 29) in 93 ms on node4 (30/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Finished task 30.0 in stage 0.0 (TID 30) in 74 ms on node4 (31/50)
16/05/17 16:18:13 INFO scheduler.TaskSetManager: Starting task 32.0 in stage 0.0 (TID 32, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 33.0 in stage 0.0 (TID 33, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 32.0 in stage 0.0 (TID 32) in 71 ms on node4 (32/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 31.0 in stage 0.0 (TID 31) in 98 ms on node4 (33/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 34.0 in stage 0.0 (TID 34, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 35.0 in stage 0.0 (TID 35, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 33.0 in stage 0.0 (TID 33) in 85 ms on node4 (34/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 36.0 in stage 0.0 (TID 36, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 34.0 in stage 0.0 (TID 34) in 93 ms on node4 (35/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 37.0 in stage 0.0 (TID 37, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 35.0 in stage 0.0 (TID 35) in 503 ms on node4 (36/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 38.0 in stage 0.0 (TID 38, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 36.0 in stage 0.0 (TID 36) in 496 ms on node4 (37/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 39.0 in stage 0.0 (TID 39, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 37.0 in stage 0.0 (TID 37) in 86 ms on node4 (38/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 40.0 in stage 0.0 (TID 40, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 38.0 in stage 0.0 (TID 38) in 68 ms on node4 (39/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 41.0 in stage 0.0 (TID 41, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 40.0 in stage 0.0 (TID 40) in 62 ms on node4 (40/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 39.0 in stage 0.0 (TID 39) in 87 ms on node4 (41/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 42.0 in stage 0.0 (TID 42, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 43.0 in stage 0.0 (TID 43, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 41.0 in stage 0.0 (TID 41) in 95 ms on node4 (42/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 44.0 in stage 0.0 (TID 44, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 42.0 in stage 0.0 (TID 42) in 110 ms on node4 (43/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 45.0 in stage 0.0 (TID 45, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 43.0 in stage 0.0 (TID 43) in 94 ms on node4 (44/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 46.0 in stage 0.0 (TID 46, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 44.0 in stage 0.0 (TID 44) in 95 ms on node4 (45/50)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Starting task 47.0 in stage 0.0 (TID 47, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:14 INFO scheduler.TaskSetManager: Finished task 45.0 in stage 0.0 (TID 45) in 90 ms on node4 (46/50)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 48.0 in stage 0.0 (TID 48, node4, PROCESS_LOCAL, 1962 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 46.0 in stage 0.0 (TID 46) in 103 ms on node4 (47/50)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 49.0 in stage 0.0 (TID 49, node4, PROCESS_LOCAL, 1929 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 47.0 in stage 0.0 (TID 47) in 93 ms on node4 (48/50)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 48.0 in stage 0.0 (TID 48) in 127 ms on node4 (49/50)
16/05/17 16:18:15 INFO scheduler.DAGScheduler: ShuffleMapStage 0 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 6.553 s
16/05/17 16:18:15 INFO scheduler.DAGScheduler: looking for newly runnable stages
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 49.0 in stage 0.0 (TID 49) in 94 ms on node4 (50/50)
16/05/17 16:18:15 INFO scheduler.DAGScheduler: running: Set()
16/05/17 16:18:15 INFO scheduler.DAGScheduler: waiting: Set(ResultStage 1)
16/05/17 16:18:15 INFO scheduler.DAGScheduler: failed: Set()
16/05/17 16:18:15 INFO cluster.YarnClusterScheduler: Removed TaskSet 0.0, whose tasks have all completed, from pool 
16/05/17 16:18:15 INFO scheduler.DAGScheduler: Missing parents for ResultStage 1: List()
16/05/17 16:18:15 INFO scheduler.DAGScheduler: Submitting ResultStage 1 (ShuffledRDD[2] at start at SparkTweetStreamingHDFSLoad.java:1743), which is now runnable
16/05/17 16:18:15 INFO storage.MemoryStore: ensureFreeSpace(2344) called with curMem=4391, maxMem=2061647216
16/05/17 16:18:15 INFO storage.MemoryStore: Block broadcast_1 stored as values in memory (estimated size 2.3 KB, free 1966.1 MB)
16/05/17 16:18:15 INFO storage.MemoryStore: ensureFreeSpace(1400) called with curMem=6735, maxMem=2061647216
16/05/17 16:18:15 INFO storage.MemoryStore: Block broadcast_1_piece0 stored as bytes in memory (estimated size 1400.0 B, free 1966.1 MB)
16/05/17 16:18:15 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on 172.16.28.195:57488 (size: 1400.0 B, free: 1966.1 MB)
16/05/17 16:18:15 INFO spark.SparkContext: Created broadcast 1 from broadcast at DAGScheduler.scala:861
16/05/17 16:18:15 INFO scheduler.DAGScheduler: Submitting 20 missing tasks from ResultStage 1 (ShuffledRDD[2] at start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 16:18:15 INFO cluster.YarnClusterScheduler: Adding task set 1.0 with 20 tasks
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 1.0 (TID 50, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 1.0 in stage 1.0 (TID 51, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on node4:58845 (size: 1400.0 B, free: 530.0 MB)
16/05/17 16:18:15 INFO storage.BlockManagerInfo: Added broadcast_1_piece0 in memory on node4:47743 (size: 1400.0 B, free: 530.0 MB)
16/05/17 16:18:15 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to node4:50089
16/05/17 16:18:15 INFO spark.MapOutputTrackerMaster: Size of output statuses for shuffle 0 is 295 bytes
16/05/17 16:18:15 INFO spark.MapOutputTrackerMasterEndpoint: Asked to send map output locations for shuffle 0 to node4:46484
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 2.0 in stage 1.0 (TID 52, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 3.0 in stage 1.0 (TID 53, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 1.0 in stage 1.0 (TID 51) in 454 ms on node4 (1/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 1.0 (TID 50) in 457 ms on node4 (2/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 4.0 in stage 1.0 (TID 54, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 2.0 in stage 1.0 (TID 52) in 69 ms on node4 (3/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 5.0 in stage 1.0 (TID 55, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 3.0 in stage 1.0 (TID 53) in 86 ms on node4 (4/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 6.0 in stage 1.0 (TID 56, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 4.0 in stage 1.0 (TID 54) in 66 ms on node4 (5/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 7.0 in stage 1.0 (TID 57, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 5.0 in stage 1.0 (TID 55) in 55 ms on node4 (6/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 8.0 in stage 1.0 (TID 58, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 6.0 in stage 1.0 (TID 56) in 77 ms on node4 (7/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 9.0 in stage 1.0 (TID 59, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 7.0 in stage 1.0 (TID 57) in 87 ms on node4 (8/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 10.0 in stage 1.0 (TID 60, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 8.0 in stage 1.0 (TID 58) in 49 ms on node4 (9/20)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Starting task 11.0 in stage 1.0 (TID 61, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:15 INFO scheduler.TaskSetManager: Finished task 9.0 in stage 1.0 (TID 59) in 58 ms on node4 (10/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 12.0 in stage 1.0 (TID 62, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 11.0 in stage 1.0 (TID 61) in 79 ms on node4 (11/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 13.0 in stage 1.0 (TID 63, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 10.0 in stage 1.0 (TID 60) in 107 ms on node4 (12/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 14.0 in stage 1.0 (TID 64, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 12.0 in stage 1.0 (TID 62) in 49 ms on node4 (13/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 15.0 in stage 1.0 (TID 65, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 13.0 in stage 1.0 (TID 63) in 64 ms on node4 (14/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 16.0 in stage 1.0 (TID 66, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 17.0 in stage 1.0 (TID 67, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 15.0 in stage 1.0 (TID 65) in 51 ms on node4 (15/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 14.0 in stage 1.0 (TID 64) in 86 ms on node4 (16/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 18.0 in stage 1.0 (TID 68, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 16.0 in stage 1.0 (TID 66) in 52 ms on node4 (17/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 19.0 in stage 1.0 (TID 69, node4, PROCESS_LOCAL, 1901 bytes)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 17.0 in stage 1.0 (TID 67) in 53 ms on node4 (18/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 19.0 in stage 1.0 (TID 69) in 40 ms on node4 (19/20)
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Finished task 18.0 in stage 1.0 (TID 68) in 67 ms on node4 (20/20)
16/05/17 16:18:16 INFO cluster.YarnClusterScheduler: Removed TaskSet 1.0, whose tasks have all completed, from pool 
16/05/17 16:18:16 INFO scheduler.DAGScheduler: ResultStage 1 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 1.010 s
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Job 0 finished: start at SparkTweetStreamingHDFSLoad.java:1743, took 8.825568 s
16/05/17 16:18:16 INFO scheduler.ReceiverTracker: Starting 1 receivers
16/05/17 16:18:16 INFO scheduler.ReceiverTracker: ReceiverTracker started
16/05/17 16:18:16 INFO dstream.ForEachDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO dstream.FilteredDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO dstream.MappedDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Initialized and validated org.apache.spark.streaming.twitter.TwitterInputDStream@55861179
16/05/17 16:18:16 INFO dstream.MappedDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.MappedDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.MappedDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@6e42c819
16/05/17 16:18:16 INFO dstream.FilteredDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.FilteredDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.FilteredDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@479cccce
16/05/17 16:18:16 INFO dstream.ForEachDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.ForEachDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.ForEachDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@667afcd2
16/05/17 16:18:16 INFO dstream.ForEachDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO dstream.FilteredDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO dstream.MappedDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: metadataCleanupDelay = -1
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO twitter.TwitterInputDStream: Initialized and validated org.apache.spark.streaming.twitter.TwitterInputDStream@55861179
16/05/17 16:18:16 INFO dstream.MappedDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.MappedDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.MappedDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.MappedDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.MappedDStream: Initialized and validated org.apache.spark.streaming.dstream.MappedDStream@39234bd
16/05/17 16:18:16 INFO dstream.FilteredDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.FilteredDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.FilteredDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.FilteredDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.FilteredDStream: Initialized and validated org.apache.spark.streaming.dstream.FilteredDStream@7b6836d6
16/05/17 16:18:16 INFO dstream.ForEachDStream: Slide time = 60000 ms
16/05/17 16:18:16 INFO dstream.ForEachDStream: Storage level = StorageLevel(false, false, false, false, 1)
16/05/17 16:18:16 INFO dstream.ForEachDStream: Checkpoint interval = null
16/05/17 16:18:16 INFO dstream.ForEachDStream: Remember duration = 60000 ms
16/05/17 16:18:16 INFO dstream.ForEachDStream: Initialized and validated org.apache.spark.streaming.dstream.ForEachDStream@5ab36fc9
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Got job 1 (start at SparkTweetStreamingHDFSLoad.java:1743) with 1 output partitions
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Final stage: ResultStage 2(start at SparkTweetStreamingHDFSLoad.java:1743)
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Parents of final stage: List()
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Missing parents: List()
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Submitting ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:556), which has no missing parents
16/05/17 16:18:16 INFO scheduler.ReceiverTracker: Receiver 0 started
16/05/17 16:18:16 INFO storage.MemoryStore: ensureFreeSpace(62448) called with curMem=8135, maxMem=2061647216
16/05/17 16:18:16 INFO storage.MemoryStore: Block broadcast_2 stored as values in memory (estimated size 61.0 KB, free 1966.1 MB)
16/05/17 16:18:16 INFO storage.MemoryStore: ensureFreeSpace(21083) called with curMem=70583, maxMem=2061647216
16/05/17 16:18:16 INFO storage.MemoryStore: Block broadcast_2_piece0 stored as bytes in memory (estimated size 20.6 KB, free 1966.1 MB)
16/05/17 16:18:16 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on 172.16.28.195:57488 (size: 20.6 KB, free: 1966.1 MB)
16/05/17 16:18:16 INFO spark.SparkContext: Created broadcast 2 from broadcast at DAGScheduler.scala:861
16/05/17 16:18:16 INFO scheduler.DAGScheduler: Submitting 1 missing tasks from ResultStage 2 (Receiver 0 ParallelCollectionRDD[3] at makeRDD at ReceiverTracker.scala:556)
16/05/17 16:18:16 INFO cluster.YarnClusterScheduler: Adding task set 2.0 with 1 tasks
16/05/17 16:18:16 INFO util.RecurringTimer: Started timer for JobGenerator at time 1463482140000
16/05/17 16:18:16 INFO scheduler.JobGenerator: Started JobGenerator at 1463482140000 ms
16/05/17 16:18:16 INFO scheduler.JobScheduler: Started JobScheduler
16/05/17 16:18:16 INFO streaming.StreamingContext: StreamingContext started
16/05/17 16:18:16 INFO scheduler.TaskSetManager: Starting task 0.0 in stage 2.0 (TID 70, node4, NODE_LOCAL, 3094 bytes)
16/05/17 16:18:17 INFO impl.StdSchedulerFactory: Using default implementation for ThreadExecutor
16/05/17 16:18:17 INFO simpl.SimpleThreadPool: Job execution threads will use class loader of thread: Driver
16/05/17 16:18:17 INFO storage.BlockManagerInfo: Added broadcast_2_piece0 in memory on node4:58845 (size: 20.6 KB, free: 530.0 MB)
16/05/17 16:18:17 INFO core.SchedulerSignalerImpl: Initialized Scheduler Signaller of type: class org.quartz.core.SchedulerSignalerImpl
16/05/17 16:18:17 INFO core.QuartzScheduler: Quartz Scheduler v.1.8.6 created.
16/05/17 16:18:17 INFO simpl.RAMJobStore: RAMJobStore initialized.
16/05/17 16:18:17 INFO core.QuartzScheduler: Scheduler meta-data: Quartz Scheduler (v1.8.6) 'DefaultQuartzScheduler' with instanceId 'NON_CLUSTERED'
  Scheduler class: 'org.quartz.core.QuartzScheduler' - running locally.
  NOT STARTED.
  Currently in standby mode.
  Number of jobs executed: 0
  Using thread pool 'org.quartz.simpl.SimpleThreadPool' - with 10 threads.
  Using job-store 'org.quartz.simpl.RAMJobStore' - which does not support persistence. and is not clustered.

16/05/17 16:18:17 INFO impl.StdSchedulerFactory: Quartz scheduler 'DefaultQuartzScheduler' initialized from default resource file in Quartz package: 'quartz.properties'
16/05/17 16:18:17 INFO impl.StdSchedulerFactory: Quartz scheduler version: 1.8.6
16/05/17 16:18:17 INFO core.QuartzScheduler: Scheduler DefaultQuartzScheduler_$_NON_CLUSTERED started.
16/05/17 16:18:17 INFO spark.SparkTweetStreamingHDFSLoad: END {}TwitterTweets
16/05/17 16:18:17 INFO yarn.ApplicationMaster: Final app status: SUCCEEDED, exitCode: 0
16/05/17 16:18:17 INFO streaming.StreamingContext: Invoking stop(stopGracefully=false) from shutdown hook
16/05/17 16:18:17 INFO scheduler.ReceiverTracker: Sent stop signal to all 1 receivers
16/05/17 16:18:17 INFO scheduler.TaskSetManager: Finished task 0.0 in stage 2.0 (TID 70) in 718 ms on node4 (1/1)
16/05/17 16:18:17 INFO scheduler.DAGScheduler: ResultStage 2 (start at SparkTweetStreamingHDFSLoad.java:1743) finished in 0.717 s
16/05/17 16:18:17 INFO cluster.YarnClusterScheduler: Removed TaskSet 2.0, whose tasks have all completed, from pool 
16/05/17 16:18:17 INFO scheduler.ReceiverTracker: All of the receivers have deregistered successfully
16/05/17 16:18:17 INFO scheduler.ReceiverTracker: ReceiverTracker stopped
16/05/17 16:18:17 INFO scheduler.JobGenerator: Stopping JobGenerator immediately
16/05/17 16:18:17 INFO util.RecurringTimer: Stopped timer for JobGenerator after time -1
16/05/17 16:18:17 INFO scheduler.JobGenerator: Stopped JobGenerator
16/05/17 16:18:17 INFO scheduler.JobScheduler: Stopped JobScheduler
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/batch,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static/streaming,null}
16/05/17 16:18:17 INFO streaming.StreamingContext: StreamingContext stopped successfully
16/05/17 16:18:17 INFO spark.SparkContext: Invoking stop() from shutdown hook
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/batch/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/streaming/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/metrics/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/kill,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/api,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/static,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/threadDump,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/executors,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/environment,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/rdd,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/storage,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/pool,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/stage,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/stages,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/job,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs/json,null}
16/05/17 16:18:17 INFO handler.ContextHandler: stopped o.s.j.s.ServletContextHandler{/jobs,null}
16/05/17 16:18:17 INFO ui.SparkUI: Stopped Spark web UI at http://172.16.28.195:59320
16/05/17 16:18:17 INFO scheduler.DAGScheduler: Stopping DAGScheduler
16/05/17 16:18:17 INFO cluster.YarnClusterSchedulerBackend: Shutting down all executors
16/05/17 16:18:17 INFO cluster.YarnClusterSchedulerBackend: Asking each executor to shut down
16/05/17 16:18:17 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node4:50089
16/05/17 16:18:17 INFO yarn.ApplicationMaster$AMEndpoint: Driver terminated or disconnected! Shutting down. node4:46484
16/05/17 16:18:18 INFO spark.MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/05/17 16:18:18 INFO storage.MemoryStore: MemoryStore cleared
16/05/17 16:18:18 INFO storage.BlockManager: BlockManager stopped
16/05/17 16:18:18 INFO storage.BlockManagerMaster: BlockManagerMaster stopped
16/05/17 16:18:18 INFO scheduler.OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/05/17 16:18:18 INFO spark.SparkContext: Successfully stopped SparkContext
16/05/17 16:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Shutting down remote daemon.
16/05/17 16:18:18 INFO yarn.ApplicationMaster: Unregistering ApplicationMaster with SUCCEEDED
16/05/17 16:18:18 INFO remote.RemoteActorRefProvider$RemotingTerminator: Remote daemon shut down; proceeding with flushing remote transports.
16/05/17 16:18:18 INFO impl.AMRMClientImpl: Waiting for application to be successfully unregistered.
16/05/17 16:18:18 INFO yarn.ApplicationMaster: Deleting staging directory .sparkStaging/application_1463479181441_0003
16/05/17 16:18:19 INFO util.ShutdownHookManager: Shutdown hook called
16/05/17 16:18:19 INFO util.ShutdownHookManager: Deleting directory /tmp/hadoop-hadoop/nm-local-dir/usercache/hadoop/appcache/application_1463479181441_0003/spark-5b36342a-6212-4cea-80da-b1961cab161c
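
A note on the tail of this log: "END {}TwitterTweets" is followed immediately by "Final app status: SUCCEEDED" and "Invoking stop(stopGracefully=false) from shutdown hook", so the StreamingContext is torn down right after it starts. That pattern usually means the driver's main method returned without blocking on the streaming context. Below is a minimal sketch of the usual driver structure; the class name, batch interval, and variable names are illustrative, not taken from this thread:

import org.apache.spark.SparkConf;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaStreamingContext;

public class StreamingDriverSketch {
    public static void main(String[] args) {
        // Hypothetical skeleton of a Spark 1.5.x streaming driver.
        SparkConf conf = new SparkConf().setAppName("TwitterTweets");
        JavaStreamingContext jssc =
                new JavaStreamingContext(conf, Durations.seconds(10));

        // ... create the Twitter DStream and its output operations here ...

        jssc.start();            // starts the receivers and the JobScheduler
        jssc.awaitTermination(); // blocks the driver; without this call main()
                                 // returns, the shutdown hook fires, and YARN
                                 // reports SUCCEEDED immediately, as above
    }
}

If the driver already calls awaitTermination(), the other common cause of this shutdown sequence is an explicit stop() on a code path that runs before any batch completes.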
 
