spark-user mailing list archives

From Sandy Ryza <sandy.r...@cloudera.com>
Subject Re: How to view log on yarn-client mode?
Date Fri, 21 Nov 2014 01:51:07 GMT
I agree that using "yarn logs" is cumbersome.  We're working to improve
this in future releases.

On Thu, Nov 20, 2014 at 4:31 PM, innowireless TaeYun Kim <
taeyun.kim@innowireless.co.kr> wrote:

> Thank you.
>
>
>
> And setting yarn.log-aggregation-enable to true in yarn-site.xml was the
> key.
>
> It’s somewhat inconvenient that I must use ‘yarn logs’ rather than the
> YARN ResourceManager web UI after the app has completed (that is, it seems
> that the history server is not usable for Spark jobs), but it’s OK.
>
>
>
> *From:* Sandy Ryza [mailto:sandy.ryza@cloudera.com]
> *Sent:* Thursday, November 20, 2014 2:44 PM
> *To:* innowireless TaeYun Kim
> *Cc:* user
> *Subject:* Re: How to view log on yarn-client mode?
>
>
>
> While the app is running, you can find logs from the YARN web UI by
> navigating to containers through the "Nodes" link.
>
>
>
> After the app has completed, you can use the YARN logs command:
>
> yarn logs -applicationId <your app ID>
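
The invocation above can be scripted once the application ID is known; a minimal sketch, assuming log aggregation is enabled and the app has finished (the application ID used here is the example from the log excerpt below, so substitute your own):

```shell
#!/bin/sh
# Build the 'yarn logs' command for a given application ID.
# APP_ID below is the example ID from this thread's log excerpt.
APP_ID="application_1416441180745_0003"
CMD="yarn logs -applicationId $APP_ID"
echo "$CMD"
# To actually fetch the aggregated container logs, run the printed
# command on a machine with the Hadoop client configured, optionally
# piping through 'less' or grep-ing for a specific container ID.
```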
>
>
>
> -Sandy
>
>
>
> On Wed, Nov 19, 2014 at 6:01 PM, innowireless TaeYun Kim <
> taeyun.kim@innowireless.co.kr> wrote:
>
> Hi,
>
>
>
> How can I view logs in yarn-client mode?
>
> When I insert the following line in the mapToPair function, for example:
>
>
>
> System.out.println("TEST TEST");
>
>
>
> In local mode, it is displayed on the console.
>
> But in yarn-client mode, it does not appear anywhere.
>
> When I use yarn resource manager web UI, the size of ‘stdout’ file is 0.
>
> And the size of the ‘stderr’ file is non-zero, but it has only the
> following lines. Maybe they are from the executor launcher, not from the
> executor process itself.
>
> (I’m using Spark 1.0.0)
>
>
>
> SLF4J: Class path contains multiple SLF4J bindings.
>
> SLF4J: Found binding in
> [jar:file:/grid/3/hadoop/yarn/local/filecache/10/spark-assembly-1.0.0-hadoop2.4.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: Found binding in
> [jar:file:/usr/lib/hadoop/lib/slf4j-log4j12-1.7.5.jar!/org/slf4j/impl/StaticLoggerBinder.class]
>
> SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an
> explanation.
>
> SLF4J: Actual binding is of type [org.slf4j.impl.Log4jLoggerFactory]
>
> log4j:WARN No appenders could be found for logger
> (org.apache.hadoop.util.Shell).
>
> log4j:WARN Please initialize the log4j system properly.
>
> log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for
> more info.
>
> 14/11/20 10:42:29 INFO YarnSparkHadoopUtil: Using Spark's default log4j
> profile: org/apache/spark/log4j-defaults.properties
>
> 14/11/20 10:42:29 INFO SecurityManager: Changing view acls to:
> yarn,xcapvuze
>
> 14/11/20 10:42:29 INFO SecurityManager: SecurityManager: authentication
> disabled; ui acls disabled; users with view permissions: Set(yarn, xcapvuze)
>
> 14/11/20 10:42:29 INFO Slf4jLogger: Slf4jLogger started
>
> 14/11/20 10:42:29 INFO Remoting: Starting remoting
>
> 14/11/20 10:42:29 INFO Remoting: Remoting started; listening on addresses
> :[akka.tcp://sparkYarnAM@cluster04:37065]
>
> 14/11/20 10:42:29 INFO Remoting: Remoting now listens on addresses:
> [akka.tcp://sparkYarnAM@cluster04:37065]
>
> 14/11/20 10:42:29 INFO RMProxy: Connecting to ResourceManager at cluster01/
> 10.254.0.11:8030
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: ApplicationAttemptId:
> appattempt_1416441180745_0003_000001
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: Registering the ApplicationMaster
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: Waiting for Spark driver to be
> reachable.
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: Driver now available:
> INNO-C-358:50050
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: Listen to driver:
> akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler
>
> 14/11/20 10:42:29 INFO ExecutorLauncher: Allocating 3 executors.
>
> 14/11/20 10:42:29 INFO YarnAllocationHandler: Will Allocate 3 executor
> containers, each with 4480 memory
>
> 14/11/20 10:42:29 INFO YarnAllocationHandler: Container request (host:
> Any, priority: 1, capability: <memory:4480, vCores:4>
>
> 14/11/20 10:42:29 INFO YarnAllocationHandler: Container request (host:
> Any, priority: 1, capability: <memory:4480, vCores:4>
>
> 14/11/20 10:42:29 INFO YarnAllocationHandler: Container request (host:
> Any, priority: 1, capability: <memory:4480, vCores:4>
>
> 14/11/20 10:42:30 INFO AMRMClientImpl: Received new token for :
> cluster03:45454
>
> 14/11/20 10:42:30 INFO AMRMClientImpl: Received new token for :
> cluster04:45454
>
> 14/11/20 10:42:30 INFO AMRMClientImpl: Received new token for :
> cluster02:45454
>
> 14/11/20 10:42:30 INFO RackResolver: Resolved cluster03 to /default-rack
>
> 14/11/20 10:42:30 INFO RackResolver: Resolved cluster02 to /default-rack
>
> 14/11/20 10:42:30 INFO RackResolver: Resolved cluster04 to /default-rack
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching container
> container_1416441180745_0003_01_000002 for on host cluster03
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching ExecutorRunnable.
> driverUrl: akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler,
> executorHostname: cluster03
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching container
> container_1416441180745_0003_01_000004 for on host cluster02
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Starting Executor Container
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching ExecutorRunnable.
> driverUrl: akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler,
> executorHostname: cluster02
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Starting Executor Container
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching container
> container_1416441180745_0003_01_000003 for on host cluster04
>
> 14/11/20 10:42:30 INFO YarnAllocationHandler: Launching ExecutorRunnable.
> driverUrl: akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler,
> executorHostname: cluster04
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Starting Executor Container
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy:
> yarn.client.max-nodemanagers-proxies : 500
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up ContainerLaunchContext
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up ContainerLaunchContext
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up ContainerLaunchContext
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Preparing Local resources
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Preparing Local resources
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Preparing Local resources
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Prepared Local resources
> Map(__spark__.jar -> resource { scheme: "hdfs" host: "cluster01" port: -1
> file: "/apps/spark/spark-assembly-1.0.0-hadoop2.4.0.jar" } size: 124439678
> timestamp: 1406511901745 type: FILE visibility: PUBLIC)
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Prepared Local resources
> Map(__spark__.jar -> resource { scheme: "hdfs" host: "cluster01" port: -1
> file: "/apps/spark/spark-assembly-1.0.0-hadoop2.4.0.jar" } size: 124439678
> timestamp: 1406511901745 type: FILE visibility: PUBLIC)
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Prepared Local resources
> Map(__spark__.jar -> resource { scheme: "hdfs" host: "cluster01" port: -1
> file: "/apps/spark/spark-assembly-1.0.0-hadoop2.4.0.jar" } size: 124439678
> timestamp: 1406511901745 type: FILE visibility: PUBLIC)
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up executor with
> commands: List({{JAVA_HOME}}/bin/java, -server,
> -XX:OnOutOfMemoryError='kill %p', -Xms4096m -Xmx4096m ,
> -Drhino.opt.level=9, -Djava.io.tmpdir={{PWD}}/tmp,
> -Dlog4j.configuration=log4j-spark-container.properties,
> org.apache.spark.executor.CoarseGrainedExecutorBackend,
> akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler, 2,
> cluster02, 4, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up executor with
> commands: List({{JAVA_HOME}}/bin/java, -server,
> -XX:OnOutOfMemoryError='kill %p', -Xms4096m -Xmx4096m ,
> -Drhino.opt.level=9, -Djava.io.tmpdir={{PWD}}/tmp,
> -Dlog4j.configuration=log4j-spark-container.properties,
> org.apache.spark.executor.CoarseGrainedExecutorBackend,
> akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler, 1,
> cluster03, 4, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
>
> 14/11/20 10:42:30 INFO ExecutorRunnable: Setting up executor with
> commands: List({{JAVA_HOME}}/bin/java, -server,
> -XX:OnOutOfMemoryError='kill %p', -Xms4096m -Xmx4096m ,
> -Drhino.opt.level=9, -Djava.io.tmpdir={{PWD}}/tmp,
> -Dlog4j.configuration=log4j-spark-container.properties,
> org.apache.spark.executor.CoarseGrainedExecutorBackend,
> akka.tcp://spark@INNO-C-358:50050/user/CoarseGrainedScheduler, 3,
> cluster04, 4, 1>, <LOG_DIR>/stdout, 2>, <LOG_DIR>/stderr)
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy: Opening proxy :
> cluster02:45454
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy: Opening proxy :
> cluster03:45454
>
> 14/11/20 10:42:30 INFO ContainerManagementProtocolProxy: Opening proxy :
> cluster04:45454
>
> 14/11/20 10:42:30 INFO ExecutorLauncher: All executors have launched.
>
> 14/11/20 10:42:30 INFO ExecutorLauncher: Started progress reporter thread
> - sleep time : 5000
>
> 14/11/20 10:43:07 INFO ExecutorLauncher: Driver terminated or
> disconnected! Shutting down. Disassociated [akka.tcp://sparkYarnAM@cluster04:37065]
> -> [akka.tcp://spark@INNO-C-358:50050]
>
> 14/11/20 10:43:10 INFO ExecutorLauncher: finish ApplicationMaster with
> SUCCEEDED
>
> 14/11/20 10:43:10 INFO AMRMClientImpl: Waiting for application to be
> successfully unregistered.
>
> 14/11/20 10:43:10 INFO ExecutorLauncher: Exited
>
>
>
> How can I view the log?
>
>
>
> Thanks.
>
>
>
