spark-user mailing list archives

From Harut Martirosyan <harut.martiros...@gmail.com>
Subject Standalone Scheduler VS YARN Performance
Date Tue, 24 Mar 2015 12:21:50 GMT
What performance overhead is caused by YARN, or which configurations
change when the app is run through YARN?

The following example:

sqlContext.sql("""
  SELECT dayStamp(date), count(DISTINCT deviceId) AS c
  FROM full
  GROUP BY dayStamp(date)
  ORDER BY c DESC
  LIMIT 10
""").collect()

runs in the shell when we use the standalone scheduler:
./spark-shell --master spark://sparkmaster:7077 --executor-memory 20g
--executor-cores 10 --driver-memory 10g --num-executors 8

but fails due to losing an executor when we run it through YARN:
./spark-shell --master yarn-client --executor-memory 20g --executor-cores
10 --driver-memory 10g --num-executors 8

There are no informative logs, just messages that executors are being lost
and connection-refused errors (apparently caused by the executor failures).
The cluster is the same: 8 nodes with 64GB RAM each.
The data format is Parquet.
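One possible cause (an assumption on my part, not confirmed by the logs above): unlike the standalone scheduler, YARN enforces a hard memory limit per container, and the NodeManager kills any executor whose total process size exceeds `--executor-memory` plus `spark.yarn.executor.memoryOverhead`. A hedged sketch of what a retry with more overhead headroom might look like; the 4096 MB value is illustrative, not a recommendation:

```shell
# Sketch only. In Spark 1.x the default overhead is
# max(384 MB, 7% of executor memory), so 20g requests a ~21.4 GB container.
# If yarn.nodemanager.resource.memory-mb is lower than that, or off-heap
# allocations (e.g. Parquet read buffers) exceed the overhead, YARN kills
# the executor, which can surface as "executor lost" / connection refused.
./spark-shell --master yarn-client \
  --executor-memory 16g \
  --executor-cores 10 \
  --driver-memory 10g \
  --num-executors 8 \
  --conf spark.yarn.executor.memoryOverhead=4096
```

The NodeManager log on the affected node would show a "running beyond physical memory limits" message if this is indeed the cause.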

-- 
RGRDZ Harut
