Hi Sam,

Have a look at Sematext's SPM for your Spark monitoring needs. If the problem is CPU, IO, network, etc., as Akhil mentioned, you'll see that in SPM, too.
As for the number of jobs running, you can see a chart with that at http://sematext.com/spm/integrations/spark-monitoring.html
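If you'd rather pull that number programmatically than read it off a chart, newer Spark versions (1.4+) expose a monitoring REST API under /api/v1 on the driver UI. A minimal sketch in Scala, assuming the UI is reachable on localhost:4040 (the regex scraping is only there to keep the sketch dependency-free; a real client would use a JSON parser):

    import scala.io.Source

    object RunningJobs extends App {
      val ui = "http://localhost:4040"
      // The REST API lists the applications served by this UI; take the first app id.
      val apps = Source.fromURL(s"$ui/api/v1/applications").mkString
      val appId = """"id"\s*:\s*"([^"]+)"""".r.findFirstMatchIn(apps).map(_.group(1)).get
      // ?status=running filters the job list down to jobs active right now.
      val jobs = Source.fromURL(s"$ui/api/v1/applications/$appId/jobs?status=running").mkString
      // Each job object carries exactly one "jobId" field, so counting those counts jobs.
      println(""""jobId"""".r.findAllIn(jobs).size + " job(s) currently running")
    }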

Otis
--
Monitoring * Alerting * Anomaly Detection * Centralized Log Management
Solr & Elasticsearch Support * http://sematext.com/


On Sun, Jun 7, 2015 at 6:37 AM, SamyaMaiti <samya.maiti2012@gmail.com> wrote:
Hi All,

I have a Spark SQL application that fetches data from Hive; on top of it I have
an Akka layer to run multiple queries in parallel.
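Roughly, the layer looks like this (a simplified sketch; the actor, table names, and queries are illustrative, not my actual code):

    import akka.actor.{Actor, ActorSystem, Props}
    import org.apache.spark.{SparkConf, SparkContext}
    import org.apache.spark.sql.hive.HiveContext

    // Each actor runs one Hive query; all actors share the single HiveContext.
    class QueryActor(hive: HiveContext) extends Actor {
      def receive = {
        case query: String =>
          // Every hive.sql(...).collect() call triggers one or more Spark jobs.
          println(s"$query -> ${hive.sql(query).collect().length} rows")
      }
    }

    object QueryLayer extends App {
      val hive = new HiveContext(new SparkContext(new SparkConf().setAppName("query-layer")))
      val system = ActorSystem("queries")
      // Several requests in flight at once, one actor per request.
      Seq("SELECT count(*) FROM t1", "SELECT count(*) FROM t2")
        .foreach(q => system.actorOf(Props(new QueryActor(hive))) ! q)
    }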

*Please suggest a mechanism to figure out the number of Spark jobs running in
the cluster at a given instant in time.*
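One in-process option would be to register a SparkListener on the shared SparkContext and count jobs that have started but not yet ended; a rough, untested sketch:

    import java.util.concurrent.atomic.AtomicInteger
    import org.apache.spark.scheduler.{SparkListener, SparkListenerJobEnd, SparkListenerJobStart}

    // Counts the jobs currently running on the context this listener is attached to.
    class JobCounter extends SparkListener {
      val running = new AtomicInteger(0)
      override def onJobStart(jobStart: SparkListenerJobStart): Unit = { running.incrementAndGet() }
      override def onJobEnd(jobEnd: SparkListenerJobEnd): Unit = { running.decrementAndGet() }
    }

    // Register once, then read the counter at any instant:
    //   val counter = new JobCounter
    //   sc.addSparkListener(counter)
    //   println(s"jobs running now: ${counter.running.get}")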

I need the above because I see the average response time increasing as the
number of requests grows, in spite of adding more cores to the cluster. I
suspect the bottleneck is somewhere else.

Regards,
Sam



