spark-user mailing list archives

From Mark Bonetti <>
Subject Which metrics would be best to alert on?
Date Thu, 05 Apr 2018 14:07:51 GMT
I'm building a monitoring system for Apache Spark and want to ship default
alerts (threshold- or anomaly-based) on the 2-3 key metrics that everyone
who runs Spark typically wants to watch, but I don't yet have
production-grade experience with Spark.
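For context, I'm collecting metrics through Spark's built-in metrics system, roughly like the sink configuration below in conf/metrics.properties (a sketch; the Graphite host and port are placeholders for whatever backend you use):

```properties
# Ship all instances' metrics to a Graphite-compatible backend every 10s.
*.sink.graphite.class=org.apache.spark.metrics.sink.GraphiteSink
*.sink.graphite.host=graphite.example.com
*.sink.graphite.port=2003
*.sink.graphite.period=10
*.sink.graphite.unit=seconds

# Also expose JVM source metrics (heap, GC) for master and worker.
master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
```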

Importantly, the alert rules have to be generally useful, so they can't be
on metrics whose values vary wildly with the size of the deployment.

In other words, which metrics would be the most significant indicators that
something has gone wrong with your Spark:
 - master
 - worker
 - driver
 - executor
 - streaming

I thought this would be the best place to find experienced Spark users who
would find this question trivial to answer.

Thanks very much,
Mark Scott
