Hi Muhammad,
You should give people a bit more time to answer/help you (for free). :)
I don't have a direct answer for you, but you can look at SPM for Spark
<https://sematext.com/blog/2014/10/07/apache-spark-monitoring/>, which has
all the instructions for getting all Spark metrics (executors, etc.) into
SPM. It doesn't involve the sink.csv stuff.
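If you do want to stay with the CsvSink route, here is a guess (an assumption
on my part, not verified against your setup): every JVM reads its own copy of
the metrics configuration. The driver and executors can be pointed at the file
explicitly, but the standalone master and worker daemons read
$SPARK_HOME/conf/metrics.properties on their own machines, and the
*.sink.csv.directory has to exist on every node. Something like this for the
application side, assuming your file lives at /root/metrics.properties:

spark-submit \
  --files /root/metrics.properties \
  --conf spark.metrics.conf=metrics.properties \
  ...

plus a copy of the file under $SPARK_HOME/conf/ on each master/worker machine
(restarting the daemons so they pick it up).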
Otis
--
Monitoring - Log Management - Alerting - Anomaly Detection
Solr & Elasticsearch Consulting Support Training - http://sematext.com/
On Tue, Aug 16, 2016 at 11:21 AM, Muhammad Haris <
muhammad.haris.makhtar@gmail.com> wrote:
> Still waiting for a response; any clues/suggestions?
>
>
> On Tue, Aug 16, 2016 at 4:48 PM, Muhammad Haris <
> muhammad.haris.makhtar@gmail.com> wrote:
>
>> Hi,
>> I have been trying to collect driver, master, worker, and executor
>> metrics using Spark 2.0 in standalone mode. Here is what my metrics
>> configuration file looks like:
>>
>> *.sink.csv.class=org.apache.spark.metrics.sink.CsvSink
>> *.sink.csv.period=1
>> *.sink.csv.unit=seconds
>> *.sink.csv.directory=/root/metrics/
>> executor.source.jvm.class=org.apache.spark.metrics.source.JvmSource
>> master.source.jvm.class=org.apache.spark.metrics.source.JvmSource
>> worker.source.jvm.class=org.apache.spark.metrics.source.JvmSource
>> driver.source.jvm.class=org.apache.spark.metrics.source.JvmSource
>>
>> Once the application is complete, I can only see the driver's metrics; I
>> have checked the directories on all the worker nodes as well.
>> Could anybody please help me figure out what I am doing wrong here?
>>
>> Regards
>