spark-user mailing list archives

From wenxing zheng <wenxing.zh...@gmail.com>
Subject Re: How to export the Spark SQL jobs from the HiveThriftServer2
Date Wed, 06 Dec 2017 09:51:32 GMT
The documentation's statement that "[app-id] will actually be [base-app-id]/[attempt-id], where [base-app-id] is the YARN application ID" does not appear to be correct: after I changed [app-id] from [base-app-id]/[attempt-id] to just [base-app-id], it worked.

Maybe we need to fix the documentation?
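To illustrate the difference, here is a minimal sketch of how the two URL forms come out; the host, port, and application ID are hypothetical, and the helper function is mine, not part of any Spark API:

```python
# Build the monitoring REST /jobs endpoint from a YARN application ID.
# Host/port and the application ID below are made up for illustration.
def jobs_endpoint(base_url, app_id, attempt_id=None):
    """Return the /jobs endpoint URL; the attempt ID segment is appended
    only when one is given, matching the [base-app-id]/[attempt-id] form
    described in the monitoring docs."""
    path = f"{base_url}/api/v1/applications/{app_id}"
    if attempt_id is not None:
        path += f"/{attempt_id}"
    return path + "/jobs"

# Using just the base YARN application ID, the form reported to work:
url = jobs_endpoint("http://localhost:18080", "application_1512345678901_0001")
print(url)

# With the attempt ID appended, the form the documentation describes:
url2 = jobs_endpoint("http://localhost:18080", "application_1512345678901_0001", "1")
print(url2)
```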

From the information on the Spark jobs or stages, I can't see any statistics on memory usage. I'd appreciate any hints.
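On the memory question, one place the REST API does expose memory figures is the executors endpoint (*/applications/[app-id]/executors*), whose per-executor summaries include `memoryUsed` and `maxMemory` fields. A small sketch of summarizing them, using made-up sample JSON in place of a live call:

```python
import json

# Sample of the JSON shape returned by /api/v1/applications/[app-id]/executors;
# the IDs and byte values here are invented for illustration.
sample = json.loads("""
[
  {"id": "driver", "memoryUsed": 1048576,  "maxMemory": 4294967296},
  {"id": "1",      "memoryUsed": 52428800, "maxMemory": 2147483648}
]
""")

# Sum storage memory in use and the configured maximum across executors.
total_used = sum(e["memoryUsed"] for e in sample)
total_max = sum(e["maxMemory"] for e in sample)
print(f"storage memory used: {total_used} / {total_max} bytes")
```

In a real setup the same summation would be applied to the response body fetched from the executors endpoint instead of the inline sample.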

On Wed, Dec 6, 2017 at 2:08 PM, wenxing zheng <wenxing.zheng@gmail.com>
wrote:

> Dear all,
>
> I have a HiveThriftServer2 server running, and most of our Spark SQL
> queries go there for execution. From the YARN GUI, I can see the
> application ID and the attempt ID of the thrift server. But with the REST
> API described on the page (https://spark.apache.org/docs/latest/monitoring.html#rest-api),
> I still can't get the jobs for a given application from the endpoint:
> */applications/[app-id]/jobs*
>
> Can anyone kindly advise how to dump the Spark SQL jobs for audit? Just
> like the one for MapReduce jobs (https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/WebServicesIntro.html).
>
> Thanks again,
> Wenxing
>
