spark-user mailing list archives

From Akshay Bhardwaj <akshay.bhardwaj1...@gmail.com>
Subject Re: Running spark with javaagent configuration
Date Thu, 16 May 2019 05:41:19 GMT
Hi Anton,

Do you have the option of storing the JAR file on HDFS, which can be
accessed via spark in your cluster?
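
As a sketch, combining that with Oleg's suggestion below, the full spark-submit could look something like this. The HDFS path, jar name, application class, and deploy mode here are all hypothetical, not taken from this thread:

```shell
# Distribute the agent jar via spark.jars so it is copied into each
# container's working directory, then reference it by bare filename in
# -javaagent (no host-specific absolute path needed in cluster mode).
# All paths/names below are placeholders - adjust for your cluster.
spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --conf spark.jars=hdfs:///libs/newrelic-agent.jar \
  --conf "spark.driver.extraJavaOptions=-javaagent:newrelic-agent.jar" \
  --conf "spark.executor.extraJavaOptions=-javaagent:newrelic-agent.jar" \
  --class com.example.MyApp \
  my-app.jar
```

Note the New Relic agent also expects its newrelic.yml configuration file next to the jar; that file could presumably be shipped the same way (e.g. via --files).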

Akshay Bhardwaj
+91-97111-33849


On Thu, May 16, 2019 at 12:04 AM Oleg Mazurov <omazurov@splicemachine.com>
wrote:

> You can see what Uber's jvm-profiler does at
> https://github.com/uber-common/jvm-profiler :
>
> --conf spark.jars=hdfs://hdfs_url/lib/jvm-profiler-1.0.0.jar
> --conf spark.executor.extraJavaOptions=-javaagent:jvm-profiler-1.0.0.jar
>
>
>     -- Oleg
>
> On Wed, May 15, 2019 at 6:28 AM Anton Puzanov <antonpuzdevelop@gmail.com>
> wrote:
>
>> Hi everyone,
>>
>> I want to run my spark application with a javaagent; specifically, I want
>> to use New Relic with my application.
>>
>> When I run spark-submit I must pass --conf
>> "spark.driver.extraJavaOptions=-javaagent:<full path to newrelic jar>"
>>
>> My problem is that I can't specify the full path as I run in cluster mode
>> and I don't know the exact host which will serve as the driver.
>> *Important:* I know I can upload the jar to every node, but it seems
>> like a fragile solution as machines will be added and removed later.
>>
>> I have tried specifying the jar with --files but couldn't make it work,
>> as I didn't know where exactly to point the javaagent.
>>
>> Any suggestions on the best practice for handling this kind of problem?
>> What can I do?
>>
>> Thanks a lot,
>> Anton
>>
>
