spark-user mailing list archives

From Yana Kadiyska <yana.kadiy...@gmail.com>
Subject Re: [SQL] HiveThriftServer2 failure detection
Date Wed, 19 Nov 2014 19:18:37 GMT
https://issues.apache.org/jira/browse/SPARK-4497

On Wed, Nov 19, 2014 at 1:48 PM, Michael Armbrust <michael@databricks.com>
wrote:

> This is not by design.  Can you please file a JIRA?
>
> On Wed, Nov 19, 2014 at 9:19 AM, Yana Kadiyska <yana.kadiyska@gmail.com>
> wrote:
>
>> Hi all, I am running HiveThriftServer2 and noticed that the process stays
>> up even though there is no driver connected to the Spark master.
>>
>> I started the server via sbin/start-thriftserver and my namenodes are
>> currently not operational. I can see from the log that there was an error
>> on startup:
>>
>> 14/11/19 16:32:58 ERROR HiveThriftServer2: Error starting
>> HiveThriftServer2
>>
>> and that the driver shut down as expected:
>>
>> 14/11/19 16:32:59 INFO SparkUI: Stopped Spark web UI at http://myip:4040
>> 14/11/19 16:32:59 INFO DAGScheduler: Stopping DAGScheduler
>> 14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Shutting down all executors
>> 14/11/19 16:32:59 INFO SparkDeploySchedulerBackend: Asking each executor to shut down
>> 14/11/19 16:33:00 INFO MapOutputTrackerMasterActor: MapOutputTrackerActor stopped!
>> 14/11/19 16:33:00 INFO MemoryStore: MemoryStore cleared
>> 14/11/19 16:33:00 INFO BlockManager: BlockManager stopped
>> 14/11/19 16:33:00 INFO BlockManagerMaster: BlockManagerMaster stopped
>> 14/11/19 16:33:00 INFO SparkContext: Successfully stopped SparkContext
>>
>> However, when I try to run start-thriftserver.sh again I see an error
>> message that the process is already running and indeed there is a process
>> with that PID:
>>
>> root     32334     1  0 16:32 ?        00:00:00 /usr/local/bin/java org.apache.spark.deploy.SparkSubmitDriverBootstrapper
>> --class org.apache.spark.sql.hive.thriftserver.HiveThriftServer2 --master spark://myip:7077
>> --conf -spark.executor.extraJavaOptions=-verbose:gc -XX:-PrintGCDetails -XX:+PrintGCTimeStamps
>> spark-internal --hiveconf hive.root.logger=INFO,console
>>
>> Is this a bug or a design decision? I am upgrading from Shark, where we had
>> scripts that monitor the driver and restart it on failure. Here it seems that
>> we would not be able to restart even though the driver died.
>>
>
>
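For anyone scripting around this until the JIRA above is resolved: since the pid file can outlive a dead driver, a monitor has to check the recorded process itself rather than trust the pid file. A minimal sketch (the pid-file path in the example comment is an assumption, not a Spark default):

```shell
#!/bin/sh
# Hedged watchdog sketch: a surviving pid file is not proof the driver is
# healthy, so verify the recorded process before deciding not to restart.

# proc_alive PID -> exit 0 iff a process with that PID currently exists
proc_alive() {
  kill -0 "$1" 2>/dev/null
}

# restart_needed PIDFILE -> exit 0 iff the pid file is missing/empty or
# the process it records is gone
restart_needed() {
  pid=$(cat "$1" 2>/dev/null)
  [ -z "$pid" ] || ! proc_alive "$pid"
}

# Example usage (pid-file path is hypothetical):
#   if restart_needed /tmp/spark-thriftserver.pid; then
#     sbin/start-thriftserver.sh --master spark://myip:7077
#   fi
```

Note that `kill -0` only probes for existence; a hung-but-alive JVM would still need a deeper check, such as probing the thrift server's JDBC port.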
