spark-user mailing list archives

From Dylan Wan <dylan....@gmail.com>
Subject Re: Spark 2.0.0 and Hive metastore
Date Wed, 06 Sep 2017 03:09:22 GMT
You can put the hive-site.xml in $SPARK_HOME/conf directory.

This property can control where the data are located.

<property>
  <name>spark.sql.warehouse.dir</name>
  <value>/home/myuser/spark-2.2.0/spark-warehouse</value>
  <description>location of the warehouse directory</description>
</property>
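Equivalently (a sketch, not the only way to do it), the same property can be set in $SPARK_HOME/conf/spark-defaults.conf instead of hive-site.xml; the path shown here is an example and should be replaced with your own directory:

```
# spark-defaults.conf — example values, adjust paths for your environment
spark.sql.warehouse.dir    /home/myuser/spark-warehouse

# The embedded Derby metastore_db is created in the process working
# directory by default; its location can be moved via the JDBC URL:
spark.hadoop.javax.jdo.option.ConnectionURL  jdbc:derby:;databaseName=/home/myuser/metastore_db;create=true
```

The same property can also be set programmatically before the first SparkSession is created, e.g. with SparkSession.builder().config("spark.sql.warehouse.dir", ...); note that once the session exists, changing it has no effect.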

~Dylan



On Tue, Aug 29, 2017 at 1:53 PM, Andrés Ivaldi <iaivaldi@gmail.com> wrote:

> Every comment is welcome.
>
> If I'm not wrong, it's because we are using percentile aggregation, which
> comes with Hive support; apart from that, nothing else.
>
>
> On Tue, Aug 29, 2017 at 11:23 AM, Jean Georges Perrin <jgp@jgp.net> wrote:
>
>> Sorry if my comment is not helping, but... why do you need Hive? Can't
>> you save your aggregation using parquet for example?
>>
>> jg
>>
>>
>> > On Aug 29, 2017, at 08:34, Andrés Ivaldi <iaivaldi@gmail.com> wrote:
>> >
>> > Hello, I'm using Spark API and with Hive support, I dont have a Hive
>> instance, just using Hive for some aggregation functions.
>> >
>> > The problem is that Hive creates the hive and metastore_db folders in the
>> temp folder; I want to change that location.
>> >
>> > Regards.
>> >
>> > --
>> > Ing. Ivaldi Andres
>>
>>
>
>
> --
> Ing. Ivaldi Andres
>



-- 
Dylan Wan
Solution Architect - Enterprise Apps
Email: dylan.wan@gmail.com
My Blog: dylanwan.wordpress.com
