spark-user mailing list archives

From Jörn Franke <jornfra...@gmail.com>
Subject Re: Sqoop on Spark
Date Wed, 06 Apr 2016 05:13:23 GMT
Why do you want to reimplement something which is already there?
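
If the goal is just incremental pulls from Oracle into Hive tables (the use case quoted below), Spark's JDBC data source already covers the read side without rebuilding Sqoop. A minimal sketch against a current Spark build follows (Spark 1.x, current at the time of this thread, would use HiveContext rather than SparkSession); the JDBC URL, credentials, table names and the LAST_UPDATED watermark column are placeholders, not details from the thread:

// Minimal sketch: incremental pull of one Oracle table into Hive.
// Connection details, table names and the watermark column are placeholders;
// a real job would read them (and the per-table list) from job configuration.
import org.apache.spark.sql.SparkSession

object OracleToHiveIncremental {
  def main(args: Array[String]): Unit = {
    val spark = SparkSession.builder()
      .appName("oracle-to-hive-incremental")
      .enableHiveSupport()   // so insertInto/saveAsTable target the Hive metastore
      .getOrCreate()

    val lastWatermark = "2016-04-01 00:00:00"   // normally persisted between runs

    // Push the incremental predicate down to Oracle so only new rows are transferred.
    val src = spark.read
      .format("jdbc")
      .option("url", "jdbc:oracle:thin:@//dbhost:1521/ORCL")
      .option("user", "app_user")
      .option("password", "secret")
      .option("fetchsize", "10000")
      .option("dbtable",
        s"(SELECT * FROM SRC_TABLE WHERE LAST_UPDATED > TIMESTAMP '$lastWatermark') src")
      .load()

    // Append to an existing Hive table, or create it if it is not there yet.
    if (spark.catalog.tableExists("warehouse_db", "src_table"))
      src.write.mode("append").insertInto("warehouse_db.src_table")
    else
      src.write.saveAsTable("warehouse_db.src_table")

    spark.stop()
  }
}

For large tables, the partitionColumn / lowerBound / upperBound / numPartitions options on the JDBC source give parallel reads comparable to Sqoop's --split-by; without them each table is pulled over a single connection.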

> On 06 Apr 2016, at 06:47, ayan guha <guha.ayan@gmail.com> wrote:
> 
> Hi
> 
> Thanks for the reply. My use case is to query ~40 tables from Oracle (using indexes and
> incremental loads only) and add the data to existing Hive tables. It would also be good to
> have an option to create Hive tables, driven by job-specific configuration.
> 
> What do you think?
> 
> Best
> Ayan
> 
>> On Wed, Apr 6, 2016 at 2:30 PM, Takeshi Yamamuro <linguin.m.s@gmail.com> wrote:
>> Hi,
>> 
>> It depends on your use case for Sqoop.
>> What's it like?
>> 
>> // maropu
>> 
>>> On Wed, Apr 6, 2016 at 1:26 PM, ayan guha <guha.ayan@gmail.com> wrote:
>>> Hi All
>>> 
>>> Asking for opinions: is it possible/advisable to use Spark to replace what Sqoop does?
>>> Are there any existing projects along similar lines?
>>> 
>>> -- 
>>> Best Regards,
>>> Ayan Guha
>> 
>> 
>> 
>> -- 
>> ---
>> Takeshi Yamamuro
> 
> 
> 
> -- 
> Best Regards,
> Ayan Guha
