phoenix-dev mailing list archives

From "ASF GitHub Bot (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (PHOENIX-1071) Provide integration for exposing Phoenix tables as Spark RDDs
Date Wed, 01 Apr 2015 17:39:52 GMT

    [ https://issues.apache.org/jira/browse/PHOENIX-1071?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=14391048#comment-14391048 ]

ASF GitHub Bot commented on PHOENIX-1071:
-----------------------------------------

Github user jmahonin commented on the pull request:

    https://github.com/apache/phoenix/pull/59#issuecomment-88569122
  
    I was able to spend a bit more time on the RelationProvider work. The DDL for custom providers
doesn't work through the 'sql()' method on SparkSQLContext, perhaps by design or perhaps due
to a bug, but the 'load()' method does work to create DataFrames using arbitrary data sources.
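
(A minimal sketch of that load() path as it looks against the Spark 1.3 API; the provider name and the option keys below are placeholders for illustration, not the API of this PR.)

{code}
import org.apache.spark.{SparkConf, SparkContext}
import org.apache.spark.sql.SQLContext

// Minimal sketch of the Spark 1.3 load() entry point for an external data source.
// The provider name and option keys are placeholders for illustration.
val sc = new SparkContext(new SparkConf().setAppName("phoenix-load-sketch").setMaster("local[*]"))
val sqlContext = new SQLContext(sc)

val df = sqlContext.load(
  "org.apache.phoenix.spark",                              // hypothetical provider package
  Map("table" -> "COFFEES", "zkUrl" -> "localhost:2181"))  // hypothetical options

df.printSchema()
df.filter(df("ORIGIN") === "GT").show()
{code}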
    
    I'm still not entirely familiar with their API, and a lot of it is still very new and
probably subject to churn, but I tried to base it on existing examples in the Spark repo,
such as the JDBC and Parquet data sources.
    
    I've got a new commit on a side branch here:
    https://github.com/FileTrek/phoenix/commit/16f4540ef0889fc6534c91b8638c16001114ba1a
    
    If you're all OK with those changes going in on this PR, I can push them up here. Otherwise,
I'll stash them aside for a new ticket.
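
To make the data sources API mentioned above concrete, the following is a minimal skeleton of a provider against Spark 1.3's org.apache.spark.sql.sources interfaces. The class names, the schema, and the scan body are placeholders for illustration and are not the contents of the linked commit.

{code}
import org.apache.spark.rdd.RDD
import org.apache.spark.sql.{Row, SQLContext}
import org.apache.spark.sql.sources.{BaseRelation, RelationProvider, TableScan}
import org.apache.spark.sql.types.{StringType, StructField, StructType}

// Hypothetical relation: hands Spark SQL a schema and an RDD[Row] for one Phoenix table.
case class PhoenixRelation(table: String, zkUrl: String)
                          (@transient val sqlContext: SQLContext)
  extends BaseRelation with TableScan {

  // A real integration would derive this from Phoenix table metadata.
  override def schema: StructType = StructType(Seq(
    StructField("SUPPLIER_ID", StringType),
    StructField("ORIGIN", StringType)))

  // A real integration would return an RDD backed by Phoenix scans.
  override def buildScan(): RDD[Row] =
    sqlContext.sparkContext.parallelize(Seq(Row("supplier-1", "GT")))
}

// The class Spark instantiates for the data source name handed to load().
class DefaultSource extends RelationProvider {
  override def createRelation(sqlContext: SQLContext,
                              parameters: Map[String, String]): BaseRelation =
    PhoenixRelation(parameters("table"), parameters("zkUrl"))(sqlContext)
}
{code}

When load() is given a provider package name, Spark 1.3 resolves it to a DefaultSource class in that package and calls createRelation with the supplied options.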


> Provide integration for exposing Phoenix tables as Spark RDDs
> -------------------------------------------------------------
>
>                 Key: PHOENIX-1071
>                 URL: https://issues.apache.org/jira/browse/PHOENIX-1071
>             Project: Phoenix
>          Issue Type: New Feature
>            Reporter: Andrew Purtell
>
> A core concept of Apache Spark is the resilient distributed dataset (RDD), a "fault-tolerant
collection of elements that can be operated on in parallel". One can create RDDs referencing
a dataset in any external storage system offering a Hadoop InputFormat, like PhoenixInputFormat
and PhoenixOutputFormat. There could be opportunities for additional interesting and deep
integration.
> Add the ability to save RDDs back to Phoenix with a {{saveAsPhoenixTable}} action, implicitly
creating necessary schema on demand.
> Add support for {{filter}} transformations that push predicates to the server.
> Add a new {{select}} transformation supporting a LINQ-like DSL, for example:
> {code}
> // Count the number of different coffee varieties offered by each
> // supplier from Guatemala
> phoenixTable("coffees")
>     .select(c =>
>         where(c.origin == "GT"))
>     .countByKey()
>     .foreach(r => println(r._1 + "=" + r._2))
> {code} 
> Support conversions between Scala and Java types and Phoenix table data.
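
As background for the InputFormat route mentioned in the description, here is a minimal sketch of pulling a Phoenix table into an RDD through Spark's generic newAPIHadoopRDD support. The PhoenixInputFormat package path, the job configuration step, and the CoffeeRecord value class are assumptions for illustration only.

{code}
import java.sql.{PreparedStatement, ResultSet}

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.io.NullWritable
import org.apache.hadoop.mapreduce.lib.db.DBWritable
import org.apache.phoenix.mapreduce.PhoenixInputFormat   // assumed location of the InputFormat
import org.apache.spark.{SparkConf, SparkContext}

// Hypothetical per-row value class: any DBWritable that Phoenix can populate from a result set.
class CoffeeRecord extends DBWritable {
  var supplierId: String = _
  var origin: String = _
  override def readFields(rs: ResultSet): Unit = {
    supplierId = rs.getString("SUPPLIER_ID")
    origin = rs.getString("ORIGIN")
  }
  override def write(ps: PreparedStatement): Unit = {
    ps.setString(1, supplierId)
    ps.setString(2, origin)
  }
}

val conf = new Configuration()
// The Phoenix connection, table, and SELECT columns would be set on `conf` here
// via Phoenix's MapReduce configuration utilities before the RDD is created.

val sc = new SparkContext(new SparkConf().setAppName("phoenix-inputformat-sketch").setMaster("local[*]"))
val rows = sc.newAPIHadoopRDD(
  conf,
  classOf[PhoenixInputFormat[CoffeeRecord]],
  classOf[NullWritable],
  classOf[CoffeeRecord])

// Each element is a (NullWritable, CoffeeRecord) pair; e.g. count Guatemalan varieties.
println(rows.filter { case (_, c) => c.origin == "GT" }.count())
{code}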



--
This message was sent by Atlassian JIRA
(v6.3.4#6332)
