phoenix-user mailing list archives

From Brandon Geise <>
Subject Re: Spark-Phoenix Plugin
Date Mon, 06 Aug 2018 12:10:52 GMT
Thanks for the reply, Yun.


I’m not quite clear on how exactly this would help on the upsert side. Are you suggesting
deriving the types from Phoenix, then doing the encoding/decoding and writing/reading directly
against HBase?





From: Jaanai Zhang <>
Reply-To: <>
Date: Sunday, August 5, 2018 at 9:34 PM
To: <>
Subject: Re: Spark-Phoenix Plugin


You can get the data types from the Phoenix metadata, then encode/decode the data yourself to write/read
it directly. I think this way is effective, FYI :)
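Yun's suggestion above can be sketched roughly as follows. This is a minimal Python illustration of an order-preserving integer encoding of the kind Phoenix's PInteger type uses (big-endian bytes with the sign bit flipped, so encoded values sort like the integers they represent); the helper names are my own, and real code should reuse Phoenix's PDataType codecs rather than re-implementing them.

```python
import struct

def encode_int(value: int) -> bytes:
    # Pack as 4-byte big-endian, then flip the sign bit so that
    # negative values sort lexicographically before positive ones
    # (the order-preserving scheme Phoenix uses for INTEGER columns).
    buf = bytearray(struct.pack(">i", value))
    buf[0] ^= 0x80
    return bytes(buf)

def decode_int(data: bytes) -> int:
    # Reverse the transformation: flip the sign bit back, then unpack.
    buf = bytearray(data)
    buf[0] ^= 0x80
    return struct.unpack(">i", bytes(buf))[0]

# Round-trip and sort-order sanity checks.
for v in (-5, 0, 42):
    assert decode_int(encode_int(v)) == v
assert encode_int(-5) < encode_int(0) < encode_int(42)
```

With codecs like this per column type, a Spark job could in principle read the types from Phoenix's system catalog and then go straight to the HBase API for reads and writes, bypassing the Phoenix query layer.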



   Yun Zhang

   Best regards!



2018-08-04 21:43 GMT+08:00 Brandon Geise <>:

Good morning,


I’m looking at using a combination of HBase, Phoenix and Spark for a project, and I read that
using the Spark-Phoenix plugin directly is more efficient than JDBC. However, it wasn’t entirely
clear from the examples whether an upsert is performed when writing a DataFrame, or how many
fine-grained options there are for controlling the upsert.  Any information someone can share would be greatly
appreciated.





