hive-issues mailing list archives

From "Hive QA (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (HIVE-20377) Hive Kafka Storage Handler
Date Wed, 05 Sep 2018 04:05:00 GMT

    [ https://issues.apache.org/jira/browse/HIVE-20377?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=16603873#comment-16603873 ]

Hive QA commented on HIVE-20377:
--------------------------------

| (x) *{color:red}-1 overall{color}* |
\\
\\
|| Vote || Subsystem || Runtime || Comment ||
|| || || || {color:brown} Prechecks {color} ||
| {color:green}+1{color} | {color:green} @author {color} | {color:green}  0m  0s{color} | {color:green} The patch does not contain any @author tags. {color} |
|| || || || {color:brown} master Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m 41s{color} | {color:blue} Maven dependency ordering for branch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green}  8m 11s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green}  9m 43s{color} | {color:green} master passed {color} |
| {color:green}+1{color} | {color:green} checkstyle {color} | {color:green}  2m 17s{color} | {color:green} master passed {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 41s{color} | {color:blue} serde in master has 195 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 22s{color} | {color:blue} itests/qtest-druid in master has 6 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 46s{color} | {color:blue} itests/util in master has 52 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  0m 42s{color} | {color:blue} llap-server in master has 84 extant Findbugs warnings. {color} |
| {color:blue}0{color} | {color:blue} findbugs {color} | {color:blue}  4m  3s{color} | {color:blue} ql in master has 2310 extant Findbugs warnings. {color} |
| {color:green}+1{color} | {color:green} javadoc {color} | {color:green}  9m 30s{color} | {color:green} master passed {color} |
|| || || || {color:brown} Patch Compile Tests {color} ||
| {color:blue}0{color} | {color:blue} mvndep {color} | {color:blue}  0m  7s{color} | {color:blue} Maven dependency ordering for patch {color} |
| {color:green}+1{color} | {color:green} mvninstall {color} | {color:green} 11m 14s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} compile {color} | {color:green} 10m  9s{color} | {color:green} the patch passed {color} |
| {color:green}+1{color} | {color:green} javac {color} | {color:green} 10m  9s{color} | {color:green} the patch passed {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 11s{color} | {color:red} itests/qtest-druid: The patch generated 37 new + 3 unchanged - 0 fixed = 40 total (was 3) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 10s{color} | {color:red} kafka-handler: The patch generated 33 new + 0 unchanged - 0 fixed = 33 total (was 0) {color} |
| {color:red}-1{color} | {color:red} checkstyle {color} | {color:red}  0m 13s{color} | {color:red} llap-server: The patch generated 1 new + 26 unchanged - 4 fixed = 27 total (was 30) {color} |
| {color:green}+1{color} | {color:green} whitespace {color} | {color:green}  0m  0s{color} | {color:green} The patch has no whitespace issues. {color} |
| {color:green}+1{color} | {color:green} xml {color} | {color:green}  0m  5s{color} | {color:green} The patch has no ill-formed XML file. {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 39s{color} | {color:red} patch/serde cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 21s{color} | {color:red} patch/itests/qtest-druid cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 47s{color} | {color:red} patch/itests/util cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  0m 24s{color} | {color:red} patch/kafka-handler cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  1m 11s{color} | {color:red} patch/llap-server cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} findbugs {color} | {color:red}  7m 22s{color} | {color:red} patch/ql cannot run setBugDatabaseInfo from findbugs {color} |
| {color:red}-1{color} | {color:red} javadoc {color} | {color:red}  0m 35s{color} | {color:red} itests_util generated 1 new + 9 unchanged - 0 fixed = 10 total (was 9) {color} |
|| || || || {color:brown} Other Tests {color} ||
| {color:green}+1{color} | {color:green} asflicense {color} | {color:green}  0m 17s{color} | {color:green} The patch does not generate ASF License warnings. {color} |
| {color:black}{color} | {color:black} {color} | {color:black} 90m  6s{color} | {color:black} {color} |
\\
\\
|| Subsystem || Report/Notes ||
| Optional Tests |  asflicense  javac  javadoc  xml  compile  findbugs  checkstyle  |
| uname | Linux hiveptest-server-upstream 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u1 (2016-09-03) x86_64 GNU/Linux |
| Build tool | maven |
| Personality | /data/hiveptest/working/yetus_PreCommit-HIVE-Build-13592/dev-support/hive-personality.sh |
| git revision | master / 33fa62f |
| Default Java | 1.8.0_111 |
| findbugs | v3.0.0 |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/diff-checkstyle-itests_qtest-druid.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/diff-checkstyle-kafka-handler.txt |
| checkstyle | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/diff-checkstyle-llap-server.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-serde.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-itests_qtest-druid.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-itests_util.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-kafka-handler.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-llap-server.txt |
| findbugs | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/patch-findbugs-ql.txt |
| javadoc | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus/diff-javadoc-javadoc-itests_util.txt |
| modules | C: serde . itests itests/qtest itests/qtest-druid itests/util kafka-handler llap-server packaging ql U: . |
| Console output | http://104.198.109.242/logs//PreCommit-HIVE-Build-13592/yetus.txt |
| Powered by | Apache Yetus    http://yetus.apache.org |


This message was automatically generated.



> Hive Kafka Storage Handler
> --------------------------
>
>                 Key: HIVE-20377
>                 URL: https://issues.apache.org/jira/browse/HIVE-20377
>             Project: Hive
>          Issue Type: New Feature
>    Affects Versions: 4.0.0
>            Reporter: slim bouguerra
>            Assignee: slim bouguerra
>            Priority: Major
>         Attachments: HIVE-20377.10.patch, HIVE-20377.11.patch, HIVE-20377.12.patch, HIVE-20377.15.patch, HIVE-20377.18.patch, HIVE-20377.18.patch, HIVE-20377.19.patch, HIVE-20377.19.patch, HIVE-20377.19.patch, HIVE-20377.4.patch, HIVE-20377.5.patch, HIVE-20377.6.patch, HIVE-20377.8.patch, HIVE-20377.8.patch, HIVE-20377.patch
>
>
> h1. Goal
> * Read streaming data from a Kafka queue as an external table.
> * Allow stream navigation by pushing down filters on the Kafka record partition id, offset, and timestamp.
> * Insert streaming data from Kafka into an actual Hive internal table, using a CTAS statement.
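> The CTAS use case above could look like the following sketch (the target table {{hive_wikipedia}} and its storage format are illustrative, not part of the patch; {{kafka_table}} is the external table defined in the example below):
> {code}
> -- Sketch: materialize Kafka records into a managed Hive table via CTAS.
> CREATE TABLE hive_wikipedia STORED AS ORC AS
> SELECT `timestamp`, page, `user`, added, deleted
> FROM kafka_table;
> {code}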
> h1. Example
> h2. Create the external table
> {code} 
> CREATE EXTERNAL TABLE kafka_table (`timestamp` timestamp, page string, `user` string, language string, added int, deleted int, flags string, comment string, namespace string)
> STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
> TBLPROPERTIES 
> ("kafka.topic" = "wikipedia", 
> "kafka.bootstrap.servers"="brokeraddress:9092",
> "kafka.serde.class"="org.apache.hadoop.hive.serde2.JsonSerDe");
> {code}
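> Once declared, the external table can be queried like any other Hive table; a minimal smoke-test query might be:
> {code}
> -- Sketch: read a few records straight off the Kafka topic.
> SELECT `user`, page, added FROM kafka_table LIMIT 10;
> {code}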
> h2. Kafka Metadata
> In order to keep track of Kafka records, the storage handler automatically adds the Kafka row metadata, e.g. partition id, record offset, and record timestamp.
> {code}
> DESCRIBE EXTENDED kafka_table
> timestamp              	timestamp           	from deserializer   
> page                	string              	from deserializer   
> user                	string              	from deserializer   
> language            	string              	from deserializer   
> country             	string              	from deserializer   
> continent           	string              	from deserializer   
> namespace           	string              	from deserializer   
> newpage             	boolean             	from deserializer   
> unpatrolled         	boolean             	from deserializer   
> anonymous           	boolean             	from deserializer   
> robot               	boolean             	from deserializer   
> added               	int                 	from deserializer   
> deleted             	int                 	from deserializer   
> delta               	bigint              	from deserializer   
> __partition         	int                 	from deserializer   
> __offset            	bigint              	from deserializer   
> __timestamp         	bigint              	from deserializer   
> {code}
> h2. Filter push down
> Newer Kafka consumers (0.11.0 and higher) allow seeking in the stream to a given offset. The proposed storage handler leverages this API by pushing down filters over the metadata columns, namely __partition (int), __offset (long), and __timestamp (long).
> For instance, a query like
> {code} 
> select `__offset` from kafka_table where (`__offset` < 10 and `__offset` > 3 and `__partition` = 0) or (`__partition` = 0 and `__offset` < 105 and `__offset` > 99) or (`__offset` = 109);
> {code}
> will result in a scan of partition 0 only, reading only the records between offsets 4 and 109.
> h2. With timestamp seeks
> Seeking based on the internal timestamps allows the handler to run on recently arrived data only, e.g.:
> {code}
> select count(*) from kafka_table where `__timestamp` > 1000 * to_unix_timestamp(CURRENT_TIMESTAMP - interval '20' hours);
> {code}
> This allows implicit relationships between event timestamps and Kafka timestamps to be expressed in queries (e.g. the event timestamp is always less than the Kafka __timestamp, and the Kafka __timestamp is never more than 15 minutes after the event, etc.).
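> Such a relationship can be exploited by bounding the cheap Kafka-side scan with __timestamp and then filtering precisely on the event-time column itself. A sketch, assuming the kafka_table from the first example and the 15-minute slack assumed above:
> {code}
> -- Sketch: __timestamp bounds the partitions/offsets actually read,
> -- while the event-time column `timestamp` does the exact filtering.
> select count(*) from kafka_table
> where `__timestamp` > 1000 * to_unix_timestamp(CURRENT_TIMESTAMP - interval '2' hours)
>   and `timestamp` > CURRENT_TIMESTAMP - interval '1' hours;
> {code}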
> h2. More examples with Avro 
> {code}
> CREATE EXTERNAL TABLE wiki_kafka_avro_table
> STORED BY 'org.apache.hadoop.hive.kafka.KafkaStorageHandler'
> TBLPROPERTIES
> ("kafka.topic" = "wiki_kafka_avro_table",
> "kafka.bootstrap.servers"="localhost:9092",
> "kafka.serde.class"="org.apache.hadoop.hive.serde2.avro.AvroSerDe",
> 'avro.schema.literal'='{
>   "type" : "record",
>   "name" : "Wikipedia",
>   "namespace" : "org.apache.hive.kafka",
>   "version": "1",
>   "fields" : [ {
>     "name" : "isrobot",
>     "type" : "boolean"
>   }, {
>     "name" : "channel",
>     "type" : "string"
>   }, {
>     "name" : "timestamp",
>     "type" : "string"
>   }, {
>     "name" : "flags",
>     "type" : "string"
>   }, {
>     "name" : "isunpatrolled",
>     "type" : "boolean"
>   }, {
>     "name" : "page",
>     "type" : "string"
>   }, {
>     "name" : "diffurl",
>     "type" : "string"
>   }, {
>     "name" : "added",
>     "type" : "long"
>   }, {
>     "name" : "comment",
>     "type" : "string"
>   }, {
>     "name" : "commentlength",
>     "type" : "long"
>   }, {
>     "name" : "isnew",
>     "type" : "boolean"
>   }, {
>     "name" : "isminor",
>     "type" : "boolean"
>   }, {
>     "name" : "delta",
>     "type" : "long"
>   }, {
>     "name" : "isanonymous",
>     "type" : "boolean"
>   }, {
>     "name" : "user",
>     "type" : "string"
>   }, {
>     "name" : "deltabucket",
>     "type" : "double"
>   }, {
>     "name" : "deleted",
>     "type" : "long"
>   }, {
>     "name" : "namespace",
>     "type" : "string"
>   } ]
> }'
> );
> {code}



--
This message was sent by Atlassian JIRA
(v7.6.3#76005)
