spark-issues mailing list archives

From "Cody Koeninger (JIRA)" <>
Subject [jira] [Commented] (SPARK-17812) More granular control of starting offsets (assign)
Date Thu, 13 Oct 2016 21:01:20 GMT


Cody Koeninger commented on SPARK-17812:

Here's my concrete suggestion:

3 mutually exclusive ways of subscribing:

.option("assign", """{"topicfoo": [0, 1], "topicbar": [0, 1]}""")

where assign can only be specified that way, no inline offsets
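To make the shape of the proposed `assign` JSON concrete, here is a small sketch that parses it into (topic, partition) pairs. The helper name `parse_assign` is hypothetical, purely for illustration; only the JSON format comes from the proposal above.

```python
import json

# Hypothetical helper illustrating the proposed "assign" payload:
# a map from topic name to a list of partition numbers, with no
# inline offsets allowed.
def parse_assign(option_value):
    """Return a list of (topic, partition) pairs from the assign JSON."""
    spec = json.loads(option_value)
    return [(topic, p) for topic, parts in spec.items() for p in parts]

pairs = parse_assign('{"topicfoo": [0, 1], "topicbar": [0, 1]}')
# → [("topicfoo", 0), ("topicfoo", 1), ("topicbar", 0), ("topicbar", 1)]
```

Keeping offsets out of this payload is what makes the starting position a separate, orthogonal option below.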

2 non-mutually exclusive ways of specifying the starting position; an explicit startingOffsets obviously
takes priority:

.option("startingOffsets", """{"topicFoo": {"0": 1234, "1": 4567}}""")
.option("startingTime", "earliest" | "latest" | long)
where long is a timestamp; work to be done on that later.
Note that even Kafka 0.8 has an API for time (a really crappy one, based on log file
modification time), so pursuing timestamps for startingTime later doesn't necessarily exclude it.
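The priority rule between the two options can be sketched as follows. The function name `resolve_start` and the plain-dict options container are hypothetical; only the option names, JSON shape, and "explicit offsets win" rule come from the proposal above.

```python
import json

# Hypothetical sketch of the proposed priority rule: a per-partition entry in
# "startingOffsets" wins; otherwise fall back to "startingTime", which is
# "earliest" | "latest" | a long timestamp (defaulting to "latest" here).
def resolve_start(topic, partition, options):
    offsets = options.get("startingOffsets")
    if offsets is not None:
        spec = json.loads(offsets)
        if topic in spec and str(partition) in spec[topic]:
            return ("offset", spec[topic][str(partition)])
    return ("time", options.get("startingTime", "latest"))

opts = {
    "startingOffsets": '{"topicfoo": {"0": 1234, "1": 4567}}',
    "startingTime": "earliest",
}
resolve_start("topicfoo", 0, opts)  # explicit offset wins → ("offset", 1234)
resolve_start("topicbar", 0, opts)  # no explicit offset → ("time", "earliest")
```

Because the fallback is per topic-partition, the two options compose: explicit offsets for some partitions, a time-based position for the rest.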

> More granular control of starting offsets (assign)
> --------------------------------------------------
>                 Key: SPARK-17812
>                 URL:
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
> Right now you can only run a Streaming Query starting from either the earliest or latest
> offsets available at the moment the query is started.  Sometimes this is a lot of data.  It
> would be nice to be able to do the following:
>  - seek to user-specified offsets for manually specified topic-partitions

This message was sent by Atlassian JIRA
