spark-issues mailing list archives

From "Cody Koeninger (JIRA)" <j...@apache.org>
Subject [jira] [Commented] (SPARK-17813) Maximum data per trigger
Date Sat, 15 Oct 2016 00:53:20 GMT

    [ https://issues.apache.org/jira/browse/SPARK-17813?page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel&focusedCommentId=15577037#comment-15577037 ]

Cody Koeninger commented on SPARK-17813:
----------------------------------------

To be clear, the current direct stream (and as a result the structured stream) simply will not
work with compacted topics, because it assumes offset ranges are contiguous.  There's a ticket
for it, SPARK-17147, with a prototype solution that is waiting on feedback from a user.
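
For context, the per-partition batch size is computed with simple offset arithmetic, roughly
like the sketch below (names are illustrative, not the actual KafkaRDD code).  With a compacted
topic some offsets in [fromOffset, untilOffset) no longer exist, so a count computed this way
over-states what the consumer will actually return.

    // Rough sketch of the contiguous-offset assumption, not the real code.
    case class OffsetRange(topic: String, partition: Int,
                           fromOffset: Long, untilOffset: Long) {
      // Assumes every offset in [fromOffset, untilOffset) still exists,
      // which stops being true once compaction has removed records.
      def count: Long = untilOffset - fromOffset
    }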

So for a global maxOffsetsPerTrigger, are you suggesting a Spark configuration?  Is there a
reason not to make it a maxRowsPerTrigger (or messages, or whatever name) so it can potentially
be reused by other sources?  A proportional distribution of offsets across partitions shouldn't
be too hard.  I can pick this up once the assign stuff is stabilized.
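
Something like the sketch below is what I have in mind for the distribution, just to be
concrete.  It's illustrative only (made-up names, partitions keyed by Int rather than
TopicPartition), assuming the limit is a simple per-trigger cap split by each partition's
share of the available offsets:

    // Illustrative only: split a global per-trigger cap across partitions
    // in proportion to how many offsets each partition has available.
    def distribute(available: Map[Int, Long], maxPerTrigger: Long): Map[Int, Long] = {
      val total = available.values.sum
      if (total <= maxPerTrigger) available
      else available.map { case (partition, avail) =>
        partition -> (avail.toDouble / total * maxPerTrigger).toLong
      }
    }

Rounding down like that can undershoot the cap by a few records per trigger, which seems
acceptable for a rate limit.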

> Maximum data per trigger
> ------------------------
>
>                 Key: SPARK-17813
>                 URL: https://issues.apache.org/jira/browse/SPARK-17813
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
>
> At any given point in a streaming query execution, we process all available data.  This
> maximizes throughput at the cost of latency.  We should add something similar to the
> {{maxFilesPerTrigger}} option available for files.
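
(For reference, the existing file-source option mentioned above is set as a read option on the
streaming source; the input path here is made up for illustration.)

    import org.apache.spark.sql.SparkSession

    val spark = SparkSession.builder.appName("max-files-example").getOrCreate()

    // At most one new file is picked up per micro-batch.
    val lines = spark.readStream
      .format("text")
      .option("maxFilesPerTrigger", "1")
      .load("/data/incoming")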





