spark-issues mailing list archives

From "Cody Koeninger (JIRA)" <>
Subject [jira] [Commented] (SPARK-17813) Maximum data per trigger
Date Sat, 15 Oct 2016 00:53:20 GMT


Cody Koeninger commented on SPARK-17813:

To be clear, the current direct stream (and as a result the structured stream) will not
work with compacted topics, because it assumes offset ranges are contiguous.  There's a
ticket for it, SPARK-17147, with a prototype solution waiting for feedback from a user.

So for a global maxOffsetsPerTrigger, are you saying a Spark configuration?  Is there a reason
not to make that a maxRowsPerTrigger (or messages, or whatever name) so that it can potentially
be reused by other sources?  I think a proportional distribution of offsets across partitions
shouldn't be too hard for this.  I can pick this up once the assign stuff is stabilized.

> Maximum data per trigger
> ------------------------
>                 Key: SPARK-17813
>                 URL:
>             Project: Spark
>          Issue Type: Sub-task
>          Components: SQL
>            Reporter: Michael Armbrust
> At any given point in a streaming query execution, we process all available data.  This
> maximizes throughput at the cost of latency.  We should add something similar to the
> {{maxFilesPerTrigger}} option available for files.
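For comparison, if the cap ends up as a per-source option (under the maxOffsetsPerTrigger name being discussed, by analogy with {{maxFilesPerTrigger}}), usage from Structured Streaming might look like the sketch below. The option name, its placement on the Kafka source, and the broker/topic values are assumptions based on this thread, not a settled API.

```python
# Sketch: cap the Kafka source at 10,000 offsets per trigger,
# assuming the proposed maxOffsetsPerTrigger option lands on the source.
df = (spark.readStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "host1:9092")  # placeholder broker
      .option("subscribe", "events")                    # placeholder topic
      .option("maxOffsetsPerTrigger", 10000)
      .load())
```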

This message was sent by Atlassian JIRA

