spark-user mailing list archives

From Girish Subramanian <>
Subject Re: [Structured Streaming] Two watermarks and StreamingQueryListener
Date Sat, 11 Aug 2018 02:47:10 GMT
Thanks for the explanation.

We are doing something like this.

The *first* watermark is to eliminate late events coming from *Kafka*.

The *second* watermark is to eliminate older aggregated metrics across *sessions*.

I know I could replace the second one with a *window*, but I was not able to come up with a working solution.

val sessionLevelMetrics: Dataset[Metric] = kafkaEvents
  .withWatermark("timestamp", "30 minutes")
  .groupByKey(e => e._2.getSiteIdentifierConcatenatedSessionId)
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.EventTimeTimeout())(updateSessionState(broadcastWrapper))

val aggMetrics: Dataset[AggregatedMetric] = sessionLevelMetrics
  .withColumn("ts", conversion(col("timestamp")))
  .withWatermark("ts", "30 minutes")
  .groupByKey(m => m.getAs[String]("name") + "." + m.getAs[Long]("timestamp"))
  .flatMapGroupsWithState(OutputMode.Append(), GroupStateTimeout.EventTimeTimeout())(updateAggregateMetricsState)

// (snipped) the query is started with a writeStream that sets
//   .option("checkpointLocation", checkpointLocation)

def updateAggregateMetricsState(metricKey: String, sessionLevelMetrics: Iterator[Row],
    state: GroupState[AggregatedMetric]): Iterator[AggregatedMetric] = {
  if (state.hasTimedOut) {
    // Watermark passed the timeout: emit the final aggregate and drop the state.
    val result = state.getOption.iterator
  } else if (!sessionLevelMetrics.hasNext) {
    Iterator.empty // nothing new for this key in this trigger
  } else {
    val prev = state.getOption
    var sum =[Int]("count")).sum
    if (prev.isDefined) {
      sum += prev.get.count
    }
    val aggMetric = AggregatedMetric(metricKey, sum)
    state.update(aggMetric)
    // Time out 30 minutes after the event timestamp encoded in the key.
    state.setTimeoutTimestamp(metricKey.substring(metricKey.lastIndexOf(".") + 1).toLong + (30 * 60000))
  }
}
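
A standalone way to see why the two watermarks interact in the code above: Spark keeps one global watermark, the min of the per-stream ones, so the event-time timeout set here fires only once the slower of the two watermarks has passed it. The object below is my own illustration, not Spark API:

```scala
// Standalone illustration (not Spark API): with several watermarked streams,
// Spark 2.3 tracks one global watermark = min of the per-stream watermarks,
// so a state timeout fires only when the *slowest* watermark passes it.
object GlobalWatermarkSketch {
  def globalWatermark(perStream: Seq[Long]): Long = perStream.min

  def timeoutFired(timeoutTs: Long, perStream: Seq[Long]): Boolean =
    globalWatermark(perStream) > timeoutTs

  def main(args: Array[String]): Unit = {
    val timeoutTs = 1000L
    // Kafka-side watermark is ahead, session-side is behind: no timeout yet.
    assert(!timeoutFired(timeoutTs, Seq(1500L, 900L)))
    // Both watermarks past the timeout timestamp: the timeout fires.
    assert(timeoutFired(timeoutTs, Seq(1500L, 1200L)))
  }
}
```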

From: Tathagata Das <>
Date: Friday, August 10, 2018 at 4:16 PM
To: subramgr <>
Cc: user <>
Subject: EXT: Re: [Structured Streaming] Two watermarks and StreamingQueryListener

Structured Streaming internally maintains one global watermark by taking the min of the two
watermarks. That's why only one gets reported. In Spark 2.4, there will be the option of
choosing max instead of min.
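
[In Spark 2.4 this policy is, to the best of my knowledge, exposed as the `spark.sql.streaming.multipleWatermarkPolicy` session conf; a minimal sketch, assuming a SparkSession named `spark`:

```scala
// Spark 2.4+ only. Controls how multiple watermarks in one query are combined
// into the single global watermark: "min" (default, safest) or "max" (advances
// faster, but may drop more late data).
spark.conf.set("spark.sql.streaming.multipleWatermarkPolicy", "max")
```
]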

Just curious: why do you have two watermarks? What's the query like?


On Thu, Aug 9, 2018 at 3:15 PM, subramgr <> wrote:

We have two *flatMapGroupWithState* calls in our job and we have two *watermarks*.

We are getting the event max time, event time and watermarks from the *StreamingQueryListener*.

Right now it just returns one *watermark* value.

Are two watermarks maintained by Spark, or just one?
If one, which one?
If one watermark is maintained per *Dataframe*, how do I get the values for
them?
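
[As far as I can tell, the progress events do carry the (single, global) watermark: a `StreamingQueryListener` can read it from `event.progress.eventTime`. A sketch, with the class name being my own:

```scala
import org.apache.spark.sql.streaming.StreamingQueryListener
import org.apache.spark.sql.streaming.StreamingQueryListener._

// Logs the global watermark on every trigger. Because Spark maintains a single
// global watermark per query, this reports one value even when the query
// declares two withWatermark() calls.
class WatermarkLogger extends StreamingQueryListener {
  override def onQueryStarted(event: QueryStartedEvent): Unit = ()
  override def onQueryTerminated(event: QueryTerminatedEvent): Unit = ()
  override def onQueryProgress(event: QueryProgressEvent): Unit = {
    // eventTime is a java.util.Map with keys such as "min", "max", "avg",
    // and "watermark" (present once watermarking is in effect).
    val wm = event.progress.eventTime.get("watermark")
    println(s"query=${} watermark=$wm")
  }
}

// Register on the session:
// spark.streams.addListener(new WatermarkLogger)
```
]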


