spark-user mailing list archives

From Jayesh Lalwani <jayesh.lalw...@capitalone.com>
Subject Re: [External Sender] re: streaming, batch / spark 2.2.1
Date Thu, 02 Aug 2018 20:11:57 GMT
What is different between the two systems? If one system processes records
faster than the other, simply because it does less processing per record,
then you can expect the first system to have a higher throughput than the
second. It's hard to say why one system has double the throughput of the
other without knowing what each is doing internally.

"The number of records in one batch does not seem to be determined by the
batch interval (since it is zero by default in Spark2.2), but likely (at
least influenced) by the time it needs to process the previous batch."
This is expected Spark behavior. If you set batch interval to 0, it will
process a microbatch immediately after it has finished the previous
microbatch. Assuming your input is coming at a constant rate of R records
per second, and one microbatch takes T1 seconds, then the next microbatch
will take R.T1 records. If the second microbatch takes T2 seconds, then the
third microbatch will take R.T2 records. This is why it's important than
your throughput is higher than your input rate. If it's not, batches will
become bigger and bigger and take longer and longer until the application
fails
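
If you want batch boundaries tied to wall-clock time instead of "as fast
as possible", you can set an explicit trigger. Here is a minimal sketch
(assuming df is the streaming DataFrame built in the quoted code below;
the broker address, topic, and checkpoint path are placeholders):

    import org.apache.spark.sql.streaming.Trigger

    // With a fixed 10-second trigger, each microbatch covers roughly
    // R * 10 records at a steady input rate of R records/sec, rather
    // than R * (processing time of the previous batch).
    val query = df.writeStream
      .format("kafka")
      .option("kafka.bootstrap.servers", "broker:9092")  // placeholder
      .option("topic", "output")                         // placeholder
      .option("checkpointLocation", "/tmp/checkpoints")  // required by the Kafka sink
      .trigger(Trigger.ProcessingTime("10 seconds"))
      .outputMode("update")
      .start()

The stability condition is the same: at your 300K msg/sec, a 10-second
batch holds roughly 3 million records, and each batch must finish in under
10 seconds or work will pile up. Separately, if you want to cap how many
records a single microbatch reads from Kafka, the source's
maxOffsetsPerTrigger option (available in Spark 2.2) does that.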



On Thu, Aug 2, 2018 at 2:43 PM Peter Liu <peter.pliu@gmail.com> wrote:

> Hello there,
>
> I'm new to Spark streaming and have trouble understanding how a Spark
> batch is "composed" (Google searches keep giving me results for the older
> Spark Streaming concept). Would appreciate any help and clarifications.
> I'm using Spark 2.2.1 for a streaming workload (see the quoted code in (a)
> below). The general question I have is:
>
> How is the number of records in a Spark batch (as seen on the Spark Job
> UI) determined? (The batch interval is supposedly zero by default in
> Spark 2.2.1.)
>
> The issue I'm facing is that for the same incoming streaming source (300K
> msg/sec to a Kafka broker, 220 bytes per message), I got different numbers
> (2x) of processed batches on two different systems for the same amount of
> application/consumer running time (30 min). At the batch level, the input
> data size per batch is the same (49.9 KB), while the total input data size
> (under the Spark executor tab) differs by ~2x, which is expected since that
> system also processed 2x the batches. Note: on both systems, the Spark
> consumer seems to hold up well (no increase in batch processing time over
> the 30 min). See (c) for the real functional concern.
>
> (b) and (c) below give a bit more context and state the real concern, in
> case they are relevant.
>
> Have been struggling with this. Any comments and help would be very much
> appreciated.
>
> Thanks!
>
> Regards,
>
> Peter
>
> =============
> (a) code in use:
>       .selectExpr("CAST(value AS STRING)", "CAST(timestamp AS TIMESTAMP)")
>       .as[(String, Timestamp)]
>       .select(from_json($"value", mySchema).as("data"), $"timestamp")
>       .select("data.*", "timestamp")
>       .where($"event_type" === "view")
>       .select($"ad_id", $"event_time")
>       .join(campaigns.toSeq.toDS().cache(), Seq("ad_id"))
>       .groupBy(millisTime(window($"event_time", "10 seconds")
>         .getField("start")) as 'time_window, $"campaign_id")  // original code
>       .agg(count("*") as 'count, max('event_time) as 'lastUpdate)
>       .select(to_json(struct("*")) as 'value)
>       .writeStream
>       .format("kafka")
>       ...
>       .outputMode("update")
>       .start()
>
> (b)
> The number of records in one batch does not seem to be determined by the
> batch interval (since it is zero by default in Spark 2.2), but likely (at
> least influenced) by the time it takes to process the previous batch. It
> is notable that the input data amount per batch is quite consistent and
> the same on both systems in the Spark UI (49.9 KB), indicating there is a
> strict logic that prepares/caps the data per batch despite the fluctuation
> in batch processing time. What is this logic?
>
>
> (c)
> The major question is a functional one: if one system processes twice the
> amount of data as the other, does that indicate that one system processed
> duplicated data, or that the other system processed only half of the
> needed data? Or is it more of a reporting issue?
>
>
>