Yes, it does; I checked the logs. In fact, if you look at the first screenshot, stream processing was 'stuck' on that many records for quite some time (~1 hr).
One thing I noticed is that the initial batches took (maybe far?) longer than the configured batchDuration of 1.5 min: in the case of screenshot 2 they took 5.8-7.1 min, and in case 1 they took 3-4 min.

On Wed, Nov 2, 2016 at 8:43 AM, Cody Koeninger <cody@koeninger.org> wrote:
Does that batch actually have that many records in it (you should be able to see beginning and ending offsets in the logs), or is it an error in the UI?
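If it helps, something roughly like this (assuming the kafka-0-10 direct stream; the broker, topic, and group.id below are placeholders, and 'ssc' is your existing StreamingContext) will print the per-partition offset ranges for each batch, so you can compare the actual record counts against what the UI shows:

import org.apache.kafka.common.serialization.StringDeserializer
import org.apache.spark.streaming.kafka010.{HasOffsetRanges, KafkaUtils}
import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

val kafkaParams = Map[String, Object](
  "bootstrap.servers" -> "broker1:9092",                 // placeholder
  "key.deserializer" -> classOf[StringDeserializer],
  "value.deserializer" -> classOf[StringDeserializer],
  "group.id" -> "your-group-id"                          // placeholder
)

val stream = KafkaUtils.createDirectStream[String, String](
  ssc, PreferConsistent, Subscribe[String, String](Seq("your-topic"), kafkaParams))

stream.foreachRDD { rdd =>
  val ranges = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  ranges.foreach { o =>
    // untilOffset - fromOffset = records pulled from this partition in this batch
    println(s"${o.topic}-${o.partition}: ${o.fromOffset} -> ${o.untilOffset} " +
      s"(${o.untilOffset - o.fromOffset} records)")
  }
}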


On Tue, Nov 1, 2016 at 11:59 PM, map reduced <k3t.git.1@gmail.com> wrote:
Hi guys,

I am using a Spark 2.0.0 standalone cluster, running a regular streaming job that consumes from Kafka and writes to an HTTP endpoint. I have this configuration:
executors: 7 cores/executor, maxCores = 84 (so 12 executors)
batch duration - 90 seconds
maxRatePerPartition - 2000
backpressure enabled = true

My Kafka topics have a total of 300 partitions, so I am expecting at most 54 million records per batch (maxRatePerPartition * batch duration in seconds * #partitions), and that's what I am getting. But it turns out the job can't process 54 million records within a 90-second batch, so I expect backpressure to kick in, and that's where I see something strange: it reduces the batch size to fewer records, but then suddenly spits out a HUGE batch of 13 billion records.
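For reference, the setup looks roughly like this (the app name and code layout are illustrative, not the actual job), along with the arithmetic behind the 54 million cap:

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

val conf = new SparkConf()
  .setAppName("kafka-to-http")                              // illustrative name
  .set("spark.cores.max", "84")                             // maxCores
  .set("spark.executor.cores", "7")                         // 7 cores/executor -> 12 executors
  .set("spark.streaming.kafka.maxRatePerPartition", "2000")
  .set("spark.streaming.backpressure.enabled", "true")

val ssc = new StreamingContext(conf, Seconds(90))           // 90-second batch duration

// Expected upper bound on records per batch:
//   maxRatePerPartition * batch seconds * #partitions = 2000 * 90 * 300 = 54,000,000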

[Inline image 1]
I changed some configuration to see if the above was a one-off case, but the same issue happened again. Check the screenshot below (a huge batch of 14 billion records again!):

[Inline image 2]

Is this a bug? Do you know of any reason why this would happen?

Thanks,
KP