Hi Guys,
I've got Spark Streaming set up for a low-data-rate system (using Spark's features for analysis, rather than for high throughput). Messages come in throughout the day at around 1-20 per second (a finger-in-the-air estimate...not analysed yet). In the Spark Streaming UI for the application, I'm seeing the following after 17 hours.


Statistics over last 100 processed batches

Receiver Statistics
(columns: Receiver | Status | Location | Records in last batch [2015/01/21 11:23:18] | Minimum rate [records/sec] | Median rate [records/sec] | Maximum rate [records/sec] | Last Error)
[the table's values didn't survive the paste]

Batch Processing Statistics
[the table's values didn't survive the paste]

Are these numbers "normal"? I was wondering what the "scheduling delay" and "total delay" terms mean, and whether it's normal for them to be around 9 hours.

I've got a standalone Spark master and 4 Spark worker nodes. The streaming app has been given 4 cores, and it's using 1 core per worker node. The streaming app is submitted from a 5th machine, and that machine has nothing but the driver running on it. The worker nodes run alongside Cassandra (and read from and write to it).
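For reference, I submit it roughly like this (a sketch only — the master host, class name, and jar below are placeholders, not the real ones):

```shell
# Hypothetical submission sketch: host, class, and jar names are placeholders.
# --total-executor-cores 4 caps the app at 4 cores across the standalone
# cluster; the master spreads them out, so it ends up as 1 core per worker.
# Client deploy mode keeps the driver on the 5th (submitting) machine.
spark-submit \
  --master spark://spark-master:7077 \
  --deploy-mode client \
  --total-executor-cores 4 \
  --class com.example.StreamingApp \
  streaming-app.jar
```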

Any insights would be appreciated.