spark-user mailing list archives

From olegshirokikh <>
Subject Apache Spark standalone mode: number of cores
Date Fri, 23 Jan 2015 22:44:58 GMT
I'm trying to understand the basics of Spark internals. The Spark
documentation on submitting applications in local mode says, for the
spark-submit --master setting:

local[K] Run Spark locally with K worker threads (ideally, set this to the
number of cores on your machine).

local[*] Run Spark locally with as many worker threads as logical cores on
your machine.
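
As a quick sanity check (a plain-Python illustration, not Spark code), the thread count that local[*] resolves to corresponds to the logical core count the OS reports:

```python
import os

# os.cpu_count() reports logical cores (hyperthreads included),
# which is the count local[*] corresponds to on this machine.
logical_cores = os.cpu_count()
print(f"local[*] here is equivalent to local[{logical_cores}]")
```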
Since all the data is stored on a single local machine, there seems to be no
benefit from distributing operations on RDDs.

So what benefit does local mode get from multiple threads, and what is going
on internally when Spark utilizes several logical cores?
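
To make the question concrete, here is a simplified analogy in plain Python (not Spark code, and the number K = 4 is just an assumed example): local[K] behaves roughly like a pool of K worker threads, where each RDD partition becomes a task that one thread processes, and the per-partition results are then combined:

```python
from concurrent.futures import ThreadPoolExecutor

data = list(range(100))
K = 4  # assumed worker-thread count, as in local[4]

# Split the data into K partitions; each partition is one unit of work,
# roughly analogous to a Spark task over an RDD partition.
partitions = [data[i::K] for i in range(K)]

def process(partition):
    # Per-partition computation (here: sum of squares).
    return sum(x * x for x in partition)

# The thread pool schedules one task per partition, much as the local
# scheduler runs tasks on its K worker threads.
with ThreadPoolExecutor(max_workers=K) as pool:
    partial_sums = list(pool.map(process, partitions))

total = sum(partial_sums)  # combine partial results, like a reduce
print(total)  # sum of squares of 0..99 -> 328350
```

(Note that in CPython the GIL limits true CPU parallelism for pure-Python threads; the sketch only illustrates the task-per-partition scheduling model, not Spark's actual execution engine.)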
