To be clear on what your configuration will do:
- SPARK_DAEMON_MEMORY=8g gives your standalone master and worker scheduler daemons a lot of memory. These daemons do not affect the actual amount of usable memory given to executors or to your driver, however, so you probably don't need to set this.
- SPARK_WORKER_MEMORY=8g allows each worker to offer up to 8g of memory to the executors it launches. By itself this does not give executors more memory; it only raises the ceiling on what they can request. This setting is necessary.
- *_JAVA_OPTS should not be used to set memory parameters, as they may or may not override their *_MEMORY counterparts.
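Concretely, the points above boil down to a small conf/spark-env.sh on each node. This is a sketch, not your exact file; the 8g figure is just the value from your question:

```shell
# conf/spark-env.sh -- sketch for a standalone cluster

# Let each worker offer up to 8g total to the executors it launches.
export SPARK_WORKER_MEMORY=8g

# Usually unnecessary: the default daemon heap is plenty for the
# master/worker scheduler processes themselves.
# export SPARK_DAEMON_MEMORY=8g

# Avoid putting -Xmx / -Xms in SPARK_DAEMON_JAVA_OPTS or similar;
# they may or may not override the *_MEMORY settings above.
```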
The two things you are not configuring are the amount of memory for your driver (for a 0.8.1 spark-shell, you must use SPARK_MEM) and the amount of memory given to each executor (spark.executor.memory). By default, Spark executors are only 512MB, so you will probably want to increase this, up to the value of SPARK_WORKER_MEMORY. That gives you one executor per worker using all available memory, which is probably what you want for testing purposes (it is less ideal for sharing a cluster).
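Putting that together for a 0.8.1 spark-shell, a sketch might look like the following (the master URL is illustrative, and SPARK_MEM here sizes the shell's own JVM, i.e. the driver):

```shell
# Sketch: launching a 0.8.1 spark-shell against a standalone master.
export SPARK_MEM=8g                       # memory for the shell/driver JVM
MASTER=spark://master:7077 ./spark-shell  # replace with your master URL
```

For a compiled application (rather than the shell), the usual 0.8.x route is to set the spark.executor.memory system property before the SparkContext is created, e.g. System.setProperty("spark.executor.memory", "8g").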