In addition to the usual binary artifacts, this is the first release where
we have installable packages for Python and R as part of
the release. I'm including instructions to test the R package below.
Holden or the other Python developers can chime in if there are special
instructions for testing the pip package.
To test the R source package, follow these steps.
1. Download the SparkR source package from
http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc5-bin/SparkR_2.1.0.tar.gz
2. Install the source package with R CMD INSTALL SparkR_2.1.0.tar.gz
3. As the SparkR package doesn't contain the Spark JARs (due to
package size limits on CRAN), we'll also need to download the binary release from
http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc5-bin/spark-2.1.0-bin-hadoop2.6.tgz
4. Launch R. You can now load SparkR with `library(SparkR)` and
test it with your applications.
5. Note that the first time a SparkSession is created, the binary
artifacts will be downloaded.
This step won't be required once 2.1.0 has been released, as
SparkR can automatically resolve and download releases.
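The steps above can be consolidated into a short shell session. This is a sketch, assuming curl, tar, and R are on the PATH and that the staging URL from steps 1 and 3 is reachable:

```shell
# Staging location for the 2.1.0-rc5 artifacts
BASE="http://people.apache.org/~pwendell/spark-releases/spark-2.1.0-rc5-bin"

# 1. Download the SparkR source package
curl -O "$BASE/SparkR_2.1.0.tar.gz"

# 2. Install the source package
R CMD INSTALL SparkR_2.1.0.tar.gz

# 3. Download and unpack the Spark binary release (the CRAN-sized
#    source package does not bundle the Spark JARs)
curl -O "$BASE/spark-2.1.0-bin-hadoop2.6.tgz"
tar -xzf spark-2.1.0-bin-hadoop2.6.tgz
export SPARK_HOME="$PWD/spark-2.1.0-bin-hadoop2.6"

# 4. Launch R, load SparkR, and start a session
R -e 'library(SparkR); sparkR.session()'
```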
On Thu, Dec 15, 2016 at 9:16 PM, Reynold Xin <firstname.lastname@example.org> wrote:
> Please vote on releasing the following candidate as Apache Spark version
> 2.1.0. The vote is open until Sun, December 18, 2016 at 21:30 PT and passes
> if a majority of at least 3 +1 PMC votes are cast.
> [ ] +1 Release this package as Apache Spark 2.1.0
> [ ] -1 Do not release this package because ...
> To learn more about Apache Spark, please see http://spark.apache.org/
> The tag to be voted on is v2.1.0-rc5
> List of JIRA tickets resolved are:
> jira/issues/?jql=project%20%3D%20SPARK%20AND%20fixVersion%20%3D%202.1.0
> The release files, including signatures, digests, etc. can be found at:
> Release artifacts are signed with the following key:
> The staging repository for this release can be found at:
> The documentation corresponding to this release can be found at:
> How can I help test this release?
> If you are a Spark user, you can help us test this release by taking an
> existing Spark workload and running on this release candidate, then
> reporting any regressions.
> What should happen to JIRA tickets still targeting 2.1.0?
> Committers should look at those and triage. Extremely important bug fixes,
> documentation, and API tweaks that impact compatibility should be worked on
> immediately. Everything else please retarget to 2.1.1 or 2.2.0.
> What happened to RC3/RC4?
> They had issues with the release packaging and as a result were skipped.