Hi,
This build has worked before, up to and including Spark 1.6.1.
I am building Spark without the Hive jars, the idea being to use Spark as the Hive execution engine.
The usual process is:

./dev/make-distribution.sh --name "hadoop2-without-hive" --tgz "-Pyarn,hadoop-provided,hadoop-2.6,parquet-provided"
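Before passing a profile to the build, it may be worth confirming that the POM in this Spark version actually defines it. Maven's standard help plugin (mvn help:all-profiles) lists every profile as "Profile Id: <name>" lines. The check_profile helper below is my own illustrative wrapper, shown here against simulated output rather than a real mvn run:

```shell
# Sketch (assumes mvn is on PATH and you run it from the Spark source root):
#   mvn help:all-profiles | check_profile parquet-provided
# check_profile is a hypothetical helper, not part of Maven or Spark.
check_profile() {
  # $1 = profile name; stdin = output of "mvn help:all-profiles"
  if grep -q "Profile Id: $1"; then
    echo "profile exists"
  else
    echo "profile missing"
  fi
}

# Simulated help:all-profiles output for illustration only:
printf 'Profile Id: yarn\nProfile Id: hadoop-provided\n' | check_profile parquet-provided
```

If the profile is missing from the listing, the "-P" flag for it should simply be dropped from the make-distribution.sh invocation.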
However, with Spark 2.0.0 I now get this warning:
[INFO] BUILD SUCCESS
[INFO] ------------------------------------------------------------------------
[INFO] Total time: 10:08 min (Wall Clock)
[INFO] Finished at: 2016-07-27T15:07:11+01:00
[INFO] Final Memory: 98M/1909M
[INFO] ------------------------------------------------------------------------
+ rm -rf /data6/hduser/spark-2.0.0/dist
+ mkdir -p /data6/hduser/spark-2.0.0/dist/jars
+ echo 'Spark [WARNING] The requested profile "parquet-provided" could not be activated because it does not exist. built for Hadoop [WARNING] The requested profile "parquet-provided" could not be activated because it does not exist.'
+ echo 'Build flags: -Pyarn,hadoop-provided,hadoop-2.6,parquet-provided'
And this is the only .tgz file I see:
./spark-[WARNING] The requested profile "parquet-provided" could not be activated because it does not exist.-bin-hadoop2-without-hive.tgz
Any clues as to what is happening, and what the correct way of creating the build is?
My interest is in extracting from the build a jar file similar to the one below:
spark-assembly-1.3.1-hadoop2.4.0.jar
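For what it's worth, here is a guess at the mechanism behind the garbled filename (I have not traced make-distribution.sh, so the variable names below are hypothetical): the script seems to capture a version string from Maven's stdout via command substitution, so if Maven prints the "[WARNING] ... parquet-provided ..." line there, the warning text is captured along with the version and lands in the .tgz name:

```shell
# Simulated Maven output (not a real mvn invocation); the first line is the
# warning Maven prints when a requested profile does not exist:
simulated_mvn_output='[WARNING] The requested profile "parquet-provided" could not be activated because it does not exist.
2.0.0'

# Command substitution captures BOTH lines, warning included:
VERSION=$(printf '%s\n' "$simulated_mvn_output")

# The warning text then ends up embedded in the archive name:
echo "spark-${VERSION}-bin-hadoop2-without-hive.tgz"
```

If that is what is happening, dropping the non-existent profile from the build flags should make the warning, and hence the mangled filename, go away.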