spark-issues mailing list archives

From "DeepakVohra (JIRA)" <>
Subject [jira] [Commented] (SPARK-2356) Exception: Could not locate executable null\bin\winutils.exe in the Hadoop
Date Sun, 01 Feb 2015 15:45:35 GMT


DeepakVohra commented on SPARK-2356:

Thanks Sean. 

HADOOP_CONF_DIR shouldn't be required to be set if Hadoop is not used. 

Hadoop doesn't even get installed on Windows.
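
The error comes from Hadoop's `Shell` class, which builds the winutils path from the `hadoop.home.dir` system property (falling back to the `HADOOP_HOME` environment variable); when neither is set, the path becomes the literal `null\bin\winutils.exe` seen in the report. A commonly used workaround, sketched here with an illustrative directory path (not taken from this thread), is to point `hadoop.home.dir` at a folder containing `bin\winutils.exe` before any SparkContext is created:

```java
public class WinutilsWorkaround {
    public static void main(String[] args) {
        // Illustrative path: a directory expected to contain bin\winutils.exe.
        // Hadoop's Shell class reads this property (falling back to HADOOP_HOME)
        // when resolving winutils; with both unset, the resolved path is
        // "null\bin\winutils.exe", exactly as in the reported exception.
        System.setProperty("hadoop.home.dir", "C:\\hadoop");

        // Any SparkContext created after this point sees the property.
        System.out.println(System.getProperty("hadoop.home.dir"));
    }
}
```

This only silences the winutils lookup; it does not install or require Hadoop itself.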

> Exception: Could not locate executable null\bin\winutils.exe in the Hadoop 
> ---------------------------------------------------------------------------
>                 Key: SPARK-2356
>                 URL:
>             Project: Spark
>          Issue Type: Bug
>          Components: Spark Core
>    Affects Versions: 1.0.0
>            Reporter: Kostiantyn Kudriavtsev
>            Priority: Critical
> I'm trying to run some transformations on Spark. It works fine on a cluster (YARN, Linux
machines). However, when I try to run it on a local machine (Windows 7) under a unit test,
I get errors (I don't use Hadoop; I read the file from the local filesystem):
> {code}
> 14/07/02 19:59:31 WARN NativeCodeLoader: Unable to load native-hadoop library for your
platform... using builtin-java classes where applicable
> 14/07/02 19:59:31 ERROR Shell: Failed to locate the winutils binary in the hadoop binary
> Could not locate executable null\bin\winutils.exe in the Hadoop
> 	at org.apache.hadoop.util.Shell.getQualifiedBinPath(
> 	at org.apache.hadoop.util.Shell.getWinUtilsPath(
> 	at org.apache.hadoop.util.Shell.<clinit>(
> 	at org.apache.hadoop.util.StringUtils.<clinit>(
> 	at
> 	at<init>(
> 	at
> 	at
> 	at
> 	at org.apache.spark.deploy.SparkHadoopUtil.<init>(SparkHadoopUtil.scala:36)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<init>(SparkHadoopUtil.scala:109)
> 	at org.apache.spark.deploy.SparkHadoopUtil$.<clinit>(SparkHadoopUtil.scala)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:228)
> 	at org.apache.spark.SparkContext.<init>(SparkContext.scala:97)
> {code}
> This happens because the Hadoop config is initialized every time a Spark context is created,
regardless of whether Hadoop is required.
> I propose adding a flag to indicate whether the Hadoop config is required (or starting
this configuration manually)
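
The flag proposed above would amount to making the Hadoop configuration lazy: build it only when something actually asks for it, instead of eagerly in the SparkContext constructor. A minimal sketch of that pattern in plain Java, using hypothetical names (`LazyHadoopConfig`, the loader argument) that do not exist in Spark:

```java
import java.util.function.Supplier;

// Hypothetical sketch of the proposed behavior: the Hadoop configuration
// (which triggers the winutils lookup on Windows) is constructed only on
// first use, so contexts that never touch Hadoop never hit the error.
public class LazyHadoopConfig {
    private Object config;                 // stands in for Hadoop's Configuration
    private final Supplier<Object> loader; // the expensive initialization step

    public LazyHadoopConfig(Supplier<Object> loader) {
        this.loader = loader;
    }

    public synchronized Object get() {
        if (config == null) {
            config = loader.get();         // winutils would be resolved only here
        }
        return config;
    }
}
```

With this shape, a local-filesystem unit test that never calls `get()` would never execute the winutils path resolution.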

This message was sent by Atlassian JIRA
