hadoop-mapreduce-dev mailing list archives

From yuliya Feldman <yufeld...@yahoo.com.INVALID>
Subject Re: Using hadoop with other distributed filesystems
Date Thu, 18 Dec 2014 07:38:48 GMT
You forgot one important property:
fs.<yourfs>.impl, which maps your scheme to the class that implements your FS.
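For a POSIX-mountable FS like yours, that class can be fairly thin. A minimal, untested sketch (the "foo" scheme and the FooFileSystem name are placeholders, not anything from your setup): since the FS is already mounted at a local path on every node, you can delegate all I/O to the local-FS code paths and only advertise your own scheme:

import java.net.URI;

import org.apache.hadoop.fs.RawLocalFileSystem;

// Hypothetical sketch: expose a POSIX-mounted DFS under its own "foo" scheme.
// All real I/O is delegated to RawLocalFileSystem, which works because the
// DFS is mounted as an ordinary local directory (e.g. /myfs) on every node.
public class FooFileSystem extends RawLocalFileSystem {

  private static final URI NAME = URI.create("foo:///");

  @Override
  public URI getUri() {
    return NAME; // advertise foo:/// instead of file:///
  }

  @Override
  public String getScheme() {
    return "foo"; // lets FileSystem resolve foo:// URIs to this class
  }
}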

The way you were setting up the other properties also looks like local-FS usage: you
should probably not use file:///, but your own FS prefix, foo:/// (or a full URI).
Does your FS use a NameNode and DataNode, or is it different? If it is different, you don't
need to try to bring those up; if it does use an NN and DN, then you need to define the URI
in fs.default.name and/or fs.defaultFS (foo://namenode:8030).
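Put together, core-site.xml would then carry something like the following (again a sketch: "foo" and com.example.FooFileSystem stand in for your real scheme and implementation class):

fs.foo.impl -> com.example.FooFileSystem
fs.defaultFS -> foo:///
fs.default.name -> foo:///

With fs.defaultFS no longer pointing at file:/// (or hdfs://), you would not start the HDFS NameNode and DataNode daemons at all.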

 

     From: Behrooz Shafiee <shafiee01@gmail.com>
 To: mapreduce-dev@hadoop.apache.org 
 Sent: Wednesday, December 17, 2014 5:05 PM
 Subject: Using hadoop with other distributed filesystems
   
Hello folks,

I have developed my own distributed file system and I want to try it with
Hadoop MapReduce. It is a POSIX-compatible file system and can be mounted
under a directory, e.g. "/myfs". I was wondering how I can configure Hadoop
to use my own FS instead of HDFS. Which configuration settings need to be
changed, or which source files should I modify? Searching Google, I came
across a sample of using Lustre with Hadoop and tried to apply it, but it
failed.

I set up a cluster, mounted my own filesystem under /myfs on all of my
nodes, and changed core-site.xml and mapred-site.xml as follows:

core-site.xml:

fs.default.name -> file:///
fs.defaultFS -> file:///
hadoop.tmp.dir -> /myfs


in mapred-site.xml:

mapreduce.jobtracker.staging.root.dir -> /myfs/user
mapred.system.dir -> /myfs/system
mapred.local.dir -> /myfs/mapred_${host.name}

and finally, hadoop-env.sh:

added "-Dhost.name=`hostname -s`" to  HADOOP_OPTS
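(that is, something like the following line; I'm paraphrasing my hadoop-env.sh:

export HADOOP_OPTS="$HADOOP_OPTS -Dhost.name=`hostname -s`"

so that ${host.name} in mapred.local.dir expands per node.)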

However, when I try to start my namenode, I get this error:

2014-12-17 19:44:35,902 FATAL
org.apache.hadoop.hdfs.server.namenode.NameNode: Failed to start namenode.
java.lang.IllegalArgumentException: Invalid URI for NameNode address (check
fs.defaultFS): file:///home/kos/msthesis/BFS/mountdir has no authority.
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:423)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getAddress(NameNode.java:413)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.getRpcServerAddress(NameNode.java:464)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.loginAsNameNodeUser(NameNode.java:564)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.initialize(NameNode.java:584)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:762)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.<init>(NameNode.java:746)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.createNameNode(NameNode.java:1438)
        at
org.apache.hadoop.hdfs.server.namenode.NameNode.main(NameNode.java:1504)
2014-12-17 19:44:35,914 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1

When starting the datanodes, I get this error:
2014-12-17 20:02:34,028 FATAL
org.apache.hadoop.hdfs.server.datanode.DataNode: Exception in secureMain
java.io.IOException: Incorrect configuration: namenode address
dfs.namenode.servicerpc-address or dfs.namenode.rpc-address is not
configured.
        at
org.apache.hadoop.hdfs.DFSUtil.getNNServiceRpcAddressesForCluster(DFSUtil.java:866)
        at
org.apache.hadoop.hdfs.server.datanode.BlockPoolManager.refreshNamenodes(BlockPoolManager.java:155)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.startDataNode(DataNode.java:1074)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.<init>(DataNode.java:415)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.makeInstance(DataNode.java:2268)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.instantiateDataNode(DataNode.java:2155)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.createDataNode(DataNode.java:2202)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.secureMain(DataNode.java:2378)
        at
org.apache.hadoop.hdfs.server.datanode.DataNode.main(DataNode.java:2402)
2014-12-17 20:02:34,036 INFO org.apache.hadoop.util.ExitUtil: Exiting with
status 1


I would really appreciate it if anyone could help with these problems.
Thanks in advance,

-- 
Behrooz


  