hbase-user mailing list archives

From Matthew Dunn <mattdun...@hotmail.com>
Subject Hadoop on PVFS2
Date Mon, 26 Jul 2010 20:03:20 GMT

Hello, I'm doing research comparing HDFS and PVFS2, so I need to get Hadoop
jobs running on top of PVFS2.

Currently, I have PVFS2 mounted so it looks like a local filesystem on all the
computers in my cluster (7 total: 1 master, 6 slaves). So, in core-site.xml,
fs.default.name is set to "file:///home/matt/mnt/pvfs2" (where pvfs2 is
mounted), and hadoop.tmp.dir is set to a local folder. In mapred-site.xml,
mapred.local.dir points to a local folder, while mapred.temp.dir and
mapred.system.dir point into the shared pvfs2 directory.
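For reference, here's roughly what those settings look like (a sketch of the
configuration described above; the concrete local paths and the directory
layout under the pvfs2 mount are assumptions, not my exact values):

core-site.xml:

    <configuration>
      <!-- Default filesystem is the PVFS2 mount, accessed through
           Hadoop's local (file://) filesystem driver -->
      <property>
        <name>fs.default.name</name>
        <value>file:///home/matt/mnt/pvfs2</value>
      </property>
      <!-- Scratch space on local disk (path assumed) -->
      <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/matt/hadoop-tmp</value>
      </property>
    </configuration>

mapred-site.xml:

    <configuration>
      <!-- Per-node local storage (path assumed) -->
      <property>
        <name>mapred.local.dir</name>
        <value>/home/matt/hadoop-local</value>
      </property>
      <!-- These two live on the shared PVFS2 mount (paths assumed) -->
      <property>
        <name>mapred.temp.dir</name>
        <value>/home/matt/mnt/pvfs2/mapred/temp</value>
      </property>
      <property>
        <name>mapred.system.dir</name>
        <value>/home/matt/mnt/pvfs2/mapred/system</value>
      </property>
    </configuration>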

With this configuration, when I start the MapReduce daemons with
bin/start-mapred.sh, the jobtracker and tasktrackers start properly, and I can
run a wordcount. However, most other jobs, including the benchmarks I want to
use (take mapredtest as an example), do not work.
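Concretely, what I'm running looks like this (jar names are for Hadoop 0.20.x,
and the input/output paths and mapredtest arguments are illustrative rather
than my exact values):

    # start the jobtracker and tasktrackers
    bin/start-mapred.sh

    # this one works
    bin/hadoop jar hadoop-*-examples.jar wordcount input output

    # this one (and the other benchmarks) fails
    bin/hadoop jar hadoop-*-test.jar mapredtest 10 10000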

For some reason, temporary files that the jobtracker should write to the
shared directory are instead being written to the local Hadoop folder, which
is odd since nothing in my configuration files points there. The job then
fails because the tasktrackers can't find those files; they need to be in the
shared location (or at least that's what appears to be happening).

Can anyone help me out?
-- 
View this message in context: http://old.nabble.com/Hadoop-on-PVFS2-tp29270163p29270163.html
Sent from the HBase User mailing list archive at Nabble.com.

