mahout-user mailing list archives

From Brian Rogoff <>
Subject Questions on compressed input, custom tokenizers, and feature selection
Date Sat, 16 Nov 2013 00:05:23 GMT
    I'm using Mahout 0.7 with Hadoop 0.20.2-cdh3u2, evaluating it for use
within our company. I have a few questions.

    I'd like to use Mahout classification on some data of mine that is
stored as gzipped files, and I'd like to create the sequence data directly
from those compressed files. Is there some file filter class I can use
which will enable me to work transparently from the compressed data?

    In case that isn't clear, consider the 20news example in the
mahout-distribution-0.7. If I create a parallel directory to 20news-all
where all of the leaf files are gzipped, say gzipped-news-all, I'd like to

./bin/mahout seqdirectory -i ${WORK_DIR}/gzipped-news-all -o

perhaps with another argument to indicate that the input data is
compressed, and have gzipped-news-seq be identical to the 20news-seq dir
that results from running

./bin/mahout seqdirectory -i ${WORK_DIR}/20news-all -o
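
To make the ask concrete, the per-file step I'd want seqdirectory to
perform is just transparent gunzipping before a leaf file's bytes become
the SequenceFile value. A self-contained sketch of that step (plain
java.util.zip, no Hadoop or Mahout classes; GzipText is an illustrative
name, not anything in Mahout):

```java
import java.io.ByteArrayInputStream;
import java.io.ByteArrayOutputStream;
import java.io.IOException;
import java.io.InputStream;
import java.nio.charset.StandardCharsets;
import java.util.zip.GZIPInputStream;
import java.util.zip.GZIPOutputStream;

public class GzipText {

    // Read an entire gzipped stream into a String, as a seqdirectory-style
    // directory walker might do for each .gz leaf file.
    public static String readGzipped(InputStream raw) throws IOException {
        try (GZIPInputStream gz = new GZIPInputStream(raw);
             ByteArrayOutputStream out = new ByteArrayOutputStream()) {
            byte[] buf = new byte[8192];
            int n;
            while ((n = gz.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            return out.toString(StandardCharsets.UTF_8.name());
        }
    }

    // Helper for the demo below: gzip a string in memory.
    public static byte[] gzip(String text) throws IOException {
        ByteArrayOutputStream bytes = new ByteArrayOutputStream();
        try (GZIPOutputStream gz = new GZIPOutputStream(bytes)) {
            gz.write(text.getBytes(StandardCharsets.UTF_8));
        }
        return bytes.toByteArray();
    }

    public static void main(String[] args) throws IOException {
        String doc = "From: someone@example.com\nSubject: test\n\nbody";
        String roundTripped =
            readGzipped(new ByteArrayInputStream(gzip(doc)));
        System.out.println(roundTripped.equals(doc)); // prints "true"
    }
}
```

If no such filter exists, I could presumably pre-process with something
like this and write the SequenceFile myself, but I'd rather not duplicate
what seqdirectory already does.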

    I'd also like to see how to substitute a custom tokenizer into this
flow, if someone could point me to an example, and I'd like to know whether
there are examples of tweaking the feature-selection algorithms.
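
On the tokenizer side, my understanding is that seq2sparse takes a Lucene
Analyzer class name via its -a/--analyzerName option, so a custom
tokenizer would live inside an Analyzer subclass. The core logic I have in
mind looks like this self-contained sketch (SimpleTokenizer is an
illustrative name, not a Mahout or Lucene class; the lowercase/split/
min-length rules are just placeholders for my real rules):

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Locale;

public class SimpleTokenizer {

    // Lowercase, split on non-alphanumerics, and drop short tokens --
    // the kind of logic one would wrap in a Lucene Analyzer subclass
    // and hand to seq2sparse via --analyzerName.
    public static List<String> tokenize(String text, int minLength) {
        List<String> tokens = new ArrayList<String>();
        for (String tok : text.toLowerCase(Locale.ROOT).split("[^a-z0-9]+")) {
            if (tok.length() >= minLength) {
                tokens.add(tok);
            }
        }
        return tokens;
    }

    public static void main(String[] args) {
        System.out.println(tokenize("Mahout 0.7 on Hadoop!", 2));
        // prints "[mahout, on, hadoop]"
    }
}
```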
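
For the feature-selection part, the closest built-in knobs I've found so
far are seq2sparse's document-frequency pruning options; this is my
reading of the 0.7 option list, so corrections are welcome:

```shell
# Prune rare and near-ubiquitous terms while vectorizing (a crude,
# frequency-based form of feature selection):
#   -md / --minDF         drop terms appearing in fewer than N documents
#   -x  / --maxDFPercent  drop terms appearing in more than P% of documents
#   -ng / --maxNGramSize  generate n-grams up to this size
#   -ml / --minLLR        log-likelihood threshold for keeping n-grams
./bin/mahout seq2sparse \
  -i ${WORK_DIR}/20news-seq \
  -o ${WORK_DIR}/20news-vectors \
  -wt tfidf -md 2 -x 80 -ng 2 -ml 50
```

Is there anything beyond these pruning flags, e.g. a hook for plugging in
a scoring-based selection of my own?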

    Thanks in advance!

-- Brian
