mahout-user mailing list archives

From Stanley Ipkiss <saurabhnan...@gmail.com>
Subject Evaluation approach in AbstractDifferenceRecommenderEvaluator
Date Thu, 23 Sep 2010 23:12:56 GMT

In AbstractDifferenceRecommenderEvaluator (in o.a.m.cf.taste.impl.eval), the method processOneUser contains the following:

      if (random.nextDouble() < trainingPercentage) {
        if (trainingPrefs == null) {
          trainingPrefs = new ArrayList<Preference>(3);
        }
        trainingPrefs.add(newPref);
      } else {
        if (testPrefs == null) {
          testPrefs = new ArrayList<Preference>(3);
        }
        testPrefs.add(newPref);
      }
    }  // closes the enclosing loop over the user's preferences

Why limit the number of preferences (per user) used in the training or test
set to 3? Why not increase it to a more significant number (say, 10), or
better yet, include all of them? The evaluation results we get because of
this may not be right. I know it will be much faster by limiting it to 3, but
I was curious whether there is some other advantage that I am missing. A
sketch of the change I have in mind is below.
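
For illustration only, the change I am suggesting would look roughly like
this (a sketch, assuming, as the surrounding code in processOneUser seems to
indicate, that the user's preferences are available as a PreferenceArray
named prefs):

      if (random.nextDouble() < trainingPercentage) {
        if (trainingPrefs == null) {
          // size the list to the user's full preference count rather than 3
          trainingPrefs = new ArrayList<Preference>(prefs.length());
        }
        trainingPrefs.add(newPref);
      } else {
        if (testPrefs == null) {
          // same sizing change on the test side
          testPrefs = new ArrayList<Preference>(prefs.length());
        }
        testPrefs.add(newPref);
      }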

Also, the documentation mentions that the training % and the evaluation %
need not sum to 1. But here, for each user, every preference is put into
either the training preferences or the test preferences. This effectively
means that the training and evaluation percentages do sum to 1, since each
data point falls into exactly one of the two categories, not both. The small
standalone sketch below illustrates this.
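
To make that concrete, here is a small standalone sketch (plain Java, nothing
Mahout-specific; the 0.7 training percentage and the preference count are
made-up values) that applies the same per-preference random split to a
synthetic user and shows that the observed training and test fractions add up
to 1:

import java.util.Random;

public class SplitSumsToOne {
  public static void main(String[] args) {
    double trainingPercentage = 0.7; // example value
    int numPrefs = 100000;           // synthetic preferences for one user
    Random random = new Random(42);

    int training = 0;
    int test = 0;
    for (int i = 0; i < numPrefs; i++) {
      // every preference lands in exactly one of the two buckets
      if (random.nextDouble() < trainingPercentage) {
        training++;
      } else {
        test++;
      }
    }

    double trainingFraction = (double) training / numPrefs;
    double testFraction = (double) test / numPrefs;
    System.out.println("training fraction = " + trainingFraction);
    System.out.println("test fraction     = " + testFraction);
    System.out.println("sum               = " + (trainingFraction + testFraction));
  }
}

Since every preference falls on exactly one side of the
random.nextDouble() < trainingPercentage test, the two fractions sum to 1 by
construction, whatever value trainingPercentage takes.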
-- 
View this message in context: http://lucene.472066.n3.nabble.com/Evaluation-approach-in-AbstractDifferenceRecommenderEvaluator-tp1571032p1571032.html
Sent from the Mahout User List mailing list archive at Nabble.com.
