mahout-user mailing list archives

From Sean Owen <sro...@gmail.com>
Subject Re: Training Data and Precision/Recall evaluation
Date Wed, 30 May 2012 10:19:05 GMT
Yes, because it tests on a user-by-user basis. There's not the same notion
of test/training set. Each user is split individually one at a time,
according to the "at" parameter.
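The per-user split described above can be illustrated with a small, self-contained sketch (plain Python, not the Mahout API; `precision_recall_at` and the `recommend` callback are hypothetical names for illustration). For each user, the `at` top-rated items are held out as that user's relevant set, recommendations are made from the remaining data, and precision/recall are averaged over users:

```python
def precision_recall_at(prefs, recommend, at):
    """Per-user IR evaluation sketch: for each user, hold out their
    `at` highest-rated items as the relevant set, recommend from the
    rest, and average precision/recall across users.

    prefs: {user: {item: rating}}
    recommend: callable(user, training_items, at) -> list of items
    """
    precisions, recalls = [], []
    for user, items in prefs.items():
        # Relevant set: this user's `at` top-rated items.
        ranked = sorted(items, key=items.get, reverse=True)
        relevant = set(ranked[:at])
        # Training data for this user excludes the held-out items.
        training = {i: r for i, r in items.items() if i not in relevant}
        recs = recommend(user, training, at)
        hits = len(set(recs) & relevant)
        precisions.append(hits / len(recs) if recs else 0.0)
        recalls.append(hits / len(relevant) if relevant else 0.0)
    n = len(precisions)
    return sum(precisions) / n, sum(recalls) / n
```

This is why there is no global training/test split parameter: the split is implied, one user at a time, by `at`.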

On Wed, May 30, 2012 at 10:16 AM, Daniel Quach <danquach@cs.ucla.edu> wrote:

> I want to use the GenericRecommenderIRStatsEvaluator to get
> precision/recall scores for my recommenders.
>
> I know with the other recommender evaluators, you can specify the split of
> training data and test data. However, I see no such parameter for the IR
> Stats evaluator. Am I missing something here, or is it not sensible to
> split the data for this kind of evaluation?
