mahout-user mailing list archives

From Saikat Kanjilal <sxk1...@hotmail.com>
Subject RE: evaluating distributed recommendation results
Date Fri, 07 Sep 2012 22:39:10 GMT

You could do this several ways:
1) You could see whether users respond to one style of recommendations versus another, meaning did they click on a recommendation produced with Tanimoto similarity versus one produced with log-likelihood.
2) You could also use something like DCG (http://en.wikipedia.org/wiki/Discounted_cumulative_gain) to measure how good each algorithm's ranking is compared to another's.
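To make option 2 concrete, here is a minimal sketch of DCG (and its normalized form, NDCG) for comparing two ranked recommendation lists. The relevance judgments and the cutoff k are illustrative assumptions, not anything from Mahout itself — in practice you would derive relevance from held-out interactions or click data:

```python
import math

def dcg_at_k(relevances, k):
    """Discounted cumulative gain: relevance discounted by log2 of rank."""
    # i = 0 gives log2(2) = 1, so the top-ranked item is not discounted.
    return sum(rel / math.log2(i + 2)
               for i, rel in enumerate(relevances[:k]))

def ndcg_at_k(relevances, k):
    """Normalized DCG: DCG divided by the ideal (best possible) DCG."""
    ideal = dcg_at_k(sorted(relevances, reverse=True), k)
    return dcg_at_k(relevances, k) / ideal if ideal > 0 else 0.0

# Hypothetical relevance judgments (1 = user clicked, 0 = did not) for the
# same held-out items, ranked by two different similarity measures.
tanimoto_rels = [1, 0, 1, 0, 0]
loglike_rels = [0, 1, 1, 0, 0]
print(ndcg_at_k(tanimoto_rels, 5))
print(ndcg_at_k(loglike_rels, 5))
```

Whichever similarity measure yields the higher NDCG over many users is ranking the relevant items nearer the top of its lists.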

> From: goodieboy@gmail.com
> Date: Fri, 7 Sep 2012 18:22:47 -0400
> Subject: evaluating distributed recommendation results
> To: user@mahout.apache.org
> 
> Hi,
> 
> I'm generating item similarities and recommendations using the
> distributed jobs. Is there a way I can evaluate the results? The MIA
> book describes how to do this with the non-distributed recommenders,
> but I can't find anything on evaluating the distributed stuff. Any
> tips on doing this?
> 
> Thanks,
> Matt