Abstract
Growing interest in predicting and influencing consumer behavior has generated a parallel increase in research on Recommender Systems. Many state-of-the-art Recommender System algorithms rely on obtaining user ratings in order to predict unknown ratings. An underlying assumption in this approach is that user ratings can be treated as ground truth about the user's taste. However, users are inconsistent in giving feedback, introducing an unknown amount of noise that challenges the validity of this assumption.
In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through movie ratings. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values ranging from 0.557 to 0.8156. We also analyze how factors such as item sorting and the time of rating affect this noise.
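The kind of noise measurement described above can be illustrated with a minimal sketch: given two rating passes by the same user over the same items (a test-retest setup), the inconsistency can be summarized as the RMSE between the two passes. The function name and the sample ratings below are purely illustrative, not the study's actual data or code.

```python
import math

def rmse(trial1, trial2):
    """Root-mean-squared error between two rating passes over the same items."""
    diffs = [(a - b) ** 2 for a, b in zip(trial1, trial2)]
    return math.sqrt(sum(diffs) / len(diffs))

# Hypothetical example: one user rates the same five movies in two sessions
# on a 1-5 star scale; two of the five ratings differ by one star.
first_pass = [4, 3, 5, 2, 4]
second_pass = [4, 2, 5, 3, 4]
print(round(rmse(first_pass, second_pass), 4))  # → 0.6325
```

An RMSE of this magnitude between a user's own repeated ratings would already be comparable to the prediction error of many rating-prediction algorithms, which is what makes this noise floor relevant.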
Copyright information
© 2009 Springer-Verlag Berlin Heidelberg
Cite this paper
Amatriain, X., Pujol, J.M., Oliver, N. (2009). I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems. In: Houben, GJ., McCalla, G., Pianesi, F., Zancanaro, M. (eds) User Modeling, Adaptation, and Personalization. UMAP 2009. Lecture Notes in Computer Science, vol 5535. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02247-0_24
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-02246-3
Online ISBN: 978-3-642-02247-0