
I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems

Conference paper
User Modeling, Adaptation, and Personalization (UMAP 2009)

Part of the book series: Lecture Notes in Computer Science (LNISA, volume 5535)

Abstract

Recent growing interest in predicting and influencing consumer behavior has generated a parallel increase in research efforts on Recommender Systems. Many of the state-of-the-art Recommender Systems algorithms rely on obtaining user ratings in order to later predict unknown ratings. An underlying assumption in this approach is that the user ratings can be treated as ground truth of the user’s taste. However, users are inconsistent in giving their feedback, thus introducing an unknown amount of noise that challenges the validity of this assumption.

In this paper, we tackle the problem of analyzing and characterizing the noise in user feedback through ratings of movies. We present a user study aimed at quantifying the noise in user ratings that is due to inconsistencies. We measure RMSE values that range from 0.557 to 0.8156. We also analyze how factors such as item sorting and time of rating affect this noise.
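
The RMSE figures quoted above are test-retest errors: a participant rates the same movies in more than one trial, and the root mean squared difference between the repeated ratings is taken as a measure of the natural noise in the feedback. The following is a minimal Python sketch of that computation; it is not taken from the paper, and the function name and sample ratings are hypothetical, intended only to illustrate the metric.

import math

def test_retest_rmse(first_pass, second_pass):
    # Root mean squared difference between two rating passes by the same user
    # over the same items: a simple measure of rating inconsistency.
    assert len(first_pass) == len(second_pass) and first_pass
    squared_diffs = [(a - b) ** 2 for a, b in zip(first_pass, second_pass)]
    return math.sqrt(sum(squared_diffs) / len(squared_diffs))

# Hypothetical re-ratings of eight movies on a 1-5 star scale (illustrative only).
trial_1 = [4, 3, 5, 2, 4, 1, 3, 5]
trial_2 = [4, 4, 5, 2, 3, 1, 2, 5]
print(f"test-retest RMSE: {test_retest_rmse(trial_1, trial_2):.3f}")  # ~0.61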

Copyright information

© 2009 Springer-Verlag Berlin Heidelberg

About this paper

Cite this paper

Amatriain, X., Pujol, J.M., Oliver, N. (2009). I Like It... I Like It Not: Evaluating User Ratings Noise in Recommender Systems. In: Houben, GJ., McCalla, G., Pianesi, F., Zancanaro, M. (eds) User Modeling, Adaptation, and Personalization. UMAP 2009. Lecture Notes in Computer Science, vol 5535. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-02247-0_24

Download citation

  • DOI: https://doi.org/10.1007/978-3-642-02247-0_24

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-642-02246-3

  • Online ISBN: 978-3-642-02247-0

  • eBook Packages: Computer Science, Computer Science (R0)
