Keyword Recommendation for Fair Search

  • Conference paper
  • First Online:
Advances in Bias and Fairness in Information Retrieval (BIAS 2022)

Part of the book series: Communications in Computer and Information Science (CCIS, volume 1610)

Abstract

Online search engines are an extremely popular tool for seeking information. However, the results returned sometimes exhibit undesirable or even wrongful forms of bias, such as with respect to gender or race. In this paper, we consider the problem of fair keyword recommendation, in which the goal is to suggest keywords that are relevant to a user’s search query, but exhibit less (or opposite) bias. We present a multi-objective optimization method that uses word embeddings to suggest alternate keywords for biased keywords present in a search query. We perform a qualitative analysis on pairs of subReddits from Reddit.com (r/Republican vs. r/democrats). Our results demonstrate the efficacy of the proposed method and illustrate subtle linguistic differences between subReddits.
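
The full method is not reproduced on this page, but the abstract describes scoring candidate replacement keywords on two competing objectives: relevance to the original query term and reduced bias, both derived from word embeddings. As a rough, hypothetical sketch only, not the authors' implementation, a scalarized version of that trade-off over toy vectors might look like the following; the bias direction, the weight lam, the candidate vocabulary, and the recommend helper are all illustrative assumptions:

import numpy as np

# Toy word vectors; in practice these would come from pre-trained
# embeddings such as GloVe. The values below are invented for illustration.
emb = {
    "programmer": np.array([0.9, 0.1, 0.3]),
    "developer":  np.array([0.8, 0.2, 0.2]),
    "coder":      np.array([0.7, 0.1, 0.4]),
    "homemaker":  np.array([0.1, 0.9, 0.1]),
    "he":         np.array([0.5, 0.0, 0.9]),
    "she":        np.array([0.5, 0.0, -0.9]),
}

def cos(u, v):
    # Cosine similarity between two embedding vectors.
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Hypothetical bias direction: the normalized difference between a pair of
# gendered anchor words, a common construction in the embedding-debiasing
# literature. The paper itself may define and measure bias differently.
bias_dir = emb["he"] - emb["she"]
bias_dir = bias_dir / np.linalg.norm(bias_dir)

def bias(word):
    # Magnitude of the word's projection onto the bias direction.
    return abs(cos(emb[word], bias_dir))

def recommend(query_word, candidates, lam=0.5):
    # Rank candidates by relevance to the query word minus a bias penalty:
    # a single weighted score standing in for the multi-objective
    # optimization described in the abstract.
    scored = [(c, cos(emb[query_word], emb[c]) - lam * bias(c))
              for c in candidates if c != query_word]
    return sorted(scored, key=lambda pair: -pair[1])

print(recommend("programmer", ["developer", "coder", "homemaker"]))

In the paper's multi-objective formulation the two terms are presumably traded off explicitly rather than collapsed into one weighted score; the released code linked in the Notes below would be the authoritative reference.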

Notes

  1. https://github.com/harshdsdh/fairKR.

Acknowledgements

S. Soundarajan is supported by NSF #2047224.

Author information

Corresponding author

Correspondence to Harshit Mishra.

Copyright information

© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Mishra, H., Soundarajan, S. (2022). Keyword Recommendation for Fair Search. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds) Advances in Bias and Fairness in Information Retrieval. BIAS 2022. Communications in Computer and Information Science, vol 1610. Springer, Cham. https://doi.org/10.1007/978-3-031-09316-6_12

  • DOI: https://doi.org/10.1007/978-3-031-09316-6_12

  • Published:

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-09315-9

  • Online ISBN: 978-3-031-09316-6

  • eBook Packages: Computer Science, Computer Science (R0)
