Abstract
Online search engines are an extremely popular tool for seeking information. However, the results they return sometimes exhibit undesirable or even wrongful forms of bias, such as bias with respect to gender or race. In this paper, we consider the problem of fair keyword recommendation, in which the goal is to suggest keywords that are relevant to a user's search query but exhibit less (or opposite) bias. We present a multi-objective optimization method that uses word embeddings to suggest alternative keywords for biased keywords present in a search query. We perform a qualitative analysis on pairs of subreddits from Reddit.com (e.g., r/Republican vs. r/democrats). Our results demonstrate the efficacy of the proposed method and illustrate subtle linguistic differences between subreddits.
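The abstract describes a multi-objective trade-off between relevance to the original keyword and bias of the suggested alternative. A minimal sketch of that idea is below, assuming toy hand-made embeddings and a "he − she" bias direction in the style of Bolukbasi et al.; the vectors, the `alpha` weight, and the weighted-sum scalarization are illustrative assumptions, not the paper's actual method, which a real system would replace with pretrained embeddings (e.g., GloVe) and its own objective.

```python
import numpy as np

# Toy word embeddings (illustrative only; a real system would load
# pretrained vectors such as GloVe).
emb = {
    "nurse":     np.array([0.90, 0.10, -0.60]),
    "doctor":    np.array([0.80, 0.20,  0.10]),
    "clinician": np.array([0.85, 0.15,  0.05]),
    "she":       np.array([0.10, 0.00, -1.00]),
    "he":        np.array([0.10, 0.00,  1.00]),
}

def cosine(u, v):
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

# Gender bias direction, following the "he - she" construction.
bias_dir = emb["he"] - emb["she"]
bias_dir /= np.linalg.norm(bias_dir)

def score(query, candidate, alpha=0.5):
    """Weighted-sum scalarization of the two objectives:
    relevance = cosine similarity to the query keyword (higher is better),
    bias      = |projection onto the bias direction| (lower is better).
    alpha trades off the two objectives."""
    relevance = cosine(emb[query], emb[candidate])
    bias = abs(float(emb[candidate] @ bias_dir))
    return alpha * relevance - (1 - alpha) * bias

# Recommend the candidate keyword that best balances both objectives.
candidates = ["doctor", "clinician"]
best = max(candidates, key=lambda c: score("nurse", c))
print(best)  # → clinician
```

In this toy setup, "clinician" wins because it is both close to "nurse" in embedding space and nearly orthogonal to the bias direction; sweeping `alpha` from 0 to 1 traces out the relevance-vs-bias trade-off curve.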
Acknowledgements
S. Soundarajan is supported by NSF #2047224.
Copyright information
© 2022 The Author(s), under exclusive license to Springer Nature Switzerland AG
Cite this paper
Mishra, H., Soundarajan, S. (2022). Keyword Recommendation for Fair Search. In: Boratto, L., Faralli, S., Marras, M., Stilo, G. (eds) Advances in Bias and Fairness in Information Retrieval. BIAS 2022. Communications in Computer and Information Science, vol 1610. Springer, Cham. https://doi.org/10.1007/978-3-031-09316-6_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-09315-9
Online ISBN: 978-3-031-09316-6
eBook Packages: Computer Science (R0)