Abstract
With the widespread use of web applications, large amounts of textual content are generated on the Internet continuously. To analyze and mine the information contained in this text, machine reading comprehension (MRC) has received increasing attention. As an important technique, MRC can boost the business value of Internet applications. Traditional MRC models use an extractive approach and therefore perform poorly on unanswerable questions. To address this problem, we propose a clue-inspired MRC model. Specifically, we mimic the human reading comprehension process through a combination of sketchy and intensive reading. Experimental results show that the proposed model achieves better performance on several public datasets, especially on unanswerable questions.
This work is supported by the grant from the National Social Science Foundation of China (No. 21BXW047).
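The abstract does not spell out how the sketchy and intensive passes are combined. As a minimal, hypothetical sketch of one common way such a two-pass reader can be wired together (the weights beta1 and beta2, the threshold delta, and the use of the position-0 logits as the null span are assumptions for illustration, not details taken from the paper):

```python
import torch

def verify_answerability(cls_logits, start_logits, end_logits,
                         beta1=0.5, beta2=0.5, delta=0.0):
    """Combine a sketchy-reading answerability score with
    intensive-reading span scores (hypothetical weights/threshold)."""
    # External verification: margin between the "unanswerable" and
    # "answerable" logits produced by the sketchy (first) pass.
    score_ext = cls_logits[1] - cls_logits[0]

    # Internal verification: null-span score (position 0, assumed to
    # be [CLS]) minus the best non-null span score from the intensive
    # pass. Start and end are maximized independently here, a
    # simplification that ignores the start <= end constraint.
    score_null = start_logits[0] + end_logits[0]
    score_span = start_logits[1:].max() + end_logits[1:].max()
    score_int = score_null - score_span

    # Final verification: a weighted sum decides the prediction.
    v = beta1 * score_ext + beta2 * score_int
    return "unanswerable" if v.item() > delta else "answerable"

# Toy usage with random logits over a 16-token passage.
torch.manual_seed(0)
print(verify_answerability(torch.randn(2), torch.randn(16), torch.randn(16)))
```

Blending the two scores rather than trusting either pass alone is what lets this style of model abstain on unanswerable questions while still extracting spans when one exists.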
Copyright information
© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG
About this paper
Cite this paper
Kang, J., Yang, L., Sun, Y., Lin, Y., Zhang, S., Lin, H. (2024). P-Reader: A Clue-Inspired Model for Machine Reading Comprehension. In: Pan, X., Jin, T., Zhang, L.-J. (eds.) Cognitive Computing – ICCC 2023. Lecture Notes in Computer Science, vol. 14207. Springer, Cham. https://doi.org/10.1007/978-3-031-51671-9_2
DOI: https://doi.org/10.1007/978-3-031-51671-9_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-031-51670-2
Online ISBN: 978-3-031-51671-9
eBook Packages: Computer Science, Computer Science (R0)