Abstract
Within intelligent tutoring systems, considerable research has investigated hints, including how to generate data-driven hints, what hint content to present, and when to provide hints for optimal learning outcomes. However, less attention has been paid to how hints are presented. In this paper, we propose a new hint delivery mechanism called “Assertions” for providing unsolicited hints in a data-driven intelligent tutor. Assertions are partially-worked example steps designed to appear within a student workspace, and in the same format as student-derived steps, to show students a possible subgoal leading to the solution. We hypothesized that Assertions can help address the well-known hint avoidance problem. In systems that only provide hints upon request, hint avoidance results in students not receiving hints when they are needed. Our unsolicited Assertions do not seek to improve student help-seeking, but rather seek to ensure students receive the help they need. We contrast Assertions with Messages: text-based, unsolicited hints that appear after student inactivity. Our results show that Assertions significantly increase unsolicited hint usage compared to Messages. Further, they show a significant aptitude-treatment interaction between Assertions and prior proficiency, with Assertions leading students with low prior proficiency to generate shorter (more efficient) posttest solutions faster. We also present a clustering analysis that shows patterns of productive persistence among students with low prior knowledge when the tutor provides unsolicited help in the form of Assertions. Overall, this work provides encouraging evidence that hint presentation can significantly impact how students use hints, and that Assertions can be an effective way to address help avoidance.
Change history
02 December 2020
A Correction to this paper has been published: https://doi.org/10.1007/s40593-020-00232-0
Notes
More details on Fall 2018 student demographics at NCSU can be found at https://www.engr.ncsu.edu/ir/fast-facts/fall-2018-fast-facts/. The CSC 226 course is typically composed of about 60% sophomores, 30% juniors, 9% seniors, and 1% freshmen.
The tutor allows students to delete Assertions, but only two Assertions were deleted in the entire dataset, suggesting that students did not realize this was possible.
Note that solution length can only be calculated for complete solutions, and our data consist only of students who successfully completed the study by completing the mandatory pre- and post-test problems. N = 5 (10%) in Messages and N = 12 (16%) in Assertions did not finish the tutor. A chi-square test shows no significant difference in the completion and non-completion group sizes between the two conditions (χ2(1, N = 122) = 0.95, p = 0.33).
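The completion-rate comparison above can be sketched with a standard chi-square test of independence. The 2×2 counts below are reconstructed from the reported percentages (a hypothetical reconstruction, not the authors' raw table), and the helper name `chi2_2x2` is our own; the exact statistic depends on details such as the continuity correction, so this sketch will not necessarily reproduce the reported value of 0.95.

```python
import math

def chi2_2x2(a, b, c, d, yates=True):
    """Chi-square test of independence for a 2x2 table [[a, b], [c, d]].

    Hypothetical helper for illustration; uses the Yates continuity
    correction by default, as is conventional for 2x2 tables.
    """
    n = a + b + c + d
    # Expected counts under the independence hypothesis
    exp = [(a + b) * (a + c) / n, (a + b) * (b + d) / n,
           (c + d) * (a + c) / n, (c + d) * (b + d) / n]
    stat = 0.0
    for o, e in zip([a, b, c, d], exp):
        diff = abs(o - e)
        if yates:
            diff = max(diff - 0.5, 0.0)
        stat += diff * diff / e
    # Survival function of the chi-square distribution with 1 df:
    # P(X > x) = erfc(sqrt(x / 2))
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Reconstructed counts: Messages 45 completed / 5 not, Assertions 60 / 12
stat, p = chi2_2x2(45, 5, 60, 12)
print(f"chi2(1) = {stat:.2f}, p = {p:.2f}")
```

With these reconstructed counts the test is likewise non-significant (p > 0.05), consistent with the conclusion reported in the footnote.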
The 99th percentile of interaction action time in Fall 2018 was 99.03 s; 811 of the 260,750 interaction logs for the 100 students in the study had an action time greater than 5 min.
HJR and HNR are the proportions of hints justified and hints needed, respectively.
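As a minimal illustration of these metrics (the helper name and argument names are hypothetical; the footnote only states that the metrics are proportions, which we take here to be proportions of the hints given), the ratios can be computed as:

```python
def hint_ratios(hints_given, hints_justified, hints_needed):
    """Hypothetical helper: HJR and HNR as proportions of hints given.

    hints_justified and hints_needed are counts; since Hints Needed
    is a subset of Hints Justified, HNR <= HJR always holds.
    """
    if hints_given == 0:
        return 0.0, 0.0
    hjr = hints_justified / hints_given
    hnr = hints_needed / hints_given
    return hjr, hnr

# Example counts chosen to roughly match the Assertions-group means
hjr, hnr = hint_ratios(hints_given=40, hints_justified=36, hints_needed=32)
print(hjr, hnr)  # 0.9 0.8
```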
Shapiro-Wilk’s test on Unsolicited Hints Given for the Assertions group: W = 0.904, p < 0.001, and the Messages group: W = 0.942, p = 0.030; Shapiro-Wilk’s test on Unsolicited HJR for the Assertions group: W = 0.887, p < 0.001, and the Messages group: W = 0.959, p < 0.001; and Shapiro-Wilk’s test on Unsolicited HNR for the Assertions group: W = 0.904, p < 0.001, and the Messages group: W = 0.945, p < 0.001.
We did not test the significance of the difference between the two correlation coefficients because the samples are not independent: Hints Needed are a subset of Hints Justified.
Acknowledgments
This material is based upon work supported by the National Science Foundation under Grant No. 1726550, “Integrated Data-driven Technologies for Individualized Instruction in STEM Learning Environments,” led by Min Chi and Tiffany Barnes. We would like to thank Nicholas Lytle (nalytle@ncsu.edu) for suggesting edits to the introduction to enhance its clarity.
Appendices
Appendix A: Unsolicited Hint Metrics for each prior proficiency group
| Prior Proficiency | #Given: Assertions Mean (SD) | #Given: Messages Mean (SD) | HJR: Assertions Mean (SD) | HJR: Messages Mean (SD) | HNR: Assertions Mean (SD) | HNR: Messages Mean (SD) |
|---|---|---|---|---|---|---|
| Low | 48.92 (11.76)* | 35.71 (10.52) | 0.93 (0.09)* | 0.63 (0.18) | 0.83 (0.08)* | 0.61 (0.17) |
| High | 48.67 (7.80)* | 30.93 (14.00) | 0.92 (0.07)* | 0.63 (0.15) | 0.82 (0.10)* | 0.62 (0.16) |
| All | 48.82 (9.85)* | 32.74 (10.64) | 0.93 (0.07)* | 0.63 (0.18) | 0.82 (0.09)* | 0.62 (0.17) |
Appendix B: Comparison of Posttest Accuracy between the two conditions
| Prior Proficiency | Assertions Mean (SD) | Messages Mean (SD) |
|---|---|---|
| Low | 0.74 (0.10) | 0.72 (0.08) |
| High | 0.75 (0.09) | 0.73 (0.08) |
| All | 0.74 (0.10) | 0.73 (0.08) |
About this article
Cite this article
Maniktala, M., Cody, C., Barnes, T. et al. Avoiding Help Avoidance: Using Interface Design Changes to Promote Unsolicited Hint Usage in an Intelligent Tutor. Int J Artif Intell Educ 30, 637–667 (2020). https://doi.org/10.1007/s40593-020-00213-3