Describing the Framework for AI Tool Assessment in Mental Health and Applying It to a Generative AI Obsessive-Compulsive Disorder Platform: Tutorial
- PMID: 39423001
- PMCID: PMC11530715
- DOI: 10.2196/62963
Abstract
As artificial intelligence (AI) technologies occupy a bigger role in psychiatric and psychological care and become the object of increased research attention, industry investment, and public scrutiny, tools for evaluating their standards of clinical quality, ethics, and user-centricity have become essential. In this paper, we first review the history of rating systems used to evaluate AI mental health interventions. We then describe the recently introduced Framework for AI Tool Assessment in Mental Health (FAITA-Mental Health), whose scoring system allows users to grade AI mental health platforms on key domains, including credibility, user experience, crisis management, user agency, health equity, and transparency. Finally, we demonstrate the use of the FAITA-Mental Health scale by systematically applying it to OCD Coach, a generative AI tool readily available on the ChatGPT store and designed to help manage the symptoms of obsessive-compulsive disorder. The results offer insights into the utility and limitations of FAITA-Mental Health when applied to "real-world" generative AI platforms in the mental health space, suggesting that the framework effectively identifies key strengths and gaps in AI-driven mental health tools, particularly in areas such as credibility, user experience, and acute crisis management. The results also highlight the need for stringent standards to guide AI integration into mental health care in a manner that is not only effective but also safe and protective of users' rights and welfare.
Keywords: ChatGPT; artificial intelligence; chatbots; digital health; generative AI; generative artificial intelligence; large language model; machine learning; obsessive-compulsive disorder; psychotherapy; telemedicine.
©Ashleigh Golden, Elias Aboujaoude. Originally published in JMIR Formative Research (https://formative.jmir.org), 18.10.2024.
Conflict of interest statement
Conflicts of Interest: None declared.
Similar articles
- User Intentions to Use ChatGPT for Self-Diagnosis and Health-Related Purposes: Cross-sectional Survey Study. JMIR Hum Factors. 2023 May 17;10:e47564. doi: 10.2196/47564. PMID: 37195756. Free PMC article.
- A Novel Cognitive Behavioral Therapy-Based Generative AI Tool (Socrates 2.0) to Facilitate Socratic Dialogue: Protocol for a Mixed Methods Feasibility Study. JMIR Res Protoc. 2024 Oct 10;13:e58195. doi: 10.2196/58195. PMID: 39388255. Free PMC article.
- Could artificial intelligence write mental health nursing care plans? J Psychiatr Ment Health Nurs. 2024 Feb;31(1):79-86. doi: 10.1111/jpm.12965. Epub 2023 Aug 4. PMID: 37538021.
- An Introduction to Generative Artificial Intelligence in Mental Health Care: Considerations and Guidance. Curr Psychiatry Rep. 2023 Dec;25(12):839-846. doi: 10.1007/s11920-023-01477-x. Epub 2023 Nov 30. PMID: 38032442. Review.
- The Impact of Generative Conversational Artificial Intelligence on the Lesbian, Gay, Bisexual, Transgender, and Queer Community: Scoping Review. J Med Internet Res. 2023 Dec 6;25:e52091. doi: 10.2196/52091. PMID: 37864350. Free PMC article. Review.