Link to original content: http://pubmed.ncbi.nlm.nih.gov/38609507/
Npj Ment Health Res. 2024 Apr 2;3(1):12. doi: 10.1038/s44184-024-00056-z.

Large language models could change the future of behavioral healthcare: a proposal for responsible development and evaluation


Elizabeth C Stade et al. Npj Ment Health Res. 2024.

Abstract

Large language models (LLMs) such as OpenAI's GPT-4 (which powers ChatGPT) and Google's Gemini, built on artificial intelligence, hold immense potential to support, augment, or even eventually automate psychotherapy. Enthusiasm about such applications is mounting in the field as well as in industry. These developments promise to address insufficient mental healthcare system capacity and scale individual access to personalized treatments. However, clinical psychology is an uncommonly high-stakes application domain for AI systems, as responsible and evidence-based therapy requires nuanced expertise. This paper provides a roadmap for the ambitious yet responsible application of clinical LLMs in psychotherapy. First, a technical overview of clinical LLMs is presented. Second, the stages of integration of LLMs into psychotherapy are discussed while highlighting parallels to the development of autonomous vehicle technology. Third, potential applications of LLMs in clinical care, training, and research are discussed, highlighting areas of risk given the complex nature of psychotherapy. Fourth, recommendations for the responsible development and evaluation of clinical LLMs are provided, which include centering clinical science, involving robust interdisciplinary collaboration, and attending to issues like assessment, risk detection, transparency, and bias. Lastly, a vision is outlined for how LLMs might enable a new generation of studies of evidence-based interventions at scale, and how these studies may challenge assumptions about psychotherapy.


Conflict of interest statement

The authors declare the following competing interests: receiving consultation fees from Jimini Health (E.C.S., L.H.U., H.A.S., and J.C.E.).

Figures

Fig. 1. Methods for tailoring clinical large language models. Figure was designed using image components from Flaticon.com.
Fig. 2. Example clinical skills of large language models. Figure was designed using image components from Flaticon.com.
Fig. 3. Stages of integrating large language models into psychotherapy. Figure was designed using image components from Flaticon.com.
