Could artificial intelligence write mental health nursing care plans?
Published in: Journal of Psychiatric and Mental Health Nursing, Vol. 31, No. 1, pp. 79-86
Format: Journal Article
Language: English
Published: England: Wiley Subscription Services, Inc., 01-02-2024
Summary:
Accessible Summary
What is Known on the Subject?
Artificial intelligence (AI) is freely available, responds to very basic text input (such as a question) and can now create a wide range of outputs, communicating in many languages or art forms. AI platforms like OpenAI's ChatGPT can now create passages of text that could be used to create plans of care for people with mental health needs. As such, AI output can be difficult to distinguish from human output, and there is a risk that its use could go unnoticed.
What this Paper Adds to Existing Knowledge?
Whilst it is known that AI can produce text or pass pre-registration health-profession exams, it is not known whether AI can produce meaningful results for care delivery.
We asked ChatGPT basic questions about a fictitious person who presents with self-harm and then evaluated the quality of the output. We found that the output could look reasonable to laypersons, but it contained significant errors and ethical issues. There are potential harms to people in care if AI is used without an expert correcting or removing these errors.
What are the Implications for Practice?
We suggest that there is a risk that AI use could cause harm if it were used in direct care delivery. There is a lack of policy and research to safeguard people receiving care, and these need to be in place before AI is used in this way. Key aspects of the role of a mental health nurse are relational, and AI use in its current form may diminish mental health nurses' ability to provide safe care.
Many aspects of mental health recovery are linked to relationships and social engagement; however, AI cannot provide these and may push the people most in need of help further away from the services that assist recovery.
Background
Artificial intelligence (AI) is being increasingly used and discussed in care contexts. ChatGPT has gained significant attention in popular and scientific literature, although how ChatGPT can be used in care delivery is not yet known.
Aims
To use artificial intelligence (ChatGPT) to create a mental health nursing care plan and evaluate the quality of the output against the authors’ clinical experience and existing guidance.
Materials & Methods
Basic text commands were input into ChatGPT about a fictitious person called ‘Emily’ who presents with self‐injurious behaviour. The output from ChatGPT was then evaluated against the authors’ clinical experience and current (national) care guidance.
Results
ChatGPT was able to provide a care plan that incorporated some principles of dialectical behaviour therapy, but the output had significant errors and limitations; there is therefore a reasonable likelihood of harm if it were used in this way.
Discussion
AI use is increasing in direct-care contexts through the use of chatbots or other means. However, AI can inhibit clinician-to-care-recipient engagement, 'recycle' existing stigma and introduce error, which may diminish the ability of care to uphold personhood and therefore lead to significant avoidable harms.
Conclusion
Use of AI in this context should be avoided until policy and guidance can safeguard the wellbeing of care recipients and the sophistication of AI output has increased. Given ChatGPT's ability to provide superficially reasonable outputs, there is a risk that errors may go unnoticed, increasing the likelihood of patient harm. Further research evaluating AI output is needed to consider how AI may be used safely in care delivery.
ISSN: 1351-0126; 1365-2850
DOI: 10.1111/jpm.12965