ChatGPT, GPT-4, and Other Large Language Models: The Next Revolution for Clinical Microbiology?

Bibliographic Details
Published in: Clinical Infectious Diseases, Vol. 77, no. 9, pp. 1322–1328
Main Author: Egli, Adrian
Format: Journal Article
Language: English
Published: Oxford University Press, US, 11-11-2023
Description
Summary: ChatGPT, GPT-4, and Bard are highly advanced natural language processing–based computer programs (chatbots) that simulate and process human conversation in written or spoken form. Recently released by the company OpenAI, ChatGPT was trained on billions of unknown text elements (tokens) and rapidly gained wide attention for its ability to respond to questions in an articulate manner across a wide range of knowledge domains. These potentially disruptive large language model (LLM) technologies have a broad range of conceivable applications in medicine and medical microbiology. In this opinion article, I describe how chatbot technologies work and discuss the strengths and weaknesses of ChatGPT, GPT-4, and other LLMs for applications in the routine diagnostic laboratory, focusing on various use cases for the pre- to post-analytical process. Natural language processing–based computer software simulates and processes human conversation. These potentially disruptive large language model technologies have a broad range of conceivable applications in medical microbiology. Strengths and weaknesses for applications in routine diagnostics are discussed.
Bibliography:
Potential conflicts of interest. A. E. reports grants or contracts from the Bangerter Rhyner Foundation and the Swiss National Science Foundation; consulting fees from Pfizer and Illumina; and participation on a data and safety monitoring board or advisory board for Sefunda.
The author has submitted the ICMJE Form for Disclosure of Potential Conflicts of Interest. Conflicts that the editors consider relevant to the content of the manuscript have been disclosed.
ISSN: 1058-4838
1537-6591
DOI: 10.1093/cid/ciad407