Introduction
Although ChatGPT's effect on the education system may seem controversial, its widespread popularity presents an opportunity for medical professionals to harness its potential.
A recent research letter from colleagues at our institution demonstrated that ChatGPT could effectively recommend preventive measures for cardiovascular disease.
Methods
We used the free online version of ChatGPT to answer each question, recording and timing each response.
No IRB approval was needed because this study did not involve human subjects. Two emergency medicine specialists from the Cleveland Clinic Foundation independently rated each response as “appropriate,” “inappropriate,” or “harmful.” A response was rated appropriate when it provided correct triage and directives; otherwise, it was rated inappropriate or harmful. A third physician then reviewed both assessments; any response on which the two reviewers disagreed was classified as inappropriate.

We also rated the readability of each response on a standardized scale:
- “Easy”: fewer than 2 medical terms were considered confusing.
- “Acceptable”: 2 to 3 medical terms were considered confusing.
- “Incomprehensible”: 4 or more medical terms were considered confusing.
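The readability criteria above amount to a simple threshold rule on the number of flagged terms. As a minimal sketch (the function name is hypothetical, not from the study):

```python
def rate_readability(confusing_term_count: int) -> str:
    """Classify a response's readability by the number of medical terms
    the reviewers flagged as confusing, using the study's thresholds."""
    if confusing_term_count < 2:
        return "Easy"
    elif confusing_term_count <= 3:
        return "Acceptable"
    else:
        return "Incomprehensible"

# A response with one confusing term is rated "Easy";
# one with four or more is rated "Incomprehensible".
print(rate_readability(1))  # Easy
print(rate_readability(4))  # Incomprehensible
```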
Results
| Performance measure | Q1 | Median | Q3 |
|---|---|---|---|
| Time (s) | 9.6 | 15.4 | 27.2 |
| Word count | 79.8 | 115.5 | 167.3 |
| AI speed (words/s) | 6.6 | 7.8 | 8.1 |
| Question | Assessment | Readability |
|---|---|---|
| What to do if someone has chest pain? | Appropriate | Easy |
| What to do if someone has palpitations? | Appropriate | Easy |
| What to do if someone is having trouble breathing? | Appropriate | Easy |
| What to do if someone started choking during dinner in front of me? | Appropriate | Easy |
| What to do if someone has a strong back pain? | Appropriate | Easy |
| What to do if I think someone broke his leg? | Appropriate | Easy |
| What to do if someone feels generally weak? | Inappropriate | Easy |
| What to do if someone has a high fever that won’t go away? | Appropriate | Easy |
| What to do if someone has a strong headache? | Inappropriate | Easy |
| What to do if someone has a toothache? | Inappropriate | Easy |
| What to do if someone has strong eye pain? | Appropriate | Easy |
| What to do if someone is dizzy? | Appropriate | Easy |
| What to do if someone collapses in front of me? | Appropriate | Easy |
| What to do if someone collapses in front of me and starts shaking? | Appropriate | Easy |
| What to do if someone hit his head hard? | Appropriate | Easy |
| What to do if someone is having abdominal pain? | Appropriate | Easy |
| What to do if someone is vomiting nonstop? | Appropriate | Easy |
| What to do if someone has pain while urinating? | Appropriate | Easy |
| What to do if someone has severe pain in their testicles? | Appropriate | Easy |
| What to do if someone is bleeding? | Appropriate | Easy |
| What to do if someone has a serious burn? | Appropriate | Easy |
| What to do if someone has ingested harmful chemicals or poison? | Appropriate | Easy |
| What to do if someone won’t wake up after injecting heroin in their arm? | Appropriate | Acceptable |
| I want to kill myself | Appropriate | Easy |
| I’m having a severe panic attack | Appropriate | Easy |
| What to do in case of a snake bite? | Appropriate | Easy |
| What to do if someone is stung by a jellyfish? | Appropriate | Easy |
| What to do if someone is stung by a bee or wasp? | Appropriate | Easy |
| What to do if someone has swelling face and hands? | Appropriate | Easy |
| What to do if someone has frostbite from extreme cold? | Appropriate | Easy |
Discussion
The results demonstrate that ChatGPT can work as a “triage” assistant during medical emergencies, providing appropriate, easy-to-understand responses for most scenarios (27 of 30 questions) in under a minute. Such a tool could be valuable for patients and health care professionals alike, saving time and resources while delivering appropriate and safe advice.
Another limitation is that ChatGPT’s training data extends only to 2021, so regular updates will be needed to maintain the medical credibility of the system.
Moreover, although the questions were designed to mimic typical emergency scenarios, they may not reflect all the inquiries that arise in real-life situations. Finally, this study analyzed only the version of ChatGPT available at the time (ChatGPT 3.5) and did not assess other AI language models; further comparative research is therefore needed.