Don't be surprised if your doctor starts writing you overly friendly messages. They may be getting some help from artificial intelligence.
New AI tools are helping doctors communicate with patients, answering messages and taking notes during exams. Fifteen months after OpenAI released ChatGPT, thousands of doctors are already using similar products built on large language models. One company says its tool works in 14 languages.
Enthusiasts say AI will save doctors time and prevent burnout. But it also changes the doctor-patient relationship, raising questions of trust, transparency, privacy, and the future of the human connection.
Let's take a look at how new AI tools can impact patients.
Does my doctor use AI?
In recent years, medical devices equipped with machine learning have been reading mammograms, diagnosing eye disease, detecting heart problems, and more. What's new is generative AI's ability to respond to complex instructions by predicting language.
Your next checkup could be recorded by an AI-powered smartphone app that listens, documents the visit, and instantly drafts notes you can read later. The tool can also mean more money for your doctor's employer, because it won't forget details that can legitimately be billed to insurance.
Doctors are supposed to ask for consent before using the tool. New wording may also appear in the paperwork you sign at the doctor's office.
Other AI tools are already helping doctors craft messages to patients, who may not know it.
“Your doctor may or may not tell you that they're using it,” said Kate DesRoches, director of OpenNotes, a Boston-based group that advocates for transparent communication between doctors and patients. Some health systems encourage disclosure; others don't.
A doctor or nurse must approve an AI-generated message before it is sent. In one Colorado health system, such messages include a sentence disclosing that they were automatically generated. But doctors can delete that line.
“It sounded exactly like him. It was remarkable,” said patient Tom Detner, 70, of Denver, who recently received an AI-generated message that read, in part: “It's important to listen to your body.” The message ended with “Take care” and a disclosure that it had been automatically generated and edited by his doctor.
Detner said he is pleased with the transparency. “Full disclosure is very important,” he said.
Will AI make mistakes?
Large language models can misinterpret input or even fabricate inaccurate responses, a phenomenon called hallucination. The new tools have internal guardrails meant to keep inaccuracies from being communicated to patients or landing in electronic health records.
“You don't want false information in your clinical records,” said Dr. Alistair Erskine, head of digital innovation at Georgia-based Emory Healthcare, where hundreds of doctors use a product from Abridge to document patient visits.
The tool runs the doctor-patient conversation through several large language models and eliminates outlier ideas, Erskine said, describing it as a way of engineering out hallucinations.
At the end of the day, “the doctor is the most important guardrail,” said Abridge CEO Dr. Shiv Rao. When reviewing AI-generated notes, doctors can click on any word and listen to the specific segment of the patient's visit to check its accuracy.
In Buffalo, New York, a different AI tool misheard Dr. Lauren Bruckner when she told a teenage cancer patient it was a good thing she had no allergy to sulfa drugs. The AI-generated note incorrectly read: “Allergies: sulfa.”
The tool “completely misinterpreted the conversation,” Bruckner said. “That doesn't happen very often, but it's definitely a problem.”
What about human touch?
AI tools can be prompted to be friendly, empathetic, and informative.
But they can get carried away. In Colorado, a patient with a runny nose was alarmed to learn from an AI-generated message that the problem could be a brain-fluid leak. (It wasn't.) A nurse had sent the message without proofreading it carefully.
“Sometimes it's amazingly helpful, and sometimes it's not helpful at all,” said Dr. CT Lin, head of innovation at Colorado-based UCHealth, where about 250 doctors and staff use a Microsoft AI tool to write first drafts of messages to patients. The messages are delivered through Epic's patient portal.
The tool had to be taught about the new RSV vaccine, because it was drafting messages saying no such thing existed. But for routine advice on an ankle sprain, such as rest, ice, compression, and elevation, “that's great,” Lin said.
On the plus side, doctors who use AI note-taking tools are no longer tied to their computers during appointments. Because the AI tool records the exam, they can make eye contact with the patient.
Dr. Robert Burt, chief medical information officer at Pittsburgh-based UPMC, said the tools rely on audible words, so doctors are learning to describe things aloud. A doctor might say: “It's quite swollen. It feels like there is fluid in the right elbow.”
Talking through an exam for the AI tool can also help patients understand what's going on, Burt said. “I've been in exams where I heard hemming and hawing while the doctor examined me. And I always wondered, ‘Well, what does that mean?'”
What about privacy?
U.S. law requires health systems to obtain assurances from business associates that they will safeguard protected health information; companies that fail to do so face investigations and fines from the Department of Health and Human Services.
Doctors interviewed for this article said they were confident in the data security of the new product and that their information would not be sold.
Still, information shared with the new tools is used to improve them, which could add to the risk of health data breaches.
Dr. Lance Owens, chief medical information officer at University of Michigan HealthWest, where 265 physicians, physician assistants, and nurse practitioners use a Microsoft tool to document patient exams, said he believes patient data is protected.
“When we are told that our data is safe, secure and isolated, we believe it,” Owens said.
___
The Associated Press Health and Science Department receives support from the Howard Hughes Medical Institute's Science and Education Media Group. AP is solely responsible for all content.