As AI becomes increasingly pervasive in our daily lives, its role in healthcare and medicine will affect everyone, whether or not they choose to use AI themselves. So how can we implement AI responsibly, in a way that delivers primarily positive benefits while minimizing potential downsides?
At the 2024 SXSW Conference and Festival, held in March, American Medical Association (AMA) President Dr. Jesse Ehrenfeld spoke on the topic "AI, Healthcare, and the Strange Future of Medicine." In a follow-up interview on the AI Today podcast, excerpted in this article, Dr. Ehrenfeld expanded on his talk and shared additional insights.
Q: How do you see AI impacting healthcare and why did the AMA recently release a set of AI principles?
Dr. Jesse Ehrenfeld: I'm a practicing physician and anesthesiologist, and I actually saw a bunch of patients earlier this week. I work at the Medical College of Wisconsin in Milwaukee, Wisconsin, and have been in clinical practice for about 20 years. I am currently the President of the AMA, the largest and most influential organization representing physicians across the United States. Founded in 1847, it provides a code of medical ethics and other resources that help doctors practice medicine in America today. I am board certified in both Anesthesiology and Clinical Informatics, and I am the first AMA President to be board certified in Clinical Informatics, a relatively new specialty designation. I also spent 10 years in the Navy. Fundamentally, everything I do comes back to understanding how I can support the delivery of quality healthcare to patients, grounded in my work and active practice.
Not surprisingly, doctors have been saddled with a lot of technology that is poorly designed, unhelpful, and a burden rather than an asset. We don't want that anymore, especially when it comes to AI. So, in response to concerns raised by physicians and the public, the AMA published a set of principles for the development, deployment, and use of AI in November 2023.
The public has many questions about these AI systems. What do they mean for me? How can we trust them? What about security? Our principles anchor all of our work with the federal government, Congress, and others on ensuring that these technologies work as they are developed, deployed, and ultimately used in society. They serve as a guideline for our interactions with government, industry, and the care delivery system.
We have been working on AI policy since 2018, but the latest iteration calls for a comprehensive government approach to AI. We need to ensure we reduce patient risk and maximize utility. These principles were the result of a significant effort bringing together subject matter experts, including physicians, informaticists, and national expert groups, and they cover a lot of ground.
Q: Can you outline these AI principles?
Dr. Jesse Ehrenfeld: Above all, we want to ensure that healthcare AI is designed, developed, and deployed in an ethical, fair, responsible, and transparent manner. Our view is that developing AI ethically and responsibly requires compliance with national governance policies. Voluntary agreement and voluntary compliance are not enough. We need regulation, and we need a risk-based approach: the level of scrutiny and validation oversight should be proportionate to the likelihood of harm or the consequences an AI system may pose. An AI system used to support diagnosis, for example, may require a different level of oversight than one used to support scheduling.
We've done a lot of research with physicians across the country to understand what's really going on today as the use of these technologies increases. The results are interesting, but perhaps should also serve as a bit of a warning to developers and regulators. Physicians in general are very enthusiastic about the potential of AI in healthcare. In our national survey, 65% of U.S. physicians saw advantages in using AI in their practices, largely to reduce administrative burden through automation: assisting with documentation, translating documents, supporting diagnosis, and handling prior authorizations.
But they also have concerns. 41% of physicians say they are as excited about AI as they are afraid of it. There are also concerns about the impact on patient privacy and the patient-doctor relationship. After all, we want safe and reliable products on the market. That's how we earn the trust of physicians and consumers, and all our efforts to support the development of high-quality, clinically validated AI come back to these principles.
Q: What data and health privacy concerns are you focused on?
Dr. Jesse Ehrenfeld: What I'm seeing is more questions than answers from patients and consumers about data and AI. For example, what do healthcare apps do? Where does the data go? Can they use or share that information? And unfortunately, the federal government hasn't really secured transparency about where the data goes. The worst examples are companies and developers who label their apps as "HIPAA compliant." In the average person's mind, the term "HIPAA compliant" means their data is safe, private, and secure. But many of these applications are not HIPAA covered entities, and HIPAA applies only to covered entities. So claiming to be "HIPAA compliant" when you are not covered by HIPAA is completely misleading and should not be allowed to happen.
There are also a lot of concerns about where health data goes, and that obviously extends to the use of AI with patients. 94% of patients say they want strong laws governing the use of their health data, and patients are hesitant to use digital tools if they don't understand the privacy considerations. There is a lot of work we need to do in the area of regulation. But even where it is not legally required, there are many things AI developers can do to strengthen trust in the use of AI data.
Choose your favorite big tech company. Would you trust them with your medical data? What if there were a data breach? If you upload sensitive photos of body parts to their servers to get information about a condition, and something goes wrong, who do you call? So I think we need more transparency about where collected data goes, and clear ways for people to opt out of data pooling, sharing, and so on.
Unfortunately, HIPAA doesn't solve all of this. In fact, many of these applications are not covered by HIPAA at all. More work needs to be done to ensure the safety and privacy of health data.
Q: Where and how do you see AI having the most positive impact on healthcare and medicine?
Dr. Jesse Ehrenfeld: We need to embrace technologies such as AI to solve the workforce crisis that exists in the healthcare industry today. This is a worldwide problem, not one limited to America. 83 million Americans lack access to primary care, and there is a shortage of doctors in America today. If we continue to deliver health care the same way, we will never open enough medical schools and residency programs to meet those demands.
When we talk about AI from an AMA perspective, we actually prefer the term augmented intelligence over artificial intelligence, because it goes back to a very basic principle: these should be tools that improve the ability of medical teams, doctors, nurses, and everyone involved to deliver care more effectively and efficiently. But what we need is a platform. There are a lot of one-off point solutions out there right now that aren't integrated, and I think companies are starting to move quickly in that direction. Obviously, we want that to happen in the medical field as well.
We're experimenting with different ways to ensure we have a voice at the table throughout the design and development process. We have a Physician Innovation Network, a free online platform that connects physicians with entrepreneurs to drive change and innovation and bring better products to market. Companies want clinical input, and clinicians want to connect with entrepreneurs. We also have a technology incubator in Silicon Valley called Health2047, which has leveraged the insights of AMA physicians to spin off about a dozen companies.
Ultimately, it will be important to ensure a regulatory framework under which only clinically validated products are brought to market. And we need to make sure our tools truly deliver on their promise and are an asset, not a burden.
I don't think AI will replace doctors, but I do think doctors who use AI will replace doctors who don't. AI products have great potential, and we expect them to reduce the administrative burden experienced by doctors and clinics. Ultimately, we expect to see a lot of success in leveraging AI directly in patient care. There's a lot of excitement out there, but tools and technologies clearly need to be vetted for challenges such as racial bias, errors that can cause harm, and security and privacy threats to health information. Physicians need to understand how to manage these risks and their responsibilities before relying on a growing number of tools.
(Disclosure: I am a co-host of the AI Today podcast.)