Generative artificial intelligence (AI) is evolving so quickly that it is difficult to keep forming settled opinions about it. AI is one of those radical, and by now inevitable, revolutions that demands continuous reflection on how to scale its benefits while minimizing its risks.
In the medical field in particular, these new technologies may bring enormous benefits: accelerating and improving diagnostic processes, personalizing preventive and therapeutic interventions, and freeing increasingly scarce human resources from bureaucratic tasks that consume precious time. In healthcare, there is little fear that AI will replace humans and cause job losses, because the growth in workload far outpaces the number of people able and willing to do the work.
However, these tools carry other potential negative consequences that are even more concerning here than in other fields, particularly the difficulty of distinguishing true information from false. Anyone who makes the mistake of using ChatGPT as a search engine can be fooled by so-called AI hallucinations, and even doctors are not immune. Reluctant to disappoint, the system fabricates plausible-sounding facts and fake references.
Such false information is even more dangerous when it deceives patients, who are less equipped to evaluate it critically, and especially when it is intentional (disinformation) rather than the result of error (misinformation).
There's nothing new under the sun. We have been talking about fake news and information failure for years, especially since the advent of the internet and social media.
Meanwhile, heedless of our debates, innovation has taken another leap forward, with revolutions whose very terminology we struggle to grasp as we find our way between machine learning models and large language models. Computer-generated videos have rightly been exposed as fake in high-profile cases that captured the world's attention. Yet it is far harder to recognize, at the same pace, the wave of false health information that malicious actors could publish online, or perhaps are publishing already.
For this reason, the BMJ has dedicated space and commentary to research showing that most systems on the market have inadequate safeguards against misinformation, even though such safeguards can be put in place. According to the authors, the developers of these tools do not seem fully aware of the risks the new technology poses. Those risks concern not only disinformation but also privacy and the reinforcement of existing gender, ethnic, and other biases. On current evidence, AI risks perpetuating stereotypes and harming society.
For each of these problems, potential solutions and mitigations already exist; what matters is that everyone knows them, recognizes their importance, and applies them diligently.
Legislators have done their part. A few weeks ago, the European Parliament approved the world's first law on AI, designed to transcend national borders for a global problem that could never be addressed by individual nations alone. Yet despite the legislators' good intentions, the law's effectiveness appears limited: its sound principles may deter missteps by those already acting in good faith, but they seem unable to stop malicious actors. It falls to all of us to exercise care and caution, while avoiding the risk of demonizing an innovation that can save lives and improve everyone's health.
This story has been translated from Italian. We use several editing tools, including AI, as part of the process. A human editor reviewed this content before publication.