Juan Lavista Ferres, head of Microsoft's AI for Good Lab, is co-author of a book that provides real-world examples of how artificial intelligence can be used responsibly to positively impact humanity.
Ferres sat down with MobiHealthNews to discuss his new book, how to reduce bias in the data fed to AI, and his recommendations for regulators creating rules for the use of AI in healthcare.
MobiHealthNews: Can you tell our readers about Microsoft's AI for Good Lab?
Juan Lavista Ferres: This is a completely philanthropic initiative in which we partner with organizations around the world: we provide our AI skills, AI technology and AI knowledge, and they provide the subject matter experts.
We build teams that combine these two efforts and work together to help solve problems. This is very important, because we've seen that AI can help solve many of these organizations' problems. Unfortunately, there is a huge gap in AI skills, especially among the nonprofits and government agencies working on these projects. They typically do not have the capacity or structure to hire and retain the necessary people. That's why we decided to invest from our side, as a philanthropic investment, to help the world solve these problems.
We have a lab here in Redmond, a lab in New York and a lab in Nairobi. We also have people in Uruguay and a postdoctoral fellow in Colombia. We work in many fields, but health is one of them, and it's a very important field for us. We do a lot of work in medical imaging, such as CT scans and X-rays, and in areas where we have large amounts of unstructured data such as text, where we can use AI to help doctors learn more and understand the problem better.
MHN: What are you doing to ensure that AI doesn't do more harm than good, especially when it comes to inherent biases within your data?
Ferres: It's in our DNA. It's fundamental for Microsoft. Even before AI became a trend over the past two years, Microsoft had been investing heavily in areas like responsible AI. All of our projects go through a very thorough responsible AI review. It's also fundamental to us that we never work on a project unless we have subject matter experts on the other side, and we strive to choose the best experts in the field. For example, we are doing pancreatic cancer research in collaboration with Johns Hopkins University, with some of the best doctors in the world working on cancer research.
The reason this is so important, especially as it relates to what you mentioned, is that these experts have a better understanding of how the data was collected and of its potential biases. Even so, we conduct a responsible AI review and make sure the data is representative. We just published a book about this.
MHN: Yes, please tell us about that book.
Ferres: In the first two chapters, we talk a lot about implicit bias and its risks, and unfortunately there are a lot of bad examples for society, especially in areas like skin cancer detection. Many skin cancer models are trained mostly on images of Caucasian skin; patients with darker skin are underrepresented, because Caucasian patients typically have better access to doctors and are typically the population most affected by skin cancer. Those are the kinds of issues we look at.
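As a hypothetical illustration of the kind of representativeness check this implies (the column name, skin-tone categories and reference proportions below are assumptions for the sketch, not data from the lab), one could compare a training set's distribution against the population the model is meant to serve:

```python
# Hypothetical sketch: auditing whether a training set is representative
# of the population a model will serve. Categories and reference
# proportions are illustrative assumptions, not real data.
import pandas as pd

train = pd.DataFrame({
    "skin_tone": ["I", "II", "II", "III", "II", "I", "IV", "II", "III", "I"],
})

# Assumed reference distribution for the target population.
target_population = {"I": 0.15, "II": 0.25, "III": 0.25, "IV": 0.15, "V": 0.12, "VI": 0.08}

observed = train["skin_tone"].value_counts(normalize=True)
for tone, expected in target_population.items():
    share = observed.get(tone, 0.0)
    flag = "UNDERREPRESENTED" if share < 0.5 * expected else "ok"
    print(f"skin tone {tone}: train {share:.0%} vs target {expected:.0%} -> {flag}")
```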
That's why we do a very thorough review. In my view, Microsoft has been leading the way when it comes to responsible AI. Microsoft has a chief responsible AI officer, Natasha Crampton.
Also, since we are a research organization, we publish our results and go through peer review to make sure we're not missing anything in that regard. And ultimately, it's our partners who will be using the technology, so our job is to make sure they understand all of these risks and potential biases.
MHN: You mentioned that the first few chapters discuss the issue of potential bias in the data. What is the rest of the book about?
Ferres: The book has about 30 chapters, and each chapter is a case study, some on sustainability and some on health. These are real case studies that we worked on together with our partners. But the first three chapters take a close look at some of the potential risks and try to explain them simply, so that people can understand them. Many of us have heard about bias and data collection issues, but it can sometimes be hard to appreciate how easily these problems can arise.
We also need to understand, from a bias perspective, that the fact that we can predict something does not necessarily mean the relationship is causal. People often understand and repeat that correlation does not imply causation, but they don't always realize that predictive power does not imply causation either, and that even explainable AI does not imply causation. That's really important to us. These are some of the examples covered in the book.
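To make that point concrete, here is a minimal illustrative sketch (not an example from the book): a model can predict an outcome very well from a feature that has no causal effect on it, because a hidden confounder drives both the feature and the outcome.

```python
# Illustrative sketch: predictive power without causation.
# A hidden confounder drives both the feature and the outcome, so the feature
# predicts the outcome well even though changing it would have no causal effect.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 10_000

confounder = rng.normal(size=n)                       # unobserved common cause
feature = confounder + rng.normal(scale=0.5, size=n)  # correlated with outcome, but not causal
outcome = (confounder + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    feature.reshape(-1, 1), outcome, random_state=0
)

model = LogisticRegression().fit(X_train, y_train)
print("accuracy:", accuracy_score(y_test, model.predict(X_test)))
# High accuracy, yet intervening on `feature` would not change `outcome`:
# the association exists only through the confounder.
```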
MHN: What recommendations do you have for government regulators regarding the creation of rules for the implementation of AI in healthcare?
Ferres: I'm not the best person to talk about regulations per se, but I can say that, in general, it is important to have a good understanding of two things.
First, what AI is and what it is not, what AI can do and what it cannot do. I think that if you understand the technology better, you will always be able to make better decisions. We believe that any technology can be used for good or bad, and in many ways it is our social responsibility to use technology in the best possible way: to maximize its potential for good while minimizing the risks.
From that perspective, I think a lot of effort needs to be put into getting people to understand the technology. That's rule number one.
Second, as a society we need a deeper understanding of the technology. What we're seeing, and what I personally see, is that it has great potential. We need to make sure that we not only maximize that potential but also use it correctly. This requires governments, organizations, the private sector and nonprofits to start by understanding the technology, understanding the risks, and working together to minimize those potential risks.