Leading a team or organization in the age of artificial intelligence requires more than just technical talent. It also demands business sense, empathy, foresight, and ethics.
The key is to understand what AI really means. That doesn't mean introducing AI-powered solutions into organizations and expecting miracles to happen overnight. According to MIT Senior Lecturer Paul McDonagh-Smith, quoted in the MIT Sloan Management Review, AI is not about machines replacing humans; rather, a combination of humans and machines "strengthens our capabilities and supports us across existing and new tasks."
People outside the AI technology bubble need a solid understanding of how AI works and what it takes to make it responsible and productive. That requires training and a supportive culture. People need to be informed about the latest technologies that are redefining their jobs and organizations. "AI skills are in short supply, so organizations need programs to upskill their employees and train them in technical skills and technical decision-making," McDonagh-Smith says. "Culture is a big part of the equation. Organizations need to create cross-functional teams that break down silos, tolerate failure to foster creativity, combine human and machine capabilities in complementary systems, and encourage innovative methods."
Industry leaders from a variety of fields echo the views highlighted in the Management Review article. In addition to business knowledge and a culture of innovation, ethics must be a top consideration. This will require more diverse leadership of AI initiatives, drawing on business strategists, users, technologists, and people from the humanities.
"Essentially, we are experiencing a once-in-a-decade moment, and how we lead people to embrace this moment will shape the AI revolution for years to come," says Mark Thurman, president and executive director of the Mozilla Foundation.
Today, the people running AI are "project managers and product managers who run delivery, data engineers and scientists who build data pipelines and models, and DevOps and software engineers who build digital infrastructure and dashboards," says Mike Kraus, an AI startup founder and former director of data science at Beyond Limits.
The risks of limiting AI to the technology side include "compromising the ability of AI tools to adapt to new scenarios, alignment with enterprise objectives and goals, accuracy of responses due to data hallucinations, and ethical concerns including privacy," says Natarajan, CEO of Zinnov. "But the risks go beyond data hallucinations and a lack of nuanced understanding. AI efforts led solely by engineers risk optimizing for the wrong ends; they lack human capabilities such as empathy and compassion, lack checks and balances, and can exacerbate bias."
Everyone's goal should be to develop and ensure trustworthy AI, Thurman argues. This means "creating systems and products that prioritize human well-being and help users understand how they work. For example, imagine a personal assistant that stores all your personal data locally, improves in functionality as you use it, and maintains your privacy. Or social media apps that can tweak their algorithms to improve your mental health instead of hurting it. These things are possible."
This means working closely with end users, customers, and others who rely on the output of these systems. "Building AI in a way that puts humans in the driver's seat is most important," Thurman says. "This avoids risks we're already seeing, such as consumers receiving incorrect medical advice from chatbots, or algorithms sending technology job postings to men instead of women."
Incorporating non-technical humanities players into AI management teams should be organic and encouraged by the culture, not forced by executive order. In other words, it's a balancing act, and for organizations with rigid, hierarchical cultures, it can be an uphill climb.
People inside and outside your organization should lead this effort. Dr. Bruce Lieberthal, vice president and chief innovation officer at Henry Schein, sees a risk of a lack of user collaboration within the healthcare sector. “Creators of the technology used by healthcare professionals and their patients often work within a black box and make decisions that do not adequately consider the users,” he warns.
"AI is best developed collaboratively," Lieberthal adds. "Software teams need to meet with users and the people most affected by the product to maintain integrity. During development and deployment, the product needs to be vetted by those same people to make sure it hits the mark on what users envisioned."
On the humanities side, there are efforts to "bring ethicists into the conversation at some level," Kraus says. "But it's often not clear what authority or role they actually play." The challenge, he adds, is that simply pushing philosophers into technology delivery teams does not guarantee good results: "In the end, this will not lead to positive outcomes for companies or users."
These humanities-oriented roles should be more advisory than actual decision-making, Kraus advises.
Still, he points out that a highly diverse AI team adds a safeguard that can help organizations avoid headaches and, in turn, wasted investments. "Analyzing the problem from an ethical or philosophical perspective will raise a lot of questions that a purely technical team probably wouldn't have asked," he says. "More importantly, technical teams are not empowered, asked, or expected to think about the impact of what they are building; they focus on their mission and execute it. Failing to investigate the unintended side effects of how AI is ultimately used carries real risk."
Services like GPT and Google will increasingly democratize AI and open up AI decision-making to a more diverse range of leaders. "We have to be wary of dilettantism. We need interdisciplinary cooperation," Natarajan says. "People with backgrounds in ethics, cognition, sociology, decision theory, and other humanities or social science disciplines, along with subject matter experts, can provide unique perspectives that contribute critical diversity and ground AI in practical, real-world situations."
The potential emergence of new roles "suggests new maturity and caution in the integration of AI, learning from issues such as algorithmic bias," Natarajan observes. "As AI grows more capable and autonomous, it helps maintain human oversight and control." These new roles could include chief AI ethics officer, AI ombudsman, AI compliance officer, AI auditor, and AI UX designer or curator.