The rapid advancement of AI has necessitated the development of guardrails and philosophies for ethically incorporating the technology into the workplace. Paula Goldman, chief ethics and humane use officer at Salesforce, said AI should serve as a co-pilot alongside humans, not run on autopilot, speaking at the Fortune Brainstorm AI conference held in London on Monday.
“We need the next level of management. We need to help people understand what's going on across the AI system,” she told Fortune Executive News Editor Nick Lichtenberg. “And most importantly, we need to design AI products that take into account not only what AI is good at and bad at, but also what humans are good at and bad at in making their own decisions.”
Amid growing user concerns, Goldman is primarily worried about AI's ability to generate synthetic content, including content that reflects racial and gender bias and malicious user-generated material such as deepfakes. She warns that unethical applications of AI could stifle funding and development of the technology.
“The next AI winter could be caused by trust issues and people adoption issues in AI,” Goldman said.
She said productivity gains from AI in the workplace will be driven by training and by people's willingness to adopt new technology. To foster trust among employees who use AI products, Goldman suggests implementing “mindful friction”: essentially a series of checks and balances to ensure that AI tools in the workplace do more good than harm.
What Salesforce did to achieve “mindful friction”
Salesforce has begun to curb potential bias in its use of AI. The software giant developed a marketing segmentation product that generates target demographics for email campaigns. The AI program produces a list of potential demographics for a campaign, but it's up to humans to choose the right ones to avoid excluding relevant recipients. Similarly, the company displays a warning popup on generative models on its Einstein platform that incorporate ZIP codes, since ZIP codes often correlate with specific races and socioeconomic statuses.
“Increasingly, systems are being developed that can detect such anomalies and prompt humans to recheck them,” Goldman said.
In the past, bias and piracy have undermined trust in AI. An MIT Media Lab study found that AI software programmed to identify the race and gender of a variety of people had a less than 1% error rate when identifying fair-skinned men, but a 35% error rate when identifying dark-skinned women, including famous women such as Oprah Winfrey and Michelle Obama. Study author Joy Buolamwini said that high-stakes applications of facial recognition technology, such as equipping drones and body cameras with facial recognition software to carry out deadly attacks, could be compromised by these accuracy failures. Similarly, the Yale School of Medicine found that algorithmic bias in medical databases can cause AI software to suggest inappropriate treatment plans for certain patients.
Even in industries where lives aren't at risk, AI applications raise ethical concerns: OpenAI reportedly scraped hours of user-generated YouTube content without content creators' consent, which may infringe on copyright. Between widespread misinformation and failures at basic tasks, Goldman said, AI has a long way to go before it reaches its potential as a useful tool for humans.
But Goldman believes that designing smarter AI capabilities, and human-driven failsafes to strengthen trust, is what's most exciting about the industry's future.
“How do we design products that know what to trust and where to reconsider and apply human judgment?” she asked.