When an exciting technology comes our way, it's easy to get swept up in the enthusiasm.
This is especially true with something as dramatic as artificial intelligence.
AI can write exam questions. AI can write ads. AI can even make movies. But the fact remains that AI isn't always reliable, especially when it comes to hallucinations, those troubling moments when the AI simply makes things up.
Yet companies like Google and Microsoft are boldly working to bring AI into every aspect of society.
So where can we turn to find out what actually needs to be done to make AI trustworthy?
I have to confess that I've been on that quest for a while, and I've found myself returning again and again to the soulful, life-affirming honesty of AI researcher and Ohio State University College of Engineering Dean Ayanna Howard.
Writing for the MIT Sloan Management Review, Howard summarized the gap between engineers and the rest of the world as succinctly as anyone.
She expressed a simple thought: "Engineers aren't trained to be social scientists or historians. We're in this field because we love it, and we're usually positive about technology because that's our field."
But that's exactly the problem, Howard said presciently. “We're not very good at building bridges with others who can translate what we think is positive and what we know to be part of the negative.”
Yes, there is a dire need for translation, and the engineers creating the technologies of the future desperately need a little more emotional intelligence.
"The first [need], and this probably requires regulation, is for technology companies, especially those in artificial intelligence and generative AI, to find ways to merge technology with human emotional quotient (EQ) so that people get cues about when such tools can be trusted," Howard said.
Think back to the early days of the internet. It was up to us to decide what was true, what was exaggerated, and what was utter nonsense.
We were very excited, but over time we worked our way toward some degree of certainty.
Howard explained that as long as technology appears to be working, humans generally trust it, even if, as in one experiment she participated in, that means blindly following a robot away from the fire escape in the middle of a fire.
Howard suggests that the companies behind products like ChatGPT should admit the lack of certainty when it comes to AI.
This won't eliminate the need for vigilance, but it would certainly create a higher level of trust, which is essential if AI is to be embraced rather than feared or forced upon us.
Howard worries that anyone can now create AI products. “We have inventors who don't know what they're doing and are selling to companies and consumers who are too trusting,” she said.
Her words may seem alarming, but they are plain truths, and very positive ones at that, because they lay bare the challenges involved in bringing a potentially revolutionary technology into the world and making it reliable.
After all, if AI can't be trusted, it can't be the technology it's being advertised as.