While Google used I/O to introduce an AI assistant that can see and hear the world, OpenAI showed off a flirty new chatbot you can carry around on your iPhone. Next week, Microsoft hosts Build, where a version of Copilot that understands PivotTables is sure to make an appearance. And in a few weeks, Apple holds its own developer conference, where, if it has anything to talk about, it will likely be talking about artificial intelligence, too. (Whether Siri comes up is less clear.)
AI is here! It's no longer conceptual. It has taken some jobs, created a few new ones, and helped millions of students avoid doing their homework. According to most of the big tech companies investing in it, we appear to be at the start of one of those rare monumental shifts in technology, like the Industrial Revolution or the birth of personal computers and the internet. All of Silicon Valley's biggest players are focused on taking large language models and other forms of artificial intelligence and moving them from researchers' laptops onto the phones and computers of ordinary people. Ideally, they will make a lot of money in the process.
But Meta AI thinks I have a beard, and I'm finding it hard to care.
Just to be clear: I'm a cis woman, and I do not have a beard. But if you type “show me a picture of Alex Cranz” into the prompt window, Meta AI inevitably returns an image of a very pretty dark-haired man with a beard. Only some of that describes me.
Meta AI isn't the only one struggling with the details. ChatGPT told me yesterday that I don't work at The Verge, even though my name is on the masthead. Google's Gemini didn't know who I was (fair enough), but after telling me Nilay Patel was a founder of The Verge, it apologized and corrected itself, saying he was not. (I swear he is.)
These mistakes keep happening because these computers are stupid. Extraordinary in their abilities, and astonishing in their dumbness. I can't get excited about the next turn in the AI revolution, because that turn is taking us to a place where computers can't consistently maintain accuracy about even the smallest of things.
Even Google's big AI keynote at I/O had a flub. In a promotional video for Google's new AI-powered search, someone asks how to fix a film camera whose film door is stuck, and it suggests they “open the back door and gently remove the film.” That is the easiest way to ruin any photos you've already taken.
AI's difficult relationship with the truth is called “hallucination.” In extremely simple terms: these machines are great at discovering patterns in information, but in their attempts to extrapolate and create, they occasionally get it wrong. They effectively “hallucinate” a new reality, and that new reality is often false. It's a tricky problem, and every single person working on AI right now is aware of it.
One former Google researcher claimed the problem could be fixed within the next year (though he lamented that outcome), and Microsoft has developed a tool it offers to some of its users to help detect hallucinations. Liz Reid, Google's head of Search, told The Verge the company recognizes the challenge, too. “There's a balance between creativity and factuality” with any language model, she told my colleague David Pierce. “We're really going to skew it toward the factuality side.”
But notice that Reid said there's a balance? That's because plenty of AI researchers don't actually think hallucinations can be solved. A study out of the National University of Singapore suggests that hallucinations are an inevitable consequence of all large language models. Just as no person is right 100 percent of the time, neither are these computers.
And that's probably why most of the major players in the field, the companies with the real resources and financial incentive to make us all embrace AI, have decided we shouldn't worry about it. During Google's I/O keynote, the company appended the words “check responses for accuracy,” in small gray font, to nearly every new AI tool it showed off. It's a quiet reminder that its tools can't be trusted, and also a sign that the company doesn't consider that a problem. ChatGPT operates similarly: in tiny font just below the prompt window, it reads, “ChatGPT can make mistakes. Check important info.”
That is not the kind of disclaimer you want to see on tools that are supposed to change our whole lives in the very near future. And beyond the fine print, the people making these tools don't seem to care too much about fixing the problem.
Sam Altman, the OpenAI CEO who was briefly ousted for, among other things, allegedly prioritizing profit over safety, went a step further and suggested that people who have a problem with AI's accuracy are naive. “If you just do the naive thing and say, ‘Never say anything that you're not 100 percent sure about,’ you can get them all to do that. But it won't have the magic that people like so much,” he told the audience at Salesforce's Dreamforce conference last year.
This idea, that AI has some kind of ineffable magic sauce that excuses its tenuous relationship with reality, comes up a lot among people who want to wave away concerns about accuracy. Google, OpenAI, Microsoft, and plenty of other AI developers and researchers have dismissed hallucinations as a small annoyance on the way to creating digital beings that might make our own lives easier, and one that should therefore be forgiven.
But I'm sorry, Sam and everyone else financially incentivized to get me excited about AI. I don't come to computers for the imprecise magic of human consciousness. I come to them because they are very precise when humans are not. I don't need my computer to be my friend. I need it to get my gender right when I ask it to, and to not accidentally ruin a roll of film while walking someone through a camera repair. And I'm guessing lawyers would like it to get the case law right.
I understand where Sam Altman and the other AI evangelists are coming from. In some distant future, we may well create a true digital consciousness from 1s and 0s. Right now, artificial intelligence is developing at an astonishing speed that outpaces many previous technological revolutions. There is some real magic at work in Silicon Valley.
But the AI thinks I have a beard. It can't consistently get even the simplest details right, and yet it's being foisted upon us with the expectation that we celebrate the profoundly mediocre service it provides. While I can still marvel at the innovation happening, I'd rather we not sacrifice the accuracy of our computers just so we can chat with a digital avatar. That's not a fair trade. It's only an interesting one.