Written by Jonathan Allen
NEW YORK (Reuters) – New York City Mayor Eric Adams is defending the city's new artificial intelligence chatbot after it was caught in recent days giving business owners incorrect answers or advice that would violate the law if followed.
When the MyCity chatbot launched as a pilot in October, it was touted as the first citywide use of such AI technology, promising to provide business owners with "practical and reliable information."
That has not always proven true. Journalists at the research institute The Markup first reported last week that the chatbot was giving faulty answers. It incorrectly stated that employers could take a portion of workers' tips and that there were no regulations requiring supervisors to notify employees of schedule changes.
"It's wrong in some areas, and we have to fix it," Adams, a Democrat, told reporters Tuesday, stressing that it was a pilot program. "Whenever you use technology, you have to put it into a real environment and solve problems," he said.
Adams has passionately advocated bringing untested technology to the city, with an optimism that has not always been borne out. Last year, he installed a 400-pound, vaguely egg-shaped robot at the Times Square subway station in the hope it would help police deter crime. It was decommissioned about five months later, after commuters noted that the robot appeared to do nothing and was unable to use the stairs.
The chatbot remained online Thursday and was still occasionally giving incorrect answers. It said store owners were free to go cashless, in clear disregard of the City Council's 2020 law that prohibits stores from refusing to accept cash. It also believed the city's minimum wage was still $15 an hour, even though it was raised to $16 an hour in 2024.
The chatbot, which relies on Microsoft's Azure AI service, appears to have been led astray by issues common to generative AI platforms such as ChatGPT, which are known to fabricate stories and make false claims with HAL-like confidence.
Microsoft declined to say what caused the problem, but said in a statement that it is working with the city to resolve the issue. "We expect to see a significant reduction in inaccurate responses as early as next week," the city's Office of Technology and Innovation said in a statement of its own.
Neither Microsoft nor City Hall responded to questions about what caused the error or how to fix it.
The city has updated the disclaimer on the MyCity chatbot's website to note that "responses may be inaccurate or incomplete," and to urge business owners not to use its responses as legal or professional advice.
Andrew Rigie, executive director of the New York City Hospitality Alliance, which advocates for thousands of restaurant owners, said he has heard from business owners who are perplexed by the chatbot's responses.
"I applaud the city for trying to use AI to help businesses, but we need to make it work," he said, warning that following some of the chatbot's guidance could have serious legal consequences. "If you ask a question and then have to go back to your lawyer to find out if the answer is correct, that defeats the purpose."
(Reporting by Jonathan Allen in New York; Editing by Stephen Coates)