The tech press lit up with reviews of the Rabbit R1, a bright orange "AI box" that recently started shipping to users after garnering a lot of attention at the CES trade show in January. Smaller than a smartphone, the $200 standalone device is designed to respond to queries much like ChatGPT, Gemini, or other popular AI chatbots, but it can also hail an Uber, order food via DoorDash, play music from Spotify, generate images, and more. Reviewers have found the device charming but largely useless: these features aren't particularly good, the R1 (like chatbots) frequently provides incorrect information, and it has many other limitations besides. But regardless of its usefulness, one thing the R1 has succeeded at is putting LAMs on the map.
In addition to an LLM, the Rabbit R1 uses another type of model called a large action model (LAM) to perform the four aforementioned tasks involving other services (the company says it plans to add support for more soon). LAMs are a new type of model that essentially turns an LLM into an agent, one that can not only provide information but also connect to external systems and perform actions on a user's behalf. That is, they take in natural language (whatever you tell them to do) and output an action (doing what you asked).
While LLMs are limited to generating content and cannot take other actions, LAMs build on their success. LAMs use the same underlying Transformer architecture that made LLMs possible, potentially opening up a host of new use cases and AI applications. The long-sought vision of a true AI assistant, for example, clearly requires the ability to perform tasks independently. LAMs are poised to play a key role in AI's continued development toward that vision and, if all goes to plan, to further expand the role and power of AI in our lives.
The LAM-based actions the R1 can currently perform leave a lot to be desired. In a review for The Verge, David Pierce asked the R1 to play Beyoncé's new album, and all it managed was a lullaby version of "Crazy in Love" by the artist Rockabye Baby! Online, users have also complained about the R1 ordering the wrong meal or having it delivered to the wrong location. Frustrating, sure, but not the end of the world. Yet while LAMs are still in their infancy, ambitions to use them for more critical use cases across almost every sector are growing.
Microsoft, for example, says a LAM it has developed "can perform complex tasks in a variety of scenarios," including playing video games, navigating the real world, and using operating systems. A recent paper that includes authors from the company proposes a training paradigm for AI agents across a wide range of domains and tasks, demonstrating performance in healthcare, robotics, and gaming in particular.
There is also, of course, growing interest in how these models can be deployed within companies. Salesforce is eyeing LAMs for its Service Cloud and Sales Cloud products, looking to have models perform actions on behalf of clients. In March, banking platform NetXD announced a "LAM for the enterprise" aimed at banks, healthcare companies, and other institutions, which it says can understand user instructions and generate code that automates the execution of microservices and actions. There's also Adept, a startup valued at $1 billion and backed by Microsoft and Nvidia in its pursuit of "machine learning models that can talk to anything on a computer." LLMs are already everywhere, and LAMs are definitely picking up speed right behind them.
More AI news here.
Sage Lazzaro
sage.lazzaro@consultant.fortune.com
sagelazzaro.com
Correction: Last week's edition (April 25) stated that Eli Lilly acquired AI drug discovery startup XtalPi last year in a deal worth $250 million. In fact, it was a collaboration deal worth that amount.
AI in the news
Anthropic launches Team, its first plan for enterprises. The new plan includes access to all three versions of the company's Claude chatbot, plus administrative tools, billing management, the ability to upload longer documents, longer "multi-step" conversations, and various other features. It costs businesses $30 per user per month. Anthropic also announced an iOS app that will be free to all users, CNBC reported.
Justice Department antitrust documents reveal that Microsoft invested in OpenAI out of fear of falling behind Google, according to Business Insider. In 2019 email exchanges released as part of the Justice Department's antitrust lawsuit against Google, Microsoft CTO Kevin Scott is shown warning CEO Satya Nadella and co-founder Bill Gates that Google's AI was becoming too good and that Microsoft was years behind its competitors "in terms of ML scale." Nadella forwarded the email, under the subject line "Thoughts on OpenAI," to Microsoft CFO Amy Hood, saying it shows "why we need to do this," although much of Scott's email was redacted. The exchange shows once again how competition in the AI space has intensified over the years, and how tech giants are leveraging their resources and deep pockets to partner with startups to stay competitive. You can view the email here.
Nvidia has added new models and support for voice queries to ChatRTX. In addition to Mistral and Llama 2, ChatRTX users will now be able to use Google's Gemma, ChatGLM3, and OpenAI's CLIP models, The Verge reported. Nvidia also announced that it has integrated Whisper, an AI speech recognition system, allowing users to search their data by voice. ChatRTX, first introduced in February as "Chat with RTX," differs from other chatbots in that it runs locally on a PC. Users need an RTX 30- or 40-series GPU with at least 8 GB of VRAM to run it.
Sam's Club expands AI-powered anti-theft arches to 20% of its stores. Rather than having staff match customers' purchases to their receipts on the way out, as stores previously required, the large blue exit arches now installed in 120 stores use computer vision to do the checking. The company told TechCrunch this has made customer exits 23% faster. However, some have complained online that there is now a line just to pass through the arch, rather than to be checked by a staff member. Sam's Club says the cameras in the exit arch only take images of the cart, do not use biometrics, do not capture personal information, and store images only temporarily to improve the model before deleting them. In announcing the expansion, the company also took a jab at Amazon, which recently pulled back its Just Walk Out technology, noting that its own rollout comes as "another retailer" is "struggling to deploy similar technology at scale and abandoning some efforts."
Eye on AI research
Goodbye, bad data? When bad data enters an LLM, such as data that can lead to biased outputs, privacy violations, or copyright infringement, it stays there for good. This is a problem plaguing major models such as Google's Gemini and OpenAI's GPT-4. Unlike traditional software, these models are too large and complex for specific data points to be removed, and fine-tuning can only improve them so much. However, AWS researchers believe they have discovered a potential way to remove problematic data from foundation models.
The paper, recently published in the Proceedings of the National Academy of Sciences and reported on by Semafor, outlines both known and new approaches to what the researchers call "model disgorgement." One new technique they propose involves training the model differently from the start: splitting the dataset into multiple subsets called "shards" and training a separate model on each shard.
"Disgorgement of a sample therefore only concerns the submodels that were trained on the subset containing that sample. Disgorgement requests can be addressed simply by removing or retraining the components of the model that were exposed to the cohort of data in question," the researchers wrote.
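The appeal of the sharding approach is easy to see in miniature. The sketch below is an illustrative toy, not the paper's implementation: the "model" here is just a running mean of its shard, and the names (`ShardedEnsemble`, `disgorge`) are invented for the example. What it shows is the key property the researchers describe, that removing a sample only requires retraining the one submodel that ever saw it.

```python
def train(shard):
    # Stand-in "model": here simply the mean of the shard's values.
    # In practice each shard would train a full model in isolation.
    return sum(shard) / len(shard)

class ShardedEnsemble:
    """Toy sharded training: each submodel sees one disjoint shard,
    so disgorging a sample retrains only the affected submodel."""

    def __init__(self, dataset, num_shards=4):
        # Deterministically assign each sample to exactly one shard.
        self.shards = [[] for _ in range(num_shards)]
        for x in dataset:
            self.shards[hash(x) % num_shards].append(x)
        self.models = [train(s) for s in self.shards]

    def predict(self):
        # Ensemble output: aggregate the independent submodels.
        return sum(self.models) / len(self.models)

    def disgorge(self, x):
        # Remove x and retrain ONLY the submodel whose shard held it;
        # the other submodels never saw x and are untouched.
        idx = hash(x) % len(self.shards)
        self.shards[idx].remove(x)
        self.models[idx] = train(self.shards[idx])
```

The cost of honoring a removal request scales with one shard rather than the whole dataset, which is what makes the scheme attractive for foundation-scale models.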
Fortune on AI
An AI company you've never heard of raises $1 billion. What CoreWeave's new $19 billion valuation actually means —Sharon Goldman
Microsoft signs $10 billion green energy deal as power-hungry AI strains its emissions commitments —Dylan Sloan
The billion-dollar dilemma for AI startups: Why high valuations are a hurdle in the race for talent —Sharon Goldman and Allie Garfinkle
Amazon's generative AI business reaches multi-billion dollar run rate, re-accelerating cloud growth —Jason Del Rey
Apple's promised AI plans are 'all that matters' as it seeks to catch up with Big Tech rivals, analyst says —Rachel Jones
There have been so many bear attacks in Japan over the past year that AI is now being used as a warning system. —Chris Morris
CIOs and CTOs talk about the biggest bottlenecks in AI —Andrew Nusca
AI calendar
May 7th: Leading the AI Conference hosted by Harvard Business School and D^3
May 7th-11th: International Conference on Learning Representations (ICLR) Vienna
May 21st to 23rd: Microsoft Build in Seattle
June 5th: FedScoop's FedTalks 2024 in Washington, DC
June 25th-27th: 2024 IEEE Conference on Artificial Intelligence in Singapore
July 15th-17th: Fortune Brainstorm Tech in Park City, Utah (register here)
July 30th-31st: Fortune Brainstorm AI Singapore (register here)
August 12th-14th: Ai4 2024 in Las Vegas
Eye on AI numbers
$32 billion
That's how much Alphabet, Microsoft, and Meta spent on AI infrastructure, including servers, data centers, and other infrastructure, in the first quarter of 2024, according to CB Insights. Alphabet alone spent $12 billion, a huge 91% increase from the same period last year. The ballooning costs further demonstrate how expensive the AI game is for these companies, and why they dominate the field. As we've previously reported, Fortune's data on AI fundraising and model training basically shows that they're the only ones who can afford it.