Microsoft has reportedly built a generative AI model designed for U.S. intelligence agencies.
The development marks a milestone in the field of artificial intelligence (AI): Microsoft officials say the AI is a large language model (LLM) that can function completely separate from the internet, Bloomberg News reported Tuesday (May 7).
“This is the first time there is an isolated version. Isolated means it is not connected to the internet, but on a special network that only the U.S. government has access to,” William Chappell, Microsoft's chief technology officer for strategic missions and technology, told Bloomberg.
According to the report, most AI models rely on cloud services to glean patterns from data, but Microsoft wanted to provide U.S. intelligence agencies such as the CIA with a truly secure system.
According to Bloomberg, intelligence officials have emphasized that they want to use the same type of AI tools that are said to be transforming the business world. Last year, the CIA launched a ChatGPT-style service for unclassified information, but government agencies want something that can handle more sensitive data.
“There is a race to incorporate generative AI into intelligence data,” Sheetal Patel, deputy director of the CIA's Transnational and Technology Mission Center, said at a recent security conference at Vanderbilt University, according to Bloomberg.
She added that the first country to use generative AI for intelligence will win that race. “And we want that to be us.”
Microsoft's latest initiative follows reports that the company is training a new in-house AI model said to be “much bigger” than the open-source models it has trained in the past.
The new model, “MAI-1,” is expected to have about 500 billion parameters and is designed to compete with models created by companies like Google, Anthropic and OpenAI (which is also backed by Microsoft).
Meanwhile, PYMNTS on Monday (May 6) reviewed some of the challenges and concerns that have arisen around the use of AI LLMs.
“LLMs may fabricate information, affecting credibility and trustworthiness,” the report said. “Models can perpetuate biases in their training data and generate misinformation. Using them to create online content at scale can accelerate the spread of fake news and spam. Policymakers are concerned about the impact on employment as LLMs encroach on knowledge work.”
Questions have also arisen around intellectual property, because these models are trained using copyrighted material.