New computer chips could accelerate artificial intelligence (AI) applications and increase the speed and efficiency of business operations.
On Wednesday (March 6), EnCharge AI announced a partnership with Princeton University, supported by the U.S. Defense Advanced Research Projects Agency (DARPA), to develop advanced processors that can run AI models. DARPA's Optimum Processing Technology Inside Memory Arrays (OPTIMA) program is a $78 million effort to develop faster, more power-efficient, and scalable compute-in-memory accelerators for commercial AI.
“Companies are in the early stages of understanding how AI will transform their business,” Jonathan Morris, EnCharge's vice president of government affairs and communications, said in an interview with PYMNTS. “But what we know is that only half of AI's potential is realized when it is locked in the cloud and behind high implementation costs. A new generation of efficient AI processors enables on-device AI inference, overcomes the prohibitive costs of the cloud, and reduces energy usage and privacy concerns while enabling a variety of new use cases and experiences.”
Widespread implementation of AI chips
The project explores how new computer chips can run AI applications from start to finish. The goal is to make AI work outside of large data centers so it can be used in everyday settings such as phones, cars, and even factories.
EnCharge AI is already working on making these chips available and hopes to make them faster and more efficient with help from DARPA.
The new chips use switched-capacitor analog in-memory computing, a technology EnCharge AI is commercializing. The company claims the chips will be orders of magnitude more efficient than digital accelerators while maintaining precision and scalability not possible with current-based analog computing approaches.
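The core idea behind compute-in-memory is doing multiply-accumulate operations where the weights are stored, instead of shuttling data back and forth to a separate processor. The toy model below (all values, noise figures, and function names are illustrative assumptions, not EnCharge's actual design) contrasts an exact digital matrix-vector product with an analog-style version in which each row's products accumulate as charge on a shared capacitor line and are then digitized by an ADC:

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights stored in the "memory array" (illustrative values).
W = rng.uniform(-1, 1, size=(4, 8))   # 4 outputs, 8 inputs
x = rng.uniform(-1, 1, size=8)        # input activations

# Digital baseline: data moves to a processor, exact MACs.
y_digital = W @ x

def analog_mac(W, x, adc_bits=8, noise_std=1e-3):
    """Toy switched-capacitor-style MAC: multiply in place, sum as charge."""
    charge = W * x                  # per-cell multiply (charge on each capacitor)
    summed = charge.sum(axis=1)     # charge sharing performs the accumulation
    summed += rng.normal(0, noise_std, size=summed.shape)  # analog noise (assumed)
    # Uniform ADC quantization over an assumed fixed full-scale range.
    full_scale = W.shape[1]         # max possible |sum| when |w|, |x| <= 1
    step = 2 * full_scale / (2 ** adc_bits)
    return np.round(summed / step) * step

y_analog = analog_mac(W, x)
print(np.max(np.abs(y_digital - y_analog)))  # bounded by ADC step plus noise
```

The sketch shows why precision is the central trade-off in analog in-memory designs: the answer is computed where the weights live, but its accuracy is limited by ADC resolution and analog noise rather than by arithmetic alone.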
New computer chips could make personal computers much faster, allowing users to do more with business software without worrying about privacy or security issues, Morris said.
“The new generation of on-device AI applications could include AI assistants that recognize local files, real-time language translation, meeting transcription and summarization, and personalized, dynamic content generation,” he added. “Similar to the smartphone revolution, we are just beginning to understand the productivity gains that can be achieved with AI close to the user on the PC.”
EnCharge faces competition in a crowded market for AI accelerator hardware. Axelera and GigaSpaces are also working on in-memory hardware to speed up AI tasks, and NeuroBlade has secured venture capital funding for an in-memory inference chip designed for both data centers and edge devices.
Adding power-efficient chips
The demands of AI software far exceed what current hardware can provide, especially where power usage is constrained, Morris said. As a result, many AI applications now run on large, expensive, and power-hungry server farms in the cloud. Moving AI from cloud servers to personal computers will require computers to become much more efficient, he said, and new chips could help achieve that goal.
“We saw the rise of GPU-accelerated computing because the computing demands of 3D rendering could not be efficiently met by CPUs,” Morris said. “Similarly, the initial category of AI PCs will require specialized accelerators (NPUs) for AI applications.”