ECE startup MemryX releases MX3 Edge AI Accelerator

MemryX has announced production availability of a new AI chip, enabled by U-M ECE research.
The MemryX MX3 four-chip M.2 module. Photo courtesy of Wei Lu.

U-M Electrical and Computer Engineering (ECE) startup company MemryX has brought a new product to market that promises to transform applications of artificial intelligence (AI) on everyday devices. Its MX3 Edge AI Accelerator lets users process large amounts of data and run machine learning models directly on their local devices.

“The MX3 is a very capable AI inference chip,” explained ECE Prof. Wei Lu, the James R. Mellor Professor of Engineering and co-founder of MemryX. “With this AI accelerator, you can reduce the power and financial cost required to run the AI models, and get higher performance. This is all due to the very innovative hardware that we developed initially at the University of Michigan.”

AI inference occurs when pre-trained machine learning models are deployed to solve a problem or perform a task. It is different from AI model training, which is performed at massive cloud data centers, such as those operated by Google, Facebook, Microsoft, or OpenAI.

“When you send a ChatGPT prompt and get a response, that’s done in the cloud,” Lu said. “But there are a lot of other use cases where you want to do the inference at the edge, meaning locally on a device like your phone or computer.”

Examples of these localized edge applications include sensing defects in factory equipment, detecting accidents in a mass transit system, monitoring whether people are bringing concealed weapons into a public museum, or adjusting stoplights in response to traffic patterns. Local processing can also ease concerns about reliability, speed, and confidentiality. For example, it would be faster and safer to process patient medical data directly at the hospital or clinic where it was collected.
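The distinction is easy to see in code. Below is a minimal sketch of what local inference looks like in practice, using the open-source ONNX Runtime library as a generic stand-in and a hypothetical pre-trained model file; the article does not describe MemryX's own software interface, so nothing here should be read as the MX3's actual API.

```python
# A minimal sketch of local (edge) AI inference: a pre-trained model
# runs entirely on the local machine, with no cloud round trip.
# ONNX Runtime is a generic stand-in here; "model.onnx" is a
# hypothetical pre-trained model file.
import numpy as np
import onnxruntime as ort

# Load the pre-trained model onto the local device.
session = ort.InferenceSession("model.onnx")

# Build one input matching the model's expected shape
# (a 224x224 RGB image tensor is assumed here).
input_name = session.get_inputs()[0].name
frame = np.random.rand(1, 3, 224, 224).astype(np.float32)

# Run inference locally; the data never leaves the device.
outputs = session.run(None, {input_name: frame})
print(outputs[0].shape)
```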

Until now, there have been few good options for this type of efficient, local AI analysis. Companies have instead used integrated processors, which do not offer the performance needed for AI applications, or relied on NVIDIA chips, which are designed for graphics and data centers. Those chips require a lot of power, are expensive, and are sometimes physically too large to fit into the systems users are trying to deploy. Other solutions for local AI inference require engineers to retrain models before running them locally, because the chips are not flexible enough to run the high-precision models trained in the cloud. That process can take months and dozens of software engineers, making such chips very challenging to use for edge applications. This is where the MX3 comes in.

“We want to do the inference to use the trained models locally and very efficiently,” said Lu. “Our chips use 10 times less power, are 10–100 times smaller, and can run a few times faster than current leading solutions. Our architecture is so efficient that we don’t have to sacrifice the precision of the models, and users can program their AI models on our chips effortlessly within minutes.”

The MX3 is also versatile: it is available as a single chip or as a four-chip M.2 module, and it interfaces with standard computer hardware and operating systems. MemryX is working with Taiwan Semiconductor Manufacturing Company (TSMC) to fabricate the chips at scale, and customers have already committed to using them.

“This is really exciting,” Lu said. “For a very small company of about 50 people to actually produce a very high-quality, high-performance AI chip in volume is really remarkable. We already have our first revenue, and we have customers, for example, using the chips in their server racks to process very large amounts of video in real time, and using them in factory settings to do this inference at the edge for dozens of different machine learning models and use cases.”

MemryX is also committed to doing their part in educating and training the workforce to use future AI technologies. They plan to make their tools accessible so that students at universities, community colleges, and even in advanced high school classes can learn how to quickly develop their own low-cost AI applications.

“AI is here; it’s going to be everywhere, more and more involved in our lives,” Lu said. “Our goal is to actually help people develop the skills for these new applications.”
