
Memristor Revolution

Dr. Sunny Raj’s research at Oakland University aims to revolutionize data processing with energy-efficient memristor-based computing for the future of AI.


Dr. Raj and Ph.D. student Pranav Sinha proposed a new way to design small, energy-efficient memristor crossbar circuits for decision trees.


December 11, 2024

By Arina Bokas


The race is on to find computing solutions that can keep up with the modern data-driven era’s demands. Everything from social media posts and online transactions to scientific research and medical records is contributing to the accumulation of data — with projections reaching a staggering 175 zettabytes by 2025. At Oakland University, Sunny Raj, Ph.D., assistant professor of computer science and engineering, is at the forefront of game-changing research to redefine how we process information and power the technologies of tomorrow.

In sharp contrast to the projected need for fast, energy-efficient computing, the rate of advancement in computing devices has slowed. Computer technologies are falling behind due to the diminishing returns from Moore's Law and Dennard scaling, compounded by the limitations of the von Neumann bottleneck. Consequently, they'll be unable to handle the large volumes of data and machine learning tasks of the future. One solution is emerging: in-memory computing using memristors. Memristors are smaller, faster and more energy efficient than current devices. Furthermore, memristor-based, in-memory computing promises lightning-fast processing, incredible energy efficiency and the ability to process massive amounts of data effortlessly.

“Currently, there are two main strategies for using memristors: combining them with traditional computing elements to speed up certain tasks, or using them for tasks like neuromorphic and flow-based computing. However, neither of these approaches has yet provided a solution for running data-heavy machine learning algorithms entirely on memristor-based systems,” Dr. Raj explains, describing the pressing need for a better solution, which his current research aims to provide.

Dr. Raj’s previous research efforts have already pushed the boundaries of machine learning and hardware design. As a Ph.D. student at the University of Central Florida in Orlando, he was part of the team that introduced attribution-based confidence, a new way to measure how much trust can be placed in the results of deep neural networks (DNNs). This tool helped the team understand whether or not the output from a DNN was reliable.

More recently, Dr. Raj and his OU Ph.D. student Pranav Sinha proposed a new way to design small, energy-efficient memristor crossbar circuits for decision trees using flow-based computing.

“Our approach involved developing a binary classification graph (BCG) and mapping it onto a pure memristor crossbar array. BCG is a binary version of a decision tree, a popular machine learning algorithm, wherein decision-making is driven by bits instead of numerical thresholds. The crossbar design not only compensates for manufacturing defects, but also offers superior energy efficiency and resilience to radiation degradation,” says Sinha.

“The development of a new design for energy-efficient memristor crossbar circuits for decision trees represents a major step forward in hardware design,” adds Dr. Raj.
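To make the idea of bit-driven decision-making more concrete, here is a minimal, hypothetical Python sketch: raw features are first reduced to bits, and classification then follows a path of single-bit tests through a small graph. The feature names, thresholds and graph structure are invented for illustration only and are not taken from the team's design, which additionally maps such a graph onto a memristor crossbar array.

```python
# Hypothetical sketch (not the authors' actual design): a toy binary
# classification graph in which every decision is a single bit test,
# mirroring the idea of replacing numerical thresholds with bit-driven
# branching. All names and thresholds below are invented for illustration.

# Step 1: binarize raw features into bits using fixed example thresholds.
def binarize(sample):
    return {
        "temp_high": int(sample["temperature"] > 30.0),  # assumed threshold
        "humid_high": int(sample["humidity"] > 0.6),     # assumed threshold
    }

# Step 2: each graph node tests one bit and routes to one of two children;
# leaves hold the class label (0 or 1).
BCG = {
    "root": ("temp_high", "n_humid", "leaf_0"),   # (bit, child_if_1, child_if_0)
    "n_humid": ("humid_high", "leaf_1", "leaf_0"),
    "leaf_0": 0,
    "leaf_1": 1,
}

def classify(sample):
    bits = binarize(sample)
    node = "root"
    while isinstance(BCG[node], tuple):           # follow bit decisions to a leaf
        bit_name, if_one, if_zero = BCG[node]
        node = if_one if bits[bit_name] else if_zero
    return BCG[node]

print(classify({"temperature": 35.2, "humidity": 0.7}))  # -> 1
print(classify({"temperature": 22.0, "humidity": 0.4}))  # -> 0
```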

The current research, which started in 2024 with a grant from the National Science Foundation titled “In-Memory Machine Learning using Sneak-Paths in Crossbars for Robustness and Energy Efficiency,” aims to further harness the potential of memristor crossbar circuits by developing a more compact, energy-efficient solution to the challenges facing machine learning tasks.

Specifically, the team, which now includes Ph.D. students Akash Chavan, Pranav Sinha and Mehrnaz Sarvarian, along with undergraduate student Souleymane Sono, is focused on identifying new data structures to improve how memristors handle simple machine learning tasks. The goal is to optimize the design of memristor crossbar circuits to use less space, energy and time, and to develop methods for efficiently combining functions in complex machine learning algorithms so that memristor-based systems work better.

“By conducting computations in-memory, memristors substantially reduce the latency involved in data transfer, seen in conventional architectures. Additionally, the reduction in data movement leads to lower energy consumption, addressing a major energy concern in traditional computing,” says Chavan. “Memristors are instrumental in developing neuromorphic computing systems that imitate the human brain's processing methods, paving the way for more efficient and powerful AI systems.”

This project could have a significant impact, enabling energy-efficient, durable machine learning systems that can operate in harsh environments like space or nuclear plants.

In addition to his research, Dr. Raj is a recipient of the Department of Energy (DOE) grant “Mobilizing the Emerging Diverse AI Talent (MEDAL) through Design and Automated Control of Autonomous Scientific Laboratories,” a collaborative effort that includes Oakland University, the University of Texas at San Antonio, Argonne National Laboratory, Bowie State University, Cleveland State University, Florida International University and the University of Central Florida. This partnership aims to cater to diverse learning preferences, enhance educational experiences and train students for DOE-relevant AI engineering and research, with a strong focus on inclusive recruitment to equip the next generation of researchers for DOE roles.

Anyone interested in Dr. Raj’s work can contact him at [email protected].