Coming Into View

Dr. Shadi Alawneh leads research to enhance technology that could minimize pedestrian fatalities by providing the auto-driver with accurate information about pedestrian behavior


Department of Electrical and Computer Engineering

December 15, 2021

By Arina Bokas


There is nothing more precious than a human life, and, when it is at stake, science often responds with the most advanced solutions. Shadi Alawneh, Ph.D., assistant professor of electrical and computer engineering, leads research to enhance technology that could minimize pedestrian fatalities by providing the driver or the auto-driver with accurate information about pedestrian behavior.

An analysis of data reported by State Highway Safety Offices shows that even during the COVID-19-related quarantine, pedestrian deaths in 2020 increased by 21 percent from 2019 — the largest annual increase since such data collection began in the mid-1970s. This is especially important to consider because, as more autonomous vehicles emerge on the streets in the years to come, ensuring pedestrian safety will be one of the most challenging tasks for fully or partially autonomous cars equipped with Advanced Driver Assistance Systems.

Thus far, numerous methods and algorithms have been developed for pedestrian detection, factoring in significant variations in human appearance due to environmental backgrounds, clothing and body shapes. This technology, however, often fails to prevent vehicle-to-pedestrian crashes when a person suddenly decides to cross the road. Thus, to issue timely alerts or to trigger safety braking action, it is crucial to determine whether a pedestrian intends to cross the road in the path of a vehicle.

“One-second prediction of a pedestrian crossing the road in front of a car moving at a typical urban speed of 30 mph (50 km/h) can provide a distance of 45.2 feet (13.8 meters) for an automatic vehicle or driver response. This distance could be even longer if the slowing-down action is initiated before the pedestrian starts crossing the road. Just two seconds of advance detection could make the difference between a fatal outcome and crash avoidance,” says Dr. Alawneh, stressing the importance of new vehicle technologies that could correctly forecast pedestrian behavior.
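The arithmetic behind this reaction-distance figure is straightforward. A minimal sketch (the helper name is hypothetical; the speeds and window are the ones quoted above, and the small gap between 44 ft and the quoted 45.2 ft comes from the rounded metric speed of 50 km/h):

```python
def reaction_distance_ft(speed_mph: float, warning_s: float) -> float:
    """Distance (in feet) a vehicle covers during the warning window."""
    ft_per_s = speed_mph * 5280 / 3600  # mph -> feet per second
    return ft_per_s * warning_s

# At a typical urban speed of 30 mph, a one-second warning buys
# 44 feet of response distance; two seconds buys 88 feet.
print(round(reaction_distance_ft(30, 1.0), 1))  # 44.0
print(round(reaction_distance_ft(30, 2.0), 1))  # 88.0
```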

While a human driver can read pedestrian behavior from various cues, such as head movement when looking toward the sides of the road, leg movements, or the body bending toward the street, it is not always easy to translate these signs into computerized algorithms. Yet all these signs are essential to designing assistive and autonomous driving systems better suited to urban environments.

To meet this challenge, Dr. Alawneh and his students, working in the GPU Computing Research Laboratory, developed a vision-based approach that combines deep-learning techniques and depth sensing to build a 3D understanding of pedestrian orientation relative to the camera view.

“The main concept in this approach is to construct a 3D visualization of the human body that gives a clear clue for the body orientation. To accomplish that, we focused on landmarks on the pedestrian that are highly related to the body orientation – the shoulders, the neck and the face,” Dr. Alawneh explains.

The system used a convolutional neural network (CNN) model to estimate the positions of the shoulders, the neck and the nose, then transformed these points into 3D space using a stereo vision system. This information was used to determine the risk level of the pedestrian crossing the road and a consequent action. To enable real-time performance, Graphics Processing Unit (GPU) acceleration was applied.
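The geometry behind this step can be sketched as follows. Assuming an idealized rectified stereo pair with focal length f and baseline B (the function names, camera parameters and pixel values here are illustrative assumptions, not the researchers' implementation), a landmark's pixel coordinates and stereo disparity give its 3D position, and the shoulder-to-shoulder vector then yields the body's yaw relative to the camera:

```python
import math

def backproject(u, v, disparity, f=700.0, B=0.12, cx=320.0, cy=240.0):
    """Back-project a pixel (u, v) with stereo disparity (pixels) into 3D
    camera coordinates: depth Z = f*B/disparity, then X, Y from the
    pinhole model. f is in pixels, B (baseline) in meters."""
    Z = f * B / disparity
    X = (u - cx) * Z / f
    Y = (v - cy) * Z / f
    return (X, Y, Z)

def shoulder_yaw(left_sh, right_sh):
    """Body yaw (degrees) from the 3D shoulder-to-shoulder vector,
    measured in the camera's horizontal X-Z plane. 0 degrees means the
    shoulders are parallel to the image plane (pedestrian facing the camera)."""
    dx = right_sh[0] - left_sh[0]
    dz = right_sh[2] - left_sh[2]
    return math.degrees(math.atan2(dz, dx))

# Illustrative input: CNN-detected shoulder pixels with stereo disparities.
ls = backproject(300, 240, 8.0)   # left shoulder
rs = backproject(340, 240, 7.0)   # right shoulder
print(round(shoulder_yaw(ls, rs), 1))  # one shoulder is farther away -> body turned
```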

This research produced a CNN model that detects human body landmarks with higher accuracy than previous models. It also increased the size of the labeled pedestrian dataset and made it possible to identify street-crossing intention by detecting a sudden change in pedestrian orientation toward the road.
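One simple way to flag a "sudden orientation change toward the road" from a sequence of per-frame yaw estimates is to threshold the angular rate. This is a hypothetical sketch of that idea; the function, threshold and frame rate are illustrative assumptions, not values from the research:

```python
def crossing_alert(yaw_deg_per_frame, fps=30.0, rate_threshold_deg_s=60.0):
    """Return the first frame index at which the pedestrian's body yaw
    changes faster than the threshold (a possible turn toward the road),
    or None if no such frame exists."""
    for i in range(1, len(yaw_deg_per_frame)):
        rate = abs(yaw_deg_per_frame[i] - yaw_deg_per_frame[i - 1]) * fps
        if rate > rate_threshold_deg_s:
            return i
    return None

# A pedestrian walking parallel to the road (yaw near 90 degrees) who
# abruptly turns toward it between frames 3 and 4:
yaws = [88, 89, 90, 89, 60, 30, 5]
print(crossing_alert(yaws))  # 4
```

In a real system the raw per-frame estimates would be noisy, so some smoothing or filtering would precede a check like this.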

“GPU computing is being used for a wide range of real-world applications. Many prominent science and engineering fields that we take for granted today would not have progressed so quickly if it were not for GPU computing. Dr. Alawneh’s GPU Computing Research Laboratory and his work serve to strengthen SECS’ competitive research position for future projects in high-performance computing,” says Daniel Aloi, Ph.D., SECS director of research and professor of electrical and computer engineering.

In addition to the GPU-accelerated pedestrian orientation estimation work, Dr. Alawneh’s research interests include building AI models using deep neural networks for autonomous driving and GPU-accelerated Gabor transform for SAR image compression.

Visit the current research projects page for more information on work conducted at the SECS GPU Computing Research Laboratory.
