Supported by a recent grant from the National Science Foundation, Dr. Xu’s project strives to innovatively train multiple medical imaging tasks together with a cache mechanism to build an efficient and effective multi-institutional collaborative system.
For Lanyu Xu, Ph.D., assistant professor of computer science and engineering, medicine is the art of compassionate care that combines scientific wisdom with the most advanced tools. Recent advancements in information technology, specifically in machine learning and artificial intelligence (AI), have shown significant promise for AI-based medical devices in diagnosis, management, and pharmaceutical development for various diseases. As scientists increasingly look for ways to integrate AI into medical imaging, Dr. Xu is building a one-for-all edge collaboration system to overcome the scarce-annotation problem that is hindering progress.
Automatic medical image segmentation has enormous potential to ease clinicians' workloads because models can learn complex representations in a data-driven manner. Applying computer vision and machine learning to medical image segmentation improves clinical workflow efficiency and reduces clinicians' repetitive tasks. Although supervised machine learning (SML) algorithms have been thoroughly investigated in medical image analysis research, training SML models on bioimaging datasets remains challenging because annotations are scarce.
“The success of an AI algorithm is directly correlated with the quality and size of its datasets. This task requires a large number of precisely annotated images, which is not always possible because of ethical, time, or cost considerations. Often, only a small set of labeled images is available alongside a larger pool of unlabeled ones. Scarce annotation significantly restricts the size and growth of medical image datasets,” Dr. Xu explains.
To address the problem of scarce annotation, Dr. Xu and OU computer science master’s student Fan Li first investigated multi-institutional collaboration as an emerging deployment model for medical image processing. Noticing a lack of investigation into distributed system performance, such as the trade-off between collaboration and efficiency, the researchers proposed a distributed system based on deep reinforcement learning for medical image segmentation. They conducted preliminary experiments in single- and multi-CPU and -GPU environments to demonstrate the system’s performance and the trade-off.
“What we learned is that the multi-institutional collaboration involves substantial computation and communication costs to train a dedicated model for only one type of task, which we think is not efficient,” Dr. Xu says. “Instead, many studies have shown that, while different in high-level features, conventional images share the same coarse features, which can be utilized by models. We believe the same is true for medical imaging. Therefore, we now focus on developing a shareable, or one-for-all, model for multiple tasks to address the scarce annotation problem in a more efficient manner,” she adds.
The one-for-all model seeks to discover the underlying connections and similarities among medical images. It trains a shareable model to extract multi-scale features, which are then fine-tuned for each of the multiple tasks.
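In spirit, a shareable model of this kind pairs one common feature extractor with lightweight task-specific heads, so features are computed once and reused across tasks. The sketch below is purely illustrative and is not Dr. Xu's implementation: the classes, "features," and weights are all hypothetical stand-ins for a real neural backbone and fine-tuned heads.

```python
# Illustrative sketch only (hypothetical names and numbers): a shared
# backbone extracts features once, and small per-task heads reuse them.

class SharedBackbone:
    """Stands in for a shared encoder producing multi-scale features."""
    def extract(self, image):
        # Toy "features": coarse statistics that every task can reuse.
        return {
            "coarse": sum(image) / len(image),  # low-level, widely shared
            "fine": max(image) - min(image),    # finer detail
        }

class TaskHead:
    """A small task-specific head fine-tuned on top of shared features."""
    def __init__(self, weight):
        self.weight = weight
    def predict(self, features):
        return self.weight * features["coarse"] + features["fine"]

backbone = SharedBackbone()
heads = {
    "segmentation": TaskHead(1.0),
    "classification": TaskHead(0.5),
}

image = [0.1, 0.4, 0.9, 0.2]
feats = backbone.extract(image)  # extracted once...
outputs = {name: head.predict(feats) for name, head in heads.items()}
# ...and reused by every task head, which is the efficiency the
# one-for-all design aims for.
```

The design point the sketch makes is that only the small heads differ per task; the expensive shared computation is amortized across all of them.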
The goals of the research are to design a multi-task learning model for medical imaging tasks, to create a cache mechanism that activates only the relevant portions of the system when interpreting a specific task, and to develop a prototype distributed multi-task learning system for medical imaging.
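One way to picture the cache idea is a registry that loads a task's components on first use and then reuses them, so handling a given task touches only the relevant portion of the system. The sketch below is a hypothetical illustration of that caching pattern, not the mechanism the project will actually build.

```python
# Hypothetical sketch of a task-level cache: components are loaded
# lazily on first request and reused afterward, so only the portions
# relevant to the requested task are ever activated.

class TaskCache:
    def __init__(self, loader):
        self._loader = loader  # builds a task component on demand
        self._cache = {}       # task name -> loaded component
        self.loads = 0         # counts expensive load operations

    def get(self, task):
        if task not in self._cache:
            self.loads += 1                      # cache miss: load once
            self._cache[task] = self._loader(task)
        return self._cache[task]                 # cache hit: reuse

# The loader here is a stand-in for loading real task-specific weights.
cache = TaskCache(loader=lambda task: f"<component for {task}>")
cache.get("segmentation")  # first use: loads the segmentation component
cache.get("segmentation")  # cache hit: no extra load
cache.get("detection")     # different task: loads its own component
```

After the three calls above, only two loads have occurred, mirroring the goal of activating each task's portion of the system once and reusing it thereafter.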
“Creating a one-for-all collaboration system for medical imaging will bring a significant technological breakthrough toward achieving practical AI in a clinical setting,” states Daniel Aloi, Ph.D., SECS Director of Research. “It will facilitate the model sharing among institutions and, therefore, will help improve clinical detection, diagnosis, and treatment.”
Medical imaging, however, is not the only area of the model’s application. Dr. Xu believes it to be a general-purpose framework that can be utilized in other fields with similar application requirements.
“Our one-for-all model will be used for image analysis to simultaneously perform multiple related tasks on the same set of input data. In medical imaging, these tasks often include segmentation, classification, detection, registration, and so on. Similarly, we can use this model for autonomous driving, for example. It will detect objects on the road while simultaneously identifying lane boundaries and segmenting the driving lane, contributing to safer and more reliable autonomous driving systems. In robotics, the model can detect an object while analyzing the best point at which to grasp and manipulate it,” Fan Li says.
The proposed system can also be easily adapted to other scenarios, such as smart homes and smart transportation, and used in undergraduate and graduate education and research with the goal of inspiring students’ interest in edge intelligence.
Anyone interested in Dr. Xu’s work can contact her at [email protected].