Research Focus: Dr. Lan Emily Zhang, Assistant Professor of Electrical and Computer Engineering at Clemson University
Dr. Lan Emily Zhang is an assistant professor of Electrical and Computer Engineering at Clemson University, specializing in distributed AI and wireless communication technologies for various Internet-of-Things (IoT) applications, including healthcare. Her research within the Biomedical AI Core of the ADAPT in SC project focuses on developing AI-driven multiscale modeling techniques that optimize and adapt AI models across different hardware platforms, enabling real-time and resource-efficient healthcare solutions.
Efficient AI Model Adaptation for Heterogeneous Platforms
The deployment of AI models across different types of hardware—from powerful high-end servers to resource-constrained edge devices—is essential for achieving real-time, efficient AI applications in healthcare. High-end platforms excel at intensive model training, while low-end devices are crucial for on-device inference, where limitations on power, memory, and latency present significant challenges. Dr. Zhang’s research addresses these challenges through a multiscale model compression framework designed to efficiently adapt AI models for a variety of hardware environments.
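To make the hardware-adaptation idea concrete, the sketch below shows one standard compression step often used when moving a model from a server to a resource-constrained edge device: 8-bit affine post-training quantization of a weight tensor. This is a minimal illustration of the general technique, not Dr. Zhang's published framework; the function names and the choice of affine quantization are assumptions for the example.

```python
import numpy as np

def quantize_weights(w, num_bits=8):
    """Map float32 weights to uint8 codes plus (scale, zero_point) metadata."""
    qmax = 2**num_bits - 1
    lo, hi = float(w.min()), float(w.max())
    scale = (hi - lo) / qmax if hi > lo else 1.0
    # Round each weight to its nearest quantization level.
    q = np.round((w - lo) / scale).astype(np.uint8)
    return q, scale, lo

def dequantize(q, scale, lo):
    """Recover approximate float32 weights from the uint8 codes."""
    return q.astype(np.float32) * scale + lo

w = np.linspace(-1.0, 1.0, 256).astype(np.float32)
q, scale, lo = quantize_weights(w)
w_hat = dequantize(q, scale, lo)
# Storage drops 4x (uint8 vs float32); reconstruction error is
# bounded by half a quantization step.
```

The same principle extends to the multiscale setting: each hardware tier receives a version of the model compressed to fit its power, memory, and latency budget.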
A key innovation in her work is the introduction of bi-directional knowledge distillation, a technique that enables different versions of AI models to learn from each other. Unlike previous approaches that rely on public datasets or pre-trained data generators, Dr. Zhang’s method offers an on-device, knowledge-agnostic solution that eliminates data-dependency concerns. In this approach, the server adversarially learns a generative model alongside the global model, leveraging the ensemble of collected on-device models to improve performance. A novel softmax L1 (SL) loss function is introduced to facilitate zero-shot federated knowledge distillation by overcoming the limitations of traditional loss functions, such as vanishing gradients in KL-divergence loss and instability in L1 norm loss, ensuring more robust and accurate model performance. This iterative learning loop not only preserves model accuracy but also ensures efficiency across various hardware tiers.
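The page does not give the exact formula for the SL loss, but a natural reading of "softmax L1" is the L1 distance between the softmax outputs of two models; the sketch below implements that reading as a hedged illustration (the definition and function names here are assumptions, not Dr. Zhang's published code). Unlike raw-logit L1, the distance is computed on bounded probability vectors, and unlike KL-divergence it does not involve logarithms whose gradients vanish when the distributions nearly match in the tails.

```python
import numpy as np

def softmax(z):
    # Subtract the row max for numerical stability before exponentiating.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def sl_loss(student_logits, teacher_logits):
    """Assumed 'softmax L1' loss: mean L1 distance between softmax outputs.

    Bounded in [0, 2] per sample, since each softmax output sums to 1.
    """
    p_s = softmax(student_logits)
    p_t = softmax(teacher_logits)
    return np.abs(p_s - p_t).sum(axis=-1).mean()
```

Because each probability vector sums to one, the per-sample loss lies in [0, 2], which keeps gradient magnitudes well-behaved regardless of how confident either model is.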
The figures below illustrate the multiscale training framework and representative results.
Dr. Zhang’s multiscale AI deployment techniques have been rigorously validated on image processing tasks, demonstrating their scalability and potential for real-time, resource-efficient predictions in healthcare. Her contributions to the ADAPT in SC project are paving the way for more effective AI-based healthcare solutions by enabling seamless transfer of AI models across diverse hardware platforms without compromising efficiency or accuracy.
Personal website: https://sites.google.com/view/lan-zhang/home
A multiscale model training framework through bidirectional knowledge distillation techniques with a novel loss function.
Results demonstrated the scalability of our approach over a variety of heterogeneous platforms.
The gradients of the KL-divergence loss tend to vanish, while those of the L1 norm loss are unstable; our SL loss overcomes both problems.