Hey, I'm @CatLe
Hi there! I’m an AI Scientist passionate about advancing Machine Learning Algorithms, Generative Modeling, and Time Series Forecasting — creating interpretable and human-centered AI that understands patterns, relationships, and causality.
Smarter 💡 Safer 🔒 Scalable 🚀 — AI that learns with purpose, scales with trust, and evolves with creativity.
About me
I’m an AI Scientist at UL Solutions, where I design and apply advanced machine learning techniques to build safe, transparent, and reliable AI systems for real-world applications. I earned my Ph.D. in Machine Learning from Duke University, where I explored Task Affinity and its applications in Machine Learning under the guidance of Dr. Vahid Tarokh. Before that, I completed my M.S. in Electrical Engineering at the California Institute of Technology (Caltech) and my B.S. in Electrical and Computer Engineering at Rutgers University, graduating Summa Cum Laude. I’m passionate about bridging the gap between cutting-edge AI research and practical safety applications, ensuring that intelligent systems not only perform well but also behave responsibly.
My research bridges Machine Learning Algorithms, Computer Vision, and Natural Language Processing, with a focus on Transfer Learning, Few-Shot Learning, Multi-Task Learning, and Neural Architecture Search. I’ve been honored with distinctions such as the Matthew Leydt Society, John B. Smith Award, and Nikola Tesla Scholar, and I’m an active member of Tau Beta Pi, Eta Kappa Nu, and Sigma Alpha Pi, reflecting my dedication to excellence and innovation in engineering and research.
Recent adventures
🤖 AI/ML Scientist, UL Solutions (2025 - Present)
👨‍🔬 Instructor & Postdoctoral Researcher, Duke University (2023 - 2025)
📈 Research Scientist, Amazon (2022 - 2023)
🔬 Doctoral Researcher, Duke University (2018 - 2023)
💻 Software Engineer, Motorola Solutions (2017 - 2018)
🎓 Electrical Engineering Student, California Institute of Technology & Rutgers University (2014 - 2017)
Projects
Task-Aware Lifelong Learning for GANs
Improving lifelong learning for GANs with task-aware methods (ICPR 2026).
Generative Models and Bootstrapping for Titanium Alloy Microstructure Analysis
Applying generative models and bootstrapping to titanium-alloy EBSD analysis (ICPR 2026).
Counterfactual Data Augmentation with Contrastive Learning
Augmentation methods for Counterfactual data using Contrastive Learning (UAI 2025).
Perceiving Copulas for Multimodal Time Series Forecasting
Improving time series prediction with copula-based models and a perceiver network (WSC 2024).
Open-Domain Dialog Evaluation
Improving an open-domain dialog evaluation system using causal inference models (IWSDS 2023).
Transfer Learning for ITE Estimation
Applying transfer learning for Individual Treatment Effect estimation (UAI 2023).
Task Affinity in Few-Shot Learning
Using maximum bipartite matching to evaluate task affinity in few-shot learning scenarios (ICLR 2022).
Fisher Task Distance & NAS
Applying Fisher Task Distance in Neural Architecture Search to improve model selection (IEEE Access 2022).
Task-Aware NAS (TA-NAS)
Task-Aware Neural Architecture Search framework for efficient model design (ICASSP 2021).
Automated ML via Transfer Learning
Improving automated machine learning by leveraging transfer learning techniques.
Supervised Encoding
Supervised encoding for discrete representation learning (ICASSP 2020).