Accepted Papers:
1. Corruption-Tolerant Asynchronous Q-Learning with Near-Optimal Rates [Accepted at the 43rd International Conference on Machine Learning (ICML 2026)]
[Poster] [Paper] [Code] [Slides]
2. Adversarially-Robust TD Learning with Markovian Data: Finite-Time Rates and Fundamental Limits [Accepted at the 28th International Conference on Artificial Intelligence and Statistics (AISTATS 2025)]
[Paper] [Proceedings] [Code] [Slides]
3. Robust Q-Learning under Corrupted Rewards [Accepted at the 63rd IEEE Conference on Decision and Control (CDC 2024)]
[Paper] [Proceedings] [Code] [Slides]
4. Variance-Reduced Q-Learning over Static and Time-Varying Networks [Accepted at the American Control Conference (ACC 2026)] [Paper] [Code]
5. Robust Federated Q-Learning with Almost No Communication [Accepted at the American Control Conference (ACC 2026)] [Paper] [Code]
Papers Under Review/Preparation:
1. Fragile Object Transportation by a Multi-Robot System in an Unknown Environment Using a Semi-Decentralized Control Approach [Under Review]
2. Is Q-Learning Robust under State-Adversarial Perturbations?
3. Optimal Rates and Information-Theoretic Limits for Federated Q-Learning.
Workshop Presentations:
1. Robust Federated RL with Byzantine Agents (Poster at Applied AI Symposium 2025 at North Carolina State University)
2. Adversarially-Robust TD Learning with Markovian Data (Poster at New York Reinforcement Learning Workshop 2025, Amazon, New York)
3. Towards Finite-Time Rates for Adversarially-Robust Reinforcement Learning: Mathematical Guarantees and Fundamental Limits (Invited Talk / Poster at the Northeast Systems and Control Symposium, NESCW 2025, Columbia University, New York)
4. Adversarially-Robust Deep Q-Network for Algorithmic Trading (Poster at MLSS 2025, Neural Networks, North Carolina State University)
5. Robust Algorithms for Adversarial Reinforcement Learning (Poster at Applied AI Symposium 2024 at North Carolina State University)