Sreejeet Maity, Ph.D. student at North Carolina State University, Raleigh.
Research
Robust Q-Learning under Corrupted Rewards (To appear in IEEE CDC 2024)
Recently, there has been a surge of interest in analyzing the non-asymptotic behavior of model-free reinforcement learning algorithms. However, the performance of such algorithms in non-ideal environments, such as in the presence of corrupted rewards, is poorly understood. Motivated by this gap, we investigate the robustness of the celebrated Q-learning algorithm under a strong-contamination attack model, where an adversary can arbitrarily perturb a small fraction of the observed rewards. We start by proving that such an attack can cause the vanilla Q-learning algorithm to incur arbitrarily large errors. We then develop a novel robust synchronous Q-learning algorithm that uses historical reward data to construct robust empirical Bellman operators at each time step. Finally, we prove a finite-time convergence rate for our algorithm that matches known state-of-the-art bounds (in the absence of attacks) up to a small, inevitable error term that scales with the adversarial corruption fraction. Notably, our results continue to hold even when the true reward distributions have infinite support, provided they admit bounded second moments. This work has been accepted for presentation and publication in the proceedings of the 63rd IEEE Conference on Decision and Control (CDC), 2024!
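For intuition, here is a minimal sketch of the kind of update studied above, in a tabular, synchronous setting. It is not the paper's exact estimator: the robust empirical Bellman operator is approximated here with a simple trimmed mean over each state-action pair's reward history, and sample_rewards and sample_next_state are hypothetical simulator callbacks introduced only for illustration.

```python
import numpy as np

def trimmed_mean(x, eps):
    """Drop roughly an eps-fraction of the smallest and largest samples, then average.
    (A simple stand-in for a robust mean estimator against corrupted rewards.)"""
    x = np.sort(np.asarray(x, dtype=float))
    k = int(np.ceil(eps * len(x)))
    trimmed = x[k:len(x) - k] if len(x) > 2 * k else x
    return trimmed.mean()

def robust_sync_q_learning(sample_rewards, sample_next_state, n_states, n_actions,
                           gamma=0.9, eps=0.05, T=500, alpha=0.1):
    """Synchronous Q-learning where each (s, a) pair keeps its reward history and
    plugs a robust (trimmed-mean) reward estimate into the Bellman update."""
    Q = np.zeros((n_states, n_actions))
    history = [[[] for _ in range(n_actions)] for _ in range(n_states)]
    for _ in range(T):
        Q_new = Q.copy()
        for s in range(n_states):
            for a in range(n_actions):
                r = sample_rewards(s, a)                    # possibly corrupted reward sample
                history[s][a].append(r)
                r_hat = trimmed_mean(history[s][a], eps)    # robust empirical reward estimate
                s_next = sample_next_state(s, a)            # sampled next state
                Q_new[s, a] = (1 - alpha) * Q[s, a] + alpha * (r_hat + gamma * Q[s_next].max())
        Q = Q_new                                           # synchronous update of all (s, a) pairs
    return Q
```

The point of the trimming step is that an eps-fraction of arbitrarily corrupted rewards can shift a plain sample mean by an unbounded amount, whereas a robust estimate keeps the induced error on the order of the corruption fraction.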
Robust Algorithms for Adversarial Reinforcement Learning (Poster at Applied AI Symposium, Theoretical Machine Learning Track)
Provably Faster Convergence of Temporal Difference Learning
This code differs from standard temporal difference (TD) learning primarily in its use of linear function approximation with a projection matrix built from the feature matrix and the stationary distribution. Rather than relying only on iterative updates, it computes the fixed point of the TD recursion explicitly by solving a linear system that incorporates the discount factor and the transition probabilities, and it then tracks how the iterative error converges to this fixed point, averaged over multiple epochs. Two update schemes are implemented: one combining reward-dependent and feature-dependent updates with accumulated gradients, and another using direct TD-style updates in the projected feature space. Together, these give a more detailed picture of the algorithm's stability and convergence, which is useful in research settings where careful function approximation and error tracking matter. Check the implementation on my GitHub page.
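Since the repository is linked rather than shown here, the following is a rough, self-contained sketch of the two ingredients described above: the closed-form TD fixed point under linear function approximation, and an iterative TD(0) run whose error against that fixed point is averaged over epochs. The array names Phi (features), P (transition matrix), r (expected per-state rewards), and mu (stationary distribution), as well as the deterministic state-reward model, are illustrative assumptions rather than the repository's actual interface.

```python
import numpy as np

def td_fixed_point(Phi, P, r, mu, gamma):
    """Closed-form fixed point of TD(0) with linear function approximation:
    theta* solves Phi^T D (Phi - gamma * P Phi) theta = Phi^T D r,
    where D = diag(mu) and mu is the stationary distribution."""
    D = np.diag(mu)
    A = Phi.T @ D @ (Phi - gamma * P @ Phi)
    b = Phi.T @ D @ r
    return np.linalg.solve(A, b)

def td0_error_curve(Phi, P, r, mu, gamma, theta_star,
                    alpha=0.05, T=2000, epochs=20, seed=0):
    """Run TD(0) with linear features for several epochs and return the
    per-step error ||theta_t - theta*|| averaged over the epochs."""
    rng = np.random.default_rng(seed)
    n, d = Phi.shape
    errors = np.zeros((epochs, T))
    for e in range(epochs):
        theta = np.zeros(d)
        s = rng.choice(n, p=mu)                  # start from the stationary distribution
        for t in range(T):
            s_next = rng.choice(n, p=P[s])       # sample a transition from row P[s]
            td_err = r[s] + gamma * Phi[s_next] @ theta - Phi[s] @ theta
            theta += alpha * td_err * Phi[s]     # standard TD(0) update
            errors[e, t] = np.linalg.norm(theta - theta_star)
            s = s_next
    return errors.mean(axis=0)
```

A typical usage would compute theta_star = td_fixed_point(Phi, P, r, mu, gamma) once and then pass it to td0_error_curve to plot the averaged error trajectory; a second update scheme (for example, one with accumulated gradients) could be compared on the same curve.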