- Hi, I'm Feng Zhu, a PhD candidate at North Carolina State University.
- My research focuses on federated reinforcement learning, distributed and stochastic optimization, and multi-agent systems, with an emphasis on sample efficiency and high-probability guarantees.
- I have a strong background in optimization and theoretical machine learning, with coursework in large-scale optimization, convex optimization, real analysis, and stochastic processes.
- I received my B.E. and M.E. degrees in Electrical Engineering from Fudan University.
- Feel free to contact me at fzhu5@ncsu.edu.
- 🏛️ Google Scholar: https://scholar.google.com/citations?user=ZqdH9HwAAAAJ
- 🌏 Homepage: https://fzhu0628.github.io
- 📍 Raleigh, NC
- 💼 LinkedIn: in/feng-zhu-4738112a2
Pinned repositories
-
- FedHSA---Tighter-Rates-for-Heterogeneous-Federated-Stochastic-Approximation-under-Markovian-Sampling (Python)
  Published at TMLR in 2026. Focusing on the general stochastic approximation framework, this work proposes a federated algorithm that finds the optimum of an average of contractive operators.
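As a toy illustration of the problem setting only (not the FedHSA algorithm itself, and with i.i.d. noise in place of Markovian sampling), the sketch below has M agents, each holding an affine contractive operator, run local stochastic-approximation steps and periodically average their iterates at a server. The averaged iterate tracks the fixed point of the average operator; all constants here are made-up toy values.

```python
import numpy as np

rng = np.random.default_rng(0)
M, d, rounds, local_steps, lr = 4, 3, 200, 5, 0.1

# Each agent i holds an affine contractive operator T_i(x) = A_i x + b_i.
A = [0.5 * np.eye(d) + 0.1 * rng.standard_normal((d, d)) for _ in range(M)]
b = [rng.standard_normal(d) for _ in range(M)]

# Closed-form fixed point of the *average* operator, for comparison.
A_bar, b_bar = sum(A) / M, sum(b) / M
x_star = np.linalg.solve(np.eye(d) - A_bar, b_bar)

x = np.zeros(d)
for _ in range(rounds):
    local_iterates = []
    for i in range(M):
        xi = x.copy()
        for _ in range(local_steps):
            noise = 0.01 * rng.standard_normal(d)       # stochastic oracle
            xi += lr * (A[i] @ xi + b[i] + noise - xi)  # SA step toward T_i
        local_iterates.append(xi)
    x = np.mean(local_iterates, axis=0)  # server averages local iterates

print(np.linalg.norm(x - x_star))  # residual error vs. exact fixed point
```

With heterogeneous operators, periodic averaging introduces a small client-drift bias, so the iterate lands near (not exactly at) the average operator's fixed point.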
-
- DisSACC---Distributed-Stochastic-Approximation-with-Constant-Communication
  Published at IEEE Asilomar 2025. We study a general distributed heterogeneous stochastic approximation problem with M agents; the proposed DisSACC method converges to the desired solution with a linear speedup.
-
- Fast-FedPG---Towards-Fast-Rates-for-Federated-and-Multi-Task-Reinforcement-Learning
  Published at IEEE CDC 2024. The paper seeks a policy that maximizes the average of long-term cumulative rewards across environments.
-
- sreejeetm1729/Q-Learning-over-Static-and-Time-Varying-Networks (Jupyter Notebook)
  Accepted at ACC 2026 🎉. VRDQ: we propose and analyze a new algorithm that achieves collaborative speedups in sample complexity for Q-learning over static and time-varying networks.
-
- STSyn---Speeding-Up-Local-SGD-with-Straggler-Tolerant-Synchronization (Python)
  Journal paper published in IEEE TSP in 2024, improving robustness to stragglers in distributed/federated learning with synchronous local SGD.
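For context, here is a minimal sketch of plain synchronous local SGD, the baseline whose straggler sensitivity STSyn addresses (the straggler-tolerant synchronization rule itself is in the repo, not shown here). Each worker takes a few noisy gradient steps on a toy quadratic, then the server averages the models; all problem sizes below are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
workers, d, rounds, local_steps, lr = 5, 4, 50, 3, 0.1

# Worker i holds a local quadratic loss f_i(w) = 0.5 * ||w - c_i||^2;
# the global objective, the average of the f_i, is minimized at mean(c_i).
centers = rng.standard_normal((workers, d))
w = np.zeros(d)

for _ in range(rounds):
    models = []
    for c in centers:
        wi = w.copy()
        for _ in range(local_steps):
            grad = wi - c + 0.05 * rng.standard_normal(d)  # noisy gradient
            wi -= lr * grad
        models.append(wi)
    w = np.mean(models, axis=0)  # synchronous averaging: wait for all workers

print(np.linalg.norm(w - centers.mean(axis=0)))  # distance to the optimum
```

The `np.mean` over all workers is exactly the step where stragglers hurt: the round cannot finish until the slowest worker reports.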
-
- RL-from-scratch (Python)
  A playground for implementing modern RL algorithms, including Q-learning, DQN, Dueling DQN, REINFORCE, A2C, TRPO, and PPO.
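As a flavor of the simplest algorithm on that list, here is a sketch of tabular Q-learning on a hypothetical 5-state chain MDP (a made-up example, not code from the repo). Q-learning is off-policy, so a uniformly random behavior policy suffices for exploration, and the greedy policy is read off at the end.

```python
import numpy as np

# Toy 5-state chain: action 0 moves left, action 1 moves right,
# and reaching the rightmost state yields reward 1 and ends the episode.
n_states, gamma, alpha = 5, 0.9, 0.5
Q = np.zeros((n_states, 2))
rng = np.random.default_rng(0)

def step(s, a):
    """Deterministic chain dynamics with a terminal rightmost state."""
    s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
    reward = 1.0 if s2 == n_states - 1 else 0.0
    return s2, reward, s2 == n_states - 1

for _ in range(300):
    s, done = 0, False
    while not done:
        a = int(rng.integers(2))                # uniform random exploration
        s2, r, done = step(s, a)
        target = r if done else r + gamma * np.max(Q[s2])
        Q[s, a] += alpha * (target - Q[s, a])   # TD(0) update
        s = s2

print(np.argmax(Q, axis=1))  # greedy action in each state
```

After training, the greedy policy moves right in every non-terminal state, and Q-values decay geometrically with distance from the goal (Q[3,1] ≈ 1, Q[2,1] ≈ 0.9, …).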