fzhu0628/README.md
  • Hi, I'm Feng Zhu. I am a PhD candidate at North Carolina State University.

  • My research focuses on federated reinforcement learning, distributed & stochastic optimization, and multi-agent systems, with an emphasis on sample efficiency and high-probability guarantees.

  • I have a strong background in optimization and theoretical machine learning, with coursework including large-scale optimization, convex optimization, real analysis, and stochastic processes.

  • I received my B.E. and M.E. degrees in Electrical Engineering from Fudan University.

  • Feel free to contact me at fzhu5@ncsu.edu.

  • 🏛️ Google Scholar: https://scholar.google.com/citations?user=ZqdH9HwAAAAJ

  • 🌏 Homepage: https://fzhu0628.github.io

  • 🔗 LinkedIn: https://www.linkedin.com/in/feng-zhu-4738112a2/

Pinned

  1. FedHSA---Tighter-Rates-for-Heterogeneous-Federated-Stochastic-Approximation-under-Markovian-Sampling

    Published at TMLR in 2026. Within the general stochastic approximation framework, this work proposes a federated algorithm that finds the fixed point of an average of contractive operators.

    Python

  2. DisSACC---Distributed-Stochastic-Approximation-with-Constant-Communication

    Published at IEEE Asilomar 2025. We study a general distributed stochastic approximation problem with M heterogeneous agents. The proposed DisSACC method converges to the desired solution with a linear speedup.

  3. Fast-FedPG---Towards-Fast-Rates-for-Federated-and-Multi-Task-Reinforcement-Learning

    Conference paper published at IEEE CDC 2024. The paper seeks a policy that maximizes the average of long-term cumulative rewards across environments.

  4. sreejeetm1729/Q-Learning-over-Static-and-Time-Varying-Networks

    Accepted at ACC 2026 🎉. VRDQ: We propose and analyze a new algorithm that achieves collaborative speedups in sample complexity for Q-learning over static and time-varying networks.

    Jupyter Notebook

  5. STSyn---Speeding-Up-Local-SGD-with-Straggler-Tolerant-Synchronization

    Journal paper published in IEEE TSP in 2024, focusing on improving robustness to stragglers in distributed/federated learning with synchronous local SGD.

    Python

  6. RL-from-scratch

    A playground for implementing modern RL algorithms from scratch, including Q-learning, DQN, Dueling DQN, REINFORCE, A2C, TRPO, PPO, etc.

    Python
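
To give a flavor of the tabular methods in the RL-from-scratch playground, here is a minimal, self-contained Q-learning sketch on a toy chain MDP. This is illustrative code, not taken from any of the repositories above; the environment, function name, and hyperparameters are all assumptions for the example.

```python
import random

def q_learning_chain(n_states=5, episodes=500, alpha=0.1, gamma=0.9, eps=0.1, seed=0):
    """Tabular Q-learning on a toy chain MDP.

    The agent starts at state 0 and can move left (action 0) or right
    (action 1); reaching the rightmost state ends the episode with reward 1.
    """
    rng = random.Random(seed)
    # Q[s][a] holds the action-value estimate for state s and action a.
    Q = [[0.0, 0.0] for _ in range(n_states)]
    for _ in range(episodes):
        s = 0
        while s != n_states - 1:  # episode ends at the rightmost state
            # Epsilon-greedy action selection.
            if rng.random() < eps:
                a = rng.randrange(2)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1
            # Deterministic transitions, clipped at the chain's ends.
            s_next = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
            r = 1.0 if s_next == n_states - 1 else 0.0
            # Standard Q-learning (off-policy TD) update.
            Q[s][a] += alpha * (r + gamma * max(Q[s_next]) - Q[s][a])
            s = s_next
    return Q
```

After training, the greedy policy derived from `Q` moves right in every non-terminal state, matching the optimal policy for this chain.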