Communication-Efficient Federated Learning with Generalized Heavy-Ball Momentum

Authors: Riccardo Zaccone, Sai Praneeth Karimireddy, Carlo Masone, Marco Ciccone.

🚀 Welcome to the official repository for "Communication-Efficient Federated Learning with Generalized Heavy-Ball Momentum", accepted at TMLR 2025.

In this work, we propose FL algorithms based on a novel Generalized Heavy-Ball Momentum (GHBM) formulation, designed to overcome the limitations of classical momentum in FL under heterogeneous local distributions and partial participation.

💪 GHBM is theoretically unaffected by client heterogeneity: it is proven to converge under (cyclic) partial participation just as other momentum-based FL algorithms do under full participation.
💡 GHBM is easy to implement: it relies on a single key modification that makes momentum effective in heterogeneous FL with partial participation (see the sketch below).
🧠 GHBM is very flexible: we provide algorithms compatible with cross-device scenarios as well as more communication-efficient variants for cross-silo settings.
🏆 GHBM substantially improves the state of the art: extensive experiments in large-scale settings with high data heterogeneity and low client participation show that GHBM and its variants reach much better final model quality and much faster convergence.

📄 Read our paper on: [OpenReview] [ArXiv]
🌍 Demo & Project Page

Implementation of other FL algorithms

This software additionally implements the other state-of-the-art FL algorithms we compare against in the paper.

Installation

Requirements

To install the requirements, use the provided environment file with conda:

$ conda env create --file requirements/environment.yaml
$ conda activate ghbm

Reproducing our experiments

Perform a single run

If you just want to run a single configuration, run train.py with the desired command-line arguments. Note that default arguments are defined in the ./config folder and are set to the values reported in the paper. For example, to run our GHBM on CIFAR-10 with ResNet-20, just use:

# runs GHBM on CIFAR-10 with ResNet-20, using default parameters specified in config files (K=100, C=0.1)
$ python train.py model=resnet \
                  dataset=cifar10 \
                  algo=ghbm \
                  algo.params.common.alpha=0 \
                  algo.params.center_server.args.tau=10 \
                  algo.params.client.args.optim.args.lr=0.01 \
                  algo.params.client.args.optim.args.weight_decay=1e-5 

This software uses Hydra to configure experiments; for more information on how to provide command-line arguments, please refer to the official Hydra documentation.
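
As an illustration, Hydra overrides compose with the defaults in ./config, and the -m/--multirun flag launches one run per value of a comma-separated sweep. The parameter names below are simply reused from the example above; whether a particular combination is meaningful for this code base is an assumption to check against the config files.

# hypothetical sweep over client learning rates via Hydra multirun (parameter names reused from the example above)
$ python train.py -m model=resnet \
                  dataset=cifar10 \
                  algo=ghbm \
                  algo.params.client.args.optim.args.lr=0.01,0.05,0.1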

Paper

Communication-Efficient Federated Learning with Generalized Heavy-Ball Momentum
Riccardo Zaccone, Sai Praneeth Karimireddy, Carlo Masone, Marco Ciccone
[Paper]

How to cite us

@article{zaccone2025communicationefficient,
      title={Communication-Efficient Heterogeneous Federated Learning with Generalized Heavy-Ball Momentum}, 
      author={Riccardo Zaccone and Sai Praneeth Karimireddy and Carlo Masone and Marco Ciccone},
      year={2025},
      journal={Transactions on Machine Learning Research},
      url={https://openreview.net/forum?id=LNoFjcLywb},
}
