solidlabnetwork/fedshield-llm

🛡️ FedShield-LLM: A Secure and Scalable Federated Fine-Tuned Large Language Model

FedShield-LLM is a novel framework that enables secure and efficient federated fine-tuning of Large Language Models (LLMs) across organizations while preserving data privacy. By combining pruning with Fully Homomorphic Encryption (FHE) for Low-Rank Adaptation (LoRA) parameters, FedShield-LLM allows encrypted computation on model updates, reducing the attack surface and mitigating inference attacks like membership inference and gradient inversion. Designed for cross-silo federated environments, the framework optimizes computational and communication efficiency, making it suitable for small and medium-sized organizations.

📘 Read the paper on arXiv

Key Features:

  • 🚀 Encrypted LoRA aggregation using Fully Homomorphic Encryption (FHE).
  • ⚡ Communication-efficient updates through aggressive pruning of LoRA parameters.
  • 🛡️ Privacy-preserving defense against membership inference and gradient inversion attacks.
  • 📈 Empirically validated: Outperforms baseline methods while maintaining robust privacy protection.
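The prune-then-aggregate idea behind these features can be sketched in a few lines. This is a minimal illustration, not the repository's implementation: the pruning rule (global magnitude threshold with a hypothetical `keep_ratio`) and the plaintext averaging are assumptions; in FedShield-LLM the aggregation runs over CKKS-encrypted LoRA updates (via TenSEAL), so the server never sees individual client deltas.

```python
import numpy as np

def prune_update(delta, keep_ratio=0.2):
    """Magnitude-prune a LoRA update: zero out all but the largest-magnitude
    entries. A sketch of the pruning step, not the paper's exact rule."""
    flat = np.abs(delta).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(delta) >= threshold, delta, 0.0)

def aggregate(updates):
    """Element-wise average of client updates (FedAvg). In FedShield-LLM this
    sum is computed homomorphically under encryption; here it is done in the
    clear purely for illustration."""
    return np.mean(updates, axis=0)

# Three simulated clients, each sending a pruned LoRA delta.
rng = np.random.default_rng(0)
clients = [prune_update(rng.normal(size=(4, 4))) for _ in range(3)]
global_update = aggregate(clients)
```

Because pruned updates are sparse, both the ciphertext payload and the upload cost shrink, which is the source of the communication savings claimed above.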

🛠️ Requirements

  • Python 3.9+
  • PyTorch 2.1.2
  • TenSEAL
  • Torchvision
  • CUDA 11.8 (for GPU support)

📚 Datasets

This project supports various instruction-tuning datasets for federated fine-tuning; see the training scripts under run_scripts/ for the configured options.

🤖 Models

The framework supports standard Hugging Face causal language models as base models for federated fine-tuning (e.g., meta-llama/Llama-2-7b-hf, as used in the commands below).

📦 Environment Setup

Install dependencies using:

pip install -r requirements.txt

🔧 Fine-tuning using FedShield-LLM

bash run_scripts/fed.sh

🔧 Fine-tuning using Vanilla-FL

bash run_scripts/vanilla.sh

🔧 Merge LoRA with Base Model

To merge the LoRA weights with the base model, use the following command:

python utils/merge_lora.py --base_model_path "meta-llama/Llama-2-7b-hf" \
  --lora_path /path/to/your/lora/checkpoint
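What the merge step does mathematically: LoRA stores a low-rank delta as two factors, and merging folds that delta back into the base weight so inference needs no extra adapter matmuls. The shapes, values, and scaling convention below (`W_merged = W0 + (alpha / r) * B @ A`, the standard LoRA formulation) are illustrative assumptions; `utils/merge_lora.py` presumably performs the equivalent operation over the real model tensors.

```python
import numpy as np

# Toy dimensions: d_out x d_in base weight, rank-r LoRA factors.
d_out, d_in, r, alpha = 8, 8, 2, 16
rng = np.random.default_rng(1)
W0 = rng.normal(size=(d_out, d_in))        # frozen base weight
A = rng.normal(size=(r, d_in))             # LoRA "down" projection
B = rng.normal(size=(d_out, r)) * 0.01     # LoRA "up" projection (small, as if trained)

# Fold the low-rank update into the base weight.
W_merged = W0 + (alpha / r) * (B @ A)

# Sanity check: the merged weight acts on an input exactly like
# base path + scaled adapter path.
x = rng.normal(size=(d_in,))
assert np.allclose(W_merged @ x, W0 @ x + (alpha / r) * (B @ (A @ x)))
```

After merging, the checkpoint can be loaded and served like any ordinary model, which is why the generation commands below accept either a merged checkpoint or a base model plus separate LoRA weights.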

📝 Generate Answers Using Merged Model

To generate answers using the merged model, use the following command:

python eval_utils/generate_answer.py \
  --base_model_path /path/to/your/merged/model/checkpoint

📝 Generate Answers Using Base Model + LoRA Weights

To generate answers using the base model and LoRA weights, use the following command:

python eval_utils/generate_answer.py \
  --base_model_path "meta-llama/Llama-2-7b-hf" \
  --template alpaca \
  --lora_path /path/to/your/lora/checkpoint
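The `--template alpaca` flag refers to prompt formatting. As an illustration, here is the widely used Alpaca instruction template; the exact template string in `eval_utils/generate_answer.py` may differ, so treat this as an assumed sketch.

```python
def alpaca_prompt(instruction: str, input_text: str = "") -> str:
    """Build a prompt in the standard Alpaca format (sketch; the script's
    actual template may vary)."""
    if input_text:
        return (
            "Below is an instruction that describes a task, paired with an "
            "input that provides further context. Write a response that "
            "appropriately completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{input_text}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response "
        "that appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = alpaca_prompt("Summarize federated learning in one sentence.")
```

The model's answer is whatever it generates after the `### Response:` marker.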

📚 Citation

If you use this paper or codebase in your research, please cite it as follows:

@article{mia2025fedshield,
  title={FedShield-LLM: A Secure and Scalable Federated Fine-Tuned Large Language Model},
  author={Mia, Md Jueal and Amini, M Hadi},
  journal={arXiv preprint arXiv:2506.05640},
  year={2025}
}

Acknowledgements

This repository is based on OpenFedLLM; thanks to the original authors for their work!
