
FedXplore - Framework for Federated Learning Attacks, Defences, Client Selection and Personalization

Table of contents

  1. Quickstart -- Follow the instructions and get the result!
  2. Attacks and Defences -- Deep dive into Byzantine-Robust Federated Learning
  3. Personalization -- Deep dive into Personalized Federated Learning
  4. Client Selection -- Deep dive into Client Selection Strategies
  5. Byzantine Robustness and Client Selection -- See how the framework's modules combine flexibly
  6. C4 notation -- Context, Container, Component, Code diagram of the framework
  7. Federated Method Explained -- Learn the basics and write your own method
  8. Attacks -- Learn the basics and write a custom attack

πŸš€ Quickstart Guide

πŸ“‹ Prerequisites

python -m venv venv
source venv/bin/activate
pip install -e .

βš™οΈ Experiment Setups

See the allowed configuration options in config.md

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  training_params.device_ids=[0] \
  > fedavg_cifar.txt

On the first run, downloading the CIFAR-10 dataset takes some time.

device_ids selects which GPU to use (if the machine has several). You can specify multiple ids; training will then be distributed evenly across the specified devices.

Additionally, manager.batch_size client processes will be created. To forcefully terminate the training, kill any one of these processes.
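For example, to split the same run across two GPUs (assuming device ids 0 and 1 exist on your machine; the log filename is arbitrary):

```shell
python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  training_params.device_ids=[0,1] \
  > fedavg_cifar_2gpu.txt
```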

🌪️ Dirichlet Partition with $\alpha=0.1$ (strong heterogeneity) and FedCor client selection strategy

python src/train.py \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  distribution.alpha=0.1 \
  federated_params.amount_of_clients=100 \
  client_selector=fedcor \
  > fedavg_fedcor_cifar10_dirichlet_alpha0.1.txt
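The distribution.alpha override controls how non-IID the client data split is: smaller alpha means more heterogeneous label distributions. A minimal sketch of Dirichlet label partitioning (illustrative only; function names are ours and FedXplore's own partitioner may differ in details):

```python
import numpy as np

def dirichlet_partition(labels, n_clients, alpha, seed=0):
    """Split sample indices across clients with a Dirichlet prior.

    Smaller alpha -> more skewed (non-IID) per-client label distributions.
    """
    rng = np.random.default_rng(seed)
    n_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(n_clients)]
    for c in range(n_classes):
        idx = np.flatnonzero(labels == c)
        rng.shuffle(idx)
        # Proportion of this class assigned to each client
        props = rng.dirichlet(alpha * np.ones(n_clients))
        splits = (np.cumsum(props)[:-1] * len(idx)).astype(int)
        for client, part in enumerate(np.split(idx, splits)):
            client_indices[client].extend(part.tolist())
    return client_indices

labels = np.repeat(np.arange(10), 100)  # 10 classes, 100 samples each
parts = dirichlet_partition(labels, n_clients=5, alpha=0.1)
```

With alpha=0.1 most clients end up dominated by a few classes, which is what makes client selection strategies like FedCor interesting in this regime.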

🦠 FLTrust with Label Flipping Attack on PTB-XL dataset

python src/train.py \
  federated_method=fltrust \
  dataset@train_dataset=ptbxl \
  dataset@test_dataset=ptbxl \
  dataset@trust_dataset=ptbxl \
  model_trainer=ptbxl \
  distribution=uniform \
  model=resnet1d18 \
  training_params.batch_size=32 \
  federated_params.print_client_metrics=False \
  federated_params.clients_attack_types=label_flip \
  federated_params.prop_attack_clients=0.5 \
  federated_params.attack_scheme=constant \
  federated_params.prop_attack_rounds=1.0 \
  > fltrust_ptbxl_label_flip_half_byzantines.txt

On the first run, downloading the PTB-XL dataset takes some time.
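FLTrust defends against such attacks by scoring each client update against a reference update computed on a small trusted root dataset (the trust_dataset above). A sketch of the aggregation rule from the FLTrust paper (Cao et al., 2021), with our own function and variable names; FedXplore's implementation may differ in details:

```python
import numpy as np

def fltrust_aggregate(server_update, client_updates):
    """FLTrust-style robust aggregation (sketch).

    Trust score = ReLU(cosine similarity with the server's root update);
    client updates are rescaled to the server update's norm, then averaged
    with trust-score weights.
    """
    s_norm = np.linalg.norm(server_update)
    scores, rescaled = [], []
    for u in client_updates:
        cos = float(server_update @ u / (s_norm * np.linalg.norm(u) + 1e-12))
        scores.append(max(cos, 0.0))  # ReLU: negative similarity -> zero weight
        rescaled.append(u * (s_norm / (np.linalg.norm(u) + 1e-12)))
    total = sum(scores) + 1e-12
    return sum(s * u for s, u in zip(scores, rescaled)) / total

server = np.array([1.0, 0.0])      # update from the trusted root dataset
updates = [np.array([2.0, 0.0]),   # benign client, same direction
           np.array([-1.0, 0.0])]  # attacker, opposite direction
agg = fltrust_aggregate(server, updates)  # attacker gets zero weight
```

An update pointing away from the root-dataset update (as a label-flipping client's tends to) receives a trust score of zero and is effectively excluded from the aggregate.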

πŸ§‘β€πŸ€β€πŸ§‘ FedAMP with 10 clusters on CIFAR-10 dataset

python src/train.py \
  federated_method=fedamp \
  federated_method.strategy=sharded \
  federated_method.cluster_params=[10,0.5] \
  federated_params.amount_of_clients=100 \
  federated_params.client_subset_size=100 \
  training_params.batch_size=32 \
  > fedamp_10_clusters_cifar10.txt
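FedAMP personalizes by letting clients with similar models "attract" each other through attentive message passing. A simplified sketch of the idea (the kernel, hyperparameters, and names below are illustrative, not FedXplore's actual implementation or the exact formulation of Huang et al., 2021):

```python
import numpy as np

def fedamp_personalized(models, alpha=0.1, sigma=1.0):
    """Sketch of FedAMP-style attentive message passing.

    Each client's personalized cloud model u_i mixes its own weights w_i
    with the other clients', weighted by a Gaussian kernel on parameter
    distance, so similar clients (e.g. within a cluster) share more.
    """
    n = len(models)
    out = []
    for i in range(n):
        xi = np.array([
            0.0 if j == i else np.exp(-np.sum((models[i] - models[j]) ** 2) / sigma)
            for j in range(n)
        ])
        u = (1.0 - alpha * xi.sum()) * models[i]
        for j in range(n):
            u = u + alpha * xi[j] * models[j]
        out.append(u)
    return out

a, b = np.zeros(2), np.full(2, 10.0)
out = fedamp_personalized([a, a.copy(), b])
```

Clients with near-identical models exchange full-strength messages, while a dissimilar client is left almost untouched, which is how cluster structure (here, 10 clusters over 100 clients) emerges in the personalized models.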