diff --git a/README.md b/README.md index b5c58c6838f0..06938ce25588 100644 --- a/README.md +++ b/README.md @@ -14,8 +14,8 @@

-[![GitHub license](https://img.shields.io/github/license/adap/flower)](https://github.com/adap/flower/blob/main/LICENSE) -[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/adap/flower/blob/main/CONTRIBUTING.md) +[![GitHub license](https://img.shields.io/github/license/adap/flower)](LICENSE) +[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](CONTRIBUTING.md) ![Build](https://github.com/adap/flower/actions/workflows/framework.yml/badge.svg) [![Downloads](https://static.pepy.tech/badge/flwr)](https://pepy.tech/project/flwr) [![Docker Hub](https://img.shields.io/badge/Docker%20Hub-flwr-blue)](https://hub.docker.com/u/flwr) @@ -48,29 +48,29 @@ Flower's goal is to make federated learning accessible to everyone. This series 0. **What is Federated Learning?** - [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-what-is-federated-learning.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-what-is-federated-learning.ipynb)) + [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-what-is-federated-learning.ipynb) (or open the [Jupyter Notebook](doc/source/tutorial-series-what-is-federated-learning.ipynb)) 1. **An Introduction to Federated Learning** - [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb)) + [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb) (or open the [Jupyter Notebook](doc/source/tutorial-series-get-started-with-flower-pytorch.ipynb)) 2. **Using Strategies in Federated Learning** - [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb)) + [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb)) 3. 
**Building Strategies for Federated Learning** - [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb)) + [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb) (or open the [Jupyter Notebook](doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb)) 4. **Custom Clients for Federated Learning** - [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb)) + [![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-customize-the-client-pytorch.ipynb) (or open the [Jupyter Notebook](doc/source/tutorial-series-customize-the-client-pytorch.ipynb)) Stay tuned, more tutorials are coming soon. Topics include **Privacy and Security in Federated Learning**, and **Scaling Federated Learning**. ## 30-Minute Federated Learning Tutorial -[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb)) +[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/examples/flower-in-30-minutes/tutorial.ipynb) (or open the [Jupyter Notebook](examples/flower-in-30-minutes/tutorial.ipynb)) ## Documentation @@ -92,28 +92,28 @@ Stay tuned, more tutorials are coming soon. Topics include **Privacy and Securit Flower Baselines is a collection of community-contributed projects that reproduce the experiments performed in popular federated learning publications. Researchers can build on Flower Baselines to quickly evaluate new ideas. The Flower community loves contributions! Make your work more visible and enable others to build on it by contributing it as a baseline! 
-- [DASHA](https://github.com/adap/flower/tree/main/baselines/dasha) -- [DepthFL](https://github.com/adap/flower/tree/main/baselines/depthfl) -- [FedBN](https://github.com/adap/flower/tree/main/baselines/fedbn) -- [FedMeta](https://github.com/adap/flower/tree/main/baselines/fedmeta) -- [FedMLB](https://github.com/adap/flower/tree/main/baselines/fedmlb) -- [FedPer](https://github.com/adap/flower/tree/main/baselines/fedper) -- [FedProx](https://github.com/adap/flower/tree/main/baselines/fedprox) -- [FedNova](https://github.com/adap/flower/tree/main/baselines/fednova) -- [HeteroFL](https://github.com/adap/flower/tree/main/baselines/heterofl) -- [FedAvgM](https://github.com/adap/flower/tree/main/baselines/fedavgm) -- [FedRep](https://github.com/adap/flower/tree/main/baselines/fedrep) -- [FedStar](https://github.com/adap/flower/tree/main/baselines/fedstar) -- [FedWav2vec2](https://github.com/adap/flower/tree/main/baselines/fedwav2vec2) -- [FjORD](https://github.com/adap/flower/tree/main/baselines/fjord) -- [MOON](https://github.com/adap/flower/tree/main/baselines/moon) -- [niid-Bench](https://github.com/adap/flower/tree/main/baselines/niid_bench) -- [TAMUNA](https://github.com/adap/flower/tree/main/baselines/tamuna) -- [FedVSSL](https://github.com/adap/flower/tree/main/baselines/fedvssl) -- [FedXGBoost](https://github.com/adap/flower/tree/main/baselines/hfedxgboost) -- [FedPara](https://github.com/adap/flower/tree/main/baselines/fedpara) -- [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist) -- [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization) +- [DASHA](baselines/dasha/README.md) +- [DepthFL](baselines/depthfl/README.md) +- [FedBN](baselines/fedbn/README.md) +- [FedMeta](baselines/fedmeta/README.md) +- [FedMLB](baselines/fedmlb/README.md) +- [FedPer](baselines/fedper/README.md) +- [FedProx](baselines/fedprox/README.md) +- [FedNova](baselines/fednova/README.md) +- [HeteroFL](baselines/heterofl/README.md) +- [FedAvgM](baselines/fedavgm/README.md) +- [FedRep](baselines/fedrep/README.md) +- [FedStar](baselines/fedstar/README.md) +- [FedWav2vec2](baselines/fedwav2vec2/README.md) +- [FjORD](baselines/fjord/README.md) +- [MOON](baselines/moon/README.md) +- [niid-Bench](baselines/niid_bench/README.md) +- [TAMUNA](baselines/tamuna/README.md) +- [FedVSSL](baselines/fedvssl/README.md) +- [FedXGBoost](baselines/hfedxgboost/README.md) +- [FedPara](baselines/fedpara/README.md) +- [FedAvg](baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist/README.md) +- [FedOpt](baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization/README.md) Please refer to the [Flower Baselines Documentation](https://flower.ai/docs/baselines/) for a detailed categorization of baselines and for additional info including: * [How to use Flower Baselines](https://flower.ai/docs/baselines/how-to-use-baselines.html) @@ -125,36 +125,36 @@ Several code examples show different usage scenarios of Flower (in combination w Quickstart examples: -- [Quickstart (TensorFlow)](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow) -- [Quickstart (PyTorch)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) -- [Quickstart (Hugging Face)](https://github.com/adap/flower/tree/main/examples/quickstart-huggingface) -- [Quickstart (PyTorch Lightning)](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch-lightning) -- 
[Quickstart (fastai)](https://github.com/adap/flower/tree/main/examples/quickstart-fastai) -- [Quickstart (Pandas)](https://github.com/adap/flower/tree/main/examples/quickstart-pandas) -- [Quickstart (JAX)](https://github.com/adap/flower/tree/main/examples/quickstart-jax) -- [Quickstart (MONAI)](https://github.com/adap/flower/tree/main/examples/quickstart-monai) -- [Quickstart (scikit-learn)](https://github.com/adap/flower/tree/main/examples/sklearn-logreg-mnist) -- [Quickstart (Android [TFLite])](https://github.com/adap/flower/tree/main/examples/android) -- [Quickstart (iOS [CoreML])](https://github.com/adap/flower/tree/main/examples/ios) -- [Quickstart (MLX)](https://github.com/adap/flower/tree/main/examples/quickstart-mlx) -- [Quickstart (XGBoost)](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) - -Other [examples](https://github.com/adap/flower/tree/main/examples): - -- [Raspberry Pi & Nvidia Jetson Tutorial](https://github.com/adap/flower/tree/main/examples/embedded-devices) -- [PyTorch: From Centralized to Federated](https://github.com/adap/flower/tree/main/examples/pytorch-from-centralized-to-federated) -- [Vertical FL](https://github.com/adap/flower/tree/main/examples/vertical-fl) -- [Federated Finetuning of OpenAI's Whisper](https://github.com/adap/flower/tree/main/examples/whisper-federated-finetuning) -- [Federated Finetuning of Large Language Model](https://github.com/adap/flower/tree/main/examples/flowertune-llm) -- [Federated Finetuning of a Vision Transformer](https://github.com/adap/flower/tree/main/examples/flowertune-vit) -- [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow) -- [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch) -- [Comprehensive Flower+XGBoost](https://github.com/adap/flower/tree/main/examples/xgboost-comprehensive) -- [Flower through Docker Compose and with Grafana dashboard](https://github.com/adap/flower/tree/main/examples/flower-via-docker-compose) -- [Flower with KaplanMeierFitter from the lifelines library](https://github.com/adap/flower/tree/main/examples/federated-kaplan-meier-fitter) -- [Sample Level Privacy with Opacus](https://github.com/adap/flower/tree/main/examples/opacus) -- [Sample Level Privacy with TensorFlow-Privacy](https://github.com/adap/flower/tree/main/examples/tensorflow-privacy) -- [Flower with a Tabular Dataset](https://github.com/adap/flower/tree/main/examples/fl-tabular) +- [Quickstart (TensorFlow)](examples/quickstart-tensorflow/README.md) +- [Quickstart (PyTorch)](examples/quickstart-pytorch/README.md) +- [Quickstart (Hugging Face)](examples/quickstart-huggingface/README.md) +- [Quickstart (PyTorch Lightning)](examples/quickstart-pytorch-lightning/README.md) +- [Quickstart (fastai)](examples/quickstart-fastai/README.md) +- [Quickstart (Pandas)](examples/quickstart-pandas/README.md) +- [Quickstart (JAX)](examples/quickstart-jax/README.md) +- [Quickstart (MONAI)](examples/quickstart-monai/README.md) +- [Quickstart (scikit-learn)](examples/sklearn-logreg-mnist/README.md) +- [Quickstart (Android [TFLite])](examples/android/README.md) +- [Quickstart (iOS [CoreML])](examples/ios/README.md) +- [Quickstart (MLX)](examples/quickstart-mlx/README.md) +- [Quickstart (XGBoost)](examples/xgboost-quickstart/README.md) + +Other [examples](examples): + +- [Raspberry Pi & Nvidia Jetson Tutorial](examples/embedded-devices/README.md) +- [PyTorch: From Centralized to 
Federated](examples/pytorch-from-centralized-to-federated/README.md) +- [Vertical FL](examples/vertical-fl/README.md) +- [Federated Finetuning of OpenAI's Whisper](examples/whisper-federated-finetuning/README.md) +- [Federated Finetuning of Large Language Model](examples/flowertune-llm/README.md) +- [Federated Finetuning of a Vision Transformer](examples/flowertune-vit/README.md) +- [Advanced Flower with TensorFlow/Keras](examples/advanced-tensorflow/README.md) +- [Advanced Flower with PyTorch](examples/advanced-pytorch/README.md) +- [Comprehensive Flower+XGBoost](examples/xgboost-comprehensive/README.md) +- [Flower through Docker Compose and with Grafana dashboard](examples/flower-via-docker-compose/README.md) +- [Flower with KaplanMeierFitter from the lifelines library](examples/federated-kaplan-meier-fitter/README.md) +- [Sample Level Privacy with Opacus](examples/opacus/README.md) +- [Sample Level Privacy with TensorFlow-Privacy](examples/tensorflow-privacy/README.md) +- [Flower with a Tabular Dataset](examples/fl-tabular/README.md) ## Community diff --git a/baselines/README.md b/baselines/README.md index 75bcccb68b2a..900540cca85e 100644 --- a/baselines/README.md +++ b/baselines/README.md @@ -2,7 +2,7 @@ > [!NOTE] -> We are changing the way we structure the Flower baselines. While we complete the transition to the new format, you can still find the existing baselines in the `flwr_baselines` directory. Currently, you can make use of baselines for [FedAvg](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/fedavg_mnist), [FedOpt](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization), and [LEAF-FEMNIST](https://github.com/adap/flower/tree/main/baselines/flwr_baselines/flwr_baselines/publications/leaf/femnist). +> We are changing the way we structure the Flower baselines. While we complete the transition to the new format, you can still find the existing baselines in the `flwr_baselines` directory. Currently, you can make use of baselines for [FedAvg](flwr_baselines/flwr_baselines/publications/fedavg_mnist/README.md), [FedOpt](flwr_baselines/flwr_baselines/publications/adaptive_federated_optimization/README.md), and [LEAF-FEMNIST](flwr_baselines/flwr_baselines/publications/leaf/femnist/README.md). 
## Structure diff --git a/datasets/README.md b/datasets/README.md index 0d35d2e31b6a..7c417fea2afa 100644 --- a/datasets/README.md +++ b/datasets/README.md @@ -1,7 +1,7 @@ # Flower Datasets -[![GitHub license](https://img.shields.io/github/license/adap/flower)](https://github.com/adap/flower/blob/main/LICENSE) -[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](https://github.com/adap/flower/blob/main/CONTRIBUTING.md) +[![GitHub license](https://img.shields.io/github/license/adap/flower)](../LICENSE) +[![PRs Welcome](https://img.shields.io/badge/PRs-welcome-brightgreen.svg)](../CONTRIBUTING.md) ![Build](https://github.com/adap/flower/actions/workflows/framework.yml/badge.svg) ![Downloads](https://pepy.tech/badge/flwr-datasets) [![Slack](https://img.shields.io/badge/Chat-Slack-red)](https://flower.ai/join-slack) @@ -10,7 +10,7 @@ Flower Datasets (`flwr-datasets`) is a library to quickly and easily create data > [!TIP] -> For complete documentation that includes API docs, how-to guides and tutorials, please visit the [Flower Datasets Documentation](https://flower.ai/docs/datasets/) and for full FL example see the [Flower Examples page](https://github.com/adap/flower/tree/main/examples). +> For complete documentation that includes API docs, how-to guides and tutorials, please visit the [Flower Datasets Documentation](https://flower.ai/docs/datasets/) and for full FL example see the [Flower Examples page](../examples). ## Installation diff --git a/examples/advanced-pytorch/README.md b/examples/advanced-pytorch/README.md index 1771173c3925..5f7cb9091d02 100644 --- a/examples/advanced-pytorch/README.md +++ b/examples/advanced-pytorch/README.md @@ -7,9 +7,9 @@ framework: [torch, torchvision] # Federated Learning with PyTorch and Flower (Advanced Example) > \[!TIP\] -> This example shows intermediate and advanced functionality of Flower. It you are new to Flower, it is recommended to start from the [quickstart-pytorch](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) example or the [quickstart PyTorch tutorial](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html). +> This example shows intermediate and advanced functionality of Flower. If you are new to Flower, it is recommended to start from the [quickstart-pytorch](../quickstart-pytorch/README.md) example or the [quickstart PyTorch tutorial](https://flower.ai/docs/framework/tutorial-quickstart-pytorch.html). -This example shows how to extend your `ClientApp` and `ServerApp` capabilities compared to what's shown in the [`quickstart-pytorch`](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) example. In particular, it will show how the `ClientApp`'s state (and object of type [RecordSet](https://flower.ai/docs/framework/ref-api/flwr.common.RecordSet.html)) can be used to enable stateful clients, facilitating the design of personalized federated learning strategies, among others. The `ServerApp` in this example makes use of a custom strategy derived from the built-in [FedAvg](https://flower.ai/docs/framework/ref-api/flwr.server.strategy.FedAvg.html). In addition, it will also showcase how to: +This example shows how to extend your `ClientApp` and `ServerApp` capabilities compared to what's shown in the [`quickstart-pytorch`](../quickstart-pytorch/README.md) example. 
In particular, it will show how the `ClientApp`'s state (an object of type [RecordSet](https://flower.ai/docs/framework/ref-api/flwr.common.RecordSet.html)) can be used to enable stateful clients, facilitating the design of personalized federated learning strategies, among others. The `ServerApp` in this example makes use of a custom strategy derived from the built-in [FedAvg](https://flower.ai/docs/framework/ref-api/flwr.server.strategy.FedAvg.html). In addition, it will also showcase how to: 1. Save model checkpoints 2. Save the metrics available at the strategy (e.g. accuracies, losses) diff --git a/examples/app-pytorch/README.md b/examples/app-pytorch/README.md index 5cfae8440ed2..e43350ec0179 100644 --- a/examples/app-pytorch/README.md +++ b/examples/app-pytorch/README.md @@ -7,7 +7,7 @@ framework: [torch, torchvision] # Flower App (PyTorch) 🧪 > 🧪 = This example covers experimental features that might change in future versions of Flower -> Please consult the regular PyTorch code examples ([quickstart](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch), [advanced](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)) to learn how to use Flower with PyTorch. +> Please consult the regular PyTorch code examples ([quickstart](../quickstart-pytorch/README.md), [advanced](../advanced-pytorch/README.md)) to learn how to use Flower with PyTorch. The following steps describe how to start a long-running Flower server (SuperLink) and then run a Flower App (consisting of a `ClientApp` and a `ServerApp`). diff --git a/examples/custom-mods/README.md b/examples/custom-mods/README.md index c2007eb323ae..8e2c76254583 100644 --- a/examples/custom-mods/README.md +++ b/examples/custom-mods/README.md @@ -7,7 +7,7 @@ framework: [wandb, tensorboard] # Using custom mods 🧪 > 🧪 = This example covers experimental features that might change in future versions of Flower -> Please consult the regular PyTorch code examples ([quickstart](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch), [advanced](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)) to learn how to use Flower with PyTorch. +> Please consult the regular PyTorch code examples ([quickstart](../quickstart-pytorch/README.md), [advanced](../advanced-pytorch/README.md)) to learn how to use Flower with PyTorch. The following steps describe how to write custom Flower Mods and use them in a simple example. diff --git a/examples/embedded-devices/README.md b/examples/embedded-devices/README.md index c03646d475ac..c339528faee8 100644 --- a/examples/embedded-devices/README.md +++ b/examples/embedded-devices/README.md @@ -6,7 +6,8 @@ framework: [torch] # Federated AI with Embedded Devices using Flower -This example will show you how Flower makes it very easy to run Federated Learning workloads on edge devices. Here we'll be showing how to use Raspberry Pi as Flower clients, or better said, `SuperNodes`. The FL workload (i.e. model, dataset and training loop) is mostly borrowed from the [quickstart-pytorch](https://github.com/adap/flower/tree/main/examples/simulation-pytorch) example, but you could adjust it to follow [quickstart-tensorflow](https://github.com/adap/flower/tree/main/examples/quickstart-tensorflow) if you prefere using TensorFlow. The main difference compare to those examples is that here you'll learn how to use Flower's Deployment Engine to run FL across multiple embedded devices. 
+ +This example will show you how Flower makes it very easy to run Federated Learning workloads on edge devices. Here we'll be showing how to use Raspberry Pi as Flower clients, or better said, `SuperNodes`. The FL workload (i.e. model, dataset and training loop) is mostly borrowed from the [quickstart-pytorch](../quickstart-pytorch/README.md) example, but you could adjust it to follow [quickstart-tensorflow](../quickstart-tensorflow/README.md) if you prefer using TensorFlow. The main difference compared to those examples is that here you'll learn how to use Flower's Deployment Engine to run FL across multiple embedded devices. ![Different was of running Flower FL on embedded devices](_static/diagram.png) diff --git a/examples/fl-dp-sa/README.md b/examples/fl-dp-sa/README.md index 61a6c80f3556..e8e69b4851f0 100644 --- a/examples/fl-dp-sa/README.md +++ b/examples/fl-dp-sa/README.md @@ -8,7 +8,7 @@ framework: [torch, torchvision] This example demonstrates a federated learning setup using the Flower, incorporating central differential privacy (DP) with client-side fixed clipping and secure aggregation (SA). It is intended for a small number of rounds for demonstration purposes. -This example is similar to the [quickstart-pytorch example](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) and extends it by integrating central differential privacy and secure aggregation. For more details on differential privacy and secure aggregation in Flower, please refer to the documentation [here](https://flower.ai/docs/framework/how-to-use-differential-privacy.html) and [here](https://flower.ai/docs/framework/contributor-ref-secure-aggregation-protocols.html). +This example is similar to the [quickstart-pytorch example](../quickstart-pytorch/README.md) and extends it by integrating central differential privacy and secure aggregation. For more details on differential privacy and secure aggregation in Flower, please refer to the documentation [here](https://flower.ai/docs/framework/how-to-use-differential-privacy.html) and [here](https://flower.ai/docs/framework/contributor-ref-secure-aggregation-protocols.html). ## Set up the project diff --git a/examples/flower-authentication/README.md b/examples/flower-authentication/README.md index 4f312608503d..79f0de374966 100644 --- a/examples/flower-authentication/README.md +++ b/examples/flower-authentication/README.md @@ -8,7 +8,7 @@ framework: [torch, torchvision] > \[!NOTE\] > 🧪 = This example covers experimental features that might change in future versions of Flower. -> Please consult the regular PyTorch examples ([quickstart](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch), [advanced](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)) to learn how to use Flower with PyTorch. +> Please consult the regular PyTorch examples ([quickstart](../quickstart-pytorch/README.md), [advanced](../advanced-pytorch/README.md)) to learn how to use Flower with PyTorch. The following steps describe how to start a long-running Flower server (SuperLink+SuperExec) and a long-running Flower clients (SuperNode) with authentication enabled. The task is to train a simple CNN for image classification using PyTorch. 
diff --git a/examples/flower-secure-aggregation/README.md b/examples/flower-secure-aggregation/README.md index 0a9056263db3..4073612f11c6 100644 --- a/examples/flower-secure-aggregation/README.md +++ b/examples/flower-secure-aggregation/README.md @@ -6,7 +6,7 @@ framework: [torch, torchvision] # Secure aggregation with Flower (the SecAgg+ protocol) -The following steps describe how to use Flower's built-in Secure Aggregation components. This example demonstrates how to apply `SecAgg+` to the same federated learning workload as in the [quickstart-pytorch](https://github.com/adap/flower/tree/main/examples/quickstart-pytorch) example. The `ServerApp` uses the [`SecAggPlusWorkflow`](https://flower.ai/docs/framework/ref-api/flwr.server.workflow.SecAggPlusWorkflow.html#secaggplusworkflow) while `ClientApp` uses the [`secaggplus_mod`](https://flower.ai/docs/framework/ref-api/flwr.client.mod.secaggplus_mod.html#flwr.client.mod.secaggplus_mod). To introduce the various steps involved in `SecAgg+`, this example introduces as a sub-class of `SecAggPlusWorkflow` the `SecAggPlusWorkflowWithLogs`. It is enabled by default, but you can disable (see later in this readme). +The following steps describe how to use Flower's built-in Secure Aggregation components. This example demonstrates how to apply `SecAgg+` to the same federated learning workload as in the [quickstart-pytorch](../quickstart-pytorch/README.md) example. The `ServerApp` uses the [`SecAggPlusWorkflow`](https://flower.ai/docs/framework/ref-api/flwr.server.workflow.SecAggPlusWorkflow.html#secaggplusworkflow) while `ClientApp` uses the [`secaggplus_mod`](https://flower.ai/docs/framework/ref-api/flwr.client.mod.secaggplus_mod.html#flwr.client.mod.secaggplus_mod). To introduce the various steps involved in `SecAgg+`, this example introduces as a sub-class of `SecAggPlusWorkflow` the `SecAggPlusWorkflowWithLogs`. It is enabled by default, but you can disable it (see later in this readme). ## Set up the project diff --git a/examples/flower-simulation-step-by-step-pytorch/Part-I/README.md b/examples/flower-simulation-step-by-step-pytorch/Part-I/README.md index d961d29184de..29aee4385e82 100644 --- a/examples/flower-simulation-step-by-step-pytorch/Part-I/README.md +++ b/examples/flower-simulation-step-by-step-pytorch/Part-I/README.md @@ -4,7 +4,7 @@ In the first part of the Flower Simulation series, we go step-by-step through th ## Running the Code -In this tutorial we didn't dive in that much into Hydra configs (that's the content of [Part-II](https://github.com/adap/flower/tree/main/examples/flower-simulation-step-by-step-pytorch/Part-II)). However, this doesn't mean we can't easily configure our experiment directly from the command line. Let's see a couple of examples on how to run our simulation. +In this tutorial we didn't dive in that much into Hydra configs (that's the content of [Part-II](../Part-II/README.md)). However, this doesn't mean we can't easily configure our experiment directly from the command line. Let's see a couple of examples on how to run our simulation. 
```bash diff --git a/examples/flower-simulation-step-by-step-pytorch/Part-II/README.md b/examples/flower-simulation-step-by-step-pytorch/Part-II/README.md index 2c081ed9e7dc..74368f29fa72 100644 --- a/examples/flower-simulation-step-by-step-pytorch/Part-II/README.md +++ b/examples/flower-simulation-step-by-step-pytorch/Part-II/README.md @@ -1,11 +1,11 @@ # A Complete FL Simulation Pipeline using Flower (w/ better Hydra usage) -The code in this directory is fairly similar to that presented in [`simulation-pytorch example`](https://github.com/adap/flower/tree/main/examples/simulation-pytorch) but extended into a series of [step-by-step video tutorials](https://www.youtube.com/playlist?list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB) on how to Federated Learning simulations using Flower. In Part-I, we made use of a very simple config structure using a single `YAML` file. With the code here presented, we take a dive into more advanced config structures leveraging some of the core functionality of Hydra. You can find more information about Hydra in the [Hydra Documentation](https://hydra.cc/docs/intro/). To the files I have added a fair amount of comments to support and expand upon what was said in the video tutorial. +The code in this directory is fairly similar to that presented in [`simulation-pytorch example`](../../simulation-pytorch/README.md) but extended into a series of [step-by-step video tutorials](https://www.youtube.com/playlist?list=PLNG4feLHqCWlnj8a_E1A_n5zr2-8pafTB) on how to run Federated Learning simulations using Flower. In Part-I, we made use of a very simple config structure using a single `YAML` file. With the code here presented, we take a dive into more advanced config structures leveraging some of the core functionality of Hydra. You can find more information about Hydra in the [Hydra Documentation](https://hydra.cc/docs/intro/). To the files I have added a fair amount of comments to support and expand upon what was said in the video tutorial. The content of the code in this directory is roughly divided into two parts: - `toy.py` and its associated config files (i.e. `conf/toy.yaml` and `conf/toy_model/`) which were designed as a playground to test out some of the functionality of Hydra configs that we want to incorporate into our Flower projects. -- and the rest: which follows the exact same structure as in the code presented in [Part-I](https://github.com/adap/flower/tree/main/examples/flower-simulation-step-by-step-pytorch/Part-I) but that has been _enhanced_ using Hydra. +- and the rest: which follows the exact same structure as in the code presented in [Part-I](../Part-I/README.md) but that has been _enhanced_ using Hydra. ## Running the Code diff --git a/examples/opacus/README.md b/examples/opacus/README.md index d08f534f878e..28dae34967fb 100644 --- a/examples/opacus/README.md +++ b/examples/opacus/README.md @@ -6,7 +6,7 @@ framework: [opacus, torch] # Training with Sample-Level Differential Privacy using Opacus Privacy Engine -In this example, we demonstrate how to train a model with differential privacy (DP) using Flower. 
We employ PyTorch and integrate the Opacus Privacy Engine to achieve sample-level differential privacy. This setup ensures robust privacy guarantees during the client training phase. The code is adapted from the [PyTorch Quickstart example](../quickstart-pytorch/README.md). For more information about DP in Flower please refer to the [tutorial](https://flower.ai/docs/framework/how-to-use-differential-privacy.html). For additional information about Opacus, visit the official [website](https://opacus.ai/). diff --git a/examples/xgboost-comprehensive/README.md b/examples/xgboost-comprehensive/README.md index f65f2dbeb645..db97e0875426 100644 --- a/examples/xgboost-comprehensive/README.md +++ b/examples/xgboost-comprehensive/README.md @@ -8,7 +8,7 @@ framework: [xgboost] This example demonstrates a comprehensive federated learning setup using Flower with XGBoost. We use [HIGGS](https://archive.ics.uci.edu/dataset/280/higgs) dataset to perform a binary classification task. This examples uses [Flower Datasets](https://flower.ai/docs/datasets/) to retrieve, partition and preprocess the data for each Flower client. -It differs from the [xgboost-quickstart](https://github.com/adap/flower/tree/main/examples/xgboost-quickstart) example in the following ways: +It differs from the [xgboost-quickstart](../xgboost-quickstart/README.md) example in the following ways: - Customised FL settings. - Customised partitioner type (uniform, linear, square, exponential). diff --git a/examples/xgboost-quickstart/README.md b/examples/xgboost-quickstart/README.md index a7b047c090f0..97916997ec1e 100644 --- a/examples/xgboost-quickstart/README.md +++ b/examples/xgboost-quickstart/README.md @@ -10,7 +10,7 @@ This example demonstrates how to perform EXtreme Gradient Boosting (XGBoost) wit We use [HIGGS](https://archive.ics.uci.edu/dataset/280/higgs) dataset for this example to perform a binary classification task. Tree-based with bagging method is used for aggregation on the server. -This project provides a minimal code example to enable you to get started quickly. For a more comprehensive code example, take a look at [xgboost-comprehensive](https://github.com/adap/flower/tree/main/examples/xgboost-comprehensive). +This project provides a minimal code example to enable you to get started quickly. For a more comprehensive code example, take a look at [xgboost-comprehensive](../xgboost-comprehensive/README.md). ## Set up the project diff --git a/src/swift/flwr/README.md b/src/swift/flwr/README.md index 372bd0d5c6c5..869e527d0962 100644 --- a/src/swift/flwr/README.md +++ b/src/swift/flwr/README.md @@ -8,7 +8,7 @@ You can download the Flower project and integrate the package manually. ## Usage -A comprehensive example is available in: [```examples/ios/```](https://github.com/adap/flower/tree/main/examples/ios). To give information about the usage structurally: +A comprehensive example is available in: [```examples/ios/```](../../../examples/ios/README.md). To give information about the usage structurally: ``` import flwr