
Commit

Merge branch 'main' into deprecate-old-central-DPs
mohammadnaseri authored Jan 30, 2024
2 parents bf0b311 + b9b0a90 commit 1754df7
Showing 149 changed files with 5,411 additions and 2,483 deletions.
5 changes: 0 additions & 5 deletions .github/workflows/e2e.yml
@@ -89,11 +89,6 @@ jobs:
from torchvision.datasets import MNIST
MNIST('./data', download=True)
- directory: mxnet
dataset: |
import mxnet as mx
mx.test_utils.get_mnist()
- directory: scikit-learn
dataset: |
import openml
5 changes: 3 additions & 2 deletions README.md
@@ -56,11 +56,11 @@ Flower's goal is to make federated learning accessible to everyone. This series

2. **Using Strategies in Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-use-a-federated-learning-strategy-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb))

3. **Building Strategies for Federated Learning**

[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb))
[![Open in Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb) (or open the [Jupyter Notebook](https://github.com/adap/flower/blob/main/doc/source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb))

4. **Custom Clients for Federated Learning**

@@ -135,6 +135,7 @@ Other [examples](https://github.com/adap/flower/tree/main/examples):
- [Advanced Flower with TensorFlow/Keras](https://github.com/adap/flower/tree/main/examples/advanced-tensorflow)
- [Advanced Flower with PyTorch](https://github.com/adap/flower/tree/main/examples/advanced-pytorch)
- Single-Machine Simulation of Federated Learning Systems ([PyTorch](https://github.com/adap/flower/tree/main/examples/simulation_pytorch)) ([TensorFlow](https://github.com/adap/flower/tree/main/examples/simulation_tensorflow))
- [Flower through Docker Compose and with Grafana dashboard](https://github.com/adap/flower/tree/main/examples/flower-via-docker-compose)

## Community

2 changes: 1 addition & 1 deletion doc/source/example-jax-from-centralized-to-federated.rst
@@ -259,7 +259,7 @@ Having defined the federation process, we can run it.
# Start Flower client
client = FlowerClient(params, grad_fn, train_x, train_y, test_x, test_y)
fl.client.start_numpy_client(server_address="0.0.0.0:8080", client)
fl.client.start_client(server_address="0.0.0.0:8080", client=client.to_client())
if __name__ == "__main__":
main()
4 changes: 2 additions & 2 deletions doc/source/example-pytorch-from-centralized-to-federated.rst
@@ -278,7 +278,7 @@ We included type annotations to give you a better understanding of the data type
return float(loss), self.num_examples["testset"], {"accuracy": float(accuracy)}
All that's left to do is to define a function that loads both model and data, creates a :code:`CifarClient`, and starts this client.
You load your data and model by using :code:`cifar.py`. Start :code:`CifarClient` with the function :code:`fl.client.start_numpy_client()` by pointing it at the same IP adress we used in :code:`server.py`:
You load your data and model by using :code:`cifar.py`. Start :code:`CifarClient` with the function :code:`fl.client.start_client()` by pointing it at the same IP address we used in :code:`server.py`:

.. code-block:: python
@@ -292,7 +292,7 @@ You load your data and model by using :code:`cifar.py`. Start :code:`CifarClient
# Start client
client = CifarClient(model, trainloader, testloader, num_examples)
fl.client.start_numpy_client(server_address="0.0.0.0:8080", client)
fl.client.start_client(server_address="0.0.0.0:8080", client=client.to_client())
if __name__ == "__main__":
4 changes: 2 additions & 2 deletions doc/source/how-to-enable-ssl-connections.rst
@@ -75,9 +75,9 @@ We are now going to show how to write a client which uses the previously generat
client = MyFlowerClient()
# Start client
fl.client.start_numpy_client(
fl.client.start_client(
"localhost:8080",
client=client,
client=client.to_client(),
root_certificates=Path(".cache/certificates/ca.crt").read_bytes(),
)
2 changes: 1 addition & 1 deletion doc/source/how-to-run-simulations.rst
@@ -7,7 +7,7 @@ Run simulations

Simulating Federated Learning workloads is useful for a multitude of use-cases: you might want to run your workload on a large cohort of clients but without having to source, configure and manage a large number of physical devices; you might want to run your FL workloads as fast as possible on the compute systems you have access to without having to go through a complex setup process; you might want to validate your algorithm on different scenarios at varying levels of data and system heterogeneity, client availability, privacy budgets, etc. These are some of the use-cases where simulating FL workloads makes sense. Flower can accommodate these scenarios by means of its `VirtualClientEngine <contributor-explanation-architecture.html#virtual-client-engine>`_ or VCE.

The :code:`VirtualClientEngine` schedules, launches and manages `virtual` clients. These clients are identical to `non-virtual` clients (i.e. the ones you launch via the command `flwr.client.start_numpy_client <ref-api-flwr.html#start-numpy-client>`_) in the sense that they can be configure by creating a class inheriting, for example, from `flwr.client.NumPyClient <ref-api-flwr.html#flwr.client.NumPyClient>`_ and therefore behave in an identical way. In addition to that, clients managed by the :code:`VirtualClientEngine` are:
The :code:`VirtualClientEngine` schedules, launches and manages `virtual` clients. These clients are identical to `non-virtual` clients (i.e. the ones you launch via the function `flwr.client.start_client <ref-api-flwr.html#start-client>`_) in the sense that they can be configured by creating a class inheriting, for example, from `flwr.client.NumPyClient <ref-api-flwr.html#flwr.client.NumPyClient>`_, and therefore behave in an identical way. In addition to that, clients managed by the :code:`VirtualClientEngine` are:

* resource-aware: this means that each client gets assigned a portion of the compute and memory on your system. You as a user can control this at the beginning of the simulation, which lets you control the degree of parallelism of your Flower FL simulation. The fewer the resources per client, the more clients can run concurrently on the same hardware.
* self-managed: this means that you as a user do not need to launch clients manually, instead this gets delegated to :code:`VirtualClientEngine`'s internals.
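The resource-aware scheduling idea above can be sketched in plain Python. This is a conceptual illustration only, not the flwr API: the CPU counts, the `run_client` helper, and the thread pool are all stand-ins. With 4 CPUs available and an assumed 2 CPUs per client, at most two virtual clients run concurrently.

```python
# Conceptual sketch of resource-aware client scheduling (NOT the flwr API).
# Fewer resources per client => more clients can run at the same time.
from concurrent.futures import ThreadPoolExecutor

TOTAL_CPUS = 4        # assumed resources on the host
CPUS_PER_CLIENT = 2   # assumed per-client allocation
max_parallel = TOTAL_CPUS // CPUS_PER_CLIENT  # at most 2 clients at once


def run_client(cid: int) -> str:
    # Stand-in for one round of local training on virtual client `cid`.
    return f"client-{cid}: done"


# The pool plays the role of the scheduler: it launches clients as slots free up.
with ThreadPoolExecutor(max_workers=max_parallel) as pool:
    results = list(pool.map(run_client, range(5)))

print(results[0])  # "client-0: done"
```

Halving `CPUS_PER_CLIENT` to 1 would double `max_parallel`, mirroring the trade-off described in the resource-aware bullet.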
4 changes: 4 additions & 0 deletions doc/source/ref-changelog.md
@@ -8,6 +8,10 @@

- **Retiring MXNet examples** The development of the MXNet framework has ended and the project is now [archived on GitHub](https://github.com/apache/mxnet). Existing MXNet examples won't receive updates. ([#2724](https://github.com/adap/flower/pull/2724))

- **Deprecated `start_numpy_client`**. ([#2563](https://github.com/adap/flower/pull/2563))

  Until now, clients of type `NumPyClient` had to be started via `start_numpy_client`. As part of consolidating the core framework, all client types should now be started via `start_client`. To continue using `NumPyClient` clients, simply call the `.to_client()` method first and pass the returned `Client` object to `start_client`. The examples and the documentation have been updated accordingly.
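  The migration pattern can be illustrated with a self-contained toy. The classes below (`Client`, `NumPyClient`, `start_client`, `MyClient`) are stand-ins, not the real flwr API: the point is only that `to_client()` acts as an adapter wrapping a `NumPyClient`-style object into the generic `Client` interface that `start_client` expects.

  ```python
  # Toy sketch of the to_client() adapter pattern (NOT the real flwr API).
  class Client:
      """Generic client interface (stand-in for flwr.client.Client)."""
      def fit(self, parameters):
          raise NotImplementedError


  class NumPyClient:
      """NumPy-friendly client (stand-in for flwr.client.NumPyClient)."""
      def fit(self, parameters):
          raise NotImplementedError

      def to_client(self) -> "Client":
          numpy_client = self

          class _Adapter(Client):
              # Delegate calls to the wrapped NumPyClient.
              def fit(self, parameters):
                  return numpy_client.fit(parameters)

          return _Adapter()


  class MyClient(NumPyClient):
      def fit(self, parameters):
          return [p + 1 for p in parameters]


  def start_client(server_address: str, client: Client):
      # Accepts only the generic Client interface, hence the need for to_client().
      assert isinstance(client, Client)
      return client.fit([1, 2, 3])


  result = start_client(server_address="0.0.0.0:8080", client=MyClient().to_client())
  print(result)  # [2, 3, 4]
  ```

  Calling `start_client(..., client=MyClient())` without `to_client()` would fail the interface check, which is exactly the situation the deprecation message steers users away from.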

- **Update Flower Baselines**

- HFedXGBoost [#2226](https://github.com/adap/flower/pull/2226)
4 changes: 2 additions & 2 deletions doc/source/tutorial-quickstart-huggingface.rst
@@ -212,9 +212,9 @@ We can now start client instances using:

.. code-block:: python
fl.client.start_numpy_client(
fl.client.start_client(
server_address="127.0.0.1:8080",
client=IMDBClient()
client=IMDBClient().to_client()
)
2 changes: 1 addition & 1 deletion doc/source/tutorial-quickstart-jax.rst
@@ -265,7 +265,7 @@ Having defined the federation process, we can run it.
# Start Flower client
client = FlowerClient(params, grad_fn, train_x, train_y, test_x, test_y)
fl.client.start_numpy_client(server_address="0.0.0.0:8080", client)
fl.client.start_client(server_address="0.0.0.0:8080", client=client.to_client())
if __name__ == "__main__":
main()
4 changes: 2 additions & 2 deletions doc/source/tutorial-quickstart-pytorch.rst
@@ -191,10 +191,10 @@ to actually run this client:

.. code-block:: python
fl.client.start_numpy_client(server_address="[::]:8080", client=CifarClient())
fl.client.start_client(server_address="[::]:8080", client=CifarClient().to_client())
That's it for the client. We only have to implement :code:`Client` or
:code:`NumPyClient` and call :code:`fl.client.start_client()` or :code:`fl.client.start_numpy_client()`. The string :code:`"[::]:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`NumPyClient` and call :code:`fl.client.start_client()`. If you implement a client of type :code:`NumPyClient` you'll need to first call its :code:`to_client()` method. The string :code:`"[::]:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`"[::]:8080"`. If we run a truly federated workload with the server and
clients running on different machines, all that needs to change is the
:code:`server_address` we point the client at.
4 changes: 2 additions & 2 deletions doc/source/tutorial-quickstart-scikitlearn.rst
@@ -145,10 +145,10 @@ to actually run this client:

.. code-block:: python
fl.client.start_numpy_client("0.0.0.0:8080", client=MnistClient())
fl.client.start_client("0.0.0.0:8080", client=MnistClient().to_client())
That's it for the client. We only have to implement :code:`Client` or
:code:`NumPyClient` and call :code:`fl.client.start_client()` or :code:`fl.client.start_numpy_client()`. The string :code:`"0.0.0.0:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`NumPyClient` and call :code:`fl.client.start_client()`. If you implement a client of type :code:`NumPyClient` you'll need to first call its :code:`to_client()` method. The string :code:`"0.0.0.0:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`"0.0.0.0:8080"`. If we run a truly federated workload with the server and
clients running on different machines, all that needs to change is the
:code:`server_address` we pass to the client.
4 changes: 2 additions & 2 deletions doc/source/tutorial-quickstart-tensorflow.rst
@@ -84,11 +84,11 @@ to actually run this client:

.. code-block:: python
fl.client.start_numpy_client(server_address="[::]:8080", client=CifarClient())
fl.client.start_client(server_address="[::]:8080", client=CifarClient().to_client())
That's it for the client. We only have to implement :code:`Client` or
:code:`NumPyClient` and call :code:`fl.client.start_client()` or :code:`fl.client.start_numpy_client()`. The string :code:`"[::]:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`NumPyClient` and call :code:`fl.client.start_client()`. If you implement a client of type :code:`NumPyClient` you'll need to first call its :code:`to_client()` method. The string :code:`"[::]:8080"` tells the client which server to connect to. In our case we can run the server and the client on the same machine, therefore we use
:code:`"[::]:8080"`. If we run a truly federated workload with the server and
clients running on different machines, all that needs to change is the
:code:`server_address` we point the client at.