Commit: Migrate advanced TensorFlow to use FDS (#2806)

Co-authored-by: jafermarq <[email protected]>
adam-narozniak and jafermarq authored Jan 17, 2024
1 parent b00c77b commit 6cea1c7
Showing 6 changed files with 33 additions and 30 deletions.
9 changes: 5 additions & 4 deletions examples/advanced-tensorflow/README.md
@@ -1,9 +1,9 @@
# Advanced Flower Example (TensorFlow/Keras)

This example demonstrates an advanced federated learning setup using Flower with TensorFlow/Keras. It differs from the quickstart example in the following ways:
This example demonstrates an advanced federated learning setup using Flower with TensorFlow/Keras. It uses [Flower Datasets](https://flower.dev/docs/datasets/) and differs from the quickstart example in the following ways:

- 10 clients (instead of just 2)
- Each client holds a local dataset of 5000 training examples and 1000 test examples (note that by default only a small subset of this data is used when running the `run.sh` script)
- Each client holds a local partition of 1/10 of the training set, of which 80% is used as training examples and 20% as test examples (note that by default only a small subset of this data is used when running the `run.sh` script)
- Server-side model evaluation after parameter aggregation
- Hyperparameter schedule using config functions
- Custom return values
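The "hyperparameter schedule using config functions" bullet refers to a function the server calls each round to send settings to the clients; a minimal sketch (the specific schedule values here are assumptions for illustration, not taken from this diff):

```python
def fit_config(server_round: int):
    """Return a per-round training configuration sent to each client."""
    return {
        "batch_size": 32,
        # Train for more local epochs in later rounds (hypothetical schedule).
        "local_epochs": 1 if server_round < 2 else 2,
    }

print(fit_config(1))  # {'batch_size': 32, 'local_epochs': 1}
print(fit_config(3))  # {'batch_size': 32, 'local_epochs': 2}
```

In Flower, such a function is typically passed to a strategy so the returned dictionary arrives in each client's `fit` call as the `config` argument.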
@@ -57,10 +57,11 @@ pip install -r requirements.txt

## Run Federated Learning with TensorFlow/Keras and Flower

The included `run.sh` will call a script to generate certificates (which will be used by server and clients), start the Flower server (using `server.py`), sleep for 2 seconds to ensure the server is up, and then start 10 Flower clients (using `client.py`). You can simply start everything in a terminal as follows:
The included `run.sh` will call a script to generate certificates (which will be used by server and clients), start the Flower server (using `server.py`), sleep for 10 seconds to ensure the server is up, and then start 10 Flower clients (using `client.py`). You can simply start everything in a terminal as follows:

```shell
poetry run ./run.sh
# Once you have activated your environment
./run.sh
```

The `run.sh` script starts processes in the background so that you don't have to open eleven terminal windows. If you experiment with the code example and something goes wrong, simply using `CTRL + C` on Linux (or `CMD + C` on macOS) wouldn't normally kill all these processes, which is why the script ends with `trap "trap - SIGTERM && kill -- -$$" SIGINT SIGTERM EXIT` and `wait`. This simply allows you to stop the experiment using `CTRL + C` (or `CMD + C`). If you change the script and anything goes wrong you can still use `killall python` (or `killall python3`) to kill all background processes (or a more specific command if you have other Python processes running that you don't want to kill).
29 changes: 15 additions & 14 deletions examples/advanced-tensorflow/client.py
@@ -6,6 +6,8 @@

import flwr as fl

from flwr_datasets import FederatedDataset

# Make TensorFlow logs less verbose
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"

@@ -74,7 +76,7 @@ def main() -> None:
# Parse command line argument `partition`
parser = argparse.ArgumentParser(description="Flower")
parser.add_argument(
"--partition",
"--client-id",
type=int,
default=0,
choices=range(0, 10),
@@ -84,9 +86,7 @@ def main() -> None:
)
parser.add_argument(
"--toy",
type=bool,
default=False,
required=False,
action='store_true',
help="Set to true to quickly run the client using only 10 data samples. "
"Useful for testing purposes. Default: False",
)
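The switch from `type=bool` to `action='store_true'` in this hunk fixes a classic `argparse` pitfall: `bool()` of any non-empty string is `True`, so the old `--toy False` would still have enabled toy mode. A minimal sketch:

```python
import argparse

parser = argparse.ArgumentParser(description="store_true demo")
parser.add_argument("--toy", action="store_true",
                    help="Run with only 10 samples for quick testing.")

# With store_true, the flag's mere presence enables it; no value is parsed.
assert parser.parse_args([]).toy is False
assert parser.parse_args(["--toy"]).toy is True

# The old `type=bool` form was a pitfall: bool() of any non-empty string
# is True, so even `--toy False` would have enabled toy mode.
assert bool("False") is True
```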
@@ -99,7 +99,7 @@ def main() -> None:
model.compile("adam", "sparse_categorical_crossentropy", metrics=["accuracy"])

# Load a subset of CIFAR-10 to simulate the local data partition
(x_train, y_train), (x_test, y_test) = load_partition(args.partition)
x_train, y_train, x_test, y_test = load_partition(args.client_id)

if args.toy:
x_train, y_train = x_train[:10], y_train[:10]
@@ -117,15 +117,16 @@ def main() -> None:

def load_partition(idx: int):
"""Load 1/10th of the training and test data to simulate a partition."""
assert idx in range(10)
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.cifar10.load_data()
return (
x_train[idx * 5000 : (idx + 1) * 5000],
y_train[idx * 5000 : (idx + 1) * 5000],
), (
x_test[idx * 1000 : (idx + 1) * 1000],
y_test[idx * 1000 : (idx + 1) * 1000],
)
# Download and partition dataset
fds = FederatedDataset(dataset="cifar10", partitioners={"train": 10})
partition = fds.load_partition(idx)
partition.set_format("numpy")

# Divide data on each node: 80% train, 20% test
partition = partition.train_test_split(test_size=0.2)
x_train, y_train = partition["train"]["img"] / 255.0, partition["train"]["label"]
x_test, y_test = partition["test"]["img"] / 255.0, partition["test"]["label"]
return x_train, y_train, x_test, y_test
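The partition-then-split arithmetic of the new `load_partition` can be sketched in plain NumPy; this is a hypothetical stand-in for illustration, not the `flwr_datasets` API (which downloads and partitions the real CIFAR-10):

```python
import numpy as np

def split_partition(x, y, test_size=0.2, seed=42):
    """Shuffle one client's partition and split it into train/test,
    mirroring `Dataset.train_test_split(test_size=0.2)` (sketch only)."""
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(x))
    x, y = x[order], y[order]
    n_test = int(len(x) * test_size)
    return x[n_test:], y[n_test:], x[:n_test], y[:n_test]

# One client's share of CIFAR-10: 50_000 / 10 = 5_000 images of 32x32x3.
x = np.zeros((5_000, 32, 32, 3), dtype=np.float32)
y = np.zeros(5_000, dtype=np.int64)
x_train, y_train, x_test, y_test = split_partition(x, y)
print(len(x_train), len(x_test))  # 4000 1000
```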


if __name__ == "__main__":
1 change: 1 addition & 0 deletions examples/advanced-tensorflow/pyproject.toml
@@ -11,5 +11,6 @@ authors = ["The Flower Authors <[email protected]>"]
[tool.poetry.dependencies]
python = ">=3.8,<3.11"
flwr = ">=1.0,<2.0"
flwr-datasets = { extras = ["vision"], version = ">=0.0.2,<1.0.0" }
tensorflow-cpu = {version = ">=2.9.1,<2.11.1 || >2.11.1", markers = "platform_machine == \"x86_64\""}
tensorflow-macos = {version = ">=2.9.1,<2.11.1 || >2.11.1", markers = "sys_platform == \"darwin\" and platform_machine == \"arm64\""}
1 change: 1 addition & 0 deletions examples/advanced-tensorflow/requirements.txt
@@ -1,3 +1,4 @@
flwr>=1.0, <2.0
flwr-datasets[vision]>=0.0.2, <1.0.0
tensorflow-cpu>=2.9.1, != 2.11.1 ; platform_machine == "x86_64"
tensorflow-macos>=2.9.1, != 2.11.1 ; sys_platform == "darwin" and platform_machine == "arm64"
9 changes: 3 additions & 6 deletions examples/advanced-tensorflow/run.sh
@@ -5,14 +5,11 @@
echo "Starting server"

python server.py &
sleep 3 # Sleep for 3s to give the server enough time to start
sleep 10 # Sleep for 10s to give the server enough time to start and download the dataset

# Ensure that the Keras dataset used in client.py is already cached.
python -c "import tensorflow as tf; tf.keras.datasets.cifar10.load_data()"

for i in `seq 0 9`; do
for i in $(seq 0 9); do
echo "Starting client $i"
python client.py --partition=${i} --toy True &
python client.py --client-id=${i} --toy &
done

# This will allow you to use CTRL+C to stop all background processes
14 changes: 8 additions & 6 deletions examples/advanced-tensorflow/server.py
@@ -4,6 +4,8 @@
import flwr as fl
import tensorflow as tf

from flwr_datasets import FederatedDataset


def main() -> None:
# Load and compile model for
@@ -43,11 +45,11 @@ def main() -> None:
def get_evaluate_fn(model):
"""Return an evaluation function for server-side evaluation."""

# Load data and model here to avoid the overhead of doing it in `evaluate` itself
(x_train, y_train), _ = tf.keras.datasets.cifar10.load_data()

# Use the last 5k training examples as a validation set
x_val, y_val = x_train[45000:50000], y_train[45000:50000]
# Load data here to avoid the overhead of doing it in `evaluate` itself
fds = FederatedDataset(dataset="cifar10", partitioners={"train": 10})
test = fds.load_full("test")
test.set_format("numpy")
x_test, y_test = test["img"] / 255.0, test["label"]

# The `evaluate` function will be called after every round
def evaluate(
@@ -56,7 +58,7 @@ def evaluate(
config: Dict[str, fl.common.Scalar],
) -> Optional[Tuple[float, Dict[str, fl.common.Scalar]]]:
model.set_weights(parameters) # Update model with the latest parameters
loss, accuracy = model.evaluate(x_val, y_val)
loss, accuracy = model.evaluate(x_test, y_test)
return loss, {"accuracy": accuracy}

return evaluate
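The closure pattern in `get_evaluate_fn` (load data once, reuse it every round) can be sketched without any Flower or TensorFlow dependencies; the `FakeModel` below is a hypothetical stand-in so the sketch runs on its own:

```python
class FakeModel:
    """Hypothetical stand-in for the compiled Keras model (assumption,
    used only so this sketch runs without TensorFlow)."""
    def __init__(self):
        self.weights = None

    def set_weights(self, parameters):
        self.weights = parameters

    def evaluate(self, x, y):
        # Pretend the loss is the mean weight and accuracy is fixed.
        return sum(self.weights) / len(self.weights), 0.5


def get_evaluate_fn(model):
    # Expensive data loading happens once, outside the closure, so every
    # round reuses the same cached test set.
    x_test, y_test = [0] * 4, [0] * 4  # placeholder dataset

    def evaluate(server_round, parameters, config):
        model.set_weights(parameters)  # update with latest global weights
        loss, accuracy = model.evaluate(x_test, y_test)
        return loss, {"accuracy": accuracy}

    return evaluate


fn = get_evaluate_fn(FakeModel())
loss, metrics = fn(1, [1.0, 3.0], {})
print(loss, metrics)  # 2.0 {'accuracy': 0.5}
```

In the real server, the returned closure is handed to a strategy so it runs after every aggregation round.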
