diff --git a/docs/developer_guide/advanced_topics/overriding_agg_fn.rst b/docs/developer_guide/advanced_topics/overriding_agg_fn.rst
index 0e8bb892d8..0c14a83b10 100644
--- a/docs/developer_guide/advanced_topics/overriding_agg_fn.rst
+++ b/docs/developer_guide/advanced_topics/overriding_agg_fn.rst
@@ -7,19 +7,7 @@
 *****************************
 Override Aggregation Function
 *****************************
 
-With the aggregator-based workflow, you can use custom aggregation functions for each task via the Python API or the command line interface.
-
-
-Python API (Deprecated)
-=======================
-
-1. Create an implementation of :class:`openfl.interface.aggregation_functions.core.AggregationFunction`.
-
-2. In the ``override_config`` keyword argument of the :func:`openfl.native.run_experiment` native function, pass the implementation as a ``tasks.{task_name}.aggregation_type`` parameter.
-
-.. note::
-    See `Federated PyTorch MNIST Tutorial `_ for an example of a custom aggregation function.
-
+With the aggregator-based workflow, you can use custom aggregation functions for each task via the command line interface.
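+
+For instance, to aggregate a tensor with an element-wise median rather than a
+weighted average, subclass ``AggregationFunction`` and override its ``call``
+method. The sketch below is illustrative only: verify the exact ``call``
+signature and the ``LocalTensor`` fields against your installed |productName|
+version.
+
+.. code-block:: python
+
+    import numpy as np
+
+    from openfl.interface.aggregation_functions.core import AggregationFunction
+
+
+    class Median(AggregationFunction):
+        """Aggregate a tensor by taking the element-wise median."""
+
+        def call(self, local_tensors, db_iterator, tensor_name, fl_round, tags):
+            # Each LocalTensor holds one collaborator's version of the tensor;
+            # stack them and reduce along the collaborator axis.
+            tensors = [t.tensor for t in local_tensors]
+            return np.median(tensors, axis=0)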
 
 Command Line Interface
 ======================
diff --git a/docs/developer_guide/advanced_topics/overriding_plan_settings.rst b/docs/developer_guide/advanced_topics/overriding_plan_settings.rst
deleted file mode 100644
index 752214c5ea..0000000000
--- a/docs/developer_guide/advanced_topics/overriding_plan_settings.rst
+++ /dev/null
@@ -1,99 +0,0 @@
-.. # Copyright (C) 2020-2023 Intel Corporation
-.. # SPDX-License-Identifier: Apache-2.0
-
-.. _overriding_plan_settings:
-
-***********************
-Updating plan settings
-***********************
-
-With the director-based workflow, you can use custom plan settings before starting the experiment. Changing plan settings from the command line interface is straightforward: modify plan.yaml.
-When using the Python API or the Director-Envoy-based interactive API, **override_config** can be used to update plan settings.
-
-
-Python API (Deprecated)
-=======================
-
-Modify the plan settings:
-
-.. code-block:: python
-
-    final_fl_model = fx.run_experiment(collaborators, override_config={
-        'aggregator.settings.rounds_to_train': 5,
-        'aggregator.settings.log_metric_callback': write_metric,
-    })
-
-
-Director Envoy Based Interactive API Interface
-==============================================
-Once you create an FL_experiment object, a basic federated learning plan with default settings is created. To check the default plan settings, print the plan as shown below:
-
-.. code-block:: python
-
-    fl_experiment = FLExperiment(federation=federation, experiment_name=experiment_name)
-    import openfl.native as fx
-    print(fx.get_plan(fl_plan=fl_experiment.plan))
-
-Here is an example of the default plan settings that get displayed:
-
-.. code-block:: python
-
-    "aggregator.settings.best_state_path": "save/best.pbuf",
-    "aggregator.settings.db_store_rounds": 2,
-    "aggregator.settings.init_state_path": "save/init.pbuf",
-    "aggregator.settings.last_state_path": "save/last.pbuf",
-    "aggregator.settings.rounds_to_train": 10,
-    "aggregator.settings.write_logs": true,
-    "aggregator.template": "openfl.component.Aggregator",
-    "assigner.settings.task_groups.0.name": "train_and_validate",
-    "assigner.settings.task_groups.0.percentage": 1.0,
-    "assigner.settings.task_groups.0.tasks.0": "aggregated_model_validation",
-    "assigner.settings.task_groups.0.tasks.1": "train",
-    "assigner.settings.task_groups.0.tasks.2": "locally_tuned_model_validation",
-    "assigner.template": "openfl.component.RandomGroupedAssigner",
-    "collaborator.settings.db_store_rounds": 1,
-    "collaborator.settings.delta_updates": false,
-    "collaborator.settings.opt_treatment": "RESET",
-    "collaborator.template": "openfl.component.Collaborator",
-    "compression_pipeline.settings": {},
-    "compression_pipeline.template": "openfl.pipelines.NoCompressionPipeline",
-    "data_loader.settings": {},
-    "data_loader.template": "openfl.federated.DataLoader",
-    "network.settings.agg_addr": "auto",
-    "network.settings.agg_port": "auto",
-    "network.settings.cert_folder": "cert",
-    "network.settings.client_reconnect_interval": 5,
-    "network.settings.disable_client_auth": false,
-    "network.settings.hash_salt": "auto",
-    "network.settings.tls": true,
-    "network.template": "openfl.federation.Network",
-    "task_runner.settings": {},
-    "task_runner.template": "openfl.federated.task.task_runner.CoreTaskRunner",
-    "tasks.settings": {}
-
-
-Use **override_config** with FL_experiment.start to make any changes to the default plan settings. It is essentially a dictionary whose keys correspond to plan parameters, along with the corresponding values (or lists of values). Any new key entry will be added to the plan; for any key already present in the plan, the value will be overridden.
-
-
-.. code-block:: python
-
-    fl_experiment.start(model_provider=MI,
-                        task_keeper=TI,
-                        data_loader=fed_dataset,
-                        rounds_to_train=5,
-                        opt_treatment='CONTINUE_GLOBAL',
-                        override_config={'aggregator.settings.db_store_rounds': 1, 'compression_pipeline.template': 'openfl.pipelines.KCPipeline', 'compression_pipeline.settings.n_clusters': 2})
-
-
-Since the 'aggregator.settings.db_store_rounds' and 'compression_pipeline.template' fields are already present in the plan, the values of these fields get replaced. The 'compression_pipeline.settings.n_clusters' field is a new entry that gets added to the plan:
-
-.. code-block:: python
-
-    INFO     Updating aggregator.settings.db_store_rounds to 1...                          native.py:102
-
-    INFO     Updating compression_pipeline.template to openfl.pipelines.KCPipeline...     native.py:102
-
-    INFO     Did not find compression_pipeline.settings.n_clusters in config. Make sure it should exist. Creating...  native.py:105
-
-
-A full implementation can be found at `Federated_Pytorch_MNIST_Tutorial.ipynb `_ and at `Tensorflow_MNIST.ipynb `_.
\ No newline at end of file
diff --git a/docs/developer_guide/utilities/splitters_data.rst b/docs/developer_guide/utilities/splitters_data.rst
index 66064706d1..0ebb27d63f 100644
--- a/docs/developer_guide/utilities/splitters_data.rst
+++ b/docs/developer_guide/utilities/splitters_data.rst
@@ -13,20 +13,7 @@ Dataset Splitters
 
 You may apply data splitters differently depending on the |productName| workflow that you follow.
 
-OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data (Deprecated)
-========================================================================================================
-
-Predefined |productName| data splitter functions are as follows:
-
-- ``openfl.utilities.data_splitters.EqualNumPyDataSplitter`` (default)
-- ``openfl.utilities.data_splitters.RandomNumPyDataSplitter``
-- ``openfl.interface.aggregation_functions.LogNormalNumPyDataSplitter``, which assumes the ``data`` argument is an ``np.ndarray`` of integers (labels)
-- ``openfl.interface.aggregation_functions.DirichletNumPyDataSplitter``, which assumes the ``data`` argument is an ``np.ndarray`` of integers (labels)
-
-Alternatively, you can create an `implementation `_ of :class:`openfl.plugins.data_splitters.NumPyDataSplitter` and pass it to the :code:`FederatedDataset` function as either the ``train_splitter`` or the ``valid_splitter`` keyword argument.
-
-
-OPTION 2: Use Dataset Splitters in your Shard Descriptor
+OPTION 1: Use Dataset Splitters in your Shard Descriptor
 ========================================================
 
 Apply one of the previously mentioned splitting functions to your data to perform a simulation.
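+
+As a short sketch (assuming ``split`` takes the label array and the number of
+collaborators and returns one array of sample indices per collaborator), a
+shard descriptor could select its local slice like this:
+
+.. code-block:: python
+
+    import numpy as np
+
+    from openfl.utilities.data_splitters import RandomNumPyDataSplitter
+
+    labels = np.random.randint(0, 10, size=1000)  # toy label array
+    rank = 0  # this shard's position, e.g. read from the shard descriptor config
+
+    splitter = RandomNumPyDataSplitter()
+    # One index array per collaborator; keep only this shard's samples.
+    shard_indices = splitter.split(labels, 2)[rank]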
diff --git a/docs/get_started/examples.rst b/docs/get_started/examples.rst
index 7e3bed5b13..6c139944ba 100644
--- a/docs/get_started/examples.rst
+++ b/docs/get_started/examples.rst
@@ -29,20 +29,6 @@ See :ref:`running_the_task_runner`
 
 :ref:`running_the_task_runner`
 
-------------------------------
-Python Native API (Deprecated)
-------------------------------
-Intended for quick simulation purposes
-
-See :ref:`python_native_pytorch_mnist`
-
-.. toctree::
-   :hidden:
-   :maxdepth: 1
-
-   examples/python_native_pytorch_mnist
-
-
 -------------------------
 Interactive API
 -------------------------
diff --git a/docs/get_started/examples/python_native_pytorch_mnist.rst b/docs/get_started/examples/python_native_pytorch_mnist.rst
deleted file mode 100644
index 11bbe7d88c..0000000000
--- a/docs/get_started/examples/python_native_pytorch_mnist.rst
+++ /dev/null
@@ -1,173 +0,0 @@
-.. # Copyright (C) 2020-2023 Intel Corporation
-.. # SPDX-License-Identifier: Apache-2.0
-
-.. _python_native_pytorch_mnist:
-
-========================================================
-Python Native API: Federated PyTorch MNIST (Deprecated)
-========================================================
-
-In this tutorial, we will set up a federation and train a basic PyTorch model on the MNIST dataset using the Python Native API.
-See the `full notebook `_.
-
-.. note::
-
-    Ensure you have installed the |productName| package.
-
-    See :ref:`install_package` for details.
-
-
-Install additional dependencies if not already installed:
-
-.. code-block:: console
-
-    $ pip install torch torchvision
-
-.. code-block:: python
-
-    import numpy as np
-    import torch
-    import torch.nn as nn
-    import torch.nn.functional as F
-    import torch.optim as optim
-
-    import torchvision
-    import torchvision.transforms as transforms
-    import openfl.native as fx
-    from openfl.federated import FederatedModel, FederatedDataSet
-
-After importing the required packages, the next step is setting up our openfl workspace.
-To do this, simply run the ``fx.init()`` command as follows:
-
-.. code-block:: python
-
-    # Setup default workspace, logging, etc.
-    fx.init('torch_cnn_mnist', log_level='METRIC', log_file='./spam_metric.log')
-
-Now we are ready to define our dataset and model to perform federated learning on.
-The dataset should be composed of NumPy arrays. We start with a simple convolutional model that is trained on the MNIST dataset.
-
-.. code-block:: python
-
-    def one_hot(labels, classes):
-        return np.eye(classes)[labels]
-
-    transform = transforms.Compose(
-        [transforms.ToTensor(),
-         transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5))])
-
-    trainset = torchvision.datasets.MNIST(root='./data', train=True,
-                                          download=True, transform=transform)
-
-    train_images, train_labels = trainset.train_data, np.array(trainset.train_labels)
-    train_images = torch.from_numpy(np.expand_dims(train_images, axis=1)).float()
-    train_labels = one_hot(train_labels, 10)
-
-    validset = torchvision.datasets.MNIST(root='./data', train=False,
-                                          download=True, transform=transform)
-
-    valid_images, valid_labels = validset.test_data, np.array(validset.test_labels)
-    valid_images = torch.from_numpy(np.expand_dims(valid_images, axis=1)).float()
-    valid_labels = one_hot(valid_labels, 10)
-
-.. code-block:: python
-
-    feature_shape = train_images.shape[1]
-    classes = 10
-
-    fl_data = FederatedDataSet(train_images, train_labels, valid_images, valid_labels,
-                               batch_size=32, num_classes=classes)
-
-    class Net(nn.Module):
-        def __init__(self):
-            super(Net, self).__init__()
-            self.conv1 = nn.Conv2d(1, 16, 3)
-            self.pool = nn.MaxPool2d(2, 2)
-            self.conv2 = nn.Conv2d(16, 32, 3)
-            self.fc1 = nn.Linear(32 * 5 * 5, 32)
-            self.fc2 = nn.Linear(32, 84)
-            self.fc3 = nn.Linear(84, 10)
-
-        def forward(self, x):
-            x = self.pool(F.relu(self.conv1(x)))
-            x = self.pool(F.relu(self.conv2(x)))
-            x = x.view(x.size(0), -1)
-            x = F.relu(self.fc1(x))
-            x = F.relu(self.fc2(x))
-            x = self.fc3(x)
-            return F.log_softmax(x, dim=1)
-
-    optimizer = lambda x: optim.Adam(x, lr=1e-4)
-
-    def cross_entropy(output, target):
-        """Cross-entropy metric."""
-        return F.cross_entropy(input=output, target=target)
-
-
-Here we can define a metric logging function. It should have the signature described below. You can use it to write metrics to TensorBoard or to another logging backend.
-
-.. code-block:: python
-
-    from torch.utils.tensorboard import SummaryWriter
-
-    writer = SummaryWriter('./logs/cnn_mnist', flush_secs=5)
-
-
-    def write_metric(node_name, task_name, metric_name, metric, round_number):
-        writer.add_scalar("{}/{}/{}".format(node_name, task_name, metric_name),
-                          metric, round_number)
-
-.. code-block:: python
-
-    # Create a federated model using the PyTorch class, lambda optimizer function, and loss function
-    fl_model = FederatedModel(build_model=Net, optimizer=optimizer, loss_fn=cross_entropy, data_loader=fl_data)
-
-The ``FederatedModel`` object is a wrapper around your Keras, TensorFlow, or PyTorch model that makes it compatible with openfl.
-It provides built-in federated training and validation functions that we will see used below.
-Using its ``setup`` function, collaborator models and datasets can be automatically defined for the experiment.
-
-.. code-block:: python
-
-    collaborator_models = fl_model.setup(num_collaborators=2)
-    collaborators = {'one': collaborator_models[0], 'two': collaborator_models[1]}  # , 'three': collaborator_models[2]
-
-.. code-block:: python
-
-    # Original MNIST dataset
-    print(f'Original training data size: {len(train_images)}')
-    print(f'Original validation data size: {len(valid_images)}\n')
-
-    # Collaborator one's data
-    print(f'Collaborator one\'s training data size: {len(collaborator_models[0].data_loader.X_train)}')
-    print(f'Collaborator one\'s validation data size: {len(collaborator_models[0].data_loader.X_valid)}\n')
-
-    # Collaborator two's data
-    print(f'Collaborator two\'s training data size: {len(collaborator_models[1].data_loader.X_train)}')
-    print(f'Collaborator two\'s validation data size: {len(collaborator_models[1].data_loader.X_valid)}\n')
-
-    # Collaborator three's data
-    # print(f'Collaborator three\'s training data size: {len(collaborator_models[2].data_loader.X_train)}')
-    # print(f'Collaborator three\'s validation data size: {len(collaborator_models[2].data_loader.X_valid)}')
-
-We can see the current plan values by running the ``fx.get_plan()`` function:
-
-.. code-block:: python
-
-    # Get the current values of the plan. Each of these can be overridden
-    print(fx.get_plan())
-
-Now we are ready to run our experiment.
-If we want to pass in custom plan settings, we can easily do that with the ``override_config`` parameter:
-
-.. code-block:: python
-
-    # Run experiment, return trained FederatedModel
-    final_fl_model = fx.run_experiment(collaborators, override_config={
-        'aggregator.settings.rounds_to_train': 5,
-        'aggregator.settings.log_metric_callback': write_metric,
-    })
-
-.. code-block:: python
-
-    # Save final model
-    final_fl_model.save_native('final_pytorch_model')
diff --git a/docs/source/api/openfl_native.rst b/docs/source/api/openfl_native.rst
deleted file mode 100644
index 5f3f513340..0000000000
--- a/docs/source/api/openfl_native.rst
+++ /dev/null
@@ -1,16 +0,0 @@
-.. # Copyright (C) 2020-2024 Intel Corporation
-.. # SPDX-License-Identifier: Apache-2.0
-
-*************************************************
-Native Module (Deprecated)
-*************************************************
-
-Native modules reference:
-
-.. autosummary::
-   :toctree: _autosummary
-   :template: custom-module-template.rst
-   :recursive:
-
-   openfl.native
-
\ No newline at end of file