diff --git a/README.md b/README.md index aba069a867..a16962cca9 100644 --- a/README.md +++ b/README.md @@ -34,26 +34,21 @@ For more installation options check out the [online documentation](https://openf ## Getting Started +OpenFL supports two APIs to set up a Federated Learning experiment: -OpenFL enables data scientists to set up a federated learning experiment following one of the workflows: +- [Task Runner API](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html): +Define an experiment and distribute it manually. All participants can verify model code and [FL plan](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html#federated-learning-plan-fl-plan-settings) prior to execution. The federation is terminated when the experiment is finished. This API is meant for enterprise-grade FL experiments, including support for mTLS-based communication channels and TEE-ready nodes (based on Intel® SGX). -- [Aggregator-based Workflow](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html): -Define an experiment and distribute it manually. All participants can verify model code and [FL plan](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html#federated-learning-plan-fl-plan-settings) prior to execution. The federation is terminated when the experiment is finished - -- [Workflow Interface](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html) ([*experimental*](https://openfl.readthedocs.io/en/latest/developer_guide/experimental_features.html)): -Create complex experiments that extend beyond traditional horizontal federated learning. 
See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb). - -The quickest way to test OpenFL is to follow our [tutorials](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials).
-Read the [blog post](https://towardsdatascience.com/go-federated-with-openfl-8bc145a5ead1) explaining steps to train a model with OpenFL.
-Check out the [online documentation](https://openfl.readthedocs.io/en/latest/index.html) to launch your first federation. +- [Workflow API](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html) ([*experimental*](https://openfl.readthedocs.io/en/latest/developer_guide/experimental_features.html)): +Create complex experiments that extend beyond traditional horizontal federated learning. This API enables an experiment to be simulated locally, then seamlessly scaled to a federated setting. See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb). +The quickest way to test OpenFL is to follow the [online documentation](https://openfl.readthedocs.io/en/latest/index.html) to launch your first federation.
+Read the [blog post](https://medium.com/openfl/from-centralized-machine-learning-to-federated-learning-with-openfl-b3e61da52432) explaining steps to train a model with OpenFL.
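Most of the aggregation algorithms OpenFL ships (see the table under Supported Aggregation Algorithms) build on FedAvg, which reduces to a dataset-size-weighted average of collaborator updates. A minimal, framework-agnostic sketch of that aggregation step (illustrative only, not OpenFL's actual implementation):

```python
# Illustrative sketch of the FedAvg aggregation step (McMahan et al., 2017).
# Not OpenFL's implementation: OpenFL operates on framework tensors, this
# toy version uses plain Python lists as flattened model weights.

def fedavg(updates):
    """Dataset-size-weighted average of collaborator model updates.

    updates: list of (num_samples, weights) tuples, where weights is a
    list of floats representing a flattened model.
    """
    total = sum(n for n, _ in updates)
    dim = len(updates[0][1])
    return [
        sum(n * w[i] for n, w in updates) / total
        for i in range(dim)
    ]

# Two collaborators with different dataset sizes.
merged = fedavg([(100, [1.0, 2.0]), (300, [3.0, 4.0])])
print(merged)  # [2.5, 3.5]
```

The collaborator holding 300 samples pulls the merged model three times as hard toward its update as the one holding 100, which is the whole point of weighting by dataset size.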
## Requirements -- Ubuntu Linux 18.04+ -- Python 3.7+ (recommended to use with [Virtualenv](https://virtualenv.pypa.io/en/latest/)). - -OpenFL supports training with TensorFlow 2+ or PyTorch 1.3+ which should be installed separately. User can extend the list of supported Deep Learning frameworks if needed. +OpenFL supports popular NumPy-based ML frameworks such as TensorFlow, PyTorch, and JAX, which should be installed separately.
+Users can extend the list of supported Machine Learning frameworks if needed. ## Project Overview ### What is Federated Learning @@ -82,29 +77,24 @@ You can find more details in the following articles: ### Supported Aggregation Algorithms -| Algorithm Name | Paper | PyTorch implementation | TensorFlow implementation | Other frameworks compatibility | How to use | -| -------------- | ----- | :--------------------: | :-----------------------: | :----------------------------: | ---------- | -| FedAvg | [McMahan et al., 2017](https://arxiv.org/pdf/1602.05629.pdf) | ✅ | ✅ | ✅ | [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) | -| FedProx | [Li et al., 2020](https://arxiv.org/pdf/1812.06127.pdf) | ✅ | ✅ | ❌ | [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) | -| FedOpt | [Reddi et al., 2020](https://arxiv.org/abs/2003.00295) | ✅ | ✅ | ✅ | [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) | -| FedCurv | [Shoham et al., 2019](https://arxiv.org/pdf/1910.07796.pdf) | ✅ | ❌ | ❌ | [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) | +| Algorithm Name | Paper | PyTorch implementation | TensorFlow implementation | Other frameworks compatibility | +| -------------- | ----- | :--------------------: | :-----------------------: | :----------------------------: | +| FedAvg | [McMahan et al., 2017](https://arxiv.org/pdf/1602.05629.pdf) | ✅ | ✅ | ✅ | +| FedProx | [Li et al., 2020](https://arxiv.org/pdf/1812.06127.pdf) | ✅ | ✅ | ❌ | +| FedOpt | [Reddi et al., 2020](https://arxiv.org/abs/2003.00295) | ✅ | ✅ | ✅ | +| FedCurv | [Shoham et al., 2019](https://arxiv.org/pdf/1910.07796.pdf) | ✅ | ❌ | ❌ | ## Support -Please join us for our bi-monthly community meetings starting December 1 & 2, 2022!
-Meet with some of the OpenFL team members behind OpenFL.
-We will be going over our roadmap, open for Q&A, and welcome idea sharing.
-Calendar and links to a Community calls are [here](https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=70648254) - -Subscribe to the OpenFL mail list openfl-announce@lists.lfaidata.foundation +The OpenFL community is growing, and we invite you to be a part of it. Join the [Slack channel](https://join.slack.com/t/openfl/shared_invite/zt-ovzbohvn-T5fApk05~YS_iZhjJ5yaTw) to connect with fellow enthusiasts, share insights, and contribute to the future of federated learning. +Consider subscribing to the OpenFL mailing list openfl-announce@lists.lfaidata.foundation See you there! We also always welcome questions, issue reports, and suggestions via: * [GitHub Issues](https://github.com/securefederatedai/openfl/issues) -* [Slack workspace](https://join.slack.com/t/openfl/shared_invite/zt-ovzbohvn-T5fApk05~YS_iZhjJ5yaTw) +* [GitHub Discussions](https://github.com/securefederatedai/openfl/discussions) ## License This project is licensed under [Apache License Version 2.0](LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms. diff --git a/docs/about/features_index/pynative.rst b/docs/about/features_index/pynative.rst index 83d4143cf8..562bda3a05 100644 --- a/docs/about/features_index/pynative.rst +++ b/docs/about/features_index/pynative.rst @@ -7,7 +7,7 @@ :orphan: -================= -Python Native API -================= +============================== +Python Native API (Deprecated) +============================== TODO diff --git a/docs/developer_guide/advanced_topics/overriding_agg_fn.rst b/docs/developer_guide/advanced_topics/overriding_agg_fn.rst index ba393f86ca..e2bc2fd396 100644 --- a/docs/developer_guide/advanced_topics/overriding_agg_fn.rst +++ b/docs/developer_guide/advanced_topics/overriding_agg_fn.rst @@ -10,7 +10,7 @@ Override Aggregation Function With the aggregator-based workflow, you can use custom aggregation functions for each task via Python\*\ API or command line interface. -Python API -========== +Python API (Deprecated) +======================= 1.
Create an implementation of :class:`openfl.interface.aggregation_functions.core.AggregationFunction`. diff --git a/docs/developer_guide/advanced_topics/overriding_plan_settings.rst b/docs/developer_guide/advanced_topics/overriding_plan_settings.rst index 629e8a017a..aae1f7ea02 100644 --- a/docs/developer_guide/advanced_topics/overriding_plan_settings.rst +++ b/docs/developer_guide/advanced_topics/overriding_plan_settings.rst @@ -11,7 +11,7 @@ With the director-based workflow, you can use custom plan settings before starti When using Python API or Director Envoy based interactive API (Deprecated), **override_config** can be used to update plan settings. -Python API -========== +Python API (Deprecated) +======================= Modify the plan settings: diff --git a/docs/developer_guide/running_the_federation.notebook.rst b/docs/developer_guide/running_the_federation.notebook.rst index ce8c75df72..d335c112d0 100644 --- a/docs/developer_guide/running_the_federation.notebook.rst +++ b/docs/developer_guide/running_the_federation.notebook.rst @@ -4,7 +4,7 @@ .. _running_notebook: -********************************** -Aggregator-Based Workflow Tutorial -********************************** +*********************************************** +Aggregator-Based Workflow Tutorial (Deprecated) +*********************************************** You will start a Jupyter\* \ lab server and receive a URL you can use to access the tutorials. Jupyter notebooks are provided for PyTorch\* \ and TensorFlow\* \ that simulate a federation on a local machine. diff --git a/docs/developer_guide/utilities/splitters_data.rst b/docs/developer_guide/utilities/splitters_data.rst index 56b08ee6a1..66064706d1 100644 --- a/docs/developer_guide/utilities/splitters_data.rst +++ b/docs/developer_guide/utilities/splitters_data.rst @@ -13,7 +13,7 @@ Dataset Splitters You may apply data splitters differently depending on the |productName| workflow that you follow.
-OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data -=========================================================================================== +OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data (Deprecated) +======================================================================================================== Predefined |productName| data splitters functions are as follows: diff --git a/docs/get_started/examples.rst b/docs/get_started/examples.rst index f358090e06..ff1e3f0364 100644 --- a/docs/get_started/examples.rst +++ b/docs/get_started/examples.rst @@ -30,7 +30,7 @@ See :ref:`running_the_task_runner` :ref:`running_the_task_runner` -------------------------- -Python Native API -------------------------- +------------------------------ +Python Native API (Deprecated) +------------------------------ Intended for quick simulation purposes diff --git a/docs/get_started/examples/python_native_pytorch_mnist.rst b/docs/get_started/examples/python_native_pytorch_mnist.rst index 8105ad495c..11bbe7d88c 100644 --- a/docs/get_started/examples/python_native_pytorch_mnist.rst +++ b/docs/get_started/examples/python_native_pytorch_mnist.rst @@ -4,7 +4,7 @@ .. _python_native_pytorch_mnist: -========================================== -Python Native API: Federated PyTorch MNIST -========================================== +======================================================= +Python Native API: Federated PyTorch MNIST (Deprecated) +======================================================= In this tutorial, we will set up a federation and train a basic PyTorch model on the MNIST dataset using the Python Native API. diff --git a/docs/source/api/openfl_native.rst b/docs/source/api/openfl_native.rst index bd9eb608d3..5f3f513340 100644 --- a/docs/source/api/openfl_native.rst +++ b/docs/source/api/openfl_native.rst @@ -2,7 +2,7 @@ ..
# SPDX-License-Identifier: Apache-2.0 ************************************************* -Native Module +Native Module (Deprecated) ************************************************* Native modules reference: diff --git a/openfl-tutorials/Federated_FedProx_Keras_MNIST_Tutorial.ipynb b/openfl-tutorials/deprecated/native_api/Federated_FedProx_Keras_MNIST_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_FedProx_Keras_MNIST_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_FedProx_Keras_MNIST_Tutorial.ipynb diff --git a/openfl-tutorials/Federated_FedProx_PyTorch_MNIST_Tutorial.ipynb b/openfl-tutorials/deprecated/native_api/Federated_FedProx_PyTorch_MNIST_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_FedProx_PyTorch_MNIST_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_FedProx_PyTorch_MNIST_Tutorial.ipynb diff --git a/openfl-tutorials/Federated_Keras_MNIST_Tutorial.ipynb b/openfl-tutorials/deprecated/native_api/Federated_Keras_MNIST_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_Keras_MNIST_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_Keras_MNIST_Tutorial.ipynb diff --git a/openfl-tutorials/Federated_PyTorch_TinyImageNet.ipynb b/openfl-tutorials/deprecated/native_api/Federated_PyTorch_TinyImageNet.ipynb similarity index 100% rename from openfl-tutorials/Federated_PyTorch_TinyImageNet.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_PyTorch_TinyImageNet.ipynb diff --git a/openfl-tutorials/Federated_PyTorch_UNET_Tutorial.ipynb b/openfl-tutorials/deprecated/native_api/Federated_PyTorch_UNET_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_PyTorch_UNET_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_PyTorch_UNET_Tutorial.ipynb diff --git a/openfl-tutorials/Federated_Pytorch_MNIST_Tutorial.ipynb 
b/openfl-tutorials/deprecated/native_api/Federated_Pytorch_MNIST_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_Pytorch_MNIST_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_Pytorch_MNIST_Tutorial.ipynb diff --git a/openfl-tutorials/Federated_Pytorch_MNIST_custom_aggregation_Tutorial.ipynb b/openfl-tutorials/deprecated/native_api/Federated_Pytorch_MNIST_custom_aggregation_Tutorial.ipynb similarity index 100% rename from openfl-tutorials/Federated_Pytorch_MNIST_custom_aggregation_Tutorial.ipynb rename to openfl-tutorials/deprecated/native_api/Federated_Pytorch_MNIST_custom_aggregation_Tutorial.ipynb diff --git a/openfl-workspace/torch_llm_horovod/src/InHorovodrun.py b/openfl-workspace/torch_llm_horovod/src/InHorovodrun.py index e4e24bf760..3c7746d831 100644 --- a/openfl-workspace/torch_llm_horovod/src/InHorovodrun.py +++ b/openfl-workspace/torch_llm_horovod/src/InHorovodrun.py @@ -9,7 +9,7 @@ import horovod.torch as hvd -import openfl.native as fx +from openfl.interface.cli import setup_logging SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__)) sys.path.append(os.path.dirname(SCRIPT_DIR)) @@ -50,7 +50,7 @@ def get_args(): def main(): logger = getLogger(__name__) - fx.setup_logging(level="INFO", log_file=None) + setup_logging() try: logger.info("starting horovod") hvd.init() diff --git a/openfl/interface/model.py b/openfl/interface/model.py index 3cadb25051..9852124c6d 100644 --- a/openfl/interface/model.py +++ b/openfl/interface/model.py @@ -11,7 +11,6 @@ from click import confirm, group, option, pass_context, style from openfl.federated import Plan -from openfl.pipelines import NoCompressionPipeline from openfl.protocols import utils from openfl.utilities.click_types import InputSpec from openfl.utilities.dataloading import get_dataloader @@ -168,13 +167,14 @@ def get_model( ) data_loader = get_dataloader(plan, prefer_minimal=True, input_shape=input_shape) task_runner = 
plan.get_task_runner(data_loader=data_loader) + tensor_pipe = plan.get_tensor_pipe() model_protobuf_path = Path(model_protobuf_path).resolve() logger.info("Loading OpenFL model protobuf: 🠆 %s", model_protobuf_path) model_protobuf = utils.load_proto(model_protobuf_path) - tensor_dict, _ = utils.deconstruct_model_proto(model_protobuf, NoCompressionPipeline()) + tensor_dict, _ = utils.deconstruct_model_proto(model_protobuf, tensor_pipe) # This may break for multiple models. # task_runner.set_tensor_dict will need to handle multiple models
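The model.py hunk above stops hard-coding `NoCompressionPipeline` and instead decodes the model protobuf with the tensor pipe the plan actually configures (`plan.get_tensor_pipe()`). This matters whenever a plan enables a compression pipeline: serialization and deserialization must apply inverse transforms from the same pipeline, or the recovered weights are wrong. A toy sketch of the failure mode, using hypothetical stand-in classes rather than OpenFL's real pipeline or protobuf code:

```python
# Illustrative sketch: why deconstruct_model_proto must use the same tensor
# pipeline the model was serialized with. These classes are hypothetical
# stand-ins, not OpenFL's pipelines.

class NoCompressionPipeline:
    def forward(self, data):
        return data          # identity transform

    def backward(self, data):
        return data


class ScalePipeline:
    """Toy 'compression': stores values at half scale."""
    def forward(self, data):
        return [x / 2 for x in data]

    def backward(self, data):
        return [x * 2 for x in data]


def construct(tensor, pipe):
    """Serialize (stand-in for building the model protobuf)."""
    return pipe.forward(tensor)


def deconstruct(proto, pipe):
    """Deserialize (stand-in for deconstruct_model_proto)."""
    return pipe.backward(proto)


weights = [2.0, 4.0]
proto = construct(weights, ScalePipeline())

# Matching pipeline (what the patch achieves via plan.get_tensor_pipe()):
assert deconstruct(proto, ScalePipeline()) == weights

# Mismatched pipeline (the old hard-coded NoCompressionPipeline):
assert deconstruct(proto, NoCompressionPipeline()) != weights
```

With the identity `NoCompressionPipeline` the mismatch was invisible, which is presumably why the hard-coding survived until compression-enabled plans hit this path.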