Merge branch 'securefederatedai:develop' into xgboost-fedbagging
kta-intel authored Nov 19, 2024
2 parents 34f7d8a + 1b586cb commit 837031b
Showing 18 changed files with 30 additions and 40 deletions.
46 changes: 18 additions & 28 deletions README.md
@@ -34,26 +34,21 @@ For more installation options check out the [online documentation](https://openf

## Getting Started

OpenFL supports two APIs to set up a Federated Learning experiment:

OpenFL enables data scientists to set up a federated learning experiment following one of the workflows:
- [Task Runner API](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html):
Define an experiment and distribute it manually. All participants can verify model code and [FL plan](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html#federated-learning-plan-fl-plan-settings) prior to execution. The federation is terminated when the experiment is finished. This API is meant for enterprise-grade FL experiments, including support for mTLS-based communication channels and TEE-ready nodes (based on Intel® SGX).

- [Aggregator-based Workflow](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html):
Define an experiment and distribute it manually. All participants can verify model code and [FL plan](https://openfl.readthedocs.io/en/latest/about/features_index/taskrunner.html#federated-learning-plan-fl-plan-settings) prior to execution. The federation is terminated when the experiment is finished

- [Workflow Interface](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html) ([*experimental*](https://openfl.readthedocs.io/en/latest/developer_guide/experimental_features.html)):
Create complex experiments that extend beyond traditional horizontal federated learning. See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb).

The quickest way to test OpenFL is to follow our [tutorials](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials). </br>
Read the [blog post](https://towardsdatascience.com/go-federated-with-openfl-8bc145a5ead1) explaining steps to train a model with OpenFL. </br>
Check out the [online documentation](https://openfl.readthedocs.io/en/latest/index.html) to launch your first federation.
- [Workflow API](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html) ([*experimental*](https://openfl.readthedocs.io/en/latest/developer_guide/experimental_features.html)):
Create complex experiments that extend beyond traditional horizontal federated learning. This API enables an experiment to be simulated locally, then seamlessly scaled to a federated setting. See the [experimental tutorials](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/) to learn how to coordinate [aggregator validation after collaborator model training](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/102_Aggregator_Validation.ipynb), [perform global differentially private federated learning](https://github.com/psfoley/openfl/tree/experimental-workflow-interface/openfl-tutorials/experimental/Global_DP), measure the amount of private information embedded in a model after collaborator training with [privacy meter](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/Privacy_Meter/readme.md), or [add a watermark to a federated model](https://github.com/securefederatedai/openfl/blob/develop/openfl-tutorials/experimental/301_MNIST_Watermarking.ipynb).

The quickest way to test OpenFL is to follow the [online documentation](https://openfl.readthedocs.io/en/latest/index.html) to launch your first federation.<br/>
Read the [blog post](https://medium.com/openfl/from-centralized-machine-learning-to-federated-learning-with-openfl-b3e61da52432) explaining steps to train a model with OpenFL. <br/>
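
For a first feel of the Workflow API, a minimal local simulation might look roughly like the sketch below. The module paths, placement decorators, and `LocalRuntime` arguments are assumed from the experimental tutorials and may differ between releases.

```python
# Minimal Workflow API sketch (module paths assumed from the experimental tutorials).
import numpy as np

from openfl.experimental.interface import FLSpec, Aggregator, Collaborator
from openfl.experimental.placement import aggregator, collaborator
from openfl.experimental.runtime import LocalRuntime


class ToyFlow(FLSpec):
    @aggregator
    def start(self):
        # Fan out to every collaborator registered with the runtime.
        self.collaborators = self.runtime.collaborators
        self.next(self.compute, foreach="collaborators")

    @collaborator
    def compute(self):
        # Stand-in for local training: each collaborator produces a metric.
        self.local_value = float(np.random.rand())
        self.next(self.join)

    @aggregator
    def join(self, inputs):
        # Aggregate the per-collaborator results (here: a plain average).
        self.mean_value = float(np.mean([i.local_value for i in inputs]))
        self.next(self.end)

    @aggregator
    def end(self):
        print(f"Federated mean: {self.mean_value:.3f}")


if __name__ == "__main__":
    runtime = LocalRuntime(
        aggregator=Aggregator(),
        collaborators=[Collaborator(name="col1"), Collaborator(name="col2")],
    )
    flow = ToyFlow()
    flow.runtime = runtime
    flow.run()
```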

## Requirements

- Ubuntu Linux 18.04+
- Python 3.7+ (recommended to use with [Virtualenv](https://virtualenv.pypa.io/en/latest/)).

OpenFL supports training with TensorFlow 2+ or PyTorch 1.3+ which should be installed separately. User can extend the list of supported Deep Learning frameworks if needed.
OpenFL supports popular NumPy-based ML frameworks like TensorFlow, PyTorch and Jax which should be installed separately.<br/>
Users can extend the list of supported Machine Learning frameworks if needed.

## Project Overview
### What is Federated Learning
@@ -82,29 +77,24 @@ You can find more details in the following articles:


### Supported Aggregation Algorithms
| Algorithm Name | Paper | PyTorch implementation | TensorFlow implementation | Other frameworks compatibility | How to use |
| -------------- | ----- | :--------------------: | :-----------------------: | :----------------------------: | ---------- |
| FedAvg | [McMahan et al., 2017](https://arxiv.org/pdf/1602.05629.pdf) |||| [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) |
| FedProx | [Li et al., 2020](https://arxiv.org/pdf/1812.06127.pdf) |||| [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) |
| FedOpt | [Reddi et al., 2020](https://arxiv.org/abs/2003.00295) |||| [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) |
| FedCurv | [Shoham et al., 2019](https://arxiv.org/pdf/1910.07796.pdf) |||| [docs](https://openfl.readthedocs.io/en/latest/about/features.html#aggregation-algorithms) |
| Algorithm Name | Paper | PyTorch implementation | TensorFlow implementation | Other frameworks compatibility |
| -------------- | ----- | :--------------------: | :-----------------------: | :----------------------------: |
| FedAvg | [McMahan et al., 2017](https://arxiv.org/pdf/1602.05629.pdf) ||||
| FedProx | [Li et al., 2020](https://arxiv.org/pdf/1812.06127.pdf) ||||
| FedOpt | [Reddi et al., 2020](https://arxiv.org/abs/2003.00295) ||||
| FedCurv | [Shoham et al., 2019](https://arxiv.org/pdf/1910.07796.pdf) ||||
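
For reference, FedAvg (McMahan et al., 2017) combines the collaborators' locally trained models as a data-weighted average:

```math
w_{t+1} = \sum_{k=1}^{K} \frac{n_k}{n}\, w_{t+1}^{k}, \qquad n = \sum_{k=1}^{K} n_k
```

where $w_{t+1}^{k}$ is the model trained at collaborator $k$ in round $t$ and $n_k$ is the number of training samples held by that collaborator.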

## Support
Please join us for our bi-monthly community meetings starting December 1 & 2, 2022! <br>
Meet with some of the OpenFL team members behind OpenFL. <br>
We will be going over our roadmap, open for Q&A, and welcome idea sharing. <br>

Calendar and links to Community calls are [here](https://wiki.lfaidata.foundation/pages/viewpage.action?pageId=70648254)

Subscribe to the OpenFL mailing list [email protected]
The OpenFL community is growing, and we invite you to be a part of it. Join the [Slack channel](https://join.slack.com/t/openfl/shared_invite/zt-ovzbohvn-T5fApk05~YS_iZhjJ5yaTw) to connect with fellow enthusiasts, share insights, and contribute to the future of federated learning.

Consider subscribing to the OpenFL mailing list [email protected]

See you there!

We also always welcome questions, issue reports, and suggestions via:

* [GitHub Issues](https://github.com/securefederatedai/openfl/issues)
* [Slack workspace](https://join.slack.com/t/openfl/shared_invite/zt-ovzbohvn-T5fApk05~YS_iZhjJ5yaTw)
* [GitHub Discussions](https://github.com/securefederatedai/openfl/discussions)

## License
This project is licensed under [Apache License Version 2.0](LICENSE). By contributing to the project, you agree to the license and copyright terms therein and release your contribution under these terms.
2 changes: 1 addition & 1 deletion docs/about/features_index/pynative.rst
@@ -7,7 +7,7 @@
:orphan:

=================
Python Native API
Python Native API (Deprecated)
=================

TODO
2 changes: 1 addition & 1 deletion docs/developer_guide/advanced_topics/overriding_agg_fn.rst
@@ -10,7 +10,7 @@ Override Aggregation Function
With the aggregator-based workflow, you can use custom aggregation functions for each task via Python\*\ API or command line interface.


Python API
Python API (Deprecated)
==========

1. Create an implementation of :class:`openfl.interface.aggregation_functions.core.AggregationFunction`.
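
For orientation, a custom implementation might look roughly like the sketch below; the `call` signature and the local-tensor attributes (`tensor`, `weight`) are assumed from the aggregation-function docs and should be checked against the installed release.

```python
import numpy as np

from openfl.interface.aggregation_functions.core import AggregationFunction


class ClippedWeightedAverage(AggregationFunction):
    """Hypothetical example: clip each collaborator's tensor, then weighted-average."""

    def call(self, local_tensors, db_iterator, tensor_name, fl_round, tags):
        # local_tensors is assumed to be a list of LocalTensor objects exposing
        # .tensor (numpy array) and .weight (relative data share) attributes.
        clipped = [np.clip(t.tensor, -1.0, 1.0) for t in local_tensors]
        weights = [t.weight for t in local_tensors]
        return np.average(clipped, weights=weights, axis=0)
```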
@@ -11,7 +11,7 @@ With the director-based workflow, you can use custom plan settings before starti
When using Python API or Director Envoy based interactive API (Deprecated), **override_config** can be used to update plan settings.


Python API
Python API (Deprecated)
==========

Modify the plan settings:
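
As a rough illustration, an **override_config** call might look like the sketch below; the `fx.run_experiment` usage and the dotted key name are assumed from the deprecated native API and the default plan, and may differ by release.

```python
import openfl.native as fx

# `collaborators` is a placeholder for the {name: collaborator} mapping built
# earlier in a native-API notebook; it is not defined in this sketch.
final_model = fx.run_experiment(
    collaborators,
    override_config={
        # Assumed dotted path into the default FL plan.
        "aggregator.settings.rounds_to_train": 5,
    },
)
```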
2 changes: 1 addition & 1 deletion docs/developer_guide/running_the_federation.notebook.rst
@@ -4,7 +4,7 @@
.. _running_notebook:

**********************************
Aggregator-Based Workflow Tutorial
Aggregator-Based Workflow Tutorial (Deprecated)
**********************************

You will start a Jupyter\* \ lab server and receive a URL you can use to access the tutorials. Jupyter notebooks are provided for PyTorch\* \ and TensorFlow\* \ that simulate a federation on a local machine.
2 changes: 1 addition & 1 deletion docs/developer_guide/utilities/splitters_data.rst
@@ -13,7 +13,7 @@ Dataset Splitters
You may apply data splitters differently depending on the |productName| workflow that you follow.


OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data
OPTION 1: Use **Native Python API** (Aggregator-Based Workflow) Functions to Split the Data (Deprecated)
===========================================================================================

Predefined |productName| data splitters functions are as follows:
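
One of those predefined splitters might be used roughly as in the sketch below; the class name and `split` arguments are assumed from the `openfl.utilities.data_splitters` module and may differ by release.

```python
import numpy as np

from openfl.utilities.data_splitters import EqualNumPyDataSplitter

# Toy labels standing in for a real dataset's targets.
labels = np.random.randint(0, 10, size=1000)

splitter = EqualNumPyDataSplitter()
# Assumed signature: split(data, num_collaborators) -> list of index arrays, one per shard.
shard_indices = splitter.split(labels, 4)
print([len(idx) for idx in shard_indices])
```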
2 changes: 1 addition & 1 deletion docs/get_started/examples.rst
@@ -30,7 +30,7 @@ See :ref:`running_the_task_runner`
:ref:`running_the_task_runner`

-------------------------
Python Native API
Python Native API (Deprecated)
-------------------------
Intended for quick simulation purposes

2 changes: 1 addition & 1 deletion docs/get_started/examples/python_native_pytorch_mnist.rst
@@ -4,7 +4,7 @@
.. _python_native_pytorch_mnist:

==========================================
Python Native API: Federated PyTorch MNIST
Python Native API: Federated PyTorch MNIST (Deprecated)
==========================================

In this tutorial, we will set up a federation and train a basic PyTorch model on the MNIST dataset using the Python Native API.
2 changes: 1 addition & 1 deletion docs/source/api/openfl_native.rst
@@ -2,7 +2,7 @@
.. # SPDX-License-Identifier: Apache-2.0
*************************************************
Native Module
Native Module (Deprecated)
*************************************************

Native modules reference:
4 changes: 2 additions & 2 deletions openfl-workspace/torch_llm_horovod/src/InHorovodrun.py
@@ -9,7 +9,7 @@

import horovod.torch as hvd

import openfl.native as fx
from openfl.interface.cli import setup_logging

SCRIPT_DIR = os.path.dirname(os.path.abspath(__file__))
sys.path.append(os.path.dirname(SCRIPT_DIR))
@@ -50,7 +50,7 @@ def get_args():

def main():
logger = getLogger(__name__)
fx.setup_logging(level="INFO", log_file=None)
setup_logging()
try:
logger.info("starting horovod")
hvd.init()
4 changes: 2 additions & 2 deletions openfl/interface/model.py
@@ -11,7 +11,6 @@
from click import confirm, group, option, pass_context, style

from openfl.federated import Plan
from openfl.pipelines import NoCompressionPipeline
from openfl.protocols import utils
from openfl.utilities.click_types import InputSpec
from openfl.utilities.dataloading import get_dataloader
@@ -168,13 +167,14 @@ def get_model(
)
data_loader = get_dataloader(plan, prefer_minimal=True, input_shape=input_shape)
task_runner = plan.get_task_runner(data_loader=data_loader)
tensor_pipe = plan.get_tensor_pipe()

model_protobuf_path = Path(model_protobuf_path).resolve()
logger.info("Loading OpenFL model protobuf: 🠆 %s", model_protobuf_path)

model_protobuf = utils.load_proto(model_protobuf_path)

tensor_dict, _ = utils.deconstruct_model_proto(model_protobuf, NoCompressionPipeline())
tensor_dict, _ = utils.deconstruct_model_proto(model_protobuf, tensor_pipe)

# This may break for multiple models.
# task_runner.set_tensor_dict will need to handle multiple models
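
The model.py hunk above swaps the hard-coded `NoCompressionPipeline` for the pipeline configured in the plan (`plan.get_tensor_pipe()`), so a model protobuf saved with a compression pipeline is unpacked with the matching one. A rough round-trip sketch, with helper names assumed from `openfl.protocols.utils`:

```python
import numpy as np

from openfl.pipelines import NoCompressionPipeline
from openfl.protocols import utils

# Toy tensor dict standing in for real model weights.
tensor_dict = {"fc.weight": np.zeros((2, 2), dtype=np.float32)}

# The pipeline used to deconstruct a model proto should match the one used to
# construct it. Assumed signatures: construct_model_proto(tensor_dict, round, pipe)
# and deconstruct_model_proto(proto, pipe) -> (tensor_dict, round).
pipe = NoCompressionPipeline()
model_proto = utils.construct_model_proto(tensor_dict, 0, pipe)
restored, round_number = utils.deconstruct_model_proto(model_proto, pipe)
assert round_number == 0
```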
