Merge branch 'securefederatedai:develop' into straggler_handling_update
ishant162 authored Jan 10, 2025
2 parents 9f27f5f + 9981be2 commit 6ed5267
Showing 19 changed files with 210 additions and 154 deletions.
23 changes: 23 additions & 0 deletions docs/releases.md
@@ -1,5 +1,28 @@
# Releases

## 1.7
[Full Release Notes](https://github.com/securefederatedai/openfl/releases/tag/v1.7)

### New Features
- [**FederatedRuntime**](https://openfl.readthedocs.io/en/latest/about/features_index/workflowinterface.html#runtimes-future-plans) for Workflow API: enables a seamless transition from a local simulation (via LocalRuntime) to a distributed Federated Learning deployment - all orchestrated from a familiar Jupyter notebook environment. Check out the [FederatedRuntime 101 Tutorial](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/workflow/FederatedRuntime/101_MNIST) to try it yourself. The initial version of the FederatedRuntime included in this release is an experimental feature that should be used only in an internal environment. We further recommend that users operate only on artificial or public data that is not considered intellectual property. The experimental tag and restrictions will be removed in future releases of OpenFL.
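The local-to-distributed transition described above can be pictured with a small, purely illustrative sketch; the class names mirror the release notes, but none of this is OpenFL's actual API:

```python
# Illustrative sketch only: hypothetical classes loosely mirroring the idea of
# swapping a local runtime for a distributed one. Not OpenFL's real interface.

class LocalRuntime:
    """Runs every participant's task in-process, for simulation."""
    def __init__(self, participants):
        self.participants = participants

    def run(self, task):
        # Execute the task for each participant sequentially in one process.
        return {name: task(name) for name in self.participants}


class FederatedRuntime(LocalRuntime):
    """Placeholder for a runtime that would dispatch tasks to remote nodes."""
    def __init__(self, participants, director_addr):
        super().__init__(participants)
        self.director_addr = director_addr  # hypothetical director endpoint

    def run(self, task):
        # A real implementation would serialize `task` and send it to each
        # collaborator node; here we simply reuse the local execution path.
        return super().run(task)


def train_step(name):
    return f"{name}: trained"

# The experiment code stays the same; only the runtime object changes.
runtime = LocalRuntime(["collab0", "collab1"])
print(runtime.run(train_step))
```

The point of the pattern is that notebook code written against `LocalRuntime` needs only the runtime object swapped to move to a distributed deployment.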

- [**Federated XGBoost**](https://github.com/securefederatedai/openfl/tree/develop/openfl-workspace/xgb_higgs): Adds support for XGBoost training in OpenFL via the TaskRunner API, illustrated with the Higgs dataset.

- [**Callbacks**](https://openfl.readthedocs.io/en/latest/openfl.callbacks.html): An abstraction for running user-defined actions in TaskRunner API or Workflow API. Callbacks can be used to perform custom actions at different stages of the Federated Learning process.
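The callback abstraction can be sketched roughly as hook points invoked at fixed stages of a run; the hook names below (`on_round_begin`, `on_round_end`) are illustrative guesses, not OpenFL's documented interface:

```python
# Minimal sketch of a callback mechanism for a round-based FL loop.
# Hook names and the driver loop are assumptions for illustration.

class Callback:
    def on_round_begin(self, round_num):
        pass

    def on_round_end(self, round_num, metrics):
        pass


class MetricLogger(Callback):
    """Collects per-round metrics without touching the training loop."""
    def __init__(self):
        self.history = []

    def on_round_end(self, round_num, metrics):
        self.history.append((round_num, metrics))


def run_rounds(num_rounds, callbacks):
    for r in range(num_rounds):
        for cb in callbacks:
            cb.on_round_begin(r)
        metrics = {"loss": 1.0 / (r + 1)}  # stand-in for real training
        for cb in callbacks:
            cb.on_round_end(r, metrics)


logger = MetricLogger()
run_rounds(3, [logger])
print(logger.history)
```

Because the loop only calls the hooks, the same callback object can be reused across APIs that share the round structure.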

### Enhanced Developer Experience
- **Streamlining OpenFL APIs**: With this release, the OpenFL Team will concentrate on the TaskRunner API and Workflow API. Consequently, the Python Native API and Interactive API have been deprecated and are scheduled for removal in future iterations.

- **FL Workspace Dockerization**: Revised Task Runner API workspace dockerization process, with TEE-ready containers (using Gramine and Intel® Software Guard Extensions). Follow the [updated instructions](https://github.com/securefederatedai/openfl/blob/develop/openfl-docker/README.md) to enhance the privacy and security of your FL experiments.

- **Federated Evaluation via TaskRunner API**: OpenFL 1.7 further simplifies the creation of Federated Evaluation experiments via the TaskRunner API (see the example [FedEval workspace](https://github.com/securefederatedai/openfl/tree/develop/openfl-workspace/torch_cnn_mnist_fed_eval)).

- **Keras 3 API**: Upgrades the base TaskRunner classes and example workspaces to Keras 3 for building state-of-the-art FL experiments with TensorFlow (more backends to be added in upcoming OpenFL releases).

- **Updated Tutorials**: Includes fixes to existing tutorial and example code, and migrates a selection of key OpenFL tutorials from deprecated APIs to the Workflow API. Check out the updated [Tutorials](https://github.com/securefederatedai/openfl/tree/develop/openfl-tutorials/experimental/workflow) folder.

- **Updated Official Documentation**: The [OpenFL documentation website](https://openfl.readthedocs.io/en/latest/index.html) has been comprehensively reviewed and reorganized to improve navigation and provide clearer content.

## 1.6
[Full Release Notes](https://github.com/securefederatedai/openfl/releases/tag/v1.6)

2 changes: 1 addition & 1 deletion tests/end_to_end/test_suites/memory_logs_tests.py
@@ -5,7 +5,7 @@
import logging
import os

from tests.end_to_end.utils.common_fixtures import fx_federation_tr, fx_federation_tr_dws
from tests.end_to_end.utils.tr_common_fixtures import fx_federation_tr, fx_federation_tr_dws
import tests.end_to_end.utils.constants as constants
from tests.end_to_end.utils import federation_helper as fed_helper, ssh_helper as ssh
from tests.end_to_end.utils.generate_report import generate_memory_report, convert_to_json
2 changes: 1 addition & 1 deletion tests/end_to_end/test_suites/sample_tests.py
@@ -4,7 +4,7 @@
import pytest
import logging

from tests.end_to_end.utils.common_fixtures import (
from tests.end_to_end.utils.tr_common_fixtures import (
fx_federation_tr,
fx_federation_tr_dws,
)
2 changes: 1 addition & 1 deletion tests/end_to_end/test_suites/task_runner_tests.py
@@ -4,7 +4,7 @@
import pytest
import logging

from tests.end_to_end.utils.common_fixtures import (
from tests.end_to_end.utils.tr_common_fixtures import (
fx_federation_tr,
fx_federation_tr_dws,
)
2 changes: 1 addition & 1 deletion tests/end_to_end/test_suites/wf_local_func_tests.py
@@ -8,7 +8,7 @@
import random
from metaflow import Step

from tests.end_to_end.utils.common_fixtures import fx_local_federated_workflow, fx_local_federated_workflow_prvt_attr
from tests.end_to_end.utils.wf_common_fixtures import fx_local_federated_workflow, fx_local_federated_workflow_prvt_attr
from tests.end_to_end.workflow.exclude_flow import TestFlowExclude
from tests.end_to_end.workflow.include_exclude_flow import TestFlowIncludeExclude
from tests.end_to_end.workflow.include_flow import TestFlowInclude
26 changes: 15 additions & 11 deletions tests/end_to_end/utils/generate_report.py
@@ -24,10 +24,14 @@ def chapter_body(self, body):

def generate_memory_report(memory_usage_dict, workspace_path):
"""
Generates a memory usage report from a CSV file.
Generates a memory usage report from the input dictionary
and saves it to a PDF file.
The contents of memory_usage_dict come from the aggregator
and collaborator memory usage JSON files in their respective logs folders.
Parameters:
file_path (str): The path to the CSV file containing memory usage data.
memory_usage_dict (dict): A dictionary containing memory usage data.
workspace_path (str): The path to the workspace where the report will be saved.
Returns:
None
@@ -37,22 +41,22 @@ def generate_memory_report(memory_usage_dict, workspace_path):

# Plotting the chart
plt.figure(figsize=(10, 5))
plt.plot(data["round_number"], data["virtual_memory/used"], marker="o")
plt.plot(data["round_number"], data["process_memory"], marker="o")
plt.title("Memory Usage per Round")
plt.xlabel("round_number")
plt.ylabel("Virtual Memory Used (MB)")
plt.ylabel("Process Memory Used (MB)")
plt.grid(True)
output_path = f"{workspace_path}/mem_usage_plot.png"
plt.savefig(output_path)
plt.close()

# Calculate statistics
min_mem = round(data["virtual_memory/used"].min(), 2)
max_mem = round(data["virtual_memory/used"].max(), 2)
mean_mem = round(data["virtual_memory/used"].mean(), 2)
variance_mem = round(data["virtual_memory/used"].var(), 2)
std_dev_mem = round(data["virtual_memory/used"].std(), 2)
slope, _, _, _, _ = linregress(data.index, data["virtual_memory/used"])
min_mem = round(data["process_memory"].min(), 2)
max_mem = round(data["process_memory"].max(), 2)
mean_mem = round(data["process_memory"].mean(), 2)
variance_mem = round(data["process_memory"].var(), 2)
std_dev_mem = round(data["process_memory"].std(), 2)
slope, _, _, _, _ = linregress(data.index, data["process_memory"])
slope = round(slope, 2)
stats_path = f"{workspace_path}/mem_stats.txt"
with open(stats_path, "w") as file:
@@ -87,7 +91,7 @@ def add_introduction(pdf):
def add_chart_analysis(pdf, output_path, data):
pdf.chapter_title("Chart Analysis")
pdf.image(output_path, w=180)
diffs = data["virtual_memory/used"].diff().round(2)
diffs = data["process_memory"].diff().round(2)
significant_changes = diffs[diffs.abs() > 500]
for index, value in significant_changes.items():
pdf.chapter_body(
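The hunks above switch the report from `virtual_memory/used` to `process_memory` while keeping the same summary statistics and the `linregress` trend slope. A dependency-free sketch of those computations, using made-up sample readings:

```python
import statistics

# Stand-in per-round process-memory readings in MB; in the real report these
# come from the aggregator/collaborator memory-usage JSON logs.
process_memory = [512.0, 530.5, 528.0, 545.25, 560.0]

min_mem = round(min(process_memory), 2)
max_mem = round(max(process_memory), 2)
mean_mem = round(statistics.mean(process_memory), 2)
std_dev_mem = round(statistics.stdev(process_memory), 2)

# Least-squares slope over the round index: the `slope` that
# scipy.stats.linregress(data.index, data["process_memory"]) would return.
x = list(range(len(process_memory)))
x_mean = statistics.mean(x)
y_mean = statistics.mean(process_memory)
slope = (
    sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, process_memory))
    / sum((xi - x_mean) ** 2 for xi in x)
)
slope = round(slope, 2)

# Round-over-round differences, mirroring diffs[diffs.abs() > 500] in the
# report code (threshold lowered here to suit the small sample values).
diffs = [b - a for a, b in zip(process_memory, process_memory[1:])]
significant = [d for d in diffs if abs(d) > 15]
print(min_mem, max_mem, mean_mem, slope, significant)
```

A positive slope flags steadily growing memory use across rounds, which is what the report's trend statistic is meant to surface.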
@@ -5,7 +5,6 @@
import collections
import concurrent.futures
import logging
import numpy as np

import tests.end_to_end.utils.constants as constants
import tests.end_to_end.utils.federation_helper as fh
@@ -21,10 +20,6 @@
"model_owner, aggregator, collaborators, workspace_path, local_bind_path",
)

workflow_local_fixture = collections.namedtuple(
"workflow_local_fixture",
"aggregator, collaborators, runtime",
)

@pytest.fixture(scope="function")
def fx_federation_tr(request):
@@ -235,136 +230,3 @@ def fx_federation_tr_dws(request):
workspace_path=workspace_path,
local_bind_path=local_bind_path,
)


@pytest.fixture(scope="function")
def fx_local_federated_workflow(request):
"""
Fixture to set up a local federated workflow for testing.
This fixture initializes an `Aggregator` and sets up a list of collaborators
based on the number specified in the test configuration. It also configures
a `LocalRuntime` with the aggregator, collaborators, and an optional backend
if specified in the test configuration.
Args:
request (FixtureRequest): The pytest request object that provides access
to the test configuration.
Yields:
LocalRuntime: An instance of `LocalRuntime` configured with the aggregator,
collaborators, and backend.
"""
# Import is done inline because Task Runner does not support importing below openfl packages
from openfl.experimental.workflow.interface import Aggregator, Collaborator
from openfl.experimental.workflow.runtime import LocalRuntime
from tests.end_to_end.utils.wf_helper import (
init_collaborator_private_attr_index,
init_collaborator_private_attr_name,
init_collaborate_pvt_attr_np,
init_agg_pvt_attr_np
)
collab_callback_func = request.param[0] if hasattr(request, 'param') and request.param else None
collab_value = request.param[1] if hasattr(request, 'param') and request.param else None
agg_callback_func = request.param[2] if hasattr(request, 'param') and request.param else None

# Get the callback functions from the locals using string
collab_callback_func_name = locals()[collab_callback_func] if collab_callback_func else None
agg_callback_func_name = locals()[agg_callback_func] if agg_callback_func else None
collaborators_list = []

if agg_callback_func_name:
aggregator = Aggregator( name="agg",
private_attributes_callable=agg_callback_func_name)
else:
aggregator = Aggregator()

# Setup collaborators
for i in range(request.config.num_collaborators):
func_var = i if collab_value == "int" else f"collaborator{i}" if collab_value == "str" else None
collaborators_list.append(
Collaborator(
name=f"collaborator{i}",
private_attributes_callable=collab_callback_func_name,
param = func_var
)
)

backend = request.config.backend if hasattr(request.config, 'backend') else None
if backend:
local_runtime = LocalRuntime(aggregator=aggregator, collaborators=collaborators_list, backend=backend)
local_runtime = LocalRuntime(aggregator=aggregator, collaborators=collaborators_list)

# Return the federation fixture
return workflow_local_fixture(
aggregator=aggregator,
collaborators=collaborators_list,
runtime=local_runtime,
)
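The deleted fixture above resolves callback functions by name through `locals()`, which works only because the helpers happen to be imported into the function's local scope. A sketch of the same lookup via an explicit registry; the helper bodies here are hypothetical stand-ins, not the real `wf_helper` implementations:

```python
# Sketch: resolving callables by name through an explicit registry rather than
# locals(). Helper names/bodies are hypothetical stand-ins for the wf_helper
# imports used by the fixtures.

def init_collaborator_private_attr_index(param):
    return {"index": param}

def init_agg_pvt_attr_np():
    return {"weights": [0.0] * 4}

CALLBACK_REGISTRY = {
    "init_collaborator_private_attr_index": init_collaborator_private_attr_index,
    "init_agg_pvt_attr_np": init_agg_pvt_attr_np,
}

def resolve_callback(name):
    """Return the registered callable for `name`, or None when no name given."""
    if name is None:
        return None
    try:
        return CALLBACK_REGISTRY[name]
    except KeyError:
        raise ValueError(f"Unknown callback: {name!r}") from None

cb = resolve_callback("init_collaborator_private_attr_index")
print(cb(3))
```

Unlike `locals()`, a registry fails loudly on an unknown name and does not silently depend on which imports are in scope.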


@pytest.fixture(scope="function")
def fx_local_federated_workflow_prvt_attr(request):
"""
Fixture to set up a local federated workflow for testing.
This fixture initializes an `Aggregator` and sets up a list of collaborators
based on the number specified in the test configuration. It also configures
a `LocalRuntime` with the aggregator, collaborators, and an optional backend
if specified in the test configuration.
Args:
request (FixtureRequest): The pytest request object that provides access
to the test configuration.
Yields:
LocalRuntime: An instance of `LocalRuntime` configured with the aggregator,
collaborators, and backend.
"""
# Import is done inline because Task Runner does not support importing below openfl packages
from openfl.experimental.workflow.interface import Aggregator, Collaborator
from openfl.experimental.workflow.runtime import LocalRuntime
from tests.end_to_end.utils.wf_helper import (
init_collaborator_private_attr_index,
init_collaborator_private_attr_name,
init_collaborate_pvt_attr_np,
init_agg_pvt_attr_np
)
collab_callback_func = request.param[0] if hasattr(request, 'param') and request.param else None
collab_value = request.param[1] if hasattr(request, 'param') and request.param else None
agg_callback_func = request.param[2] if hasattr(request, 'param') and request.param else None

# Get the callback functions from the locals using string
collab_callback_func_name = locals()[collab_callback_func] if collab_callback_func else None
agg_callback_func_name = locals()[agg_callback_func] if agg_callback_func else None
collaborators_list = []

# Setup aggregator
if agg_callback_func_name:
aggregator = Aggregator(name="agg",
private_attributes_callable=agg_callback_func_name)
else:
aggregator = Aggregator()

aggregator.private_attributes = {
"test_loader_pvt": np.random.rand(10, 28, 28) # Random data
}
# Setup collaborators
for i in range(request.config.num_collaborators):
func_var = i if collab_value == "int" else f"collaborator{i}" if collab_value == "str" else None
collab = Collaborator(
name=f"collaborator{i}",
private_attributes_callable=collab_callback_func_name,
param = func_var
)
collab.private_attributes = {
"train_loader_pvt": np.random.rand(i * 50, 28, 28),
"test_loader_pvt": np.random.rand(i * 10, 28, 28),
}
collaborators_list.append(collab)

backend = request.config.backend if hasattr(request.config, 'backend') else None
if backend:
local_runtime = LocalRuntime(aggregator=aggregator, collaborators=collaborators_list, backend=backend)
local_runtime = LocalRuntime(aggregator=aggregator, collaborators=collaborators_list)

# Return the federation fixture
return workflow_local_fixture(
aggregator=aggregator,
collaborators=collaborators_list,
runtime=local_runtime,
)
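Both deleted fixtures bundle their pieces into the `workflow_local_fixture` namedtuple that is also removed in this commit; a minimal sketch of that return pattern, with strings standing in for the real Aggregator/Collaborator/LocalRuntime objects:

```python
import collections

# Same shape as the removed workflow_local_fixture namedtuple.
workflow_local_fixture = collections.namedtuple(
    "workflow_local_fixture",
    "aggregator, collaborators, runtime",
)

# Stand-in objects; in the real fixtures these are Aggregator, Collaborator,
# and LocalRuntime instances from openfl.experimental.workflow.
fixture = workflow_local_fixture(
    aggregator="agg",
    collaborators=["collaborator0", "collaborator1"],
    runtime="local_runtime",
)

# Fields are accessible by name, which keeps test code readable.
print(fixture.aggregator, len(fixture.collaborators))
```

Returning a namedtuple rather than a plain tuple lets tests write `fixture.runtime` instead of positional indexing.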