
fix(benchmark/camelyon): actually rely on gpu setup at Dependency level #244

Merged (2 commits, Sep 3, 2024)
2 changes: 1 addition & 1 deletion benchmark/camelyon/benchmarks.py
@@ -64,7 +64,7 @@ def fed_avg(params: dict, train_folder: Path, test_folder: Path):
mode=exp_params["mode"],
cp_name=exp_params["cp_name"],
cancel_cp=exp_params["cancel_cp"],
- torch_gpu=exp_params["torch_gpu"],
@ThibaultFy (Member) commented on Sep 3, 2024:
This will break the CI, need to do a companion PR on test-release-dev.yaml

@thbcmlowk (Contributor, author) replied on Sep 3, 2024:

Yes, this is identified, and it will be updated - but thanks for the reminder!


+ use_gpu=exp_params["use_gpu"],
)

if exp_params["skip_pure_torch"]:
4 changes: 2 additions & 2 deletions benchmark/camelyon/common/utils.py
@@ -81,7 +81,7 @@ def parse_params() -> dict:
default=False,
help="Remote only: cancel the CP after registration",
)
- parser.add_argument("--torch-gpu", action="store_true", help="Use PyTorch with GPU/CUDA support")
+ parser.add_argument("--use-gpu", action="store_true", help="Use PyTorch with GPU/CUDA support")
parser.add_argument(
"--skip-pure-torch",
action="store_true",
@@ -107,7 +107,7 @@ def parse_params() -> dict:
params["nb_test_data_samples"] = args.nb_test_data_samples
params["data_path"] = args.data_path
params["cancel_cp"] = args.cancel_cp
- params["torch_gpu"] = args.torch_gpu
+ params["use_gpu"] = args.use_gpu
params["skip_pure_torch"] = args.skip_pure_torch
params["cp_name"] = args.cp_name

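The utils.py hunk above only renames the CLI flag; its behavior is unchanged. A standalone sketch (a simplified parser, not the benchmark's actual `parse_params`) showing why the renamed `--use-gpu` flag keeps GPU support opt-in:

```python
import argparse

# store_true makes the flag default to False when absent, so a plain
# benchmark run stays CPU-only unless --use-gpu is passed explicitly.
parser = argparse.ArgumentParser()
parser.add_argument("--use-gpu", action="store_true", help="Use PyTorch with GPU/CUDA support")

args = parser.parse_args([])             # no flag given
print(args.use_gpu)                      # False
args = parser.parse_args(["--use-gpu"])  # flag present
print(args.use_gpu)                      # True
```

Note that argparse maps the dashed option name `--use-gpu` to the attribute `use_gpu`, which is then copied into the `params` dict.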
7 changes: 4 additions & 3 deletions benchmark/camelyon/workflows.py
@@ -44,7 +44,7 @@ def substrafl_fed_avg(
asset_keys_path: Path,
cp_name: Optional[str],
cancel_cp: bool = False,
- torch_gpu: bool = False,
+ use_gpu: bool = False,
) -> benchmark_metrics.BenchmarkResults:
"""Execute Weldon algorithm for a fed avg strategy with substrafl API.

@@ -68,7 +68,7 @@ def substrafl_fed_avg(
Otherwise, all present keys in this file will be reused by Substra in remote mode.
cp_name (Optional[str]): Compute Plan name to display
cancel_cp (bool): if set to True, the CP will be canceled as soon as it's registered. Only work for remote mode.
- torch_gpu (bool): Use GPU default index for pytorch
+ use_gpu (bool): Use GPU for Dependency object
Returns:
dict: Results of the experiment.
"""
Expand Down Expand Up @@ -97,7 +97,7 @@ def substrafl_fed_avg(
"torch==2.3.0",
"scikit-learn==1.5.1",
]
- if not torch_gpu:
+ if not use_gpu:
pypi_dependencies += ["--extra-index-url https://download.pytorch.org/whl/cpu"]

# Dependencies
@@ -108,6 +108,7 @@ def substrafl_fed_avg(
# Keeping editable_mode=True to ensure nightly test benchmarks are ran against main substrafl git ref
editable_mode=True,
compile=True,
+ use_gpu=use_gpu,
A Member commented:
The renaming is not necessary, you can just set use_gpu=torch_gpu

@thbcmlowk (Contributor, author) replied on Sep 3, 2024:
Yes, it was mainly for consistency through the workflow and to avoid having a third variable name for this. If you prefer keeping torch_gpu I can revert tho, I don't mind!

)

# Metrics
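The conditional in the workflows.py hunk above decides which PyTorch wheels the Dependency will resolve. A standalone sketch of that selection logic (`build_pypi_dependencies` is a hypothetical helper name, not part of the benchmark):

```python
def build_pypi_dependencies(use_gpu: bool) -> list[str]:
    # Base requirements, pinned as in workflows.py.
    deps = ["torch==2.3.0", "scikit-learn==1.5.1"]
    if not use_gpu:
        # CPU-only run: point pip at PyTorch's CPU wheel index so the
        # much smaller CPU build of torch is installed.
        deps += ["--extra-index-url https://download.pytorch.org/whl/cpu"]
    return deps

print(build_pypi_dependencies(False))
print(build_pypi_dependencies(True))
```

With `use_gpu=True` the default PyPI index is kept, so CUDA-enabled torch wheels are resolved; passing the same flag through to `Dependency(..., use_gpu=use_gpu)` is what this PR fixes, so the GPU docker configuration is actually triggered rather than only the wheel selection changing.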
1 change: 1 addition & 0 deletions changes/244.fixed
@@ -0,0 +1 @@
+ Actually trigger the GPU docker configuration with `use_gpu` flag when running Camelyon benchmark