Today the number of ranks parameter is applied globally to all benchmarks in the JSON file. We should allow users to fine-tune it by providing a per-benchmark option to override this parameter. This would add flexibility and allow the same JSON file to be reused across experiments of different sizes.
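A minimal sketch of what this could look like, using hypothetical key names (the actual schema may differ): a top-level `ranks` value applies to every benchmark unless an individual entry overrides it with its own `ranks`.

```json
{
  "ranks": 128,
  "benchmarks": [
    { "name": "write" },
    { "name": "metadata-stress", "ranks": 16 }
  ]
}
```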
@sbyna what is your take on this? @xelasim reported some issues when running the metadata stress benchmark where one of its parameters depends on the total number of ranks.
Yes, the capability to override the number of ranks per benchmark makes sense. It may even be necessary in some cases, because problem partitioning may differ between benchmarks. We can still keep the global number of ranks, so anyone who wants to run all the benchmarks at the same scale would still have that option.
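A small sketch of the fallback logic a runner could apply, assuming a parsed configuration shaped like the example above (the file name and key names are hypothetical):

```python
import json


def effective_ranks(benchmark: dict, global_ranks: int) -> int:
    """Return the per-benchmark rank count if set, otherwise fall back to the global default."""
    return int(benchmark.get("ranks", global_ranks))


# Hypothetical configuration file following the sketched schema.
with open("configuration.json") as f:
    config = json.load(f)

global_ranks = int(config["ranks"])
for benchmark in config["benchmarks"]:
    ranks = effective_ranks(benchmark, global_ranks)
    print(f"{benchmark['name']}: running with {ranks} ranks")
```

This keeps the global value as the default, so existing JSON files that set only the top-level rank count would behave exactly as they do today.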