diff --git a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po index 0263e9bf3224..50125eeb379f 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po @@ -8,7 +8,7 @@ msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2023-11-23 18:31+0100\n" -"PO-Revision-Date: 2023-12-14 22:10+0000\n" +"PO-Revision-Date: 2023-12-18 20:08+0000\n" "Last-Translator: Yan Gao \n" "Language-Team: Chinese (Simplified) \n" @@ -11210,42 +11210,53 @@ msgid "" "Expose Flower version through `flwr.__version__` " "([#952](https://github.com/adap/flower/pull/952))" msgstr "" +"通过 `flwr.__version__` 公开 Flower 版本 ([#952](https://github.com/adap/" +"flower/pull/952))" #: ../../source/ref-changelog.md:648 msgid "" "`start_server` in `app.py` now returns a `History` object containing " "metrics from training ([#974](https://github.com/adap/flower/pull/974))" msgstr "" +"app.py \"中的 \"start_server \"现在会返回一个 \"History \"对象" +",其中包含训练中的指标([#974](https://github.com/adap/flower/pull/974))" #: ../../source/ref-changelog.md:649 msgid "" "Make `max_workers` (used by `ThreadPoolExecutor`) configurable " "([#978](https://github.com/adap/flower/pull/978))" msgstr "" +"使 \"max_workers\"(由 \"ThreadPoolExecutor \"使用" +")可配置([#978](https://github.com/adap/flower/pull/978))" #: ../../source/ref-changelog.md:650 msgid "" "Increase sleep time after server start to three seconds in all code " "examples ([#1086](https://github.com/adap/flower/pull/1086))" msgstr "" +"在所有代码示例中,将服务器启动后的休眠时间延长至三秒([#1086](https://github." +"com/adap/flower/pull/1086))" #: ../../source/ref-changelog.md:651 msgid "" "Added a new FAQ section to the documentation " "([#948](https://github.com/adap/flower/pull/948))" -msgstr "" +msgstr "在文档中添加了新的常见问题部分 ([#948](https://github.com/adap/flower/pull/" +"948))" #: ../../source/ref-changelog.md:652 msgid "" "And many more under-the-hood changes, library updates, documentation " "changes, and tooling improvements!" -msgstr "" +msgstr "还有更多底层更改、库更新、文档更改和工具改进!" #: ../../source/ref-changelog.md:656 msgid "" "**Removed** `flwr_example` **and** `flwr_experimental` **from release " "build** ([#869](https://github.com/adap/flower/pull/869))" msgstr "" +"**从发布版中删除**`flwr_example`**和**`flwr_experimental`** " +"([#869](https://github.com/adap/flower/pull/869))" #: ../../source/ref-changelog.md:658 msgid "" @@ -11255,10 +11266,14 @@ msgid "" "tensorflow`, `http-logger`, `ops`) are now no-op and will be removed in " "an upcoming release." msgstr "" +"自 Flower 0.12.0 起,软件包 `flwr_example` 和 `flwr_experimental` 已被弃用," +"它们不再包含在 Flower 的发布版本中。相关的额外包(`baseline`, `examples-" +"pytorch`, `examples-tensorflow`, `http-logger`, " +"`ops`)现在已不再使用,并将在即将发布的版本中移除。" #: ../../source/ref-changelog.md:660 msgid "v0.17.0 (2021-09-24)" -msgstr "" +msgstr "v0.17.0 (2021-09-24)" #: ../../source/ref-changelog.md:664 msgid "" @@ -11267,6 +11282,9 @@ msgid "" "[#790](https://github.com/adap/flower/pull/790) " "[#791](https://github.com/adap/flower/pull/791))" msgstr "" +"**实验性虚拟客户端引擎** ([#781](https://github.com/adap/flower/pull/781) " +"[#790](https://github.com/adap/flower/pull/790) [#791](https://github.com/" +"adap/flower/pull/791))" #: ../../source/ref-changelog.md:666 msgid "" @@ -11277,6 +11295,11 @@ msgid "" "The easiest way to test the new functionality is to look at the two new " "code examples called `quickstart_simulation` and `simulation_pytorch`." 
msgstr "" +"Flower 的目标之一是实现大规模研究。这一版本首次(试验性地)展示了代号为 " +"\"虚拟客户端引擎 \"的重要新功能" +"。虚拟客户端可以在单台机器或计算集群上对大量客户端进行仿真。" +"测试新功能的最简单方法是查看名为 \"quickstart_simulation \"和 " +"\"simulation_pytorch \"的两个新代码示例。" #: ../../source/ref-changelog.md:668 msgid "" @@ -11285,6 +11308,9 @@ msgid "" "known caveats. However, those who are curious are encouraged to try it " "out and share their thoughts." msgstr "" +"该功能仍处于试验阶段,因此无法保证 API 的稳定性。此外,它还没有完全准备好进入" +"黄金时间,并有一些已知的注意事项。不过,我们鼓励好奇的用户尝试使用并分享他们" +"的想法。" #: ../../source/ref-changelog.md:670 msgid "" @@ -11292,78 +11318,95 @@ msgid "" "([#828](https://github.com/adap/flower/pull/828) " "[#822](https://github.com/adap/flower/pull/822))" msgstr "" +"**新的内置策略**([#828](https://github.com/adap/flower/pull/828) " +"[#822](https://github.com/adap/flower/pull/822)" #: ../../source/ref-changelog.md:672 msgid "" "FedYogi - Federated learning strategy using Yogi on server-side. " "Implementation based on https://arxiv.org/abs/2003.00295" -msgstr "" +msgstr "FedYogi - 在服务器端使用 Yogi 的联合学习策略。基于 https://arxiv.org/abs/" +"2003.00295 实现" #: ../../source/ref-changelog.md:673 msgid "" "FedAdam - Federated learning strategy using Adam on server-side. " "Implementation based on https://arxiv.org/abs/2003.00295" -msgstr "" +msgstr "FedAdam - 在服务器端使用 Adam 的联邦学习策略。基于 https://arxiv.org/abs/" +"2003.00295 实现" #: ../../source/ref-changelog.md:675 msgid "" "**New PyTorch Lightning code example** " "([#617](https://github.com/adap/flower/pull/617))" msgstr "" +"**新的 PyTorch Lightning 代码示例** ([#617](https://github.com/adap/flower/" +"pull/617))" #: ../../source/ref-changelog.md:677 msgid "" "**New Variational Auto-Encoder code example** " "([#752](https://github.com/adap/flower/pull/752))" -msgstr "" +msgstr "**新的变分自动编码器代码示例** ([#752](https://github.com/adap/flower/pull/" +"752))" #: ../../source/ref-changelog.md:679 msgid "" "**New scikit-learn code example** " "([#748](https://github.com/adap/flower/pull/748))" msgstr "" +"**新的 scikit-learn 代码示例** ([#748](https://github.com/adap/flower/pull/" +"748))" #: ../../source/ref-changelog.md:681 msgid "" "**New experimental TensorBoard strategy** " "([#789](https://github.com/adap/flower/pull/789))" msgstr "" +"**新的实验性 TensorBoard 策略**([#789](https://github.com/adap/flower/pull/" +"789))" #: ../../source/ref-changelog.md:685 msgid "" "Improved advanced TensorFlow code example " "([#769](https://github.com/adap/flower/pull/769))" -msgstr "" +msgstr "改进的高级 TensorFlow 代码示例([#769](https://github.com/adap/flower/pull/" +"769)" #: ../../source/ref-changelog.md:686 msgid "" "Warning when `min_available_clients` is misconfigured " "([#830](https://github.com/adap/flower/pull/830))" msgstr "" +"当 `min_available_clients` 配置错误时发出警告 ([#830](https://github.com/" +"adap/flower/pull/830))" #: ../../source/ref-changelog.md:687 msgid "" "Improved gRPC server docs " "([#841](https://github.com/adap/flower/pull/841))" -msgstr "" +msgstr "改进了 gRPC 服务器文档([#841](https://github.com/adap/flower/pull/841))" #: ../../source/ref-changelog.md:688 msgid "" "Improved error message in `NumPyClient` " "([#851](https://github.com/adap/flower/pull/851))" msgstr "" +"改进了 `NumPyClient` 中的错误信息 ([#851](https://github.com/adap/flower/" +"pull/851))" #: ../../source/ref-changelog.md:689 msgid "" "Improved PyTorch quickstart code example " "([#852](https://github.com/adap/flower/pull/852))" -msgstr "" +msgstr "改进的 PyTorch 快速启动代码示例 ([#852](https://github.com/adap/flower/pull/" +"852))" #: ../../source/ref-changelog.md:693 msgid "" "**Disabled final distributed evaluation** " 
"([#800](https://github.com/adap/flower/pull/800))" -msgstr "" +msgstr "**禁用最终分布式评价** ([#800](https://github.com/adap/flower/pull/800))" #: ../../source/ref-changelog.md:695 msgid "" @@ -11372,12 +11415,15 @@ msgid "" "server-side evaluation). The prior behaviour can be enabled by passing " "`force_final_distributed_eval=True` to `start_server`." msgstr "" +"之前的行为是在所有连接的客户端上执行最后一轮分布式评估,而这通常是不需要的(" +"例如,在使用服务器端评估时)。可以通过向 `start_server` 传递 " +"`force_final_distributed_eval=True` 来启用之前的行为。" #: ../../source/ref-changelog.md:697 msgid "" "**Renamed q-FedAvg strategy** " "([#802](https://github.com/adap/flower/pull/802))" -msgstr "" +msgstr "**更名为 q-FedAvg 策略** ([#802](https://github.com/adap/flower/pull/802))" #: ../../source/ref-changelog.md:699 msgid "" @@ -11387,6 +11433,10 @@ msgid "" "deprecated) `QffedAvg` class is still available for compatibility reasons" " (it will be removed in a future release)." msgstr "" +"名为 `QffedAvg` 的策略已更名为 " +"`QFedAvg`,以更好地反映原始论文中给出的符号(q-FFL 是优化目标,q-FedAvg " +"是建议的求解器)。请注意,出于兼容性原因,原始(现已废弃)的 `QffedAvg` " +"类仍然可用(它将在未来的版本中移除)。" #: ../../source/ref-changelog.md:701 msgid "" @@ -11394,6 +11444,8 @@ msgid "" "`simulation_pytorch_legacy` " "([#791](https://github.com/adap/flower/pull/791))" msgstr "" +"**删除并重命名代码示例**`simulation_pytorch`**为**`simulation_pytorch_legacy`" +" ([#791](https://github.com/adap/flower/pull/791))" #: ../../source/ref-changelog.md:703 msgid "" @@ -11403,30 +11455,35 @@ msgid "" " existing example was kept for reference purposes, but it might be " "removed in the future." msgstr "" +"该示例已被新示例取代。新示例基于试验性虚拟客户端引擎,它将成为在 Flower 中进" +"行大多数类型大规模模拟的新的默认方式。现有示例将作为参考保留,但将来可能会删" +"除。" #: ../../source/ref-changelog.md:705 msgid "v0.16.0 (2021-05-11)" -msgstr "" +msgstr "v0.16.0 (2021-05-11)" #: ../../source/ref-changelog.md:709 msgid "" "**New built-in strategies** " "([#549](https://github.com/adap/flower/pull/549))" -msgstr "" +msgstr "**新的内置策略** ([#549](https://github.com/adap/flower/pull/549))" #: ../../source/ref-changelog.md:711 msgid "(abstract) FedOpt" -msgstr "" +msgstr "(摘要) FedOpt" #: ../../source/ref-changelog.md:712 +#, fuzzy msgid "FedAdagrad" -msgstr "" +msgstr "FedAdagrad" #: ../../source/ref-changelog.md:714 msgid "" "**Custom metrics for server and strategies** " "([#717](https://github.com/adap/flower/pull/717))" -msgstr "" +msgstr "**服务器和策略的自定义指标** ([#717](https://github.com/adap/flower/pull/" +"717))" #: ../../source/ref-changelog.md:716 msgid "" @@ -11436,6 +11493,9 @@ msgid "" "dictionary containing custom metrics from client to server. As of this " "release, custom metrics replace task-specific metrics on the server." msgstr "" +"Flower 服务器现在完全与任务无关,所有剩余的任务特定度量(如 \"准确度\"" +")都已被自定义度量字典取代。Flower 0.15 引入了从客户端向服务器传递包含自定义" +"指标的字典的功能。从本版本开始,自定义指标将取代服务器上的特定任务指标。" #: ../../source/ref-changelog.md:718 msgid "" @@ -11446,6 +11506,10 @@ msgid "" "even return *aggregated* metrics dictionaries for the server to keep " "track of." msgstr "" +"自定义度量字典现在可在两个面向用户的 API 中使用:它们可从策略方法 " +"`aggregate_fit`/`aggregate_evaluate` 返回,还可使传递给内置策略(通过 " +"`eval_fn`)的评估函数返回两个以上的评估度量。策略甚至可以返回 *aggregated* " +"指标字典,以便服务器跟踪。" #: ../../source/ref-changelog.md:720 msgid "" @@ -11454,18 +11518,22 @@ msgid "" "returning an empty `{}`), server-side evaluation functions should migrate" " from `return loss, accuracy` to `return loss, {\"accuracy\": accuracy}`." 
msgstr "" +"Stratey 实现应将其 `aggregate_fit` 和 `aggregate_evaluate` " +"方法迁移到新的返回类型(例如,只需返回空的 `{}`),服务器端评估函数应从 `" +"return loss, accuracy` 迁移到 `return loss, {\"accuracy\": accuracy}`。" #: ../../source/ref-changelog.md:722 msgid "" "Flower 0.15-style return types are deprecated (but still supported), " "compatibility will be removed in a future release." -msgstr "" +msgstr "Flower 0.15 " +"风格的返回类型已被弃用(但仍受支持),兼容性将在未来的版本中移除。" #: ../../source/ref-changelog.md:724 msgid "" "**Migration warnings for deprecated functionality** " "([#690](https://github.com/adap/flower/pull/690))" -msgstr "" +msgstr "** 过时功能的迁移警告** ([#690](https://github.com/adap/flower/pull/690))" #: ../../source/ref-changelog.md:726 msgid "" @@ -11475,6 +11543,11 @@ msgid "" "new warning messages often provide details on how to migrate to more " "recent APIs, thus easing the transition from one release to another." msgstr "" +"Flower " +"早期版本通常会迁移到新的应用程序接口,同时保持与旧版应用程序接口的兼容。" +"如果检测到使用了过时的 API,本版本将引入详细的警告信息。" +"新的警告信息通常会详细说明如何迁移到更新的 " +"API,从而简化从一个版本到另一个版本的过渡。" #: ../../source/ref-changelog.md:728 msgid "" @@ -11483,10 +11556,13 @@ msgid "" "[#692](https://github.com/adap/flower/pull/692) " "[#713](https://github.com/adap/flower/pull/713))" msgstr "" +"改进了文档和文档说明 ([#691](https://github.com/adap/flower/pull/691) " +"[#692](https://github.com/adap/flower/pull/692) [#713](https://github.com/" +"adap/flower/pull/713))" #: ../../source/ref-changelog.md:730 msgid "MXNet example and documentation" -msgstr "" +msgstr "MXNet 示例和文档" #: ../../source/ref-changelog.md:732 msgid "" @@ -11495,12 +11571,15 @@ msgid "" "[#702](https://github.com/adap/flower/pull/702) " "[#705](https://github.com/adap/flower/pull/705))" msgstr "" +"PyTorch 示例中的 FedBN 实现: 从集中到联合 ([#696](https://github.com/adap/" +"flower/pull/696) [#702](https://github.com/adap/flower/pull/702) " +"[#705](https://github.com/adap/flower/pull/705))" #: ../../source/ref-changelog.md:736 msgid "" "**Serialization-agnostic server** " "([#721](https://github.com/adap/flower/pull/721))" -msgstr "" +msgstr "**序列化无关服务器** ([#721](https://github.com/adap/flower/pull/721))" #: ../../source/ref-changelog.md:738 msgid "" @@ -11512,6 +11591,10 @@ msgid "" "these byte arrays should be interpreted (e.g., for " "serialization/deserialization)." msgstr "" +"Flower 服务器现在完全不依赖序列化。之前使用的 `Weights` 类(以反序列化的 " +"NumPy ndarrays 表示参数)已被 `Parameters` 类取代(例如在 `Strategy`中)。" +"参数 \"对象与序列化完全无关,它以字节数组的形式表示参数,\"tensor_type " +"\"属性表示如何解释这些字节数组(例如,用于序列化/反序列化)。" #: ../../source/ref-changelog.md:740 msgid "" @@ -11522,6 +11605,10 @@ msgid "" "[#721](https://github.com/adap/flower/pull/721) to see how strategies can" " easily migrate to the new format." msgstr "" +"内置策略通过在内部处理序列化和反序列化到/从 \"权重 \"来实现这种方法" +"。自定义/第三方策略实现应更新为稍有改动的策略方法定义。策略作者可查阅 PR " +"[#721](https://github.com/adap/flower/pull/721) " +"以了解如何将策略轻松迁移到新格式。" #: ../../source/ref-changelog.md:742 msgid "" @@ -11529,16 +11616,18 @@ msgid "" "`flwr.server.Server.evaluate_round` instead " "([#717](https://github.com/adap/flower/pull/717))" msgstr "" +"已弃用 `flwr.server.Server.evaluate`,改用 `flwr.server.Server." 
+"evaluate_round`([#717](https://github.com/adap/flower/pull/717)" #: ../../source/ref-changelog.md:744 msgid "v0.15.0 (2021-03-12)" -msgstr "" +msgstr "v0.15.0 (2021-03-12)" #: ../../source/ref-changelog.md:748 msgid "" "**Server-side parameter initialization** " "([#658](https://github.com/adap/flower/pull/658))" -msgstr "" +msgstr "**服务器端参数初始化** ([#658](https://github.com/adap/flower/pull/658))" #: ../../source/ref-changelog.md:750 msgid "" @@ -11546,6 +11635,8 @@ msgid "" "parameter initialization works via a new `Strategy` method called " "`initialize_parameters`." msgstr "" +"现在可以在服务器端初始化模型参数。服务器端参数初始化通过名为 " +"\"initialize_parameters \"的新 \"Strategy \"方法进行。" #: ../../source/ref-changelog.md:752 msgid "" @@ -11554,6 +11645,8 @@ msgid "" "will provide these initial parameters to the server on startup and then " "delete them to free the memory afterwards." msgstr "" +"内置策略支持名为 \"initial_parameters \"的新构造函数参数,用于设置初始参数。" +"内置策略会在启动时向服务器提供这些初始参数,然后删除它们以释放内存。" #: ../../source/ref-changelog.md:771 msgid "" @@ -11561,21 +11654,24 @@ msgid "" "continue to use the current behaviour (namely, it will ask one of the " "connected clients for its parameters and use these as the initial global " "parameters)." -msgstr "" +msgstr "如果没有向策略提供初始参数,服务器将继续使用当前行为(即向其中一个已连接的客" +"户端询问参数,并将这些参数用作初始全局参数)。" #: ../../source/ref-changelog.md:773 msgid "Deprecations" -msgstr "" +msgstr "停用" #: ../../source/ref-changelog.md:775 msgid "" "Deprecate `flwr.server.strategy.DefaultStrategy` (migrate to " "`flwr.server.strategy.FedAvg`, which is equivalent)" msgstr "" +"停用 `flwr.server.strategy.DefaultStrategy`(迁移到等价的 `flwr.server." +"strategy.FedAvg`)" #: ../../source/ref-changelog.md:777 msgid "v0.14.0 (2021-02-18)" -msgstr "" +msgstr "v0.14.0 (2021-02-18)" #: ../../source/ref-changelog.md:781 msgid "" @@ -11584,6 +11680,9 @@ msgid "" "[#572](https://github.com/adap/flower/pull/572) " "[#633](https://github.com/adap/flower/pull/633))" msgstr "" +"**通用** `Client.fit` **和** `Client.evaluate` **返回值** " +"([#610](https://github.com/adap/flower/pull/610) [#572](https://github.com/" +"adap/flower/pull/572) [#633](https://github.com/adap/flower/pull/633))" #: ../../source/ref-changelog.md:783 msgid "" @@ -11592,6 +11691,9 @@ msgid "" "This means one can return almost arbitrary values from `fit`/`evaluate` " "and make use of them on the server side!" msgstr "" +"客户端现在可以返回一个额外的字典,将 `str` 键映射为以下类型的值: " +"bool`、`bytes`、`float`、`int`、`str`。这意味着我们可以从 `fit`/`evaluate` " +"返回几乎任意的值,并在服务器端使用它们!" #: ../../source/ref-changelog.md:785 msgid "" @@ -11600,6 +11702,9 @@ msgid "" "dict)` representing the loss, number of examples, and a dictionary " "holding arbitrary problem-specific values like accuracy." msgstr "" +"这一改进还使 `fit` 和 `evaluate` 之间的返回类型更加一致:`evaluate` " +"现在应返回一个元组`(float, int, " +"dict)`,代表损失、示例数和一个包含特定问题任意值(如准确度)的字典。" #: ../../source/ref-changelog.md:787 msgid "" @@ -11610,18 +11715,26 @@ msgid "" "`evaluate`: `float, int, Dict[str, Scalar]`). See the example below for " "details." msgstr "" +"如果你想知道:此功能与现有项目兼容,额外的字典返回值是可选的。不过,新代码应" +"迁移到新的返回类型,以便与即将发布的 Flower 版本兼容(`fit`: `List[np." 
+"ndarray], int, Dict[str, Scalar]`,`evaluate`: `float, int, Dict[str, " +"Scalar]`)。详见下面的示例。" #: ../../source/ref-changelog.md:789 msgid "" "*Code example:* note the additional dictionary return values in both " "`FlwrClient.fit` and `FlwrClient.evaluate`:" -msgstr "" +msgstr "*代码示例:* 注意 `FlwrClient.fit` 和 `FlwrClient.evaluate` " +"中的附加字典返回值:" #: ../../source/ref-changelog.md:804 msgid "" "**Generalized** `config` **argument in** `Client.fit` **and** " "`Client.evaluate` ([#595](https://github.com/adap/flower/pull/595))" msgstr "" +"**在**`Client.fit` **和**`Client." +"evaluate`中泛化**`config`参数([#595](https://github.com/adap/flower/pull/" +"595))" #: ../../source/ref-changelog.md:806 msgid "" @@ -11630,6 +11743,9 @@ msgid "" "generalizes this to enable values of the following types: `bool`, " "`bytes`, `float`, `int`, `str`." msgstr "" +"配置 \"参数曾是 \"字典[str, str]\"类型" +",这意味着字典值应是字符串。新版本将其扩展为以下类型的值: " +"bool`、`bytes`、`float`、`int`、`str`。" #: ../../source/ref-changelog.md:808 msgid "" @@ -11637,50 +11753,55 @@ msgid "" "using the `config` dictionary. Yay, no more `str(epochs)` on the server-" "side and `int(config[\"epochs\"])` on the client side!" msgstr "" +"这意味着现在可以使用 `config` 字典向 `fit`/`evaluate` 传递几乎任意的值。耶," +"服务器端不再需要 `str(epochs)`,客户端不再需要 `int(config[\"epochs\"])`!" #: ../../source/ref-changelog.md:810 msgid "" "*Code example:* note that the `config` dictionary now contains non-`str` " "values in both `Client.fit` and `Client.evaluate`:" msgstr "" +"*代码示例:* 注意 `config` 字典现在在 `Client.fit` 和 `Client.evaluate` " +"中都包含非 `str` 值:" #: ../../source/ref-changelog.md:827 msgid "v0.13.0 (2021-01-08)" -msgstr "" +msgstr "v0.13.0 (2021-01-08)" #: ../../source/ref-changelog.md:831 msgid "" "New example: PyTorch From Centralized To Federated " "([#549](https://github.com/adap/flower/pull/549))" -msgstr "" +msgstr "新示例: PyTorch 从集中到联合 ([#549](https://github.com/adap/flower/pull/" +"549))" #: ../../source/ref-changelog.md:832 msgid "Improved documentation" -msgstr "" +msgstr "改进文件" #: ../../source/ref-changelog.md:833 msgid "New documentation theme ([#551](https://github.com/adap/flower/pull/551))" -msgstr "" +msgstr "新文档主题 ([#551](https://github.com/adap/flower/pull/551))" #: ../../source/ref-changelog.md:834 msgid "New API reference ([#554](https://github.com/adap/flower/pull/554))" -msgstr "" +msgstr "新的 API 参考 ([#554](https://github.com/adap/flower/pull/554))" #: ../../source/ref-changelog.md:835 msgid "" "Updated examples documentation " "([#549](https://github.com/adap/flower/pull/549))" -msgstr "" +msgstr "更新了示例文档 ([#549](https://github.com/adap/flower/pull/549))" #: ../../source/ref-changelog.md:836 msgid "" "Removed obsolete documentation " "([#548](https://github.com/adap/flower/pull/548))" -msgstr "" +msgstr "删除了过时的文档 ([#548](https://github.com/adap/flower/pull/548))" #: ../../source/ref-changelog.md:838 msgid "Bugfix:" -msgstr "" +msgstr "错误修正:" #: ../../source/ref-changelog.md:840 msgid "" @@ -11689,20 +11810,24 @@ msgid "" "([#553](https://github.com/adap/flower/pull/553) " "[#540](https://github.com/adap/flower/issues/540))." msgstr "" +"Server.fit \"完成后不会断开客户端连接,现在断开客户端连接是在 \"flwr." 
+"server.start_server \"中处理的([#553](https://github.com/adap/flower/pull/" +"553) [#540](https://github.com/adap/flower/issues/540))。" #: ../../source/ref-changelog.md:842 +#, fuzzy msgid "v0.12.0 (2020-12-07)" -msgstr "" +msgstr "v0.12.0 (2020-12-07)" #: ../../source/ref-changelog.md:844 ../../source/ref-changelog.md:860 msgid "Important changes:" -msgstr "" +msgstr "重要变更:" #: ../../source/ref-changelog.md:846 msgid "" "Added an example for embedded devices " "([#507](https://github.com/adap/flower/pull/507))" -msgstr "" +msgstr "添加了嵌入式设备示例 ([#507](https://github.com/adap/flower/pull/507))" #: ../../source/ref-changelog.md:847 msgid "" @@ -11710,6 +11835,9 @@ msgid "" "([#504](https://github.com/adap/flower/pull/504) " "[#508](https://github.com/adap/flower/pull/508))" msgstr "" +"添加了一个新的 NumPyClient(除现有的 KerasClient " +"之外)([#504](https://github.com/adap/flower/pull/504) [#508](https://github" +".com/adap/flower/pull/508)" #: ../../source/ref-changelog.md:848 msgid "" @@ -11718,14 +11846,17 @@ msgid "" "([#494](https://github.com/adap/flower/pull/494) " "[#512](https://github.com/adap/flower/pull/512))" msgstr "" +"弃用 `flwr_example` 软件包,并开始将示例迁移到顶层的 `examples` 目录 " +"([#494](https://github.com/adap/flower/pull/494) [#512](https://github.com/" +"adap/flower/pull/512))" #: ../../source/ref-changelog.md:850 msgid "v0.11.0 (2020-11-30)" -msgstr "" +msgstr "v0.11.0 (2020-11-30)" #: ../../source/ref-changelog.md:852 msgid "Incompatible changes:" -msgstr "" +msgstr "不兼容的更改:" #: ../../source/ref-changelog.md:854 msgid "" @@ -11736,22 +11867,26 @@ msgid "" "which is why we're removing it from the four methods in Strategy. To " "migrate rename the following `Strategy` methods accordingly:" msgstr "" +"重命名了策略方法([#486](https://github.com/adap/flower/pull/486)),以统一 " +"\"花朵 \"公共 API 的命名。其他公共方法/函数(例如 `Client` 中的每个方法," +"以及 `Strategy.evaluate`)不使用 `on_` 前缀,这就是我们从 Strategy " +"中的四个方法中移除它的原因。迁移时,请相应地重命名以下 `Strategy` 方法:" #: ../../source/ref-changelog.md:855 msgid "`on_configure_evaluate` => `configure_evaluate`" -msgstr "" +msgstr "`on_configure_evaluate` => `configure_evaluate`" #: ../../source/ref-changelog.md:856 msgid "`on_aggregate_evaluate` => `aggregate_evaluate`" -msgstr "" +msgstr "`on_aggregate_evaluate` => `aggregate_evaluate`" #: ../../source/ref-changelog.md:857 msgid "`on_configure_fit` => `configure_fit`" -msgstr "" +msgstr "`on_configure_fit` => `configure_fit`" #: ../../source/ref-changelog.md:858 msgid "`on_aggregate_fit` => `aggregate_fit`" -msgstr "" +msgstr "`on_aggregate_fit` => `aggregate_fit`" #: ../../source/ref-changelog.md:862 msgid "" @@ -11759,34 +11894,42 @@ msgid "" "([#479](https://github.com/adap/flower/pull/479)). To migrate use " "`FedAvg` instead." msgstr "" +"已废弃的 `DefaultStrategy` ([#479](https://github.com/adap/flower/pull/479)) " +"。迁移时请使用 `FedAvg`。" #: ../../source/ref-changelog.md:863 msgid "" "Simplified examples and baselines " "([#484](https://github.com/adap/flower/pull/484))." -msgstr "" +msgstr "简化示例和基线([#484](https://github.com/adap/flower/pull/484))。" #: ../../source/ref-changelog.md:864 msgid "" "Removed presently unused `on_conclude_round` from strategy interface " "([#483](https://github.com/adap/flower/pull/483))." msgstr "" +"删除了策略界面中目前未使用的 \"on_conclude_round\"([#483](https://github." +"com/adap/flower/pull/483))。" #: ../../source/ref-changelog.md:865 msgid "" "Set minimal Python version to 3.6.1 instead of 3.6.9 " "([#471](https://github.com/adap/flower/pull/471))." 
msgstr "" +"将最小 Python 版本设为 3.6.1,而不是 3.6.9 ([#471](https://github.com/adap/" +"flower/pull/471))." #: ../../source/ref-changelog.md:866 msgid "" "Improved `Strategy` docstrings " "([#470](https://github.com/adap/flower/pull/470))." msgstr "" +"改进了 `Strategy` docstrings([#470](https://github.com/adap/flower/pull/" +"470))。" #: ../../source/ref-example-projects.rst:2 msgid "Example projects" -msgstr "" +msgstr "项目实例" #: ../../source/ref-example-projects.rst:4 msgid "" @@ -11796,6 +11939,9 @@ msgid "" "frameworks such as `PyTorch `_ or `TensorFlow " "`_." msgstr "" +"Flower 附带了许多使用示例。这些示例演示了如何使用 Flower " +"联合不同类型的现有机器学习管道,通常是利用流行的机器学习框架,如 `PyTorch " +"`_ 或 `TensorFlow `_。" #: ../../source/ref-example-projects.rst:11 msgid "" @@ -11804,20 +11950,25 @@ msgid "" "to make them easier to use. All new examples are based in the directory " "`examples `_." msgstr "" +"Flower 的使用示例曾与 Flower 捆绑在一个名为 ``flwr_example`` " +"的软件包中。我们正在将这些示例迁移到独立项目中,以使它们更易于使用。" +"所有新示例都位于目录 `examples `_。" #: ../../source/ref-example-projects.rst:16 msgid "The following examples are available as standalone projects." -msgstr "" +msgstr "以下示例可作为独立项目使用。" #: ../../source/ref-example-projects.rst:20 msgid "Quickstart TensorFlow/Keras" -msgstr "" +msgstr "快速入门 TensorFlow/Keras" #: ../../source/ref-example-projects.rst:22 msgid "" "The TensorFlow/Keras quickstart example shows CIFAR-10 image " "classification with MobileNetV2:" -msgstr "" +msgstr "TensorFlow/Keras 快速入门示例展示了使用 MobileNetV2 进行的 CIFAR-10 " +"图像分类:" #: ../../source/ref-example-projects.rst:25 msgid "" @@ -11825,51 +11976,65 @@ msgid "" "`_" msgstr "" +"`Quickstart TensorFlow (Code) `_" #: ../../source/ref-example-projects.rst:26 +#, fuzzy msgid "" "`Quickstart TensorFlow (Tutorial) `_" msgstr "" +"`Quickstart TensorFlow (Tutorial) `_" #: ../../source/ref-example-projects.rst:27 +#, fuzzy msgid "" "`Quickstart TensorFlow (Blog Post) `_" msgstr "" +"`Quickstart TensorFlow (Blog Post) `_" #: ../../source/ref-example-projects.rst:31 #: ../../source/tutorial-quickstart-pytorch.rst:5 msgid "Quickstart PyTorch" -msgstr "" +msgstr "快速入门 PyTorch" #: ../../source/ref-example-projects.rst:33 msgid "" "The PyTorch quickstart example shows CIFAR-10 image classification with a" " simple Convolutional Neural Network:" -msgstr "" +msgstr "PyTorch 快速入门范例展示了使用简单卷积神经网络进行 CIFAR-10 图像分类的情况:" #: ../../source/ref-example-projects.rst:36 +#, fuzzy msgid "" "`Quickstart PyTorch (Code) " "`_" msgstr "" +"`Quickstart PyTorch (Code) `_" #: ../../source/ref-example-projects.rst:37 +#, fuzzy msgid "" "`Quickstart PyTorch (Tutorial) `_" msgstr "" +"`Quickstart PyTorch (Tutorial) `_" #: ../../source/ref-example-projects.rst:41 msgid "PyTorch: From Centralized To Federated" -msgstr "" +msgstr "PyTorch: 从集中到联合" #: ../../source/ref-example-projects.rst:43 msgid "" "This example shows how a regular PyTorch project can be federated using " "Flower:" -msgstr "" +msgstr "本例展示了如何使用 Flower 联合一个普通的 PyTorch 项目:" #: ../../source/ref-example-projects.rst:45 msgid "" @@ -11877,6 +12042,8 @@ msgid "" "`_" msgstr "" +"PyTorch: 从集中到联合(代码) `_" #: ../../source/ref-example-projects.rst:46 msgid "" @@ -11884,32 +12051,39 @@ msgid "" "`_" msgstr "" +"PyTorch: 从集中到联合(教程) `_" #: ../../source/ref-example-projects.rst:50 msgid "Federated Learning on Raspberry Pi and Nvidia Jetson" -msgstr "" +msgstr "树莓派和 Nvidia Jetson 上的联合学习" #: ../../source/ref-example-projects.rst:52 msgid "" "This example shows how Flower can be used to build a federated learning " "system that run across Raspberry Pi and Nvidia Jetson:" -msgstr "" 
+msgstr "本示例展示了如何利用 Flower 建立一个跨 Raspberry Pi 和 Nvidia Jetson " +"运行的联合学习系统:" #: ../../source/ref-example-projects.rst:54 msgid "" "`Federated Learning on Raspberry Pi and Nvidia Jetson (Code) " "`_" msgstr "" +"树莓派和 Nvidia Jetson 上的联合学习(代码) `_" #: ../../source/ref-example-projects.rst:55 msgid "" "`Federated Learning on Raspberry Pi and Nvidia Jetson (Blog Post) " "`_" msgstr "" +"树莓派和 Nvidia Jetson 上的联合学习(博文) `_" #: ../../source/ref-example-projects.rst:60 msgid "Legacy Examples (`flwr_example`)" -msgstr "" +msgstr "传统示例 (`flwr_example`)" #: ../../source/ref-example-projects.rst:63 msgid "" @@ -11917,10 +12091,12 @@ msgid "" "in the future. New examples are provided as standalone projects in " "`examples `_." msgstr "" +"在 `flwr_example` 中的使用示例已被弃用,今后将被移除。新示例将作为独立项目在 " +"`examples `_ 中提供。" #: ../../source/ref-example-projects.rst:69 msgid "Extra Dependencies" -msgstr "" +msgstr "额外依赖" #: ../../source/ref-example-projects.rst:71 msgid "" @@ -11929,38 +12105,44 @@ msgid "" "frameworks, so additional dependencies need to be installed before an " "example can be run." msgstr "" +"Flower 核心框架只保留了最低限度的依赖项。" +"这些示例在不同机器学习框架的背景下演示了 " +"Flower,因此在运行示例之前需要安装额外的依赖项。" #: ../../source/ref-example-projects.rst:75 msgid "For PyTorch examples::" -msgstr "" +msgstr "PyTorch 示例::" #: ../../source/ref-example-projects.rst:79 msgid "For TensorFlow examples::" -msgstr "" +msgstr "TensorFlow 示例::" #: ../../source/ref-example-projects.rst:83 msgid "For both PyTorch and TensorFlow examples::" -msgstr "" +msgstr "PyTorch 和 TensorFlow 示例::" #: ../../source/ref-example-projects.rst:87 msgid "" "Please consult :code:`pyproject.toml` for a full list of possible extras " "(section :code:`[tool.poetry.extras]`)." msgstr "" +"请参阅 :code:`pyproject.toml`,了解可能的 extras 的完整列表(章节 " +":code:`[tool.poems.extras]`)。" #: ../../source/ref-example-projects.rst:92 msgid "PyTorch Examples" -msgstr "" +msgstr "PyTorch 示例" #: ../../source/ref-example-projects.rst:94 msgid "" "Our PyTorch examples are based on PyTorch 1.7. They should work with " "other releases as well. So far, we provide the following examples." -msgstr "" +msgstr "我们的 PyTorch 示例基于 PyTorch 1." +"7。它们应该也能在其他版本中使用。到目前为止,我们提供了以下示例。" #: ../../source/ref-example-projects.rst:98 msgid "CIFAR-10 Image Classification" -msgstr "" +msgstr "CIFAR-10 图像分类" #: ../../source/ref-example-projects.rst:100 msgid "" @@ -11969,34 +12151,37 @@ msgid "" "to train a simple CNN classifier in a federated learning setup with two " "clients." msgstr "" +"CIFAR-10 和 CIFAR-100 ``_ " +"是流行的 RGB 图像数据集。Flower CIFAR-10 示例使用 PyTorch " +"在有两个客户端的联合学习设置中训练一个简单的 CNN 分类器。" #: ../../source/ref-example-projects.rst:104 #: ../../source/ref-example-projects.rst:121 #: ../../source/ref-example-projects.rst:146 msgid "First, start a Flower server:" -msgstr "" +msgstr "首先,启动 Flower 服务器:" #: ../../source/ref-example-projects.rst:106 msgid "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-server.sh" #: ../../source/ref-example-projects.rst:108 #: ../../source/ref-example-projects.rst:125 #: ../../source/ref-example-projects.rst:150 msgid "Then, start the two clients in a new terminal window:" -msgstr "" +msgstr "然后,在新的终端窗口中启动两个客户端:" #: ../../source/ref-example-projects.rst:110 msgid "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/pytorch_cifar/run-clients.sh" #: ../../source/ref-example-projects.rst:112 msgid "For more details, see :code:`src/py/flwr_example/pytorch_cifar`." 
-msgstr "" +msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_cifar`。" #: ../../source/ref-example-projects.rst:115 msgid "ImageNet-2012 Image Classification" -msgstr "" +msgstr "ImageNet-2012 图像分类" #: ../../source/ref-example-projects.rst:117 msgid "" @@ -12004,32 +12189,36 @@ msgid "" " vision datasets. The Flower ImageNet example uses PyTorch to train a " "ResNet-18 classifier in a federated learning setup with ten clients." msgstr "" +"ImageNet-2012 `_ 是主要的计算机视觉数据集之一。" +"Flower ImageNet 示例使用 PyTorch 在有十个客户端的联合学习设置中训练 ResNet-" +"18 分类器。" #: ../../source/ref-example-projects.rst:123 msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-server.sh" #: ../../source/ref-example-projects.rst:127 msgid "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/pytorch_imagenet/run-clients.sh" #: ../../source/ref-example-projects.rst:129 msgid "For more details, see :code:`src/py/flwr_example/pytorch_imagenet`." -msgstr "" +msgstr "更多详情,请参阅 :code:`src/py/flwr_example/pytorch_imagenet`。" #: ../../source/ref-example-projects.rst:133 msgid "TensorFlow Examples" -msgstr "" +msgstr "TensorFlow 示例" #: ../../source/ref-example-projects.rst:135 msgid "" "Our TensorFlow examples are based on TensorFlow 2.0 or newer. So far, we " "provide the following examples." -msgstr "" +msgstr "我们的 TensorFlow 示例基于 TensorFlow 2.0 " +"或更新版本。到目前为止,我们提供了以下示例。" #: ../../source/ref-example-projects.rst:139 msgid "Fashion-MNIST Image Classification" -msgstr "" +msgstr "时尚-MNIST 图像分类" #: ../../source/ref-example-projects.rst:141 msgid "" @@ -12039,36 +12228,40 @@ msgid "" " Fashion-MNIST and trains a simple image classification model over those " "partitions." msgstr "" +"`Fashion-MNIST `_ " +"经常被用作机器学习的 \"你好,世界!\"。我们遵循这一传统,提供了一个从时尚-" +"MNIST 中随机抽样本地数据集的示例,并在这些分区上训练一个简单的图像分类模型。" #: ../../source/ref-example-projects.rst:148 msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-server.sh" #: ../../source/ref-example-projects.rst:152 msgid "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" -msgstr "" +msgstr "$ ./src/py/flwr_example/tensorflow_fashion_mnist/run-clients.sh" #: ../../source/ref-example-projects.rst:154 msgid "" "For more details, see " ":code:`src/py/flwr_example/tensorflow_fashion_mnist`." -msgstr "" +msgstr "更多详情,请参阅 :code:`src/py/flwr_example/tensorflow_fashion_mnist`。" #: ../../source/ref-faq.rst:4 msgid "" "This page collects answers to commonly asked questions about Federated " "Learning with Flower." -msgstr "" +msgstr "本页收集了有关 \"Flower 联合学习 \"常见问题的答案。" #: ../../source/ref-faq.rst msgid ":fa:`eye,mr-1` Can Flower run on Juptyter Notebooks / Google Colab?" -msgstr "" +msgstr ":fa:`eye,mr-1` Flower 可以在 Juptyter Notebooks / Google Colab 上运行吗?" #: ../../source/ref-faq.rst:8 msgid "" "Yes, it can! Flower even comes with a few under-the-hood optimizations to" " make it work even better on Colab. Here's a quickstart example:" -msgstr "" +msgstr "是的,它可以!Flower 甚至还进行了一些底层优化,使其在 Colab " +"上运行得更好。下面是一个快速启动示例:" #: ../../source/ref-faq.rst:10 msgid "" @@ -12076,6 +12269,8 @@ msgid "" "`_" msgstr "" +"`Flower 模拟 PyTorch `_" #: ../../source/ref-faq.rst:11 msgid "" @@ -12083,10 +12278,12 @@ msgid "" "`_" msgstr "" +"`Flower模拟TensorFlow/Keras `_" #: ../../source/ref-faq.rst msgid ":fa:`eye,mr-1` How can I run Federated Learning on a Raspberry Pi?" 
-msgstr "" +msgstr ":fa:`eye,mr-1` 如何在 Raspberry Pi 上运行联合学习?" #: ../../source/ref-faq.rst:15 msgid "" @@ -12095,10 +12292,14 @@ msgid "" " and the corresponding `GitHub code example " "`_." msgstr "" +"请点击此处查看有关嵌入式设备联合学习的 \"博文\"`_和相应的 \"GitHub 代码示例\"`_。" #: ../../source/ref-faq.rst msgid ":fa:`eye,mr-1` Does Flower support federated learning on Android devices?" -msgstr "" +msgstr ":fa:`eye,mr-1` Flower 是否支持安卓设备上的联合学习?" #: ../../source/ref-faq.rst:19 msgid "" @@ -12106,44 +12307,55 @@ msgid "" "`_ or check out the code examples:" msgstr "" +"是的,确实如此。请查看我们的 \"博客文章 `_\" 或查看代码示例:" #: ../../source/ref-faq.rst:21 msgid "" "`Android Kotlin example `_" msgstr "" +"`Android Kotlin 示例 `_" #: ../../source/ref-faq.rst:22 msgid "`Android Java example `_" -msgstr "" +msgstr "Android Java 示例 `_" #: ../../source/ref-faq.rst msgid ":fa:`eye,mr-1` Can I combine federated learning with blockchain?" -msgstr "" +msgstr ":fa:`eye,mr-1` 我可以将联合学习与区块链结合起来吗?" #: ../../source/ref-faq.rst:26 msgid "" "Yes, of course. A list of available examples using Flower within a " "blockchain environment is available here:" -msgstr "" +msgstr "当然可以。有关在区块链环境中使用 Flower 的可用示例列表,请点击此处:" #: ../../source/ref-faq.rst:28 msgid "" "`Flower meets Nevermined GitHub Repository `_." msgstr "" +"`Flower meets Nevermined GitHub Repository `_." #: ../../source/ref-faq.rst:29 +#, fuzzy msgid "" "`Flower meets Nevermined YouTube video " "`_." msgstr "" +"`Flower meets Nevermined YouTube video `_." #: ../../source/ref-faq.rst:30 +#, fuzzy msgid "" "`Flower meets KOSMoS `_." msgstr "" +"`Flower meets KOSMoS `_." #: ../../source/ref-faq.rst:31 msgid "" @@ -12151,16 +12363,22 @@ msgid "" "learning-same-mask-different-faces-imen-" "ayari/?trackingId=971oIlxLQ9%2BA9RB0IQ73XQ%3D%3D>`_ ." msgstr "" +"\"Flower 与 Talan 的邂逅 \"博文 `_ 。" #: ../../source/ref-faq.rst:32 +#, fuzzy msgid "" "`Flower meets Talan GitHub Repository " "`_ ." msgstr "" +"Flower 与 Talan 的 GitHub 代码库 `_ ." #: ../../source/ref-telemetry.md:1 msgid "Telemetry" -msgstr "" +msgstr "遥测功能" #: ../../source/ref-telemetry.md:3 msgid "" @@ -12169,6 +12387,8 @@ msgid "" "Flower team to understand how Flower is used and what challenges users " "might face." msgstr "" +"Flower 开源项目收集**匿名**使用指标,以便在充分知情的情况下做出改进 Flower " +"的决定。这样做能让 Flower 团队了解 Flower 的使用情况以及用户可能面临的挑战。" #: ../../source/ref-telemetry.md:5 msgid "" @@ -12176,20 +12396,23 @@ msgid "" " Staying true to this statement, Flower makes it easy to disable " "telemetry for users that do not want to share anonymous usage metrics." msgstr "" +"**Flower 是一个用于协作式人工智能和数据科学的友好框架。** Flower " +"遵循这一声明,让不想分享匿名使用指标的用户可以轻松禁用遥测技术。" #: ../../source/ref-telemetry.md:7 msgid "Principles" -msgstr "" +msgstr "原则" #: ../../source/ref-telemetry.md:9 msgid "We follow strong principles guarding anonymous usage metrics collection:" -msgstr "" +msgstr "我们遵循严格的匿名使用指标收集原则:" #: ../../source/ref-telemetry.md:11 msgid "" "**Optional:** You will always be able to disable telemetry; read on to " "learn “[How to opt-out](#how-to-opt-out)”." -msgstr "" +msgstr "**可选:** 您始终可以禁用遥测功能;请继续阅读\"[如何退出](#how-to-opt-out)\"" +"。" #: ../../source/ref-telemetry.md:12 msgid "" @@ -12198,6 +12421,8 @@ msgid "" "metrics](#collected-metrics)” to understand what metrics are being " "reported." 
msgstr "" +"**匿名:** 报告的使用指标是匿名的,不包含任何个人身份信息 (PII)。请参阅\"" +"[收集的指标](#collected-metrics) \"了解报告的指标。" #: ../../source/ref-telemetry.md:13 msgid "" @@ -12205,17 +12430,20 @@ msgid "" "reported; see the section “[How to inspect what is being reported](#how-" "to-inspect-what-is-being-reported)”" msgstr "" +"**透明:** 您可以轻松查看正在报告的匿名指标;请参阅\"[如何查看正在报告的指标" +"](#how-to-inspect-what-is-being-reported)\"部分" #: ../../source/ref-telemetry.md:14 msgid "" "**Open for feedback:** You can always reach out to us if you have " "feedback; see the section “[How to contact us](#how-to-contact-us)” for " "details." -msgstr "" +msgstr "**欢迎反馈:** 如果您有反馈意见,可以随时联系我们;详情请参见\"[如何联系我们" +"](#how-to-contact-us) \"部分。" #: ../../source/ref-telemetry.md:16 msgid "How to opt-out" -msgstr "" +msgstr "如何退订" #: ../../source/ref-telemetry.md:18 msgid "" @@ -12224,6 +12452,9 @@ msgid "" "`FLWR_TELEMETRY_ENABLED=0`. Assuming you are starting a Flower server or " "client, simply do so by prepending your command as in:" msgstr "" +"Flower 启动时,会检查环境变量 `FLWR_TELEMETRY_ENABLED` 是否存在。通过设置 " +"`FLWR_TELEMETRY_ENABLED=0` 可以轻松禁用遥测功能。假设你启动的是 Flower " +"服务器或客户端,只需在命令前添加以下内容即可:" #: ../../source/ref-telemetry.md:24 msgid "" @@ -12231,14 +12462,16 @@ msgid "" " `.bashrc` (or whatever configuration file applies to your environment) " "to disable Flower telemetry permanently." msgstr "" +"或者,你也可以在 `.bashrc`(或任何适用于你的环境的配置文件)中导出 " +"`FLWR_TELEMETRY_ENABLED=0` 来永久禁用 Flower telemetry。" #: ../../source/ref-telemetry.md:26 msgid "Collected metrics" -msgstr "" +msgstr "收集的指标" #: ../../source/ref-telemetry.md:28 msgid "Flower telemetry collects the following metrics:" -msgstr "" +msgstr "Flower 遥测技术收集以下指标:" #: ../../source/ref-telemetry.md:30 msgid "" @@ -12246,13 +12479,15 @@ msgid "" "being used. This helps us to decide whether we should invest effort into " "releasing a patch version for an older version of Flower or instead use " "the bandwidth to build new features." -msgstr "" +msgstr "**了解目前使用的 Flower 版本。这有助于我们决定是否应该投入精力为旧版本的 " +"Flower 发布补丁版本,还是利用带宽来构建新功能。" #: ../../source/ref-telemetry.md:32 msgid "" "**Operating system.** Enables us to answer questions such as: *Should we " "create more guides for Linux, macOS, or Windows?*" -msgstr "" +msgstr "**操作系统**使我们能够回答以下问题: *我们应该为 Linux、macOS 还是 Windows " +"创建更多指南?*" #: ../../source/ref-telemetry.md:34 msgid "" @@ -12261,20 +12496,24 @@ msgid "" "Python or stop supporting them and start taking advantage of new Python " "features." msgstr "" +"**例如,了解 Python 版本有助于我们决定是否应该投入精力支持旧版本的 Python," +"还是停止支持这些版本并开始利用新的 Python 功能。" #: ../../source/ref-telemetry.md:36 msgid "" "**Hardware properties.** Understanding the hardware environment that " "Flower is being used in helps to decide whether we should, for example, " "put more effort into supporting low-resource environments." -msgstr "" +msgstr "**硬件属性** 了解 Flower " +"的硬件使用环境,有助于决定我们是否应在支持低资源环境等方面投入更多精力。" #: ../../source/ref-telemetry.md:38 msgid "" "**Execution mode.** Knowing what execution mode Flower starts in enables " "us to understand how heavily certain features are being used and better " "prioritize based on that." -msgstr "" +msgstr "** 执行模式** 了解 Flower " +"的启动执行模式,能让我们了解某些功能的使用率,并据此更好地确定优先级。" #: ../../source/ref-telemetry.md:40 msgid "" @@ -12283,6 +12522,8 @@ msgid "" "types not only start Flower workloads but also successfully complete " "them." 
msgstr "" +"**每次 Flower 工作负载启动时,Flower 遥测都会随机分配一个内存集群 ID。这样," +"我们就能了解哪些设备类型不仅启动了 Flower 工作负载,而且还成功完成了它们。" #: ../../source/ref-telemetry.md:42 msgid "" @@ -12295,6 +12536,10 @@ msgid "" "in order to reproduce the issue, multiple workloads must be started at " "the same time." msgstr "" +"**Source.** Flower 遥测会在第一次生成遥测事件时,尝试在 `~/.flwr/source` " +"中存储一个随机源 ID。源 ID 对于识别问题是否反复出现或问题是否由多个集群同时运" +"行触发(这在模拟中经常发生)非常重要。例如,如果设备同时运行多个工作负载并导" +"致问题,那么为了重现问题,必须同时启动多个工作负载。" #: ../../source/ref-telemetry.md:44 msgid "" @@ -12303,6 +12548,9 @@ msgid "" "request mentioning the source ID to `telemetry@flower.dev`. All events " "related to that source ID will then be permanently deleted." msgstr "" +"您可以随时删除源 ID。如果您希望删除特定源 ID 下记录的所有事件,可以向 " +"`telemetry@flower.dev` 发送删除请求,并提及该源 ID。届时,与该源 ID " +"相关的所有事件都将被永久删除。" #: ../../source/ref-telemetry.md:46 msgid "" @@ -12312,17 +12560,21 @@ msgid "" "any changes to the metrics collected and publish changes in the " "changelog." msgstr "" +"我们不会收集任何个人身份信息。如果您认为所收集的任何指标可能以任何方式被滥用" +",请[与我们联系](#how-to-contact-us)。我们将更新本页面,以反映对所收集指标" +"的任何更改,并在更新日志中公布更改内容。" #: ../../source/ref-telemetry.md:48 msgid "" "If you think other metrics would be helpful for us to better guide our " "decisions, please let us know! We will carefully review them; if we are " "confident that they do not compromise user privacy, we may add them." -msgstr "" +msgstr "如果您认为其他指标有助于我们更好地指导决策,请告诉我们!我们将仔细审查这些指" +"标;如果我们确信它们不会损害用户隐私,我们可能会添加这些指标。" #: ../../source/ref-telemetry.md:50 msgid "How to inspect what is being reported" -msgstr "" +msgstr "如何检查报告中的内容" #: ../../source/ref-telemetry.md:52 msgid "" @@ -12333,16 +12585,20 @@ msgid "" "`FLWR_TELEMETRY_ENABLED` so that you can inspect the telemetry feature " "without sending any metrics." msgstr "" +"我们希望能让您轻松查看所报告的匿名使用指标。通过设置环境变量 " +"`FLWR_TELEMETRY_LOGGING=1` 可以查看所有报告的遥测信息。日志记录默认为禁用。" +"您可以不使用 `FLWR_TELEMETRY_ENABLED` " +"而单独使用日志记录,这样就可以在不发送任何指标的情况下检查遥测功能。" #: ../../source/ref-telemetry.md:58 msgid "" "The inspect Flower telemetry without sending any anonymous usage metrics," " use both environment variables:" -msgstr "" +msgstr "在不发送任何匿名使用指标的情况下检查 Flower 遥测,可使用这两个环境变量:" #: ../../source/ref-telemetry.md:64 msgid "How to contact us" -msgstr "" +msgstr "如何联系我们" #: ../../source/ref-telemetry.md:66 msgid "" @@ -12351,22 +12607,26 @@ msgid "" "[Slack](https://flower.dev/join-slack/) (channel `#telemetry`) or email " "(`telemetry@flower.dev`)." msgstr "" +"我们希望听到您的意见。如果您对如何改进我们处理匿名使用指标的方式有任何反馈或" +"想法,请通过 [Slack](https://flower.dev/join-slack/) (频道 `#telemetry`)" +"或电子邮件 (`telemetry@flower.dev`)与我们联系。" #: ../../source/tutorial-quickstart-android.rst:-1 msgid "" "Read this Federated Learning quickstart tutorial for creating an Android " "app using Flower." -msgstr "" +msgstr "阅读本 Federated Learning 快速入门教程,了解如何使用 Flower 创建 Android " +"应用程序。" #: ../../source/tutorial-quickstart-android.rst:5 msgid "Quickstart Android" -msgstr "" +msgstr "快速入门 Android" #: ../../source/tutorial-quickstart-android.rst:10 msgid "" "Let's build a federated learning system using TFLite and Flower on " "Android!" -msgstr "" +msgstr "让我们在 Android 上使用 TFLite 和 Flower 构建一个联合学习系统!" #: ../../source/tutorial-quickstart-android.rst:12 msgid "" @@ -12374,20 +12634,24 @@ msgid "" "`_ to learn " "more." msgstr "" +"请参阅 \"完整代码示例 `_\" 了解更多信息。" #: ../../source/tutorial-quickstart-fastai.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with FastAI to train a vision model on CIFAR-10." 
msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 FastAI 在 " +"CIFAR-10 上训练视觉模型。" #: ../../source/tutorial-quickstart-fastai.rst:5 msgid "Quickstart fastai" -msgstr "" +msgstr "快速入门 fastai" #: ../../source/tutorial-quickstart-fastai.rst:10 msgid "Let's build a federated learning system using fastai and Flower!" -msgstr "" +msgstr "让我们用 fastai 和 Flower 建立一个联合学习系统!" #: ../../source/tutorial-quickstart-fastai.rst:12 msgid "" @@ -12395,22 +12659,28 @@ msgid "" "`_ " "to learn more." msgstr "" +"请参阅 \"完整代码示例 `_\" 了解更多信息。" #: ../../source/tutorial-quickstart-huggingface.rst:-1 msgid "" "Check out this Federating Learning quickstart tutorial for using Flower " "with HuggingFace Transformers in order to fine-tune an LLM." msgstr "" +"查看此 Federating Learning 快速入门教程,了解如何使用 Flower 和 HuggingFace " +"Transformers 来微调 LLM。" #: ../../source/tutorial-quickstart-huggingface.rst:5 +#, fuzzy msgid "Quickstart 🤗 Transformers" -msgstr "" +msgstr "快速入门 🤗 变形金刚" #: ../../source/tutorial-quickstart-huggingface.rst:10 +#, fuzzy msgid "" "Let's build a federated learning system using Hugging Face Transformers " "and Flower!" -msgstr "" +msgstr "让我们用 \"抱抱脸变形金刚 \"和 \"Flower \"建立一个联合学习系统!" #: ../../source/tutorial-quickstart-huggingface.rst:12 msgid "" @@ -12420,10 +12690,14 @@ msgid "" " over a dataset of IMDB ratings. The end goal is to detect if a movie " "rating is positive or negative." msgstr "" +"我们将利用 \"拥抱面孔 \"技术,使用 Flower " +"在多个客户端上联合训练语言模型。更具体地说,我们将对预先训练好的 Transformer " +"模型(distilBERT)进行微调,以便在 IMDB " +"评分数据集上进行序列分类。最终目标是检测电影评分是正面还是负面。" #: ../../source/tutorial-quickstart-huggingface.rst:18 msgid "Dependencies" -msgstr "" +msgstr "依赖关系" #: ../../source/tutorial-quickstart-huggingface.rst:20 msgid "" @@ -12432,14 +12706,18 @@ msgid "" ":code:`torch`, and :code:`transformers`. This can be done using " ":code:`pip`:" msgstr "" +"要学习本教程,您需要安装以下软件包: :code:`datasets`、 :code:`evaluate`、 " +":code:`flwr`、 :code:`torch`和 :code:`transformers`。这可以通过 :code:`pip` " +"来完成:" #: ../../source/tutorial-quickstart-huggingface.rst:30 +#, fuzzy msgid "Standard Hugging Face workflow" -msgstr "" +msgstr "标准 Hugging Face 工作流程" #: ../../source/tutorial-quickstart-huggingface.rst:33 msgid "Handling the data" -msgstr "" +msgstr "处理数据" #: ../../source/tutorial-quickstart-huggingface.rst:35 msgid "" @@ -12447,10 +12725,13 @@ msgid "" "library. We then need to tokenize the data and create :code:`PyTorch` " "dataloaders, this is all done in the :code:`load_data` function:" msgstr "" +"为了获取 IMDB 数据集,我们将使用 Hugging Face 的 :code:`datasets` " +"库。然后,我们需要对数据进行标记化,并创建 :code:`PyTorch` 数据加载器," +"这些都将在 :code:`load_data` 函数中完成:" #: ../../source/tutorial-quickstart-huggingface.rst:81 msgid "Training and testing the model" -msgstr "" +msgstr "训练和测试模型" #: ../../source/tutorial-quickstart-huggingface.rst:83 msgid "" @@ -12458,24 +12739,28 @@ msgid "" "take care of the training and testing. 
This is very similar to any " ":code:`PyTorch` training or testing loop:" msgstr "" +"有了创建 trainloader 和 testloader 的方法后,我们就可以进行训练和测试了。" +"这与任何 :code:`PyTorch` 训练或测试循环都非常相似:" #: ../../source/tutorial-quickstart-huggingface.rst:121 msgid "Creating the model itself" -msgstr "" +msgstr "创建模型本身" #: ../../source/tutorial-quickstart-huggingface.rst:123 msgid "" "To create the model itself, we will just load the pre-trained distillBERT" " model using Hugging Face’s :code:`AutoModelForSequenceClassification` :" msgstr "" +"要创建模型本身,我们只需使用 Hugging Face 的 " +":code:`AutoModelForSequenceClassification` 加载预训练的 distillBERT 模型:" #: ../../source/tutorial-quickstart-huggingface.rst:136 msgid "Federating the example" -msgstr "" +msgstr "将示例联合起来" #: ../../source/tutorial-quickstart-huggingface.rst:139 msgid "Creating the IMDBClient" -msgstr "" +msgstr "创建 IMDBClient" #: ../../source/tutorial-quickstart-huggingface.rst:141 msgid "" @@ -12483,6 +12768,9 @@ msgid "" "Flower client class (inheriting from :code:`flwr.client.NumPyClient`). " "This is very easy, as our model is a standard :code:`PyTorch` model:" msgstr "" +"要将我们的示例联合到多个客户端,我们首先需要编写 Flower 客户端类(继承自 " +":code:`flwr.client.NumPyClient`)。这很容易,因为我们的模型是一个标准的 " +":code:`PyTorch` 模型:" #: ../../source/tutorial-quickstart-huggingface.rst:169 msgid "" @@ -12493,10 +12781,13 @@ msgid "" ":code:`evaluate` function tests the model locally and returns the " "relevant metrics." msgstr "" +":code:`get_parameters` 函数允许服务器获取客户端的参数。相反,:code:`set_param" +"eters`函数允许服务器将其参数发送给客户端。最后,:code:`fit`函数在本地为客户端" +"训练模型,:code:`evaluate`函数在本地测试模型并返回相关指标。" #: ../../source/tutorial-quickstart-huggingface.rst:175 msgid "Starting the server" -msgstr "" +msgstr "启动服务器" #: ../../source/tutorial-quickstart-huggingface.rst:177 msgid "" @@ -12507,6 +12798,10 @@ msgid "" "all the clients' weights at each round) and then using the " ":code:`flwr.server.start_server` function:" msgstr "" +"现在我们有了实例化客户端的方法,我们需要创建服务器,以便汇总结果。使用 " +"Flower,首先选择一个策略(这里我们使用 " +":code:`FedAvg`,它将把全局权重定义为每轮所有客户端权重的平均值),然后使用 " +":code:`flwr.server.start_server`函数,就可以非常轻松地完成这项工作:" #: ../../source/tutorial-quickstart-huggingface.rst:205 msgid "" @@ -12514,20 +12809,22 @@ msgid "" "aggregate the metrics distributed amongst the clients (basically this " "allows us to display a nice average accuracy and loss for every round)." msgstr "" +"使用 :code:`weighted_average` 函数是为了提供一种方法来汇总分布在客户端的指标" +"(基本上,这可以让我们显示每一轮的平均精度和损失)。" #: ../../source/tutorial-quickstart-huggingface.rst:209 msgid "Putting everything together" -msgstr "" +msgstr "把所有东西放在一起" #: ../../source/tutorial-quickstart-huggingface.rst:211 msgid "We can now start client instances using:" -msgstr "" +msgstr "现在我们可以使用:" #: ../../source/tutorial-quickstart-huggingface.rst:221 msgid "" "And they will be able to connect to the server and start the federated " "training." -msgstr "" +msgstr "他们就能连接到服务器,开始联合培训。" #: ../../source/tutorial-quickstart-huggingface.rst:223 msgid "" @@ -12537,35 +12834,43 @@ msgid "" "huggingface](https://github.com/adap/flower/tree/main/examples" "/quickstart-huggingface)." msgstr "" +"如果您想查看所有内容,请查看完整的代码示例: [https://github.com/adap/flower/" +"tree/main/examples/quickstart-huggingface](https://github.com/adap/flower/" +"tree/main/examples/quickstart-huggingface)." #: ../../source/tutorial-quickstart-huggingface.rst:227 msgid "" "Of course, this is a very basic example, and a lot can be added or " "modified, it was just to showcase how simply we could federate a Hugging " "Face workflow using Flower." 
-msgstr "" +msgstr "当然,这只是一个非常基本的示例,还可以添加或修改很多内容," +"只是为了展示我们可以如何简单地使用 Flower 联合 \"拥抱的脸 \"工作流程。" #: ../../source/tutorial-quickstart-huggingface.rst:230 msgid "" "Note that in this example we used :code:`PyTorch`, but we could have very" " well used :code:`TensorFlow`." -msgstr "" +msgstr "请注意,在本例中我们使用了 :code:`PyTorch`,但也完全可以使用 " +":code:`TensorFlow`。" #: ../../source/tutorial-quickstart-ios.rst:-1 msgid "" "Read this Federated Learning quickstart tutorial for creating an iOS app " "using Flower to train a neural network on MNIST." msgstr "" +"阅读本 Federated Learning 快速入门教程,了解如何使用 Flower 创建 iOS " +"应用程序,并在 MNIST 上训练神经网络。" #: ../../source/tutorial-quickstart-ios.rst:5 msgid "Quickstart iOS" -msgstr "" +msgstr "快速入门 iOS" #: ../../source/tutorial-quickstart-ios.rst:10 msgid "" "In this tutorial we will learn how to train a Neural Network on MNIST " "using Flower and CoreML on iOS devices." -msgstr "" +msgstr "在本教程中,我们将学习如何在 iOS 设备上使用 Flower 和 CoreML 在 MNIST " +"上训练神经网络。" #: ../../source/tutorial-quickstart-ios.rst:12 msgid "" @@ -12574,12 +12879,16 @@ msgid "" "`_. For the Flower " "client implementation in iOS, it is recommended to use Xcode as our IDE." msgstr "" +"首先,为了运行 Flower Python 服务器,建议创建一个虚拟环境,并在 `virtualenv " +"`_ 中运行一切。对于在 " +"iOS 中实现 Flower 客户端,建议使用 Xcode 作为我们的集成开发环境。" #: ../../source/tutorial-quickstart-ios.rst:15 msgid "" "Our example consists of one Python *server* and two iPhone *clients* that" " all have the same model." -msgstr "" +msgstr "我们的示例包括一个 Python 服务器*和两个 iPhone " +"客户端*,它们都具有相同的模型。" #: ../../source/tutorial-quickstart-ios.rst:17 msgid "" @@ -12589,17 +12898,22 @@ msgid "" "Finally, the *server* sends this improved version of the model back to " "each *client*. A complete cycle of weight updates is called a *round*." msgstr "" +"*客户*负责根据其本地数据集为模型生成单独的权重更新。然后,这些更新会被发送到*" +"服务器,由*服务器汇总后生成一个更好的模型。最后,*服务器*将改进后的模型发送回" +"每个*客户端*。一个完整的权重更新周期称为一个*轮*。" #: ../../source/tutorial-quickstart-ios.rst:21 msgid "" "Now that we have a rough idea of what is going on, let's get started to " "setup our Flower server environment. We first need to install Flower. You" " can do this by using pip:" -msgstr "" +msgstr "现在我们已经有了一个大致的概念,让我们开始设置 Flower 服务器环境吧。首先," +"我们需要安装 Flower。你可以使用 pip 来安装:" #: ../../source/tutorial-quickstart-ios.rst:27 +#, fuzzy msgid "Or Poetry:" -msgstr "" +msgstr "或诗歌:" #: ../../source/tutorial-quickstart-ios.rst:36 msgid "" @@ -12609,6 +12923,9 @@ msgid "" "Flower client with CoreML, that has been implemented and stored inside " "the Swift SDK. The client implementation can be seen below:" msgstr "" +"现在我们已经安装了所有依赖项,让我们使用 CoreML 作为本地训练管道和 MNIST " +"作为数据集,运行一个简单的分布式训练。为了简单起见,我们将使用 CoreML 的完整 " +"Flower 客户端,该客户端已在 Swift SDK 中实现并存储。客户端实现如下:" #: ../../source/tutorial-quickstart-ios.rst:72 msgid "" @@ -12620,10 +12937,16 @@ msgid "" "`_ to learn more " "about the app." msgstr "" +"让我们在 Xcode 中创建一个新的应用程序项目,并在项目中添加 :code:`flwr` " +"作为依赖关系。对于我们的应用程序,我们将在 :code:`FLiOSModel.swift` " +"中存储应用程序的逻辑,在 :code:`ContentView.swift` 中存储 UI " +"元素。在本快速入门中,我们将更多地关注 :code:`FLiOSModel.swift`。请参阅 " +"\"完整代码示例 `_\" " +"以了解更多有关应用程序的信息。" #: ../../source/tutorial-quickstart-ios.rst:75 msgid "Import Flower and CoreML related packages in :code:`FLiOSModel.swift`:" -msgstr "" +msgstr "在 :code:`FLiOSModel.swift` 中导入 Flower 和 CoreML 相关软件包:" #: ../../source/tutorial-quickstart-ios.rst:83 msgid "" @@ -12635,6 +12958,11 @@ msgid "" "into :code:`MLBatchProvider` object. The preprocessing is done inside " ":code:`DataLoader.swift`." 
msgstr "" +"然后通过拖放将 mlmodel 添加到项目中,在部署到 iOS 设备时,mlmodel " +"将被捆绑到应用程序中。我们需要传递 url 以访问 mlmodel 并运行 CoreML " +"机器学习进程,可通过调用函数 :code:`Bundle.main.url` 获取。对于 MNIST " +"数据集,我们需要将其预处理为 :code:`MLBatchProvider` 对象。预处理在 " +":code:`DataLoader.swift` 中完成。" #: ../../source/tutorial-quickstart-ios.rst:99 msgid "" @@ -12645,18 +12973,23 @@ msgid "" "which are written as proto files. The implementation can be seen in " ":code:`MLModelInspect`." msgstr "" +"由于 CoreML 不允许在训练前查看模型参数,而在训练过程中或训练后访问模型参数只" +"能通过指定层名来完成,因此我们需要事先通过查看模型规范(写成 proto " +"文件)来了解这些信息。具体实现可参见 :code:`MLModelInspect`。" #: ../../source/tutorial-quickstart-ios.rst:102 msgid "" "After we have all of the necessary informations, let's create our Flower " "client." -msgstr "" +msgstr "获得所有必要信息后,让我们创建 Flower 客户端。" #: ../../source/tutorial-quickstart-ios.rst:117 msgid "" "Then start the Flower gRPC client and start communicating to the server " "by passing our Flower client to the function :code:`startFlwrGRPC`." msgstr "" +"然后启动 Flower gRPC 客户端,并通过将 Flower 客户端传递给函数 " +":code:`startFlwrGRPC` 来开始与服务器通信。" #: ../../source/tutorial-quickstart-ios.rst:124 msgid "" @@ -12667,6 +13000,10 @@ msgid "" "in the application before clicking the start button to start the " "federated learning process." msgstr "" +"这就是客户端。我们只需实现 :code:`Client` 或调用提供的 :code:`MLFlwrClient` " +"并调用 :code:`startFlwrGRPC()`。属性 :code:`hostname` 和 :code:`port` 会告诉" +"客户端要连接到哪个服务器。这可以通过在应用程序中输入主机名和端口来实现,然后" +"再点击开始按钮启动联合学习进程。" #: ../../source/tutorial-quickstart-ios.rst:131 #: ../../source/tutorial-quickstart-mxnet.rst:226 @@ -12677,6 +13014,9 @@ msgid "" "configuration possibilities at their default values. In a file named " ":code:`server.py`, import Flower and start the server:" msgstr "" +"对于简单的工作负载,我们可以启动 Flower " +"服务器,并将所有配置选项保留为默认值。在名为 :code:`server.py` 的文件中," +"导入 Flower 并启动服务器:" #: ../../source/tutorial-quickstart-ios.rst:142 #: ../../source/tutorial-quickstart-mxnet.rst:237 @@ -12684,7 +13024,7 @@ msgstr "" #: ../../source/tutorial-quickstart-scikitlearn.rst:215 #: ../../source/tutorial-quickstart-tensorflow.rst:112 msgid "Train the model, federated!" -msgstr "" +msgstr "联合训练模型!" #: ../../source/tutorial-quickstart-ios.rst:144 #: ../../source/tutorial-quickstart-pytorch.rst:218 @@ -12694,7 +13034,8 @@ msgid "" "With both client and server ready, we can now run everything and see " "federated learning in action. FL systems usually have a server and " "multiple clients. We therefore have to start the server first:" -msgstr "" +msgstr "客户端和服务器都已准备就绪,我们现在可以运行一切,看看联合学习的实际效果。FL " +"系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" #: ../../source/tutorial-quickstart-ios.rst:152 msgid "" @@ -12705,6 +13046,11 @@ msgid "" "`_." msgstr "" +"服务器运行后,我们就可以在不同的终端启动客户端。通过 Xcode 构建并运行客户端," +"一个通过 Xcode 模拟器,另一个通过部署到 iPhone。" +"要了解更多有关如何将应用程序部署到 iPhone 或模拟器的信息,请访问 `此处 " +"`_。" #: ../../source/tutorial-quickstart-ios.rst:156 msgid "" @@ -12713,32 +13059,39 @@ msgid "" "`_ for this " "example can be found in :code:`examples/ios`." msgstr "" +"恭喜您 您已经成功地在 ios 设备中构建并运行了第一个联合学习系统。" +"本示例的完整源代码 `_ " +"可在 :code:`examples/ios` 中找到。" #: ../../source/tutorial-quickstart-jax.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with Jax to train a linear regression model on a scikit-learn dataset." 
msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 Jax 在 scikit-" +"learn 数据集上训练线性回归模型。" #: ../../source/tutorial-quickstart-jax.rst:5 msgid "Quickstart JAX" -msgstr "" +msgstr "快速入门 JAX" #: ../../source/tutorial-quickstart-mxnet.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with MXNet to train a Sequential model on MNIST." -msgstr "" +msgstr "查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 MXNet 在 " +"MNIST 上训练序列模型。" #: ../../source/tutorial-quickstart-mxnet.rst:5 msgid "Quickstart MXNet" -msgstr "" +msgstr "快速入门 MXNet" #: ../../source/tutorial-quickstart-mxnet.rst:10 msgid "" "In this tutorial, we will learn how to train a :code:`Sequential` model " "on MNIST using Flower and MXNet." -msgstr "" +msgstr "在本教程中,我们将学习如何使用 Flower 和 MXNet 在 MNIST 上训练 " +":code:`Sequential` 模型。" #: ../../source/tutorial-quickstart-mxnet.rst:12 #: ../../source/tutorial-quickstart-scikitlearn.rst:12 @@ -12747,6 +13100,8 @@ msgid "" "within this `virtualenv `_." msgstr "" +"建议创建一个虚拟环境,并在此 `virtualenv `_ 中运行所有内容。" #: ../../source/tutorial-quickstart-mxnet.rst:16 #: ../../source/tutorial-quickstart-scikitlearn.rst:16 @@ -12758,17 +13113,21 @@ msgid "" "model back to each *client*. A complete cycle of parameters updates is " "called a *round*." msgstr "" +"*客户*负责根据其本地数据集为模型生成单独的模型参数更新。然后,这些更新将被发" +"送到*服务器,由*服务器汇总后生成一个更新的全局模型。最后,*服务器*将这一改进" +"版模型发回给每个*客户端*。一个完整的参数更新周期称为*轮*。" #: ../../source/tutorial-quickstart-mxnet.rst:20 #: ../../source/tutorial-quickstart-scikitlearn.rst:20 msgid "" "Now that we have a rough idea of what is going on, let's get started. We " "first need to install Flower. You can do this by running:" -msgstr "" +msgstr "现在,我们已经有了一个大致的概念,让我们开始吧。首先,我们需要安装 " +"Flower。运行:" #: ../../source/tutorial-quickstart-mxnet.rst:26 msgid "Since we want to use MXNet, let's go ahead and install it:" -msgstr "" +msgstr "既然我们要使用 MXNet,那就继续安装吧:" #: ../../source/tutorial-quickstart-mxnet.rst:36 msgid "" @@ -12778,16 +13137,20 @@ msgid "" "Digit Recognition tutorial " "`_." msgstr "" +"现在,我们已经安装了所有依赖项,让我们用两个客户端和一个服务器来运行一个简单" +"的分布式训练。我们的训练程序和网络架构基于 MXNet 的 \"手写数字识别教程 " +"`_\"。" #: ../../source/tutorial-quickstart-mxnet.rst:38 msgid "" "In a file called :code:`client.py`, import Flower and MXNet related " "packages:" -msgstr "" +msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 MXNet 相关软件包:" #: ../../source/tutorial-quickstart-mxnet.rst:53 msgid "In addition, define the device allocation in MXNet with:" -msgstr "" +msgstr "此外,还可以在 MXNet 中定义设备分配:" #: ../../source/tutorial-quickstart-mxnet.rst:59 msgid "" @@ -12795,28 +13158,33 @@ msgid "" "handwritten digits for machine learning. The MXNet utility " ":code:`mx.test_utils.get_mnist()` downloads the training and test data." msgstr "" +"我们使用 MXNet 加载 MNIST,这是一个用于机器学习的流行手写数字图像分类数据集。" +"MXNet 工具 :code:`mx.test_utils.get_mnist()` 会下载训练和测试数据。" #: ../../source/tutorial-quickstart-mxnet.rst:73 msgid "" "Define the training and loss with MXNet. We train the model by looping " "over the dataset, measure the corresponding loss, and optimize it." -msgstr "" +msgstr "用 MXNet " +"定义训练和损失。我们在数据集上循环训练模型,测量相应的损失,并对其进行优化。" #: ../../source/tutorial-quickstart-mxnet.rst:111 msgid "" "Next, we define the validation of our machine learning model. We loop " "over the test set and measure both loss and accuracy on the test set." 
-msgstr "" +msgstr "接下来,我们定义机器学习模型的验证。我们在测试集上循环,测量测试集上的损失和" +"准确率。" #: ../../source/tutorial-quickstart-mxnet.rst:135 msgid "" "After defining the training and testing of a MXNet machine learning " "model, we use these functions to implement a Flower client." -msgstr "" +msgstr "在定义了 MXNet 机器学习模型的训练和测试后,我们使用这些函数实现了 Flower " +"客户端。" #: ../../source/tutorial-quickstart-mxnet.rst:137 msgid "Our Flower clients will use a simple :code:`Sequential` model:" -msgstr "" +msgstr "我们的 Flower 客户端将使用简单的 :code:`Sequential` 模型:" #: ../../source/tutorial-quickstart-mxnet.rst:156 msgid "" @@ -12824,6 +13192,9 @@ msgid "" " propagation to initialize the model and model parameters with " ":code:`model(init)`. Next, we implement a Flower client." msgstr "" +"使用 :code:`load_data()` 加载数据集后,我们会执行一次前向传播,使用 " +":code:`model(init)` 初始化模型和模型参数。接下来,我们实现一个 Flower " +"客户端。" #: ../../source/tutorial-quickstart-mxnet.rst:158 #: ../../source/tutorial-quickstart-pytorch.rst:144 @@ -12835,6 +13206,10 @@ msgid "" "those instructions and calls one of the :code:`Client` methods to run " "your code (i.e., to train the neural network we defined earlier)." msgstr "" +"Flower 服务器通过一个名为 :code:`Client` 的接口与客户端交互。当服务器选择一个" +"特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后," +"会调用 :code:`Client` " +"方法之一来运行你的代码(即训练我们之前定义的神经网络)。" #: ../../source/tutorial-quickstart-mxnet.rst:164 msgid "" @@ -12844,18 +13219,21 @@ msgid "" "defining the following methods (:code:`set_parameters` is optional " "though):" msgstr "" +"Flower 提供了一个名为 :code:`NumPyClient` 的便利类,当您的工作负载使用 MXNet " +"时,它可以让您更轻松地实现 :code:`Client` 接口。实现 :code:`NumPyClient` " +"通常意味着定义以下方法(:code:`set_parameters` 是可选的):" #: ../../source/tutorial-quickstart-mxnet.rst:170 #: ../../source/tutorial-quickstart-pytorch.rst:156 #: ../../source/tutorial-quickstart-scikitlearn.rst:109 msgid "return the model weight as a list of NumPy ndarrays" -msgstr "" +msgstr "以 NumPy ndarrays 列表形式返回模型权重" #: ../../source/tutorial-quickstart-mxnet.rst:171 #: ../../source/tutorial-quickstart-pytorch.rst:157 #: ../../source/tutorial-quickstart-scikitlearn.rst:111 msgid ":code:`set_parameters` (optional)" -msgstr "" +msgstr ":code:`set_parameters` (可选)" #: ../../source/tutorial-quickstart-mxnet.rst:172 #: ../../source/tutorial-quickstart-pytorch.rst:158 @@ -12863,41 +13241,42 @@ msgstr "" msgid "" "update the local model weights with the parameters received from the " "server" -msgstr "" +msgstr "用从服务器接收到的参数更新本地模型权重" #: ../../source/tutorial-quickstart-mxnet.rst:174 #: ../../source/tutorial-quickstart-pytorch.rst:160 #: ../../source/tutorial-quickstart-scikitlearn.rst:114 msgid "set the local model weights" -msgstr "" +msgstr "设置本地模型权重" #: ../../source/tutorial-quickstart-mxnet.rst:175 #: ../../source/tutorial-quickstart-pytorch.rst:161 #: ../../source/tutorial-quickstart-scikitlearn.rst:115 msgid "train the local model" -msgstr "" +msgstr "训练本地模型" #: ../../source/tutorial-quickstart-mxnet.rst:176 #: ../../source/tutorial-quickstart-pytorch.rst:162 #: ../../source/tutorial-quickstart-scikitlearn.rst:116 msgid "receive the updated local model weights" -msgstr "" +msgstr "接收更新的本地模型权重" #: ../../source/tutorial-quickstart-mxnet.rst:178 #: ../../source/tutorial-quickstart-pytorch.rst:164 #: ../../source/tutorial-quickstart-scikitlearn.rst:118 msgid "test the local model" -msgstr "" +msgstr "测试本地模型" #: ../../source/tutorial-quickstart-mxnet.rst:180 msgid "They can be implemented in the following way:" -msgstr "" +msgstr "它们可以通过以下方式实现:" #: ../../source/tutorial-quickstart-mxnet.rst:210 msgid "" "We can now create an instance of 
our class :code:`MNISTClient` and add " "one line to actually run this client:" -msgstr "" +msgstr "现在我们可以创建一个 :code:`MNISTClient` " +"类的实例,并添加一行来实际运行该客户端:" #: ../../source/tutorial-quickstart-mxnet.rst:217 #: ../../source/tutorial-quickstart-scikitlearn.rst:150 @@ -12911,6 +13290,12 @@ msgid "" "workload with the server and clients running on different machines, all " "that needs to change is the :code:`server_address` we pass to the client." msgstr "" +"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 " +":code:`fl.client.start_client()` 或 :code:`fl.client.start_numpy_client()`。" +"字符串 :code:`\"0.0.0.0:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以" +"在同一台机器上运行服务器和客户端,因此我们使用 :code:`\"0.0.0.0:8080\"" +"`。如果我们运行的是真正的联合工作负载,服务器和客户端运行在不同的机器上," +"那么需要改变的只是传递给客户端的 :code:`server_address`。" #: ../../source/tutorial-quickstart-mxnet.rst:239 msgid "" @@ -12918,6 +13303,8 @@ msgid "" "federated learning in action. Federated learning systems usually have a " "server and multiple clients. We therefore have to start the server first:" msgstr "" +"客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联合学习的运行情况。" +"联合学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" #: ../../source/tutorial-quickstart-mxnet.rst:247 #: ../../source/tutorial-quickstart-pytorch.rst:226 @@ -12927,7 +13314,8 @@ msgstr "" msgid "" "Once the server is running we can start the clients in different " "terminals. Open a new terminal and start the first client:" -msgstr "" +msgstr "服务器运行后,我们就可以在不同终端启动客户端了。打开一个新终端,启动第一个客" +"户端:" #: ../../source/tutorial-quickstart-mxnet.rst:254 #: ../../source/tutorial-quickstart-pytorch.rst:233 @@ -12935,7 +13323,7 @@ msgstr "" #: ../../source/tutorial-quickstart-tensorflow.rst:129 #: ../../source/tutorial-quickstart-xgboost.rst:537 msgid "Open another terminal and start the second client:" -msgstr "" +msgstr "打开另一台终端,启动第二个客户端:" #: ../../source/tutorial-quickstart-mxnet.rst:260 #: ../../source/tutorial-quickstart-pytorch.rst:239 @@ -12945,7 +13333,8 @@ msgid "" "Each client will have its own dataset. You should now see how the " "training does in the very first terminal (the one that started the " "server):" -msgstr "" +msgstr "每个客户端都有自己的数据集。现在你应该看到第一个终端(启动服务器的终端)的训" +"练效果了:" #: ../../source/tutorial-quickstart-mxnet.rst:292 msgid "" @@ -12955,20 +13344,25 @@ msgid "" "mxnet/client.py>`_ for this example can be found in :code:`examples" "/quickstart-mxnet`." msgstr "" +"恭喜您!您已经成功构建并运行了第一个联合学习系统。本示例的完整源代码 " +"`_ 可在 :code:`examples/quickstart-mxnet` 中找到。" #: ../../source/tutorial-quickstart-pandas.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with Pandas to perform Federated Analytics." msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 Pandas 执行 " +"Federated Analytics。" #: ../../source/tutorial-quickstart-pandas.rst:5 msgid "Quickstart Pandas" -msgstr "" +msgstr "快速入门熊猫" #: ../../source/tutorial-quickstart-pandas.rst:10 msgid "Let's build a federated analytics system using Pandas and Flower!" -msgstr "" +msgstr "让我们使用 Pandas 和 Flower 建立一个联合分析系统!" #: ../../source/tutorial-quickstart-pandas.rst:12 msgid "" @@ -12976,18 +13370,23 @@ msgid "" "`_ " "to learn more." msgstr "" +"请参阅 \"完整代码示例 `_\" 了解更多信息。" #: ../../source/tutorial-quickstart-pytorch.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with PyTorch to train a CNN model on MNIST." 
msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 PyTorch 在 " +"MNIST 上训练 CNN 模型。" #: ../../source/tutorial-quickstart-pytorch.rst:13 msgid "" "In this tutorial we will learn how to train a Convolutional Neural " "Network on CIFAR10 using Flower and PyTorch." -msgstr "" +msgstr "在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 CIFAR10 " +"上训练卷积神经网络。" #: ../../source/tutorial-quickstart-pytorch.rst:15 #: ../../source/tutorial-quickstart-xgboost.rst:36 @@ -12996,12 +13395,15 @@ msgid "" "everything within a `virtualenv `_." msgstr "" +"首先,建议创建一个虚拟环境,并在 `virtualenv `_ 中运行一切。" #: ../../source/tutorial-quickstart-pytorch.rst:29 msgid "" "Since we want to use PyTorch to solve a computer vision task, let's go " "ahead and install PyTorch and the **torchvision** library:" -msgstr "" +msgstr "既然我们想用 PyTorch 解决计算机视觉任务,那就继续安装 PyTorch 和 " +"**torchvision** 库吧:" #: ../../source/tutorial-quickstart-pytorch.rst:39 msgid "" @@ -13011,16 +13413,20 @@ msgid "" "with PyTorch " "`_." msgstr "" +"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单" +"的分布式训练。我们的训练过程和网络架构基于 PyTorch 的《Deep Learning with " +"PyTorch `_》。" #: ../../source/tutorial-quickstart-pytorch.rst:41 msgid "" "In a file called :code:`client.py`, import Flower and PyTorch related " "packages:" -msgstr "" +msgstr "在名为 :code:`client.py` 的文件中,导入 Flower 和 PyTorch 相关软件包:" #: ../../source/tutorial-quickstart-pytorch.rst:56 msgid "In addition, we define the device allocation in PyTorch with:" -msgstr "" +msgstr "此外,我们还在 PyTorch 中定义了设备分配:" #: ../../source/tutorial-quickstart-pytorch.rst:62 msgid "" @@ -13028,37 +13434,42 @@ msgid "" "dataset for machine learning. The PyTorch :code:`DataLoader()` downloads " "the training and test data that are then normalized." msgstr "" +"我们使用 PyTorch 来加载 " +"CIFAR10,这是一个用于机器学习的流行彩色图像分类数据集。PyTorch " +":code:`DataLoader()`下载训练数据和测试数据,然后进行归一化处理。" #: ../../source/tutorial-quickstart-pytorch.rst:78 msgid "" "Define the loss and optimizer with PyTorch. The training of the dataset " "is done by looping over the dataset, measure the corresponding loss and " "optimize it." -msgstr "" +msgstr "使用 PyTorch 定义损失和优化器。数据集的训练是通过循环数据集、测量相应的损失并" +"对其进行优化来完成的。" #: ../../source/tutorial-quickstart-pytorch.rst:94 msgid "" "Define then the validation of the machine learning network. We loop over" " the test set and measure the loss and accuracy of the test set." -msgstr "" +msgstr "然后定义机器学习网络的验证。我们在测试集上循环,测量测试集的损失和准确率。" #: ../../source/tutorial-quickstart-pytorch.rst:113 msgid "" "After defining the training and testing of a PyTorch machine learning " "model, we use the functions for the Flower clients." -msgstr "" +msgstr "在定义了 PyTorch 机器学习模型的训练和测试之后,我们将这些功能用于 Flower " +"客户端。" #: ../../source/tutorial-quickstart-pytorch.rst:115 msgid "" "The Flower clients will use a simple CNN adapted from 'PyTorch: A 60 " "Minute Blitz':" -msgstr "" +msgstr "Flower 客户端将使用一个简单的从“PyTorch: 60 分钟突击\"改编的CNN:" #: ../../source/tutorial-quickstart-pytorch.rst:142 msgid "" "After loading the data set with :code:`load_data()` we define the Flower " "interface." 
-msgstr "" +msgstr "使用 :code:`load_data()` 加载数据集后,我们定义了 Flower 接口。" #: ../../source/tutorial-quickstart-pytorch.rst:150 msgid "" @@ -13068,17 +13479,22 @@ msgid "" "defining the following methods (:code:`set_parameters` is optional " "though):" msgstr "" +"Flower 提供了一个名为 :code:`NumPyClient` 的便利类,当您的工作负载使用 " +"PyTorch 时,它使 :code:`Client` 接口的实现变得更容易。实现 " +":code:`NumPyClient` 通常意味着定义以下方法(:code:`set_parameters` " +"是可选的):" #: ../../source/tutorial-quickstart-pytorch.rst:166 msgid "which can be implemented in the following way:" -msgstr "" +msgstr "可以通过以下方式实现:" #: ../../source/tutorial-quickstart-pytorch.rst:189 #: ../../source/tutorial-quickstart-tensorflow.rst:82 msgid "" "We can now create an instance of our class :code:`CifarClient` and add " "one line to actually run this client:" -msgstr "" +msgstr "现在我们可以创建一个 :code:`CifarClient` " +"类的实例,并添加一行来实际运行该客户端:" #: ../../source/tutorial-quickstart-pytorch.rst:196 #: ../../source/tutorial-quickstart-tensorflow.rst:90 @@ -13092,6 +13508,12 @@ msgid "" "server and clients running on different machines, all that needs to " "change is the :code:`server_address` we point the client at." msgstr "" +"这就是客户端。我们只需实现 :code:`Client` 或 :code:`NumPyClient` 并调用 " +":code:`fl.client.start_client()` 或 :code:`fl.client.start_numpy_client()`。" +"字符串 :code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本例中,我们可以在" +"同一台机器上运行服务器和客户端,因此使用 :code:`\"[::]:8080\"" +"。如果我们运行的是真正的联合工作负载,服务器和客户端运行在不同的机器上," +"那么需要改变的只是客户端指向的 :code:`server_address`。" #: ../../source/tutorial-quickstart-pytorch.rst:271 msgid "" @@ -13101,22 +13523,27 @@ msgid "" "pytorch/client.py>`_ for this example can be found in :code:`examples" "/quickstart-pytorch`." msgstr "" +"恭喜您!您已经成功构建并运行了第一个联合学习系统。本示例的完整源代码 " +"`_ 可以在 :code:`examples/quickstart-pytorch` 中找到。" #: ../../source/tutorial-quickstart-pytorch-lightning.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with PyTorch Lightning to train an Auto Encoder model on MNIST." msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 PyTorch " +"Lightning 在 MNIST 上训练自动编码器模型。" #: ../../source/tutorial-quickstart-pytorch-lightning.rst:5 msgid "Quickstart PyTorch Lightning" -msgstr "" +msgstr "快速入门 PyTorch Lightning" #: ../../source/tutorial-quickstart-pytorch-lightning.rst:10 msgid "" "Let's build a horizontal federated learning system using PyTorch " "Lightning and Flower!" -msgstr "" +msgstr "让我们使用 PyTorch Lightning 和 Flower 构建一个水平联合学习系统!" #: ../../source/tutorial-quickstart-pytorch-lightning.rst:12 msgid "" @@ -13124,30 +13551,36 @@ msgid "" "`_ to learn more." msgstr "" +"请参阅 \"完整代码示例 `_\" 了解更多信息。" #: ../../source/tutorial-quickstart-scikitlearn.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with scikit-learn to train a linear regression model." -msgstr "" +msgstr "查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 scikit-learn " +"训练线性回归模型。" #: ../../source/tutorial-quickstart-scikitlearn.rst:5 +#, fuzzy msgid "Quickstart scikit-learn" -msgstr "" +msgstr "快速入门 scikit-learn" #: ../../source/tutorial-quickstart-scikitlearn.rst:10 msgid "" "In this tutorial, we will learn how to train a :code:`Logistic " "Regression` model on MNIST using Flower and scikit-learn." 
msgstr "" +"在本教程中,我们将学习如何使用 Flower 和 scikit-learn 在 MNIST 上训练一个 " +":code:`Logistic Regression` 模型。" #: ../../source/tutorial-quickstart-scikitlearn.rst:26 msgid "Since we want to use scikt-learn, let's go ahead and install it:" -msgstr "" +msgstr "既然我们要使用 scikt-learn,那就继续安装吧:" #: ../../source/tutorial-quickstart-scikitlearn.rst:32 msgid "Or simply install all dependencies using Poetry:" -msgstr "" +msgstr "或者直接使用 Poetry 安装所有依赖项:" #: ../../source/tutorial-quickstart-scikitlearn.rst:42 msgid "" @@ -13158,54 +13591,64 @@ msgid "" ":code:`utils.py` contains different functions defining all the machine " "learning basics:" msgstr "" +"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单" +"的分布式训练。不过,在设置客户端和服务器之前,我们将在 :code:`utils.py` " +"中定义联合学习设置所需的所有功能。:code:`utils." +"py`包含定义所有机器学习基础知识的不同函数:" #: ../../source/tutorial-quickstart-scikitlearn.rst:45 msgid ":code:`get_model_parameters()`" -msgstr "" +msgstr "代码:`get_model_parameters()` 获取模型参数" #: ../../source/tutorial-quickstart-scikitlearn.rst:46 msgid "Returns the paramters of a :code:`sklearn` LogisticRegression model" -msgstr "" +msgstr "返回 :code:`sklearn` LogisticRegression 模型的参数" #: ../../source/tutorial-quickstart-scikitlearn.rst:47 +#, fuzzy msgid ":code:`set_model_params()`" -msgstr "" +msgstr "代码:`set_model_params()` 设置模型参数" #: ../../source/tutorial-quickstart-scikitlearn.rst:48 +#, fuzzy msgid "Sets the parameters of a :code:`sklean` LogisticRegression model" -msgstr "" +msgstr "设置 :code:`sklean` LogisticRegression 模型的参数" #: ../../source/tutorial-quickstart-scikitlearn.rst:49 +#, fuzzy msgid ":code:`set_initial_params()`" -msgstr "" +msgstr ":code:`set_initial_params()`" #: ../../source/tutorial-quickstart-scikitlearn.rst:50 msgid "Initializes the model parameters that the Flower server will ask for" -msgstr "" +msgstr "初始化 Flower 服务器将要求的模型参数" #: ../../source/tutorial-quickstart-scikitlearn.rst:51 +#, fuzzy msgid ":code:`load_mnist()`" -msgstr "" +msgstr ":code:`load_mnist()`" #: ../../source/tutorial-quickstart-scikitlearn.rst:52 msgid "Loads the MNIST dataset using OpenML" -msgstr "" +msgstr "使用 OpenML 加载 MNIST 数据集" #: ../../source/tutorial-quickstart-scikitlearn.rst:53 +#, fuzzy msgid ":code:`shuffle()`" -msgstr "" +msgstr ":code:`shuffle()`" #: ../../source/tutorial-quickstart-scikitlearn.rst:54 msgid "Shuffles data and its label" -msgstr "" +msgstr "对数据及其标签进行洗牌" #: ../../source/tutorial-quickstart-scikitlearn.rst:56 +#, fuzzy msgid ":code:`partition()`" -msgstr "" +msgstr ":code:`partition()`" #: ../../source/tutorial-quickstart-scikitlearn.rst:56 msgid "Splits datasets into a number of partitions" -msgstr "" +msgstr "将数据集分割成多个分区" #: ../../source/tutorial-quickstart-scikitlearn.rst:58 msgid "" @@ -13215,6 +13658,10 @@ msgid "" " the :code:`client.py` and imported. The :code:`client.py` also requires " "to import several packages such as Flower and scikit-learn:" msgstr "" +"更多详情请查看 :code:`utils.py`` 这里 `_。在 :code:`client.py` " +"中使用并导入了预定义函数。:code:`client.py` 还需要导入几个软件包,如 Flower " +"和 scikit-learn:" #: ../../source/tutorial-quickstart-scikitlearn.rst:73 msgid "" @@ -13224,12 +13671,17 @@ msgid "" "and test data. The training set is split afterwards into 10 partitions " "with :code:`utils.partition()`." msgstr "" +"我们从 `OpenML `_ 中加载 MNIST " +"数据集,这是一个用于机器学习的流行手写数字图像分类数据集。实用程序 " +":code:`utils.load_mnist()` 下载训练和测试数据。然后使用 :code:`utils." +"partition()`将训练集分割成 10 个分区。" #: ../../source/tutorial-quickstart-scikitlearn.rst:85 msgid "" "Next, the logistic regression model is defined and initialized with " ":code:`utils.set_initial_params()`." 
-msgstr "" +msgstr "接下来,使用 :code:`utils.set_initial_params()` " +"对逻辑回归模型进行定义和初始化。" #: ../../source/tutorial-quickstart-scikitlearn.rst:97 msgid "" @@ -13239,6 +13691,10 @@ msgid "" "those instructions and calls one of the :code:`Client` methods to run " "your code (i.e., to fit the logistic regression we defined earlier)." msgstr "" +"Flower 服务器通过一个名为 :code:`Client` 的接口与客户端交互。当服务器选择一个" +"特定的客户端进行训练时,它会通过网络发送训练指令。客户端接收到这些指令后," +"会调用 :code:`Client` " +"方法之一来运行您的代码(即拟合我们之前定义的逻辑回归)。" #: ../../source/tutorial-quickstart-scikitlearn.rst:103 msgid "" @@ -13248,20 +13704,25 @@ msgid "" "means defining the following methods (:code:`set_parameters` is optional " "though):" msgstr "" +"Flower 提供了一个名为 :code:`NumPyClient` 的便利类,当你的工作负载使用 " +"scikit-learn 时,它可以让你更容易地实现 :code:`Client` 接口。实现 " +":code:`NumPyClient` 通常意味着定义以下方法(:code:`set_parameters` " +"是可选的):" #: ../../source/tutorial-quickstart-scikitlearn.rst:112 msgid "is directly imported with :code:`utils.set_model_params()`" -msgstr "" +msgstr "直接导入 :code:`utils.set_model_params()`" #: ../../source/tutorial-quickstart-scikitlearn.rst:120 msgid "The methods can be implemented in the following way:" -msgstr "" +msgstr "这些方法可以通过以下方式实现:" #: ../../source/tutorial-quickstart-scikitlearn.rst:143 msgid "" "We can now create an instance of our class :code:`MnistClient` and add " "one line to actually run this client:" -msgstr "" +msgstr "现在我们可以创建一个 :code:`MnistClient` " +"类的实例,并添加一行来实际运行该客户端:" #: ../../source/tutorial-quickstart-scikitlearn.rst:159 msgid "" @@ -13269,10 +13730,12 @@ msgid "" "evaluation function for the server-side evaluation. First, we import " "again all required libraries such as Flower and scikit-learn." msgstr "" +"下面的 Flower 服务器更先进一些,会返回一个用于服务器端评估的评估函数。首先," +"我们再次导入所有需要的库,如 Flower 和 scikit-learn。" #: ../../source/tutorial-quickstart-scikitlearn.rst:162 msgid ":code:`server.py`, import Flower and start the server:" -msgstr "" +msgstr ":code:`server.py`, import Flower 并启动服务器:" #: ../../source/tutorial-quickstart-scikitlearn.rst:173 msgid "" @@ -13281,6 +13744,8 @@ msgid "" "function is called after each federated learning round and gives you " "information about loss and accuracy." msgstr "" +"联合学习轮数在 :code:`fit_round()` 中设置,评估在 :code:`get_evaluate_fn()` " +"中定义。每轮联合学习后都会调用评估函数,并提供有关损失和准确率的信息。" #: ../../source/tutorial-quickstart-scikitlearn.rst:198 msgid "" @@ -13292,6 +13757,11 @@ msgid "" " :code:`fl.server.start_server(server_address=\"0.0.0.0:8080\", " "strategy=strategy, config=fl.server.ServerConfig(num_rounds=3))`." msgstr "" +"代码:`main`包含服务器端参数初始化 :代码:`utils.set_initial_params()`" +"以及聚合策略 :代码:`fl.server.strategy:FedAvg()`。该策略是默认的联合平均(或 " +"FedAvg)策略,有两个客户端,在每轮联合学习后进行评估。可以使用 :code:`fl." +"server.start_server(server_address=\"0.0.0.0:8080\", strategy=strategy, " +"config=fl.server.ServerConfig(num_rounds=3))` 命令启动服务器。" #: ../../source/tutorial-quickstart-scikitlearn.rst:217 msgid "" @@ -13300,6 +13770,8 @@ msgid "" "server and multiple clients. We, therefore, have to start the server " "first:" msgstr "" +"客户端和服务器都准备就绪后,我们现在就可以运行一切,看看联合学习的运行情况。" +"联合学习系统通常有一个服务器和多个客户端。因此,我们必须先启动服务器:" #: ../../source/tutorial-quickstart-scikitlearn.rst:271 msgid "" @@ -13309,34 +13781,39 @@ msgid "" "mnist>`_ for this example can be found in :code:`examples/sklearn-logreg-" "mnist`." 
msgstr "" +"恭喜您!您已经成功构建并运行了第一个联合学习系统。本示例的完整源代码 " +"`_ " +"可以在 :code:`examples/sklearn-logreg-mnist` 中找到。" #: ../../source/tutorial-quickstart-tensorflow.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with TensorFlow to train a MobilNetV2 model on CIFAR-10." msgstr "" +"查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 TensorFlow 在 " +"CIFAR-10 上训练 MobilNetV2 模型。" #: ../../source/tutorial-quickstart-tensorflow.rst:5 msgid "Quickstart TensorFlow" -msgstr "" +msgstr "快速入门 TensorFlow" #: ../../source/tutorial-quickstart-tensorflow.rst:13 msgid "Let's build a federated learning system in less than 20 lines of code!" -msgstr "" +msgstr "让我们用不到 20 行代码构建一个联邦学习系统!" #: ../../source/tutorial-quickstart-tensorflow.rst:15 msgid "Before Flower can be imported we have to install it:" -msgstr "" +msgstr "在导入 Flower 之前,我们必须先安装它:" #: ../../source/tutorial-quickstart-tensorflow.rst:21 msgid "" "Since we want to use the Keras API of TensorFlow (TF), we have to install" " TF as well:" -msgstr "" +msgstr "由于我们要使用 TensorFlow (TF) 的 Keras API,因此还必须安装 TF:" #: ../../source/tutorial-quickstart-tensorflow.rst:31 msgid "Next, in a file called :code:`client.py`, import Flower and TensorFlow:" -msgstr "" +msgstr "接下来,在名为 :code:`client.py` 的文件中导入 Flower 和 TensorFlow:" #: ../../source/tutorial-quickstart-tensorflow.rst:38 msgid "" @@ -13346,12 +13823,16 @@ msgid "" "it locally, and then returns the entire training and test set as NumPy " "ndarrays." msgstr "" +"我们使用 TF 的 Keras 实用程序加载 " +"CIFAR10,这是一个用于机器学习的流行彩色图像分类数据集。调用 :code:`tf.keras." +"datasets.cifar10.load_data()` 会下载 CIFAR10,将其缓存到本地,然后以 NumPy " +"ndarrays 的形式返回整个训练集和测试集。" #: ../../source/tutorial-quickstart-tensorflow.rst:47 msgid "" "Next, we need a model. For the purpose of this tutorial, we use " "MobilNetV2 with 10 output classes:" -msgstr "" +msgstr "接下来,我们需要一个模型。在本教程中,我们使用带有 10 个输出类的 MobilNetV2:" #: ../../source/tutorial-quickstart-tensorflow.rst:60 msgid "" @@ -13360,16 +13841,19 @@ msgid "" "workload uses Keras. The :code:`NumPyClient` interface defines three " "methods which can be implemented in the following way:" msgstr "" +"Flower 提供了一个名为 :code:`NumPyClient` 的便利类,当您的工作负载使用 Keras " +"时,该类可以更轻松地实现 :code:`Client` 接口。:code:`NumPyClient` " +"接口定义了三个方法,可以通过以下方式实现:" #: ../../source/tutorial-quickstart-tensorflow.rst:135 msgid "Each client will have its own dataset." -msgstr "" +msgstr "每个客户都有自己的数据集。" #: ../../source/tutorial-quickstart-tensorflow.rst:137 msgid "" "You should now see how the training does in the very first terminal (the " "one that started the server):" -msgstr "" +msgstr "现在你应该能在第一个终端(启动服务器的终端)看到训练的效果了:" #: ../../source/tutorial-quickstart-tensorflow.rst:169 msgid "" @@ -13379,20 +13863,25 @@ msgid "" "tensorflow/client.py>`_ for this can be found in :code:`examples" "/quickstart-tensorflow/client.py`." msgstr "" +"恭喜您!您已经成功构建并运行了第一个联合学习系统。完整的源代码 " +"`_ 可以在 :code:`examples/quickstart-tensorflow/client.py` 中找到。" #: ../../source/tutorial-quickstart-xgboost.rst:-1 msgid "" "Check out this Federated Learning quickstart tutorial for using Flower " "with XGBoost to train classification models on trees." 
-msgstr "" +msgstr "查看此 Federated Learning 快速入门教程,了解如何使用 Flower 和 XGBoost " +"在树上训练分类模型。" #: ../../source/tutorial-quickstart-xgboost.rst:5 +#, fuzzy msgid "Quickstart XGBoost" -msgstr "" +msgstr "快速入门 XGBoost" #: ../../source/tutorial-quickstart-xgboost.rst:11 msgid "Federated XGBoost" -msgstr "" +msgstr "联邦 XGBoost" #: ../../source/tutorial-quickstart-xgboost.rst:13 msgid "" @@ -13403,17 +13892,22 @@ msgid "" "speed of machine learning models. In XGBoost, trees are constructed " "concurrently, unlike the sequential approach taken by GBDT." msgstr "" +"EXtreme Gradient Boosting(**XGBoost**)是梯度提升决策树(**GBDT**)的一种稳" +"健而高效的实现方法,能最大限度地提高提升树方法的计算边界。它主要用于提高机器" +"学习模型的性能和计算速度。在 XGBoost 中,决策树是并发构建的,与 GBDT " +"采用的顺序方法不同。" #: ../../source/tutorial-quickstart-xgboost.rst:17 msgid "" "Often, for tabular data on medium-sized datasets with fewer than 10k " "training examples, XGBoost surpasses the results of deep learning " "techniques." -msgstr "" +msgstr "对于训练示例少于 10k 的中型数据集上的表格数据,XGBoost " +"的结果往往超过深度学习技术。" #: ../../source/tutorial-quickstart-xgboost.rst:20 msgid "Why federated XGBoost?" -msgstr "" +msgstr "为什么选择联邦 XGBoost?" #: ../../source/tutorial-quickstart-xgboost.rst:22 msgid "" @@ -13421,7 +13915,8 @@ msgid "" "there's an increasing requirement to implement federated XGBoost systems " "for specialised applications, like survival analysis and financial fraud " "detection." -msgstr "" +msgstr "事实上,随着对数据隐私和分散学习的需求不断增长,越来越多的专业应用(如生存分" +"析和金融欺诈检测)需要实施联邦 XGBoost 系统。" #: ../../source/tutorial-quickstart-xgboost.rst:24 msgid "" @@ -13431,6 +13926,9 @@ msgid "" "of XGBoost, combining it with federated learning offers a promising " "solution for these specific challenges." msgstr "" +"联邦学习可确保原始数据保留在本地设备上,因此对于数据安全和隐私至关重要的敏感" +"领域来说,这是一种极具吸引力的方法。鉴于 XGBoost 的稳健性和高效性,将其与联邦" +"学习相结合为应对这些特定挑战提供了一种前景广阔的解决方案。" #: ../../source/tutorial-quickstart-xgboost.rst:27 msgid "" @@ -13443,22 +13941,30 @@ msgid "" "comprehensive `_) to run various experiments." msgstr "" +"在本教程中,我们将学习如何使用 Flower 和 :code:`xgboost` 软件包在 HIGGS " +"数据集上训练联邦 XGBoost 模型。我们将使用一个包含两个 * 客户端* 和一个 * " +"服务器* 的简单示例 (`完整代码 xgboost-quickstart `_)来演示联邦 XGBoost 如何工作," +"然后我们将深入到一个更复杂的示例 (`完整代码 xgboost-comprehensive " +"`_),以运行各种实验。" #: ../../source/tutorial-quickstart-xgboost.rst:34 msgid "Environment Setup" -msgstr "" +msgstr "环境设定" #: ../../source/tutorial-quickstart-xgboost.rst:38 msgid "" "We first need to install Flower and Flower Datasets. You can do this by " "running :" -msgstr "" +msgstr "我们首先需要安装 Flower 和 Flower Datasets。您可以通过运行 :" #: ../../source/tutorial-quickstart-xgboost.rst:44 msgid "" "Since we want to use :code:`xgboost` package to build up XGBoost trees, " "let's go ahead and install :code:`xgboost`:" -msgstr "" +msgstr "既然我们要使用 :code:`xgboost` 软件包来构建 XGBoost 树,那就继续安装 " +":code:`xgboost`:" #: ../../source/tutorial-quickstart-xgboost.rst:54 msgid "" @@ -13467,22 +13973,26 @@ msgid "" "dependencies installed, let's run a simple distributed training with two " "clients and one server." 
msgstr "" +"*客户*负责根据其本地数据集为模型生成单独的权重更新。现在我们已经安装了所有的" +"依赖项,让我们用两个客户端和一个服务器来运行一个简单的分布式训练。" #: ../../source/tutorial-quickstart-xgboost.rst:57 msgid "" "In a file called :code:`client.py`, import xgboost, Flower, Flower " "Datasets and other related functions:" -msgstr "" +msgstr "在名为 :code:`client.py` 的文件中,导入 xgboost、Flower、Flower Datasets " +"和其他相关函数:" #: ../../source/tutorial-quickstart-xgboost.rst:84 msgid "Dataset partition and hyper-parameter selection" -msgstr "" +msgstr "数据集划分和超参数选择" #: ../../source/tutorial-quickstart-xgboost.rst:86 msgid "" "Prior to local training, we require loading the HIGGS dataset from Flower" " Datasets and conduct data partitioning for FL:" -msgstr "" +msgstr "在本地训练之前,我们需要从 Flower Datasets 加载 HIGGS 数据集,并对 FL " +"进行数据分区:" #: ../../source/tutorial-quickstart-xgboost.rst:99 msgid "" @@ -13490,22 +14000,27 @@ msgid "" "distribution (:code:`IidPartitioner(num_partitions=2)`). Then, we load " "the partition for the given client based on :code:`node_id`:" msgstr "" +"在此示例中,我们将数据集分割成两个均匀分布的分区(:code:`IidPartitioner(num_p" +"artitions=2)`)。然后,我们根据 :code:`node_id` 为给定客户端加载分区:" #: ../../source/tutorial-quickstart-xgboost.rst:118 msgid "" "After that, we do train/test splitting on the given partition (client's " "local data), and transform data format for :code:`xgboost` package." -msgstr "" +msgstr "然后,我们在给定的分区(客户端的本地数据)上进行训练/测试分割,并为 " +":code:`xgboost` 软件包转换数据格式。" #: ../../source/tutorial-quickstart-xgboost.rst:131 msgid "" "The functions of :code:`train_test_split` and " ":code:`transform_dataset_to_dmatrix` are defined as below:" msgstr "" +":code:`train_test_split` 和 :code:`transform_dataset_too_dmatrix` " +"的函数定义如下:" #: ../../source/tutorial-quickstart-xgboost.rst:155 msgid "Finally, we define the hyper-parameters used for XGBoost training." -msgstr "" +msgstr "最后,我们定义了用于 XGBoost 训练的超参数。" #: ../../source/tutorial-quickstart-xgboost.rst:171 msgid "" @@ -13514,10 +14029,13 @@ msgid "" "GPU by setting :code:`tree_method` to :code:`gpu_hist`. We use AUC as " "evaluation metric." msgstr "" +"代码:`num_local_round`表示本地树提升的迭代次数。我们默认使用 CPU 进行训练。" +"可以通过将 :code:`tree_method` 设置为 :code:`gpu_hist`,将其转换为 GPU。" +"我们使用 AUC 作为评估指标。" #: ../../source/tutorial-quickstart-xgboost.rst:178 msgid "Flower client definition for XGBoost" -msgstr "" +msgstr "用于 XGBoost 的 Flower 客户端定义" #: ../../source/tutorial-quickstart-xgboost.rst:180 msgid "" @@ -13525,6 +14043,8 @@ msgid "" "general rule to define :code:`XgbClient` class inherited from " ":code:`fl.client.Client`." msgstr "" +"加载数据集后,我们定义 Flower 客户端。我们按照一般规则定义从 :code:`fl.client" +".Client` 继承而来的 :code:`XgbClient` 类。" #: ../../source/tutorial-quickstart-xgboost.rst:190 msgid "" @@ -13533,12 +14053,16 @@ msgid "" "integrated in earlier rounds and maintain other essential data structures" " for training." msgstr "" +"代码:`self.bst`用于保存在各轮中保持一致的 Booster 对象,使其能够存储在前几轮" +"中集成的树的预测结果,并维护其他用于训练的重要数据结构。" #: ../../source/tutorial-quickstart-xgboost.rst:193 msgid "" "Then, we override :code:`get_parameters`, :code:`fit` and " ":code:`evaluate` methods insides :code:`XgbClient` class as follows." msgstr "" +"然后,我们在 :code:`XgbClient` 类中覆盖 :code:`get_parameters`、:code:`fit` " +"和 :code:`evaluate` 方法如下。" #: ../../source/tutorial-quickstart-xgboost.rst:207 msgid "" @@ -13549,6 +14073,10 @@ msgid "" ":code:`get_parameters` when it is called by the server at the first " "round." 
msgstr "" +"与神经网络训练不同,XGBoost 树不是从指定的随机权重开始的。在这种情况下," +"我们不使用 :code:`get_parameters` 和 :code:`set_parameters` 来初始化 XGBoost " +"的模型参数。因此,当服务器在第一轮调用 :code:`get_parameters` 时,让我们在 " +":code:`get_parameters` 中返回一个空张量。" #: ../../source/tutorial-quickstart-xgboost.rst:248 msgid "" @@ -13559,6 +14087,10 @@ msgid "" ":code:`self.bst`, and then update model weights on local training data " "with function :code:`local_boost` as follows:" msgstr "" +"在 :code:`fit`中,第一轮我们调用 :code:`xgb.train()`来建立第一组树,返回的 " +"Booster 对象和 config 分别存储在 :code:`self.bst` 和 :code:`self.config` " +"中。从第二轮开始,我们将服务器发送的全局模型加载到 :code:`self.bst`," +"然后使用函数 :code:`local_boost`更新本地训练数据的模型权重,如下所示:" #: ../../source/tutorial-quickstart-xgboost.rst:266 msgid "" @@ -13566,18 +14098,24 @@ msgid "" ":code:`self.bst.update` method. After training, the last " ":code:`N=num_local_round` trees will be extracted to send to the server." msgstr "" +"给定 :code:`num_local_round`,我们通过调用 :code:`self.bst." +"update`方法更新树。训练结束后,我们将提取最后一个 :code:`N=num_local_round` " +"树并发送给服务器。" #: ../../source/tutorial-quickstart-xgboost.rst:288 msgid "" "In :code:`evaluate`, we call :code:`self.bst.eval_set` function to " "conduct evaluation on valid set. The AUC value will be returned." msgstr "" +"在 :code:`evaluate`中,我们调用 :code:`self.bst." +"eval_set`函数对有效集合进行评估。将返回 AUC 值。" #: ../../source/tutorial-quickstart-xgboost.rst:291 msgid "" "Now, we can create an instance of our class :code:`XgbClient` and add one" " line to actually run this client:" -msgstr "" +msgstr "现在,我们可以创建一个 :code:`XgbClient` " +"类的实例,并添加一行来实际运行该客户端:" #: ../../source/tutorial-quickstart-xgboost.rst:297 msgid "" @@ -13589,6 +14127,12 @@ msgid "" "server and clients running on different machines, all that needs to " "change is the :code:`server_address` we point the client at." msgstr "" +"这就是客户端。我们只需实现 :code:`客户端`并调用 :code:`fl.client." +"start_client()`。字符串 :code:`\"[::]:8080\"`会告诉客户端要连接的服务器。在本" +"例中,我们可以在同一台机器上运行服务器和客户端,因此我们使用 :code:`\"[::]:" +"8080\"" +"`。如果我们运行的是真正的联合工作负载,服务器和客户端运行在不同的机器上," +"那么需要改变的只是客户端指向的 :code:`server_address`。" #: ../../source/tutorial-quickstart-xgboost.rst:308 msgid "" @@ -13596,16 +14140,20 @@ msgid "" "produce a better model. Finally, the *server* sends this improved version" " of the model back to each *client* to finish a complete FL round." msgstr "" +"然后,这些更新会被发送到*服务器,由*服务器汇总后生成一个更好的模型。最后,*服" +"务器*将这个改进版的模型发回给每个*客户端*,以完成一轮完整的 FL。" #: ../../source/tutorial-quickstart-xgboost.rst:311 msgid "" "In a file named :code:`server.py`, import Flower and FedXgbBagging from " ":code:`flwr.server.strategy`." msgstr "" +"在名为 :code:`server.py` 的文件中,从 :code:`flwr.server.strategy` 导入 " +"Flower 和 FedXgbBagging。" #: ../../source/tutorial-quickstart-xgboost.rst:313 msgid "We first define a strategy for XGBoost bagging aggregation." -msgstr "" +msgstr "我们首先定义了 XGBoost 袋式聚合策略。" #: ../../source/tutorial-quickstart-xgboost.rst:336 msgid "" @@ -13613,20 +14161,22 @@ msgid "" ":code:`evaluate_metrics_aggregation` function is defined to collect and " "wighted average the AUC values from clients." msgstr "" +"本示例使用两个客户端。我们定义了一个 :code:`evaluate_metrics_aggregation` " +"函数,用于收集客户机的 AUC 值并求取平均值。" #: ../../source/tutorial-quickstart-xgboost.rst:339 msgid "Then, we start the server:" -msgstr "" +msgstr "然后,我们启动服务器:" #: ../../source/tutorial-quickstart-xgboost.rst:351 msgid "Tree-based bagging aggregation" -msgstr "" +msgstr "基于树的袋式聚合" #: ../../source/tutorial-quickstart-xgboost.rst:353 msgid "" "You must be curious about how bagging aggregation works. Let's look into " "the details." 
-msgstr "" +msgstr "你一定很好奇袋式聚合是如何工作的。让我们来详细了解一下。" #: ../../source/tutorial-quickstart-xgboost.rst:355 msgid "" @@ -13635,12 +14185,18 @@ msgid "" " Then, we override the :code:`aggregate_fit`, :code:`aggregate_evaluate` " "and :code:`evaluate` methods as follows:" msgstr "" +"在文件 :code:`flwr.server.strategy.fedxgb_bagging.py`中,我们定义了从 " +":code:`flwr.server.strategy.FedAvg`继承的 :code:`FedXgbBagging`。然后," +"我们覆盖 :code:`aggregate_fit`、:code:`aggregate_evaluate` 和 " +":code:`evaluate` 方法如下:" #: ../../source/tutorial-quickstart-xgboost.rst:451 msgid "" "In :code:`aggregate_fit`, we sequentially aggregate the clients' XGBoost " "trees by calling :code:`aggregate()` function:" msgstr "" +"在 :code:`aggregate_fit` 中,我们通过调用 :code:`aggregate()` 函数," +"按顺序聚合客户的 XGBoost 树:" #: ../../source/tutorial-quickstart-xgboost.rst:510 msgid "" @@ -13650,16 +14206,20 @@ msgid "" " After that, the trees (containing model weights) are aggregated to " "generate a new tree model." msgstr "" +"在该函数中,我们首先通过调用 :code:`_get_tree_nums` 获取当前模型和上一个模型" +"的树数和并行树数。然后,对获取的信息进行汇总。然后,汇总树(包含模型权重)," +"生成新的树模型。" #: ../../source/tutorial-quickstart-xgboost.rst:515 msgid "" "After traversal of all clients' models, a new global model is generated, " "followed by the serialisation, and sending back to each client." -msgstr "" +msgstr "在遍历所有客户的模型后,会生成一个新的全局模型,然后进行序列化,并发回给每个" +"客户。" #: ../../source/tutorial-quickstart-xgboost.rst:520 msgid "Launch Federated XGBoost!" -msgstr "" +msgstr "启动联邦 XGBoost!" #: ../../source/tutorial-quickstart-xgboost.rst:582 msgid "" @@ -13668,6 +14228,9 @@ msgid "" ":code:`metrics_distributed`. One can see that the average AUC increases " "over FL rounds." msgstr "" +"恭喜您!您已成功构建并运行了第一个联合 XGBoost 系统。可以在 " +":code:`metrics_distributed` 中查看 AUC 值。我们可以看到,平均 AUC 随 FL " +"轮数的增加而增加。" #: ../../source/tutorial-quickstart-xgboost.rst:587 msgid "" @@ -13675,10 +14238,12 @@ msgid "" "/xgboost-quickstart/>`_ for this example can be found in :code:`examples" "/xgboost-quickstart`." msgstr "" +"此示例的完整源代码 `_ 可在 :code:`examples/xgboost-quickstart` 中找到。" #: ../../source/tutorial-quickstart-xgboost.rst:591 msgid "Comprehensive Federated XGBoost" -msgstr "" +msgstr "全面的联邦 XGBoost" #: ../../source/tutorial-quickstart-xgboost.rst:593 msgid "" @@ -13690,10 +14255,15 @@ msgid "" " setups, including data partitioning and centralised/distributed " "evaluation. Let's take a look!" msgstr "" +"既然您已经知道联合 XGBoost 如何与 Flower " +"协同工作,那么现在就该通过自定义实验设置来运行一些更全面的实验了。在 xgboost-" +"comprehensive 示例 (`完整代码 `_)中,我们提供了更多选项来定义各种实验设置," +"包括数据分区和集中/分布式评估。让我们一起来看看!" #: ../../source/tutorial-quickstart-xgboost.rst:599 msgid "Customised data partitioning" -msgstr "" +msgstr "定制数据分区" #: ../../source/tutorial-quickstart-xgboost.rst:601 msgid "" @@ -13703,16 +14273,20 @@ msgid "" "provide four supported partitioner type to simulate the uniformity/non-" "uniformity in data quantity (uniform, linear, square, exponential)." 
msgstr "" +"在 :code:`dataset.py` 中,我们有一个函数 :code:`instantiate_partitioner` " +"来根据给定的 :code:`num_partitions` 和 :code:`partitioner_type` 来实例化数据" +"分区器。目前,我们提供四种支持的分区器类型(均匀、线性、正方形、指数)来模拟" +"数据量的均匀性/非均匀性。" #: ../../source/tutorial-quickstart-xgboost.rst:632 msgid "Customised centralised/distributed evaluation" -msgstr "" +msgstr "定制的集中/分布式评估" #: ../../source/tutorial-quickstart-xgboost.rst:634 msgid "" "To facilitate centralised evaluation, we define a function in " ":code:`server.py`:" -msgstr "" +msgstr "为便于集中评估,我们在 :code:`server.py` 中定义了一个函数:" #: ../../source/tutorial-quickstart-xgboost.rst:666 msgid "" @@ -13721,6 +14295,9 @@ msgid "" "evaluation is conducted by calling :code:`eval_set()` method, and the " "tested AUC value is reported." msgstr "" +"此函数返回一个评估函数,该函数实例化一个 :code:`Booster` " +"对象,并向其加载全局模型权重。评估通过调用 :code:`eval_set()` 方法进行," +"并报告测试的 AUC 值。" #: ../../source/tutorial-quickstart-xgboost.rst:669 msgid "" @@ -13728,17 +14305,20 @@ msgid "" "start example by overriding the :code:`evaluate()` method insides the " ":code:`XgbClient` class in :code:`client.py`." msgstr "" +"至于客户端上的分布式评估,与快速启动示例相同,通过覆盖 :code:`client.py` 中 " +":code:`XgbClient` 类内部的 :code:`evaluate()` 方法。" #: ../../source/tutorial-quickstart-xgboost.rst:673 msgid "Arguments parser" -msgstr "" +msgstr "参数 解析器" #: ../../source/tutorial-quickstart-xgboost.rst:675 msgid "" "In :code:`utils.py`, we define the arguments parsers for clients and " "server, allowing users to specify different experimental settings. Let's " "first see the sever side:" -msgstr "" +msgstr "在 :code:`utils.py` 中,我们定义了客户端和服务器端的参数解析器,允许用户指定" +"不同的实验设置。让我们先看看服务器端:" #: ../../source/tutorial-quickstart-xgboost.rst:714 msgid "" @@ -13748,31 +14328,36 @@ msgid "" "evaluation and all functionalities for client evaluation will be " "disabled." msgstr "" +"这允许用户指定总客户数/FL 轮数/参与客户数/评估客户数以及评估方式。请注意," +"如果使用 :code:`--centralised-eval`,S sever " +"将进行集中评估,客户端评估的所有功能将被禁用。" #: ../../source/tutorial-quickstart-xgboost.rst:718 msgid "Then, the argument parser on client side:" -msgstr "" +msgstr "然后是客户端的参数解析器:" #: ../../source/tutorial-quickstart-xgboost.rst:760 msgid "" "This defines various options for client data partitioning. Besides, " "clients also have a option to conduct evaluation on centralised test set " "by setting :code:`--centralised-eval`." -msgstr "" +msgstr "这定义了客户端数据分区的各种选项。此外,通过设置 :code:`-centralised-" +"eval`,客户端还可以选择在集中测试集上进行评估。" #: ../../source/tutorial-quickstart-xgboost.rst:764 msgid "Example commands" -msgstr "" +msgstr "命令示例" #: ../../source/tutorial-quickstart-xgboost.rst:766 msgid "" "To run a centralised evaluated experiment on 5 clients with exponential " "distribution for 50 rounds, we first start the server as below:" -msgstr "" +msgstr "为了在 5 个客户端上进行 50 " +"轮指数分布的集中评估实验,我们首先启动服务器,如下所示:" #: ../../source/tutorial-quickstart-xgboost.rst:773 msgid "Then, on each client terminal, we start the clients:" -msgstr "" +msgstr "然后,我们在每个客户终端上启动客户机:" #: ../../source/tutorial-quickstart-xgboost.rst:779 msgid "" @@ -13780,10 +14365,13 @@ msgid "" "/xgboost-comprehensive/>`_ for this comprehensive example can be found in" " :code:`examples/xgboost-comprehensive`." msgstr "" +"此综合示例的全部源代码 `_ 可在 :code:`examples/xgboost-comprehensive` " +"中找到。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:9 msgid "Build a strategy from scratch" -msgstr "" +msgstr "从零开始制定策略" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:11 msgid "" @@ -13795,6 +14383,12 @@ msgid "" " (`part 2 `__)." 
msgstr "" +"欢迎来到 Flower 联合学习教程的第三部分。在本教程的前几部分,我们介绍了 " +"PyTorch 和 Flower 的联合学习(`part 1 `__),并学习了如何使用策略来定制服务器和客户端的执行(`part 2 " +"`__)。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:13 msgid "" @@ -13803,6 +14397,9 @@ msgid "" " using `Flower `__ and `PyTorch " "`__)." msgstr "" +"在本笔记本中,我们将通过创建 FedAvg 的自定义版本(再次使用 `Flower " +"`__ 和 `PyTorch `__),继续定制我们之前构建的联合学习系统。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:15 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:16 @@ -13815,15 +14412,19 @@ msgid "" "you in the ``#introductions`` channel! And if anything is unclear, head " "over to the ``#questions`` channel." msgstr "" +"`Star Flower on GitHub `__ ⭐️ 并加入 Slack " +"上的 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼 我们希望在 ``#introductions`` " +"频道听到您的声音!如果有任何不清楚的地方,请访问 ``#questions`` 频道。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:17 msgid "Let's build a new ``Strategy`` from scratch!" -msgstr "" +msgstr "让我们从头开始构建一个新的 \"策略\"!" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:29 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:29 msgid "Preparation" -msgstr "" +msgstr "准备工作" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:31 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:32 @@ -13831,20 +14432,20 @@ msgstr "" msgid "" "Before we begin with the actual code, let's make sure that we have " "everything we need." -msgstr "" +msgstr "在开始实际代码之前,让我们先确保我们已经准备好了所需的一切。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:43 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:44 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:43 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:43 msgid "Installing dependencies" -msgstr "" +msgstr "安装依赖项" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:45 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:46 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:45 msgid "First, we install the necessary packages:" -msgstr "" +msgstr "首先,我们安装必要的软件包:" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:65 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:66 @@ -13853,7 +14454,7 @@ msgstr "" msgid "" "Now that we have all dependencies installed, we can import everything we " "need for this tutorial:" -msgstr "" +msgstr "现在我们已经安装了所有依赖项,可以导入本教程所需的所有内容:" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:101 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:102 @@ -13869,12 +14470,18 @@ msgid "" "has GPU acceleration enabled, you should see the output ``Training on " "cuda``, otherwise it'll say ``Training on cpu``." 
msgstr "" +"可以切换到已启用 GPU 加速的运行时(在 Google Colab 上: 运行时 > " +"更改运行时类型 > 硬件加速: GPU > 保存``)。但请注意,Google Colab " +"并非总能提供 GPU 加速。如果在以下部分中看到与 GPU 可用性相关的错误," +"请考虑通过设置 ``DEVICE = torch.device(\"cpu\")`` 切回基于 CPU 的执行。" +"如果运行时已启用 GPU 加速,你应该会看到输出``Training on cuda``,否则会显示``" +"Training on cpu``。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:114 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:115 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:114 msgid "Data loading" -msgstr "" +msgstr "数据加载" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:116 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:116 @@ -13885,12 +14492,16 @@ msgid "" " ``num_clients`` which allows us to call ``load_datasets`` with different" " numbers of clients." msgstr "" +"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 " +"个较小的数据集(每个数据集又分为训练集和验证集),并将所有数据都封装在各自的 " +"``DataLoader`` 中。我们引入了一个新参数 ``num_clients``," +"它允许我们使用不同数量的客户端调用 ``load_datasets``。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:167 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:168 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:167 msgid "Model training/evaluation" -msgstr "" +msgstr "模型培训/评估" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:169 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:170 @@ -13898,12 +14509,13 @@ msgstr "" msgid "" "Let's continue with the usual model definition (including " "``set_parameters`` and ``get_parameters``), training and test functions:" -msgstr "" +msgstr "让我们继续使用常见的模型定义(包括 `set_parameters` 和 " +"`get_parameters`)、训练和测试函数:" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:258 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:258 msgid "Flower client" -msgstr "" +msgstr "Flower 客户端" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:260 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:260 @@ -13913,14 +14525,18 @@ msgid "" "``get_parameters``, ``fit``, and ``evaluate``. Here, we also pass the " "``cid`` to the client and use it log additional details:" msgstr "" +"为了实现 Flower 客户端,我们(再次)创建了 ``flwr.client.NumPyClient`` " +"的子类,并实现了 ``get_parameters``、``fit`` 和 " +"``evaluate``三个方法。在这里,我们还将 ``cid`` " +"传递给客户端,并使用它记录其他详细信息:" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:308 msgid "Let's test what we have so far before we continue:" -msgstr "" +msgstr "在继续之前,让我们先测试一下我们目前掌握的情况:" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:339 msgid "Build a Strategy from scratch" -msgstr "" +msgstr "从零开始构建策略" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:341 msgid "" @@ -13930,18 +14546,22 @@ msgid "" " it is in ``FedAvg`` and then change the configuration dictionary (one of" " the ``FitIns`` attributes)." 
msgstr "" +"让我们重写 ``configure_fit`` " +"方法,使其向一部分客户的优化器传递更高的学习率(可能还有其他超参数)。" +"我们将保持 ``FedAvg`` 中的客户端采样,然后更改配置字典(``FitIns`` " +"属性之一)。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:507 msgid "" "The only thing left is to use the newly created custom Strategy " "``FedCustom`` when starting the experiment:" -msgstr "" +msgstr "剩下的唯一工作就是在启动实验时使用新创建的自定义策略 ``FedCustom`` :" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:534 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:932 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:697 msgid "Recap" -msgstr "" +msgstr "回顾" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:536 msgid "" @@ -13953,6 +14573,10 @@ msgid "" "functions to the constructor of your new class (``__init__``) and then " "call these functions whenever needed." msgstr "" +"在本笔记本中,我们了解了如何实施自定义策略。自定义策略可以对客户端节点配置、" +"结果聚合等进行细粒度控制。要定义自定义策略,只需覆盖(抽象)基类 ``Strategy``" +" 的抽象方法即可。为使自定义策略更加强大,您可以将自定义函数传递给新类的构造函" +"数(`__init__``),然后在需要时调用这些函数。" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:550 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:948 @@ -13963,6 +14587,8 @@ msgid "" "Before you continue, make sure to join the Flower community on Slack: " "`Join Slack `__" msgstr "" +"在继续之前,请务必加入 Slack 上的 Flower 社区:`Join Slack `__" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:552 #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:950 @@ -13972,7 +14598,8 @@ msgstr "" msgid "" "There's a dedicated ``#questions`` channel if you need help, but we'd " "also love to hear who you are in ``#introductions``!" -msgstr "" +msgstr "如果您需要帮助,我们有专门的 ``#questions`` 频道,但我们也很乐意在 " +"``#introductions`` 中了解您是谁!" #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:554 msgid "" @@ -13981,10 +14608,13 @@ msgid "" "pytorch.html>`__ introduces ``Client``, the flexible API underlying " "``NumPyClient``." msgstr "" +"Flower联邦学习教程 - 第4部分 `__ " +"介绍了``Client``,它是``NumPyClient``底层的灵活应用程序接口。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:9 msgid "Customize the client" -msgstr "" +msgstr "自定义客户端" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:11 msgid "" @@ -13998,6 +14628,14 @@ msgid "" "custom strategy from scratch (`part 3 `__)." msgstr "" +"欢迎来到 Flower 联邦学习教程的第四部分。在本教程的前几部分中,我们介绍了 " +"PyTorch 和 Flower 的联邦学习(`part 1 `__),了解了如何使用策略来定制服务器和客户端的执行(`part 2 " +"`__),并从头开始构建了我们自己的定制策略(`part 3 " +"`__)。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:14 msgid ""