diff --git a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po index 4a3330806129..e99c410e6c60 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po @@ -8,7 +8,7 @@ msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2023-11-23 18:31+0100\n" -"PO-Revision-Date: 2023-11-28 20:03+0000\n" +"PO-Revision-Date: 2023-12-07 11:04+0000\n" "Last-Translator: Yan Gao \n" "Language-Team: Chinese (Simplified) \n" @@ -482,13 +482,16 @@ msgstr "通过 ``pyproject.toml`` 从本地轮子文件安装 ``flwr``:" msgid "" "``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\" }`` (without" " extras)" -msgstr "" +msgstr "``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\" " +"}``(不含额外功能)"
#: ../../source/contributor-how-to-install-development-versions.rst:23 msgid "" "``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\", extras = " "[\"simulation\"] }`` (with extras)" msgstr "" +"``flwr = { path = \"../../dist/flwr-1.0.0-py3-none-any.whl\", extras = [" +"\"simulation\"] }``(包含额外功能)"
#: ../../source/contributor-how-to-install-development-versions.rst:25 msgid "" @@ -496,102 +499,115 @@ msgid "" "Dependency Specification `_" msgstr "" +"有关详细信息,请参阅 Poetry 文档: Poetry Dependency Specification `_"
#: ../../source/contributor-how-to-install-development-versions.rst:28 msgid "Using pip (recommended on Colab)" -msgstr "" +msgstr "使用 pip(建议在 Colab 上使用)"
#: ../../source/contributor-how-to-install-development-versions.rst:30 msgid "Install a ``flwr`` pre-release from PyPI:" -msgstr "" +msgstr "从 PyPI 安装 ``flwr`` 预发布版:"
#: ../../source/contributor-how-to-install-development-versions.rst:32 msgid "``pip install -U --pre flwr`` (without extras)" -msgstr "" +msgstr "``pip install -U --pre flwr``(不含额外功能)"
#: ../../source/contributor-how-to-install-development-versions.rst:33 msgid "``pip install -U --pre flwr[simulation]`` (with extras)" -msgstr "" +msgstr "``pip install -U --pre flwr[simulation]``(包含额外功能)"
#: ../../source/contributor-how-to-install-development-versions.rst:35 msgid "" "Python packages can be installed from git repositories. Use one of the " "following commands to install the Flower directly from GitHub." -msgstr "" +msgstr "Python 软件包可以从 git 仓库安装。使用以下命令之一直接从 GitHub 安装 Flower。"
#: ../../source/contributor-how-to-install-development-versions.rst:37 msgid "Install ``flwr`` from the default GitHub branch (``main``):" -msgstr "" +msgstr "从 GitHub 的默认分支(``main``)安装 ``flwr``:"
#: ../../source/contributor-how-to-install-development-versions.rst:39 msgid "" "``pip install flwr@git+https://github.com/adap/flower.git`` (without " "extras)" -msgstr "" +msgstr "``pip install flwr@git+https://github.com/adap/flower.git``(不含额外功能)"
#: ../../source/contributor-how-to-install-development-versions.rst:40 msgid "" "``pip install flwr[simulation]@git+https://github.com/adap/flower.git`` " "(with extras)" msgstr "" +"``pip install flwr[simulation]@git+https://github.com/adap/flower."
+"git``(带附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:42 msgid "Install ``flwr`` from a specific GitHub branch (``branch-name``):" -msgstr "" +msgstr "从特定的 GitHub 分支 (`分支名`) 安装 ``flwr``:" #: ../../source/contributor-how-to-install-development-versions.rst:44 msgid "" "``pip install flwr@git+https://github.com/adap/flower.git@branch-name`` " "(without extras)" msgstr "" +"`pip install flwr@git+https://github.com/adap/flower.git@branch-name`` " +"(不含附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:45 msgid "" "``pip install flwr[simulation]@git+https://github.com/adap/flower.git" "@branch-name`` (with extras)" -msgstr "" +msgstr "`pip安装flwr[模拟]@git+https://github.com/adap/flower." +"git@分支名``(带附加功能)" #: ../../source/contributor-how-to-install-development-versions.rst:49 msgid "Open Jupyter Notebooks on Google Colab" -msgstr "" +msgstr "在谷歌 Colab 上打开 Jupyter 笔记本" #: ../../source/contributor-how-to-install-development-versions.rst:51 msgid "" "Open the notebook ``doc/source/tutorial-get-started-with-flower-" "pytorch.ipynb``:" -msgstr "" +msgstr "打开笔记本 ``doc/source/tutorial-get-started-with-flower-pytorch.ipynb``:" #: ../../source/contributor-how-to-install-development-versions.rst:53 msgid "" "https://colab.research.google.com/github/adap/flower/blob/main/doc/source" "/tutorial-get-started-with-flower-pytorch.ipynb" msgstr "" +"https://colab.research.google.com/github/adap/flower/blob/main/doc/source/" +"tutorial-get-started-with-flower-pytorch.ipynb" #: ../../source/contributor-how-to-install-development-versions.rst:55 msgid "" "Open a development version of the same notebook from branch `branch-name`" " by changing ``main`` to ``branch-name`` (right after ``blob``):" msgstr "" +"将 ``main`` 改为 ``branch-name``(紧跟在 ``blob``之后),从分支 `branch-name`" +" 打开同一笔记本的开发版本:" #: ../../source/contributor-how-to-install-development-versions.rst:57 msgid "" "https://colab.research.google.com/github/adap/flower/blob/branch-" "name/doc/source/tutorial-get-started-with-flower-pytorch.ipynb" msgstr "" +"https://colab.research.google.com/github/adap/flower/blob/branch-name/doc/" +"source/tutorial-get-started-with-flower-pytorch.ipynb" #: ../../source/contributor-how-to-install-development-versions.rst:59 msgid "Install a `whl` on Google Colab:" -msgstr "" +msgstr "在 Google Colab 上安装 `whl`:" #: ../../source/contributor-how-to-install-development-versions.rst:61 msgid "" "In the vertical icon grid on the left hand side, select ``Files`` > " "``Upload to session storage``" -msgstr "" +msgstr "在左侧的垂直图标网格中,选择 \"文件\">\"上传到会话存储\"" #: ../../source/contributor-how-to-install-development-versions.rst:62 msgid "Upload the whl (e.g., ``flwr-1.6.0-py3-none-any.whl``)" -msgstr "" +msgstr "上传 whl(例如 ``flwr-1.6.0-py3-none-any.whl``)" #: ../../source/contributor-how-to-install-development-versions.rst:63 msgid "" @@ -599,20 +615,23 @@ msgid "" "matplotlib`` to ``!pip install -q 'flwr-1.6.0-py3-none-" "any.whl[simulation]' torch torchvision matplotlib``" msgstr "" +"将``!pip install -q 'flwr[simulation]' torch torchvision matplotlib``更改为``" +"!pip install -q 'flwr-1.6.0-py3-none-any.whl[simulation]' torch torch " +"torchvision matplotlib``" #: ../../source/contributor-how-to-release-flower.rst:2 msgid "Release Flower" -msgstr "" +msgstr "发布 Flower" #: ../../source/contributor-how-to-release-flower.rst:4 msgid "" "This document describes the current release process. It may or may not " "change in the future." 
-msgstr "" +msgstr "本文件描述了当前的发布流程。今后可能会有变化,也可能不会有变化。" #: ../../source/contributor-how-to-release-flower.rst:7 msgid "Before the release" -msgstr "" +msgstr "发布前" #: ../../source/contributor-how-to-release-flower.rst:9 msgid "" @@ -621,12 +640,17 @@ msgid "" "``v1.2.0``, you can use the following URL to see all commits that got " "merged into ``main`` since then:" msgstr "" +"更新更新日志 (``changelog.md``),加入上次发布后发生的所有相关变更。" +"如果上次发布的版本被标记为 ``v1.2.0``,则可以使用以下 URL 查看此后合并到 " +"``main`` 的所有提交:" #: ../../source/contributor-how-to-release-flower.rst:11 msgid "" "`GitHub: Compare v1.2.0...main " "`_" msgstr "" +"`GitHub: Compare v1.2.0...main `_" #: ../../source/contributor-how-to-release-flower.rst:13 msgid "" @@ -635,17 +659,21 @@ msgid "" "be ran multiple times and will update the names in the list if new " "contributors were added in the meantime)." msgstr "" +"感谢自上次发布以来做出贡献的作者。可以通过运行 ``./dev/add-shortlog.sh`` 方便" +"脚本来完成(可以多次运行,如果在此期间有新的贡献者加入,则会更新列表中的名字" +")。" #: ../../source/contributor-how-to-release-flower.rst:16 msgid "During the release" -msgstr "" +msgstr "在发布期间" #: ../../source/contributor-how-to-release-flower.rst:18 msgid "" "The version number of a release is stated in ``pyproject.toml``. To " "release a new version of Flower, the following things need to happen (in " "that order):" -msgstr "" +msgstr "版本号在 ``pyproject.toml`` 中说明。要发布 Flower " +"的新版本,需要完成以下工作(按顺序排列):" #: ../../source/contributor-how-to-release-flower.rst:20 msgid "" @@ -653,6 +681,8 @@ msgid "" "version number and date for the release you are building. Create a pull " "request with the change." msgstr "" +"更新 ``changelog.md`` 部分的标题 ``Unreleased`` " +"以包含你正在构建的版本的版本号和日期。创建一个包含更改的拉取请求。" #: ../../source/contributor-how-to-release-flower.rst:21 msgid "" @@ -661,92 +691,95 @@ msgid "" " draft release on GitHub containing the correct artifacts and the " "relevant part of the changelog." msgstr "" +"在 PR 合并后立即用版本号标记发布提交:``git tag v0.12.3``,然后``git push " +"--tags``。这将在 GitHub 上创建一个包含正确工件和更新日志相关部分的发布草案。" #: ../../source/contributor-how-to-release-flower.rst:22 msgid "Check the draft release on GitHub, and if everything is good, publish it." -msgstr "" +msgstr "检查 GitHub 上的发布稿,如果一切正常,就发布它。" #: ../../source/contributor-how-to-release-flower.rst:25 msgid "After the release" -msgstr "" +msgstr "发布后" #: ../../source/contributor-how-to-release-flower.rst:27 msgid "Create a pull request which contains the following changes:" -msgstr "" +msgstr "创建包含以下更改的拉取请求:" #: ../../source/contributor-how-to-release-flower.rst:29 msgid "Increase the minor version in ``pyproject.toml`` by one." -msgstr "" +msgstr "将 ``pyproject.toml`` 中的次要版本增加一个。" #: ../../source/contributor-how-to-release-flower.rst:30 msgid "Update all files which contain the current version number if necessary." -msgstr "" +msgstr "如有必要,更新包含当前版本号的所有文件。" #: ../../source/contributor-how-to-release-flower.rst:31 msgid "Add a new ``Unreleased`` section in ``changelog.md``." -msgstr "" +msgstr "在 ``changelog.md`` 中添加新的 ``Unreleased`` 部分。" #: ../../source/contributor-how-to-release-flower.rst:33 msgid "" "Merge the pull request on the same day (i.e., before a new nighly release" " gets published to PyPI)." 
-msgstr "" +msgstr "在同一天合并拉取请求(即在新版本发布到 PyPI 之前)。" #: ../../source/contributor-how-to-release-flower.rst:36 msgid "Publishing a pre-release" -msgstr "" +msgstr "发布预发布版本" #: ../../source/contributor-how-to-release-flower.rst:39 msgid "Pre-release naming" -msgstr "" +msgstr "释放前命名" #: ../../source/contributor-how-to-release-flower.rst:41 msgid "" "PyPI supports pre-releases (alpha, beta, release candiate). Pre-releases " "MUST use one of the following naming patterns:" -msgstr "" +msgstr "PyPI 支持预发布版本(alpha、beta、release " +"candiate)。预发布版本必须使用以下命名模式之一:" #: ../../source/contributor-how-to-release-flower.rst:43 msgid "Alpha: ``MAJOR.MINOR.PATCHaN``" -msgstr "" +msgstr "阿尔法 ``MAJOR.MINOR.PATCHaN``" #: ../../source/contributor-how-to-release-flower.rst:44 msgid "Beta: ``MAJOR.MINOR.PATCHbN``" -msgstr "" +msgstr "贝塔: ``MAJOR.MINOR.PATCHbN``" #: ../../source/contributor-how-to-release-flower.rst:45 msgid "Release candiate (RC): ``MAJOR.MINOR.PATCHrcN``" -msgstr "" +msgstr "版本代号 (RC): ``MAJOR.MINOR.PATCHrcN``" #: ../../source/contributor-how-to-release-flower.rst:47 msgid "Examples include:" -msgstr "" +msgstr "例子包括:" #: ../../source/contributor-how-to-release-flower.rst:49 msgid "``1.0.0a0``" -msgstr "" +msgstr "``1.0.0a0``" #: ../../source/contributor-how-to-release-flower.rst:50 msgid "``1.0.0b0``" -msgstr "" +msgstr "``1.0.0b0``" #: ../../source/contributor-how-to-release-flower.rst:51 msgid "``1.0.0rc0``" -msgstr "" +msgstr "``1.0.0rc0``" #: ../../source/contributor-how-to-release-flower.rst:52 msgid "``1.0.0rc1``" -msgstr "" +msgstr "``1.0.0rc1``" #: ../../source/contributor-how-to-release-flower.rst:54 msgid "" "This is in line with PEP-440 and the recommendations from the Python " "Packaging Authority (PyPA):" -msgstr "" +msgstr "这符合 PEP-440 和 Python 包装管理局 (PyPA) 的建议:" #: ../../source/contributor-how-to-release-flower.rst:57 msgid "`PEP-440 `_" -msgstr "" +msgstr "`PEP-440 `_" #: ../../source/contributor-how-to-release-flower.rst:58 msgid "" @@ -754,6 +787,8 @@ msgid "" "`_" msgstr "" +"`PyPA 选择版本控制方案 `_" #: ../../source/contributor-how-to-release-flower.rst:60 msgid "" @@ -762,33 +797,37 @@ msgid "" "`_ (specifically item " "11 on precedence)." msgstr "" +"请注意,PyPA 所定义的方法与 SemVer 2.0.0 " +"规范不兼容,详情请查阅《语义版本规范》`_(特别是关于优先级的第 11 项)。" #: ../../source/contributor-how-to-release-flower.rst:63 msgid "Pre-release classification" -msgstr "" +msgstr "发布前分类" #: ../../source/contributor-how-to-release-flower.rst:65 msgid "Should the next pre-release be called alpha, beta, or release candidate?" -msgstr "" +msgstr "下一个预发布版应该叫阿尔法版、贝塔版还是候选发布版?" #: ../../source/contributor-how-to-release-flower.rst:67 msgid "" "RC: feature complete, no known issues (apart from issues that are " "classified as \"won't fix\" for the next stable release) - if no issues " "surface this will become the next stable release" -msgstr "" +msgstr "RC:功能完整,无已知问题(除了下一个稳定版中被列为 \"不会修复 \"的问题" +")--如果没有问题出现,这将成为下一个稳定版" #: ../../source/contributor-how-to-release-flower.rst:68 msgid "Beta: feature complete, allowed to have known issues" -msgstr "" +msgstr "贝塔版:功能完整,允许存在已知问题" #: ../../source/contributor-how-to-release-flower.rst:69 msgid "Alpha: not feature complete, allowed to have known issues" -msgstr "" +msgstr "阿尔法版:功能不完整,允许存在已知问题" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:2 msgid "Set up a virtual env" -msgstr "" +msgstr "建立虚拟环境" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:4 msgid "" @@ -797,10 +836,13 @@ msgid "" "environment with pyenv virtualenv, poetry, or Anaconda. 
You can follow " "the instructions or choose your preferred setup." msgstr "" +"建议在虚拟环境中运行 Python 设置。本指南展示了如何使用 pyenv virtualenv、" +"poes 或 Anaconda " +"创建虚拟环境的三个不同示例。您可以按照说明或选择您喜欢的设置。" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:9 msgid "Python Version" -msgstr "" +msgstr "Python 版本" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:11 #: ../../source/how-to-install-flower.rst:8 @@ -809,10 +851,12 @@ msgid "" "but `Python 3.10 `_ or above is " "recommended." msgstr "" +"Flower 至少需要 `Python 3.8 `_,但建议使用 `" +"Python 3.10 `_或更高版本。" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:14 msgid "Virutualenv with Pyenv/Virtualenv" -msgstr "" +msgstr "Virutualenv 和 Pyenv/Virtualenv" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:16 msgid "" @@ -821,24 +865,29 @@ msgid "" "/pyenv-virtualenv>`_. Please see `Flower examples " "`_ for details." msgstr "" +"其中一个推荐的虚拟环境是 `pyenv `_/`" +"virtualenv `_。详情请参见 `Flower " +"示例 `_。" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:18 msgid "" "Once Pyenv is set up, you can use it to install `Python Version 3.10 " "`_ or above:" msgstr "" +"一旦设置好 Pyenv,就可以用它来安装 `Python 3.10 `_ 或更高版本:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:24 msgid "Create the virtualenv with:" -msgstr "" +msgstr "创建虚拟环境:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:31 msgid "Activate the virtualenv by running the following command:" -msgstr "" +msgstr "运行以下命令激活 virtualenv:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:39 msgid "Virtualenv with Poetry" -msgstr "" +msgstr "有诗意的 Virtualenv" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:41 msgid "" @@ -846,16 +895,18 @@ msgid "" "poetry.org/docs/>`_ to manage dependencies. After installing Poetry you " "simply create a virtual environment with:" msgstr "" +"Flower 示例基于 `Poetry `_ 来管理依赖关系。" +"安装 Poetry 后,只需创建一个虚拟环境即可:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:47 msgid "" "If you open a new terminal you can activate the previously created " "virtual environment with the following command:" -msgstr "" +msgstr "如果打开一个新终端,可以使用以下命令激活之前创建的虚拟环境:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:55 msgid "Virtualenv with Anaconda" -msgstr "" +msgstr "使用 Anaconda 的 Virtualenv" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:57 msgid "" @@ -864,28 +915,33 @@ msgid "" "/user-guide/install/index.html>`_ package. After setting it up you can " "create a virtual environment with:" msgstr "" +"如果你更喜欢在虚拟环境中使用 Anaconda,那么请安装并设置 `conda `_ " +"软件包。设置完成后,您就可以使用以下工具创建虚拟环境:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:63 msgid "and activate the virtual environment with:" -msgstr "" +msgstr "并激活虚拟环境:" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:71 msgid "And then?" -msgstr "" +msgstr "然后呢?" #: ../../source/contributor-how-to-set-up-a-virtual-env.rst:73 msgid "" "As soon as you created your virtual environment you clone one of the " "`Flower examples `_." msgstr "" +"创建虚拟环境后,您可以克隆一个 `Flower 示例 `_。" #: ../../source/contributor-how-to-write-documentation.rst:2 msgid "Write documentation" -msgstr "" +msgstr "编写文件" #: ../../source/contributor-how-to-write-documentation.rst:6 msgid "Project layout" -msgstr "" +msgstr "项目布局" #: ../../source/contributor-how-to-write-documentation.rst:8 msgid "" @@ -893,6 +949,8 @@ msgid "" " documentation system supports both reStructuredText (``.rst`` files) and" " Markdown (``.md`` files)." 
msgstr "" +"Flower 文档位于 ``doc`` 目录中。基于 Sphinx 的文档系统支持 " +"reStructuredText(``.rst`` 文件)和 Markdown(``.md`` 文件)。" #: ../../source/contributor-how-to-write-documentation.rst:10 #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:119 @@ -902,44 +960,46 @@ msgid "" "_` needs to be installed on the " "system." msgstr "" +"请注意,要在本地构建文档(使用 ``poetry run make html``,如下所述)," +"系统上必须安装 ``Pandoc _`。" #: ../../source/contributor-how-to-write-documentation.rst:14 msgid "Edit an existing page" -msgstr "" +msgstr "编辑现有页面" #: ../../source/contributor-how-to-write-documentation.rst:16 msgid "Edit an existing ``.rst`` (or ``.md``) file under ``doc/source/``" -msgstr "" +msgstr "编辑 ``doc/source/`` 下现有的 ``.rst`` (或 ``.md``) 文件" #: ../../source/contributor-how-to-write-documentation.rst:17 #: ../../source/contributor-how-to-write-documentation.rst:27 msgid "Compile the docs: ``cd doc``, then ``poetry run make html``" -msgstr "" +msgstr "编译文档: cd doc``,然后 ``poetry run make html``" #: ../../source/contributor-how-to-write-documentation.rst:18 #: ../../source/contributor-how-to-write-documentation.rst:28 msgid "Open ``doc/build/html/index.html`` in the browser to check the result" -msgstr "" +msgstr "在浏览器中打开 ``doc/build/html/index.html`` 查看结果" #: ../../source/contributor-how-to-write-documentation.rst:22 msgid "Create a new page" -msgstr "" +msgstr "创建新页面" #: ../../source/contributor-how-to-write-documentation.rst:24 msgid "Add new ``.rst`` file under ``doc/source/``" -msgstr "" +msgstr "在 ``doc/source/`` 下添加新的 ``.rst`` 文件" #: ../../source/contributor-how-to-write-documentation.rst:25 msgid "Add content to the new ``.rst`` file" -msgstr "" +msgstr "为新的 ``.rst`` 文件添加内容" #: ../../source/contributor-how-to-write-documentation.rst:26 msgid "Link to the new rst from ``index.rst``" -msgstr "" +msgstr "从 ``index.rst`` 链接到新的 rst" #: ../../source/contributor-ref-good-first-contributions.rst:2 msgid "Good first contributions" -msgstr "" +msgstr "良好的首批捐款" #: ../../source/contributor-ref-good-first-contributions.rst:4 msgid "" @@ -948,33 +1008,36 @@ msgid "" "where to start to increase your chances of getting your PR accepted into " "the Flower codebase." msgstr "" +"我们欢迎为《鲜花》投稿!然而,要知道从哪里开始并非易事。因此,我们提出了一些" +"建议,告诉您从哪里开始,以增加您的 PR 被 Flower 代码库接受的机会。" #: ../../source/contributor-ref-good-first-contributions.rst:11 msgid "Where to start" -msgstr "" +msgstr "从哪里开始" #: ../../source/contributor-ref-good-first-contributions.rst:13 msgid "" "Until the Flower core library matures it will be easier to get PR's " "accepted if they only touch non-core areas of the codebase. Good " "candidates to get started are:" -msgstr "" +msgstr "在 Flower 核心库成熟之前,如果 PR " +"只涉及代码库中的非核心区域,则会更容易被接受。可以从以下方面入手:" #: ../../source/contributor-ref-good-first-contributions.rst:17 msgid "Documentation: What's missing? What could be expressed more clearly?" -msgstr "" +msgstr "文件: 缺少什么?哪些内容可以表达得更清楚?" #: ../../source/contributor-ref-good-first-contributions.rst:18 msgid "Baselines: See below." -msgstr "" +msgstr "基线: 见下文。" #: ../../source/contributor-ref-good-first-contributions.rst:19 msgid "Examples: See below." -msgstr "" +msgstr "举例说明: 见下文。" #: ../../source/contributor-ref-good-first-contributions.rst:23 msgid "Request for Flower Baselines" -msgstr "" +msgstr "Flower 基线申请" #: ../../source/contributor-ref-good-first-contributions.rst:25 msgid "" @@ -982,6 +1045,8 @@ msgid "" "out our `contributing guide for baselines `_." 
msgstr "" +"如果您对 Flower Baselines 还不熟悉,也许应该看看我们的 \"基线贡献指南 " +"`_\"。" #: ../../source/contributor-ref-good-first-contributions.rst:27 msgid "" @@ -991,39 +1056,43 @@ msgid "" " and that has no assignes, feel free to assign it to yourself and start " "working on it!" msgstr "" +"然后,您应该查看开放的 `issues `_ 基线请求。如果您" +"发现了自己想做的基线,而它还没有被分配,请随时把它分配给自己,然后开始工作!" #: ../../source/contributor-ref-good-first-contributions.rst:31 msgid "" "Otherwise, if you don't find a baseline you'd like to work on, be sure to" " open a new issue with the baseline request template!" -msgstr "" +msgstr "否则,如果您没有找到想要处理的基线,请务必使用基线请求模板打开一个新问题!" #: ../../source/contributor-ref-good-first-contributions.rst:34 msgid "Request for examples" -msgstr "" +msgstr "要求提供范例" #: ../../source/contributor-ref-good-first-contributions.rst:36 msgid "" "We wish we had more time to write usage examples because we believe they " "help users to get started with building what they want to build. Here are" " a few ideas where we'd be happy to accept a PR:" -msgstr "" +msgstr "我们希望有更多的时间来撰写使用示例,因为我们相信这些示例可以帮助用户开始构建" +"他们想要构建的东西。以下是我们乐意接受 PR 的几个想法:" #: ../../source/contributor-ref-good-first-contributions.rst:40 msgid "Llama 2 fine-tuning, with Hugging Face Transformers and PyTorch" -msgstr "" +msgstr "微调 \"拉玛 2\",使用 \"抱脸变形金刚 \"和 PyTorch" #: ../../source/contributor-ref-good-first-contributions.rst:41 msgid "XGBoost" -msgstr "" +msgstr "XGBoost" #: ../../source/contributor-ref-good-first-contributions.rst:42 msgid "Android ONNX on-device training" -msgstr "" +msgstr "安卓 ONNX 设备上培训" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:2 msgid "Secure Aggregation Protocols" -msgstr "" +msgstr "安全聚合协议" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:4 msgid "" @@ -1032,10 +1101,13 @@ msgid "" " not be accurate in practice. The SecAgg protocol can be considered as a " "special case of the SecAgg+ protocol." msgstr "" +"包括 SecAgg、SecAgg+ 和 LightSecAgg 协议。LightSecAgg " +"协议尚未实施,因此其图表和抽象在实践中可能并不准确。SecAgg 协议可视为 SecAgg+" +" 协议的特例。" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:8 msgid "The :code:`SecAgg+` abstraction" -msgstr "" +msgstr "代码:`SecAgg+` 抽象" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:10 #: ../../source/contributor-ref-secure-aggregation-protocols.rst:161 @@ -1044,31 +1116,33 @@ msgid "" "(int) for secure aggregation, and thus many python dictionaries used have" " keys of int type rather than ClientProxy type." msgstr "" +"在此实现中,将为每个客户端分配一个唯一索引(int),以确保聚合的安全性," +"因此使用的许多 python 字典的键都是 int 类型,而不是 ClientProxy 类型。" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:65 #: ../../source/contributor-ref-secure-aggregation-protocols.rst:198 msgid "" "The Flower server will execute and process received results in the " "following order:" -msgstr "" +msgstr "Flower 服务器将按以下顺序执行和处理收到的结果:" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:159 msgid "The :code:`LightSecAgg` abstraction" -msgstr "" +msgstr "代码:`LightSecAgg` 抽象" #: ../../source/contributor-ref-secure-aggregation-protocols.rst:271 msgid "Types" -msgstr "" +msgstr "类型" #: ../../source/contributor-tutorial-contribute-on-github.rst:2 msgid "Contribute on GitHub" -msgstr "" +msgstr "在 GitHub 上投稿" #: ../../source/contributor-tutorial-contribute-on-github.rst:4 msgid "" "This guide is for people who want to get involved with Flower, but who " "are not used to contributing to GitHub projects." 
-msgstr "" +msgstr "本指南适用于想参与 Flower,但不习惯为 GitHub 项目贡献的人。" #: ../../source/contributor-tutorial-contribute-on-github.rst:6 msgid "" @@ -1078,14 +1152,18 @@ msgid "" "examples of `good first contributions `_." msgstr "" +"如果您熟悉如何在 GitHub 上贡献,可以直接查看我们的 \"贡献者入门指南\" " +"`_ 和 " +"\"优秀的首次贡献示例\" `_。" #: ../../source/contributor-tutorial-contribute-on-github.rst:12 msgid "Setting up the repository" -msgstr "" +msgstr "建立资源库" #: ../../source/contributor-tutorial-contribute-on-github.rst:23 msgid "**Create a GitHub account and setup Git**" -msgstr "" +msgstr "**创建 GitHub 账户并设置 Git**" #: ../../source/contributor-tutorial-contribute-on-github.rst:15 msgid "" @@ -1095,19 +1173,23 @@ msgid "" "follow this `guide `_ to set it up." msgstr "" +"Git 是一种分布式版本控制工具。它可以将整个代码库的历史记录保存在每个开发人员" +"的机器上。您需要在本地计算机上安装该软件,可以按照本指南 `_ 进行设置。" #: ../../source/contributor-tutorial-contribute-on-github.rst:18 msgid "" "GitHub, itself, is a code hosting platform for version control and " "collaboration. It allows for everyone to collaborate and work from " "anywhere on remote repositories." -msgstr "" +msgstr "GitHub 本身是一个用于版本控制和协作的代码托管平台。它允许每个人在任何地方对远" +"程仓库进行协作和工作。" #: ../../source/contributor-tutorial-contribute-on-github.rst:20 msgid "" "If you haven't already, you will need to create an account on `GitHub " "`_." -msgstr "" +msgstr "如果还没有,您需要在 `GitHub `_ 上创建一个账户。" #: ../../source/contributor-tutorial-contribute-on-github.rst:22 msgid "" @@ -1116,10 +1198,13 @@ msgid "" "locally and keep track of them using Git and then you upload your new " "history back to GitHub." msgstr "" +"通用的 Git 和 GitHub 工作流程背后的理念可以归结为:从 GitHub " +"上的远程仓库下载代码,在本地进行修改并使用 Git 进行跟踪," +"然后将新的历史记录上传回 GitHub。" #: ../../source/contributor-tutorial-contribute-on-github.rst:34 msgid "**Forking the Flower repository**" -msgstr "" +msgstr "**叉花仓库**" #: ../../source/contributor-tutorial-contribute-on-github.rst:26 msgid "" @@ -1128,6 +1213,9 @@ msgid "" "connected to your GitHub account) and click the ``Fork`` button situated " "on the top right of the page." msgstr "" +"fork 是 GitHub 仓库的个人副本。要为 Flower 创建一个 fork,您必须导航到 " +"https://github.com/adap/flower(同时连接到您的 GitHub 账户)," +"然后点击页面右上方的 ``Fork`` 按钮。" #: ../../source/contributor-tutorial-contribute-on-github.rst:31 msgid "" @@ -1136,10 +1224,13 @@ msgid "" "(i.e., in your own list of repositories). Once created, you should see on" " the top left corner that you are looking at your own version of Flower." msgstr "" +"您可以更改名称,但没有必要,因为这个版本的 Flower 将是您自己的,并位于您自己" +"的账户中(即,在您自己的版本库列表中)。创建完成后,您会在左上角看到自己的 " +"Flower 版本。" #: ../../source/contributor-tutorial-contribute-on-github.rst:49 msgid "**Cloning your forked repository**" -msgstr "" +msgstr "**克隆你的分叉仓库**" #: ../../source/contributor-tutorial-contribute-on-github.rst:37 msgid "" @@ -1148,26 +1239,30 @@ msgid "" "first click on the ``Code`` button on the right, this will give you the " "ability to copy the HTTPS link of the repository." msgstr "" +"下一步是在你的机器上下载分叉版本库,以便对其进行修改。在分叉版本库页面上," +"首先点击右侧的 \"代码 \"按钮,这样就能复制版本库的 HTTPS 链接。" #: ../../source/contributor-tutorial-contribute-on-github.rst:43 msgid "" "Once you copied the \\, you can open a terminal on your machine, " "navigate to the place you want to download the repository to and type:" -msgstr "" +msgstr "一旦复制了 (),你就可以在你的机器上打开一个终端,导航到你想下载软件源的地方,然后键入:" #: ../../source/contributor-tutorial-contribute-on-github.rst:49 msgid "" "This will create a `flower/` (or the name of your fork if you renamed it)" " folder in the current working directory." 
-msgstr "" +msgstr "这将在当前工作目录下创建一个 `flower/`(如果重命名了,则使用 fork " +"的名称)文件夹。" #: ../../source/contributor-tutorial-contribute-on-github.rst:68 msgid "**Add origin**" -msgstr "" +msgstr "**添加原产地**" #: ../../source/contributor-tutorial-contribute-on-github.rst:52 msgid "You can then go into the repository folder:" -msgstr "" +msgstr "然后,您就可以进入存储库文件夹:" #: ../../source/contributor-tutorial-contribute-on-github.rst:58 msgid "" @@ -1176,26 +1271,30 @@ msgid "" "previously mentioned by going to our fork repository on our GitHub " "account and copying the link." msgstr "" +"在这里,我们需要为我们的版本库添加一个 origin。origin 是远程 fork 仓库的 \\<" +"URL/>。要获得它,我们可以像前面提到的那样,访问 GitHub " +"账户上的分叉仓库并复制链接。" #: ../../source/contributor-tutorial-contribute-on-github.rst:63 msgid "" "Once the \\ is copied, we can type the following command in our " "terminal:" -msgstr "" +msgstr "一旦复制了 \\ ,我们就可以在终端中键入以下命令:" #: ../../source/contributor-tutorial-contribute-on-github.rst:92 msgid "**Add upstream**" -msgstr "" +msgstr "**增加上游**" #: ../../source/contributor-tutorial-contribute-on-github.rst:71 msgid "" "Now we will add an upstream address to our repository. Still in the same " "directroy, we must run the following command:" -msgstr "" +msgstr "现在,我们要为版本库添加一个上游地址。还是在同一目录下,我们必须运行以下命令" +":" #: ../../source/contributor-tutorial-contribute-on-github.rst:78 msgid "The following diagram visually explains what we did in the previous steps:" -msgstr "" +msgstr "下图直观地解释了我们在前面步骤中的操作:" #: ../../source/contributor-tutorial-contribute-on-github.rst:82 msgid "" @@ -1205,16 +1304,20 @@ msgid "" "remote address of the forked repository we created, i.e. the copy (fork) " "in our own account." msgstr "" +"上游是父版本库(这里是 Flower)的 GitHub " +"远程地址,即我们最终要贡献的版本库,因此需要最新的历史记录。origin " +"只是我们创建的分叉仓库的 GitHub 远程地址,即我们自己账户中的副本(分叉)。" #: ../../source/contributor-tutorial-contribute-on-github.rst:86 msgid "" "To make sure our local version of the fork is up-to-date with the latest " "changes from the Flower repository, we can execute the following command:" -msgstr "" +msgstr "为了确保本地版本的分叉程序与 Flower " +"代码库的最新更改保持一致,我们可以执行以下命令:" #: ../../source/contributor-tutorial-contribute-on-github.rst:95 msgid "Setting up the coding environment" -msgstr "" +msgstr "设置编码环境" #: ../../source/contributor-tutorial-contribute-on-github.rst:97 msgid "" @@ -1222,92 +1325,96 @@ msgid "" "contributors`_ (note that you won't need to clone the repository). Once " "you are able to write code and test it, you can finally start making " "changes!" -msgstr "" +msgstr "您可以按照这份 \"贡献者入门指南\"__(注意,您不需要克隆版本库)来实现这一点。" +"一旦您能够编写代码并进行测试,您就可以开始修改了!" #: ../../source/contributor-tutorial-contribute-on-github.rst:102 msgid "Making changes" -msgstr "" +msgstr "做出改变" #: ../../source/contributor-tutorial-contribute-on-github.rst:104 msgid "" "Before making any changes make sure you are up-to-date with your " "repository:" -msgstr "" +msgstr "在进行任何更改之前,请确保您的版本库是最新的:" #: ../../source/contributor-tutorial-contribute-on-github.rst:110 msgid "And with Flower's repository:" -msgstr "" +msgstr "还有Flower的存储库:" #: ../../source/contributor-tutorial-contribute-on-github.rst:124 msgid "**Create a new branch**" -msgstr "" +msgstr "**创建一个新分支**" #: ../../source/contributor-tutorial-contribute-on-github.rst:117 msgid "" "To make the history cleaner and easier to work with, it is good practice " "to create a new branch for each feature/project that needs to be " "implemented." 
-msgstr "" +msgstr "为了使历史记录更简洁、更易于操作,为每个需要实现的功能/项目创建一个新分支是个" +"不错的做法。" #: ../../source/contributor-tutorial-contribute-on-github.rst:120 msgid "" "To do so, just run the following command inside the repository's " "directory:" -msgstr "" +msgstr "为此,只需在版本库目录下运行以下命令即可:" #: ../../source/contributor-tutorial-contribute-on-github.rst:127 msgid "**Make changes**" -msgstr "" +msgstr "**进行修改**" #: ../../source/contributor-tutorial-contribute-on-github.rst:127 msgid "Write great code and create wonderful changes using your favorite editor!" -msgstr "" +msgstr "使用您最喜欢的编辑器编写优秀的代码并创建精彩的更改!" #: ../../source/contributor-tutorial-contribute-on-github.rst:140 msgid "**Test and format your code**" -msgstr "" +msgstr "**测试并格式化您的代码**" #: ../../source/contributor-tutorial-contribute-on-github.rst:130 msgid "" "Don't forget to test and format your code! Otherwise your code won't be " "able to be merged into the Flower repository. This is done so the " "codebase stays consistent and easy to understand." -msgstr "" +msgstr "不要忘记测试和格式化您的代码!否则您的代码将无法并入 Flower " +"代码库。这样做是为了使代码库保持一致并易于理解。" #: ../../source/contributor-tutorial-contribute-on-github.rst:133 msgid "To do so, we have written a few scripts that you can execute:" -msgstr "" +msgstr "为此,我们编写了一些脚本供您执行:" #: ../../source/contributor-tutorial-contribute-on-github.rst:152 msgid "**Stage changes**" -msgstr "" +msgstr "**舞台变化**" #: ../../source/contributor-tutorial-contribute-on-github.rst:143 msgid "" "Before creating a commit that will update your history, you must specify " "to Git which files it needs to take into account." -msgstr "" +msgstr "在创建更新历史记录的提交之前,必须向 Git 说明需要考虑哪些文件。" #: ../../source/contributor-tutorial-contribute-on-github.rst:145 msgid "This can be done with:" -msgstr "" +msgstr "这可以通过:" #: ../../source/contributor-tutorial-contribute-on-github.rst:151 msgid "" "To check which files have been modified compared to the last version " "(last commit) and to see which files are staged for commit, you can use " "the :code:`git status` command." -msgstr "" +msgstr "要查看与上一版本(上次提交)相比哪些文件已被修改,以及哪些文件处于提交阶段," +"可以使用 :code:`git status` 命令。" #: ../../source/contributor-tutorial-contribute-on-github.rst:162 msgid "**Commit changes**" -msgstr "" +msgstr "**提交更改**" #: ../../source/contributor-tutorial-contribute-on-github.rst:155 msgid "" "Once you have added all the files you wanted to commit using :code:`git " "add`, you can finally create your commit using this command:" -msgstr "" +msgstr "使用 :code:`git add` 添加完所有要提交的文件后,就可以使用此命令创建提交了:" #: ../../source/contributor-tutorial-contribute-on-github.rst:161 msgid "" @@ -1315,58 +1422,63 @@ msgid "" "does. It should be written in an imperative style and be concise. An " "example would be :code:`git commit -m \"Add images to README\"`." msgstr "" +" " +"用于向他人解释提交的作用。它应该以命令式风格书写,并且简明扼要。例如 :code:`" +"git commit -m \"Add images to README\"`。" #: ../../source/contributor-tutorial-contribute-on-github.rst:173 msgid "**Push the changes to the fork**" -msgstr "" +msgstr "**将更改推送到分叉**" #: ../../source/contributor-tutorial-contribute-on-github.rst:165 msgid "" "Once we have committed our changes, we have effectively updated our local" " history, but GitHub has no way of knowing this unless we push our " "changes to our origin's remote address:" -msgstr "" +msgstr "一旦提交了修改,我们就有效地更新了本地历史记录,但除非我们将修改推送到原点的" +"远程地址,否则 GitHub 无法得知:" #: ../../source/contributor-tutorial-contribute-on-github.rst:172 msgid "" "Once this is done, you will see on the GitHub that your forked repo was " "updated with the changes you have made." 
-msgstr "" +msgstr "完成此操作后,您将在 GitHub 上看到您的分叉仓库已根据您所做的更改进行了更新。" #: ../../source/contributor-tutorial-contribute-on-github.rst:176 msgid "Creating and merging a pull request (PR)" -msgstr "" +msgstr "创建和合并拉取请求 (PR)" #: ../../source/contributor-tutorial-contribute-on-github.rst:203 msgid "**Create the PR**" -msgstr "" +msgstr "**创建 PR**" #: ../../source/contributor-tutorial-contribute-on-github.rst:179 msgid "" "Once you have pushed changes, on the GitHub webpage of your repository " "you should see the following message:" -msgstr "" +msgstr "推送更改后,在仓库的 GitHub 网页上应该会看到以下信息:" #: ../../source/contributor-tutorial-contribute-on-github.rst:183 msgid "Otherwise you can always find this option in the `Branches` page." -msgstr "" +msgstr "否则,您可以在 \"分支 \"页面找到该选项。" #: ../../source/contributor-tutorial-contribute-on-github.rst:185 msgid "" "Once you click the `Compare & pull request` button, you should see " "something similar to this:" -msgstr "" +msgstr "点击 \"比较和拉取请求 \"按钮后,您应该会看到类似下面的内容:" #: ../../source/contributor-tutorial-contribute-on-github.rst:189 msgid "At the top you have an explanation of which branch will be merged where:" -msgstr "" +msgstr "在顶部,你可以看到关于哪个分支将被合并的说明:" #: ../../source/contributor-tutorial-contribute-on-github.rst:193 msgid "" "In this example you can see that the request is to merge the branch " "``doc-fixes`` from my forked repository to branch ``main`` from the " "Flower repository." -msgstr "" +msgstr "在这个例子中,你可以看到请求将我分叉的版本库中的分支 ``doc-fixes`` 合并到 " +"Flower 版本库中的分支 ``main``。" #: ../../source/contributor-tutorial-contribute-on-github.rst:195 msgid "" @@ -1374,163 +1486,175 @@ msgid "" "does and to link it to existing issues. We have placed comments (that " "won't be rendered once the PR is opened) to guide you through the " "process." -msgstr "" +msgstr "中间的输入框供您描述 PR " +"的作用,并将其与现有问题联系起来。我们在此放置了注释(一旦 PR " +"打开,注释将不会显示),以指导您完成整个过程。" #: ../../source/contributor-tutorial-contribute-on-github.rst:198 msgid "" "At the bottom you will find the button to open the PR. This will notify " "reviewers that a new PR has been opened and that they should look over it" " to merge or to request changes." -msgstr "" +msgstr "在底部,您可以找到打开 PR 的按钮。这将通知审核人员新的 PR 已经打开," +"他们应该查看该 PR 以进行合并或要求修改。" #: ../../source/contributor-tutorial-contribute-on-github.rst:201 msgid "" "If your PR is not yet ready for review, and you don't want to notify " "anyone, you have the option to create a draft pull request:" -msgstr "" +msgstr "如果您的 PR " +"尚未准备好接受审核,而且您不想通知任何人,您可以选择创建一个草案拉取请求:" #: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "**Making new changes**" -msgstr "" +msgstr "**作出新的改变**" #: ../../source/contributor-tutorial-contribute-on-github.rst:206 msgid "" "Once the PR has been opened (as draft or not), you can still push new " "commits to it the same way we did before, by making changes to the branch" " associated with the PR." -msgstr "" +msgstr "一旦 PR 被打开(无论是否作为草案),你仍然可以像以前一样,通过修改与 PR " +"关联的分支来推送新的提交。" #: ../../source/contributor-tutorial-contribute-on-github.rst:228 msgid "**Review the PR**" -msgstr "" +msgstr "**审查 PR**" #: ../../source/contributor-tutorial-contribute-on-github.rst:209 msgid "" "Once the PR has been opened or once the draft PR has been marked as " "ready, a review from code owners will be automatically requested:" -msgstr "" +msgstr "一旦 PR 被打开或 PR 草案被标记为就绪,就会自动要求代码所有者进行审核:" #: ../../source/contributor-tutorial-contribute-on-github.rst:213 msgid "" "Code owners will then look into the code, ask questions, request changes " "or validate the PR." 
-msgstr "" +msgstr "然后,代码所有者会查看代码、提出问题、要求修改或验证 PR。" #: ../../source/contributor-tutorial-contribute-on-github.rst:215 msgid "Merging will be blocked if there are ongoing requested changes." -msgstr "" +msgstr "如果有正在进行的更改请求,合并将被阻止。" #: ../../source/contributor-tutorial-contribute-on-github.rst:219 msgid "" "To resolve them, just push the necessary changes to the branch associated" " with the PR:" -msgstr "" +msgstr "要解决这些问题,只需将必要的更改推送到与 PR 关联的分支即可:" #: ../../source/contributor-tutorial-contribute-on-github.rst:223 msgid "And resolve the conversation:" -msgstr "" +msgstr "并解决对话:" #: ../../source/contributor-tutorial-contribute-on-github.rst:227 msgid "" "Once all the conversations have been resolved, you can re-request a " "review." -msgstr "" +msgstr "一旦所有对话都得到解决,您就可以重新申请审核。" #: ../../source/contributor-tutorial-contribute-on-github.rst:248 msgid "**Once the PR is merged**" -msgstr "" +msgstr "**一旦 PR 被合并**" #: ../../source/contributor-tutorial-contribute-on-github.rst:231 msgid "" "If all the automatic tests have passed and reviewers have no more changes" " to request, they can approve the PR and merge it." -msgstr "" +msgstr "如果所有自动测试都已通过,且审核员不再需要修改,他们就可以批准 PR " +"并将其合并。" #: ../../source/contributor-tutorial-contribute-on-github.rst:235 msgid "" "Once it is merged, you can delete the branch on GitHub (a button should " "appear to do so) and also delete it locally by doing:" -msgstr "" +msgstr "合并后,您可以在 GitHub " +"上删除该分支(会出现一个删除按钮),也可以在本地删除该分支:" #: ../../source/contributor-tutorial-contribute-on-github.rst:242 msgid "Then you should update your forked repository by doing:" -msgstr "" +msgstr "然后,你应该更新你的分叉仓库:" #: ../../source/contributor-tutorial-contribute-on-github.rst:251 msgid "Example of first contribution" -msgstr "" +msgstr "首次捐款实例" #: ../../source/contributor-tutorial-contribute-on-github.rst:254 msgid "Problem" -msgstr "" +msgstr "问题" #: ../../source/contributor-tutorial-contribute-on-github.rst:256 msgid "" "For our documentation, we’ve started to use the `Diàtaxis framework " "`_." -msgstr "" +msgstr "对于我们的文档,我们已经开始使用 \"Diàtaxis 框架 `_\"。" #: ../../source/contributor-tutorial-contribute-on-github.rst:258 msgid "" "Our “How to” guides should have titles that continue the sencence “How to" " …”, for example, “How to upgrade to Flower 1.0”." -msgstr "" +msgstr "我们的 \"如何 \"指南的标题应延续 \"如何...... \"的句式,例如 \"如何升级到 " +"Flower 1.0\"。" #: ../../source/contributor-tutorial-contribute-on-github.rst:260 msgid "" "Most of our guides do not follow this new format yet, and changing their " "title is (unfortunately) more involved than one might think." -msgstr "" +msgstr "我们的大多数指南还没有采用这种新格式,而更改其标题(不幸的是)比人们想象的要" +"复杂得多。" #: ../../source/contributor-tutorial-contribute-on-github.rst:262 msgid "" "This issue is about changing the title of a doc from present continious " "to present simple." -msgstr "" +msgstr "这个问题是关于将文档标题从现在进行时改为现在进行时。" #: ../../source/contributor-tutorial-contribute-on-github.rst:264 msgid "" "Let's take the example of “Saving Progress” which we changed to “Save " "Progress”. Does this pass our check?" -msgstr "" +msgstr "以 \"保存进度 \"为例,我们将其改为 \"保存进度\"。这是否通过了我们的检查?" 
#: ../../source/contributor-tutorial-contribute-on-github.rst:266 msgid "Before: ”How to saving progress” ❌" -msgstr "" +msgstr "之前: ”How to saving progress” ❌"
#: ../../source/contributor-tutorial-contribute-on-github.rst:268 msgid "After: ”How to save progress” ✅" -msgstr "" +msgstr "之后: ”How to save progress” ✅"
#: ../../source/contributor-tutorial-contribute-on-github.rst:271 msgid "Solution" -msgstr "" +msgstr "解决方案"
#: ../../source/contributor-tutorial-contribute-on-github.rst:273 msgid "" "This is a tiny change, but it’ll allow us to test your end-to-end setup. " "After cloning and setting up the Flower repo, here’s what you should do:" -msgstr "" +msgstr "这只是一个很小的改动,但可以让我们测试你的端到端设置。克隆并设置好 Flower " +"repo 后,你应该这样做:"
#: ../../source/contributor-tutorial-contribute-on-github.rst:275 msgid "Find the source file in `doc/source`" -msgstr "" +msgstr "在 `doc/source` 中找到源文件"
#: ../../source/contributor-tutorial-contribute-on-github.rst:276 msgid "" "Make the change in the `.rst` file (beware, the dashes under the title " "should be the same length as the title itself)" -msgstr "" +msgstr "在 `.rst` 文件中进行修改(注意,标题下的短横线应与标题本身的长度相同)"
#: ../../source/contributor-tutorial-contribute-on-github.rst:277 msgid "" "Build the docs and check the result: ``_" msgstr "" +"构建文档并检查结果: ``_"
#: ../../source/contributor-tutorial-contribute-on-github.rst:280 msgid "Rename file" -msgstr "" +msgstr "重命名文件"
#: ../../source/contributor-tutorial-contribute-on-github.rst:282 msgid "" @@ -1539,76 +1663,82 @@ msgid "" "is **very important** to avoid that, breaking links can harm our search " "engine ranking." msgstr "" +"您可能已经注意到,文件名仍然沿用了旧的措辞。如果我们只是直接改名,就会破坏指" +"向该文件的所有现有链接。避免这种情况**非常重要**,因为失效的链接会损害我们的" +"搜索引擎排名。"
#: ../../source/contributor-tutorial-contribute-on-github.rst:285 msgid "Here’s how to change the file name:" -msgstr "" +msgstr "下面是更改文件名的方法:"
#: ../../source/contributor-tutorial-contribute-on-github.rst:287 msgid "Change the file name to `save-progress.rst`" -msgstr "" +msgstr "将文件名改为 `save-progress.rst`"
#: ../../source/contributor-tutorial-contribute-on-github.rst:288 msgid "Add a redirect rule to `doc/source/conf.py`" -msgstr "" +msgstr "在 `doc/source/conf.py` 中添加重定向规则"
#: ../../source/contributor-tutorial-contribute-on-github.rst:290 msgid "" "This will cause a redirect from `saving-progress.html` to `save-" "progress.html`, old links will continue to work." -msgstr "" +msgstr "这将使 `saving-progress.html` 重定向到 `save-progress.html`,旧链接将继续有效。"
#: ../../source/contributor-tutorial-contribute-on-github.rst:293 msgid "Apply changes in the index file" -msgstr "" +msgstr "在索引文件中进行相应修改"
#: ../../source/contributor-tutorial-contribute-on-github.rst:295 msgid "" "For the lateral navigation bar to work properly, it is very important to " "update the `index.rst` file as well. This is where we define the whole " "arborescence of the navbar."
-msgstr "" +msgstr "要使横向导航栏正常工作,更新 `index.rst` " +"文件也非常重要。我们就是在这里定义整个导航栏的结构。" #: ../../source/contributor-tutorial-contribute-on-github.rst:298 msgid "Find and modify the file name in `index.rst`" -msgstr "" +msgstr "查找并修改 `index.rst` 中的文件名" #: ../../source/contributor-tutorial-contribute-on-github.rst:301 msgid "Open PR" -msgstr "" +msgstr "开放式 PR" #: ../../source/contributor-tutorial-contribute-on-github.rst:303 msgid "" "Commit the changes (commit messages are always imperative: “Do " "something”, in this case “Change …”)" -msgstr "" +msgstr "提交更改(提交信息总是命令式的:\"做某事\",这里是 \"更改......\")" #: ../../source/contributor-tutorial-contribute-on-github.rst:304 msgid "Push the changes to your fork" -msgstr "" +msgstr "将更改推送到分叉" #: ../../source/contributor-tutorial-contribute-on-github.rst:305 msgid "Open a PR (as shown above)" -msgstr "" +msgstr "打开 PR(如上图所示)" #: ../../source/contributor-tutorial-contribute-on-github.rst:306 msgid "Wait for it to be approved!" -msgstr "" +msgstr "等待审批!" #: ../../source/contributor-tutorial-contribute-on-github.rst:307 msgid "Congrats! 🥳 You're now officially a Flower contributor!" -msgstr "" +msgstr "祝贺你 🥳 您现在正式成为 \"Flower \"贡献者!" #: ../../source/contributor-tutorial-contribute-on-github.rst:311 msgid "How to write a good PR title" -msgstr "" +msgstr "如何撰写好的公关标题" #: ../../source/contributor-tutorial-contribute-on-github.rst:313 msgid "" "A well-crafted PR title helps team members quickly understand the purpose" " and scope of the changes being proposed. Here's a guide to help you " "write a good GitHub PR title:" -msgstr "" +msgstr "一个精心撰写的公关标题能帮助团队成员迅速了解所提修改的目的和范围。" +"以下指南可帮助您撰写一个好的 GitHub PR 标题:" #: ../../source/contributor-tutorial-contribute-on-github.rst:315 msgid "" @@ -1619,61 +1749,66 @@ msgid "" "it Short: Avoid lengthy titles for easy readability. 1. Use Proper " "Capitalization and Punctuation: Follow grammar rules for clarity." msgstr "" +"1. 简明扼要: 以简明扼要的方式清楚地概述变化。1. 使用可操作的动词: 使用 " +"\"添加\"、\"更新 \"或 \"修复 \"等动词来表明目的。1. 包含相关信息: " +"提及受影响的功能或模块以了解上下文。1. 简短:避免冗长的标题,以方便阅读。1. 
" +"使用正确的大小写和标点符号: 遵守语法规则,以确保清晰。" #: ../../source/contributor-tutorial-contribute-on-github.rst:321 msgid "" "Let's start with a few examples for titles that should be avoided because" " they do not provide meaningful information:" -msgstr "" +msgstr "让我们先举例说明几个应该避免使用的标题,因为它们不能提供有意义的信息:" #: ../../source/contributor-tutorial-contribute-on-github.rst:323 msgid "Implement Algorithm" -msgstr "" +msgstr "执行算法" #: ../../source/contributor-tutorial-contribute-on-github.rst:324 msgid "Database" -msgstr "" +msgstr "数据库" #: ../../source/contributor-tutorial-contribute-on-github.rst:325 msgid "Add my_new_file.py to codebase" -msgstr "" +msgstr "在代码库中添加 my_new_file.py" #: ../../source/contributor-tutorial-contribute-on-github.rst:326 msgid "Improve code in module" -msgstr "" +msgstr "改进模块中的代码" #: ../../source/contributor-tutorial-contribute-on-github.rst:327 msgid "Change SomeModule" -msgstr "" +msgstr "更改 SomeModule" #: ../../source/contributor-tutorial-contribute-on-github.rst:329 msgid "" "Here are a few positive examples which provide helpful information " "without repeating how they do it, as that is already visible in the " "\"Files changed\" section of the PR:" -msgstr "" +msgstr "这里有几个正面的例子,提供了有用的信息,但没有重复他们是如何做的,因为在 PR " +"的 \"已更改文件 \"部分已经可以看到:" #: ../../source/contributor-tutorial-contribute-on-github.rst:331 msgid "Update docs banner to mention Flower Summit 2023" -msgstr "" +msgstr "更新文件横幅,提及 2023 年 Flower 峰会" #: ../../source/contributor-tutorial-contribute-on-github.rst:332 msgid "Remove unnecessary XGBoost dependency" -msgstr "" +msgstr "移除不必要的 XGBoost 依赖性" #: ../../source/contributor-tutorial-contribute-on-github.rst:333 msgid "Remove redundant attributes in strategies subclassing FedAvg" -msgstr "" +msgstr "删除 FedAvg 子类化策略中的多余属性" #: ../../source/contributor-tutorial-contribute-on-github.rst:334 msgid "Add CI job to deploy the staging system when the `main` branch changes" -msgstr "" +msgstr "添加 CI 作业,以便在 \"主 \"分支发生变化时部署暂存系统" #: ../../source/contributor-tutorial-contribute-on-github.rst:335 msgid "" "Add new amazing library which will be used to improve the simulation " "engine" -msgstr "" +msgstr "添加新的惊人库,用于改进模拟引擎" #: ../../source/contributor-tutorial-contribute-on-github.rst:339 #: ../../source/tutorial-series-build-a-strategy-from-scratch-pytorch.ipynb:548 @@ -1682,13 +1817,13 @@ msgstr "" #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:713 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:367 msgid "Next steps" -msgstr "" +msgstr "接下来的步骤" #: ../../source/contributor-tutorial-contribute-on-github.rst:341 msgid "" "Once you have made your first PR, and want to contribute more, be sure to" " check out the following :" -msgstr "" +msgstr "一旦您完成了第一份 PR,并希望做出更多贡献,请务必查看以下内容:" #: ../../source/contributor-tutorial-contribute-on-github.rst:343 msgid "" @@ -1696,30 +1831,32 @@ msgid "" "ref-good-first-contributions.html>`_, where you should particularly look " "into the :code:`baselines` contributions." 
msgstr "" +"好的第一批贡献 `_,在这里你应该特别看看 :code:`baselines` 的贡献。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:2 msgid "Get started as a contributor" -msgstr "" +msgstr "成为贡献者" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:5 msgid "Prerequisites" -msgstr "" +msgstr "先决条件" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:7 msgid "`Python 3.7 `_ or above" -msgstr "" +msgstr "Python 3.7 `_ 或更高版本" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:8 msgid "`Poetry 1.3 `_ or above" -msgstr "" +msgstr "`Poetry 1.3 `_ 或更高版本" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:9 msgid "(Optional) `pyenv `_" -msgstr "" +msgstr "(可选) `pyenv `_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:10 msgid "(Optional) `pyenv-virtualenv `_" -msgstr "" +msgstr "(可选) `pyenv-virtualenv `_" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:12 msgid "" @@ -1727,16 +1864,19 @@ msgid "" "development tools (the ones which support it). Poetry is a build tool " "which supports `PEP 517 `_." msgstr "" +"Flower 使用 :code:`pyproject.toml` 来管理依赖关系和配置开发工具(支持它的)。" +"Poetry 是一种支持 `PEP 517 `_ " +"的构建工具。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:18 msgid "Developer Machine Setup" -msgstr "" +msgstr "开发者机器设置" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:20 msgid "" "First, clone the `Flower repository `_ " "from GitHub::" -msgstr "" +msgstr "首先,从 GitHub 克隆 \"Flower 存储库 `_\":" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:26 msgid "" @@ -1746,6 +1886,10 @@ msgid "" "default it will use :code:`Python 3.8.17`, but you can change it by " "providing a specific :code:``)::" msgstr "" +"其次,创建虚拟环境(并激活它)。如果您选择使用 :code:`pyenv`(使用 :code" +":`pyenv-virtualenv`插件),并且已经安装了该插件,则可以使用下面的便捷脚本(" +"默认情况下使用 :code:`Python3.8.17`,但您可以通过提供特定的 " +":code:`<版本>`来更改)::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:33 msgid "" @@ -1753,16 +1897,19 @@ msgid "" "script that will install pyenv, set it up and create the virtual " "environment (with :code:`Python 3.8.17` by default)::" msgstr "" +"如果没有安装 :code:`pyenv`,可以使用以下脚本安装 pyenv、设置并创建虚拟环境(" +"默认使用 :code:`Python3.8.17)::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:39 msgid "" "Third, install the Flower package in development mode (think :code:`pip " "install -e`) along with all necessary dependencies::" -msgstr "" +msgstr "第三,在开发模式下安装 Flower 软件包(想想 :code:`pip install " +"-e`)以及所有必要的依赖项::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:46 msgid "Convenience Scripts" -msgstr "" +msgstr "便捷脚本" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:48 msgid "" @@ -1771,26 +1918,28 @@ msgid "" ":code:`/dev` subdirectory for a full list. 
The following scripts are " "amonst the most important ones:" msgstr "" +"Flower 软件仓库包含大量便捷脚本,可使重复性开发任务更轻松、更不易出错。" +"完整列表请参见 :code:`/dev` 子目录。以下是最重要的脚本:" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:54 msgid "Create/Delete Virtual Environment" -msgstr "" +msgstr "创建/删除虚拟环境" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:62 msgid "Compile ProtoBuf Definitions" -msgstr "" +msgstr "编译 ProtoBuf 定义" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:69 msgid "Auto-Format Code" -msgstr "" +msgstr "自动格式化代码" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:76 msgid "Run Linters and Tests" -msgstr "" +msgstr "运行分类器和测试" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:83 msgid "Run Github Actions (CI) locally" -msgstr "" +msgstr "在本地运行 Github 操作 (CI)" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:85 msgid "" @@ -1799,32 +1948,35 @@ msgid "" "Please refer to the installation instructions under the linked repository" " and run the next command under Flower main cloned repository folder::" msgstr "" +"开发人员可以使用 `Act _` 在本地环境下运行全套 " +"Github Actions 工作流程。请参考链接仓库下的安装说明,并在 Flower " +"主克隆仓库文件夹下运行下一条命令::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:92 msgid "" "The Flower default workflow would run by setting up the required Docker " "machines underneath." -msgstr "" +msgstr "Flower 默认工作流程将通过在下面设置所需的 Docker 机器来运行。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:97 msgid "Build Release" -msgstr "" +msgstr "版本发布" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:99 msgid "" "Flower uses Poetry to build releases. The necessary command is wrapped in" " a simple script::" -msgstr "" +msgstr "Flower 使用 Poetry 创建发布版本。必要的命令封装在一个简单的脚本中::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:104 msgid "" "The resulting :code:`.whl` and :code:`.tar.gz` releases will be stored in" " the :code:`/dist` subdirectory." -msgstr "" +msgstr "生成的 :code:`.whl` 和 :code:`.tar.gz` 版本将存储在 :code:`/dist` 子目录中。" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:109 msgid "Build Documentation" -msgstr "" +msgstr "构建文档" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:111 msgid "" @@ -1832,14 +1984,16 @@ msgid "" "There's no convenience script to re-build the documentation yet, but it's" " pretty easy::" msgstr "" +"Flower 的文档使用 `Sphinx `_。目前还没有方便的脚本来重新构建文档,但这很容易::" #: ../../source/contributor-tutorial-get-started-as-a-contributor.rst:117 msgid "This will generate HTML documentation in ``doc/build/html``." -msgstr "" +msgstr "这将在 ``doc/build/html`` 中生成 HTML 文档。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:2 msgid "Example: FedBN in PyTorch - From Centralized To Federated" -msgstr "" +msgstr "示例: PyTorch 中的 FedBN - 从集中式到联合式" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:4 msgid "" @@ -1852,11 +2006,17 @@ msgid "" "PyTorch - From Centralized To Federated `_." 
msgstr "" +"本教程将向您展示如何使用 \"Flower \"为现有的机器学习工作负载构建一个联合版本" +",并使用 \"FedBN `_\"(一种针对非 iid " +"数据设计的联合训练策略)。我们正在使用 PyTorch 在 CIFAR-10 " +"数据集上训练一个卷积神经网络(带有批量归一化层)。在应用 FedBN 时,与 \"示例 " +"\"相比只需做少量改动: PyTorch - 从集中到 Federated `_。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:9 #: ../../source/example-pytorch-from-centralized-to-federated.rst:10 msgid "Centralized Training" -msgstr "" +msgstr "集中训练" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:10 msgid "" @@ -1865,17 +2025,20 @@ msgid "" "federated.html>`_. The only thing to do is modifying the file called " ":code:`cifar.py`, revised part is shown below:" msgstr "" +"所有文件均根据 `Example: PyTorch - From Centralized To Federated " +"`_。唯一要做的就是修改名为 :code:`cifar.py` 的文件,修改部分如下所示:" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:13 msgid "" "The model architecture defined in class Net() is added with Batch " "Normalization layers accordingly." -msgstr "" +msgstr "类 Net() 中定义的模型架构会相应添加批量规范化层。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:41 #: ../../source/example-pytorch-from-centralized-to-federated.rst:157 msgid "You can now run your machine learning workload:" -msgstr "" +msgstr "现在,您可以运行机器学习工作负载:" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:47 msgid "" @@ -1884,11 +2047,14 @@ msgid "" "federated learning system within FedBN, the sytstem consists of one " "server and two clients." msgstr "" +"到目前为止,如果您以前使用过 " +"PyTorch,这一切看起来应该相当熟悉。让我们进行下一步,使用我们所构建的内容在 " +"FedBN 中创建一个联合学习系统,该系统由一个服务器和两个客户端组成。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:51 #: ../../source/example-pytorch-from-centralized-to-federated.rst:167 msgid "Federated Training" -msgstr "" +msgstr "联合培训" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:53 msgid "" @@ -1900,12 +2066,19 @@ msgid "" "PyTorch - From Centralized To Federated `_. first." msgstr "" +"如果你读过 `Example: PyTorch - From Centralized To Federated `_,下面的部分就很容易理解了,只需要修改 :code:`get_parameters` 和 " +":code:`set_parameters` 中的 :code:`client.py` 函数。如果没有,请阅读 \"示例\"" +": PyTorch - 从集中到联合 `_。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:56 msgid "" "Our example consists of one *server* and two *clients*. In FedBN, " ":code:`server.py` keeps unchanged, we can start the server directly." -msgstr "" +msgstr "我们的示例包括一个*服务器*和两个*客户端*。在 FedBN 中,:code:`server.py` " +"保持不变,我们可以直接启动服务器。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:62 msgid "" @@ -1914,17 +2087,21 @@ msgid "" "we will exclude batch normalization parameters from model parameter list " "when sending to or receiving from the server." msgstr "" +"最后,我们将修改 *client* 的逻辑,修改 :code:`client.py` 中的 " +":code:`get_parameters` 和 :code:`set_parameters`,在向服务器发送或从服务器接" +"收时,我们将从模型参数列表中排除批量规范化参数。" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:85 msgid "Now, you can now open two additional terminal windows and run" -msgstr "" +msgstr "现在,您可以打开另外两个终端窗口,运行" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:91 msgid "" "in each window (make sure that the server is still running before you do " "so) and see your (previously centralized) PyTorch project run federated " "learning with FedBN strategy across two clients. Congratulations!" -msgstr "" +msgstr "确保服务器仍在运行),然后就能看到你的 PyTorch 项目(之前是集中式的)通过 " +"FedBN 策略在两个客户端上运行联合学习。恭喜您!" 
#: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:94 #: ../../source/example-jax-from-centralized-to-federated.rst:277 @@ -1932,7 +2109,7 @@ msgstr "" #: ../../source/example-pytorch-from-centralized-to-federated.rst:310 #: ../../source/tutorial-quickstart-jax.rst:283 msgid "Next Steps" -msgstr "" +msgstr "下一步工作" #: ../../source/example-fedbn-pytorch-from-centralized-to-federated.rst:96 msgid "" @@ -1944,10 +2121,15 @@ msgid "" "using different subsets of CIFAR-10 on each client? How about adding more" " clients?" msgstr "" +"本示例的完整源代码可在 `_ 找到。当然,我们的示例有些过于简单," +"因为两个客户端都加载了完全相同的数据集,这并不现实。现在,您已经准备好进一步" +"探讨这一主题了。在每个客户端使用不同的 CIFAR-10 " +"子集如何?增加更多客户端如何?" #: ../../source/example-jax-from-centralized-to-federated.rst:2 msgid "Example: JAX - Run JAX Federated" -msgstr "" +msgstr "示例: JAX - 联合运行 JAX" #: ../../source/example-jax-from-centralized-to-federated.rst:4 #: ../../source/tutorial-quickstart-jax.rst:10 @@ -1963,6 +2145,13 @@ msgid "" " tutorial`. Then, we build upon the centralized training code to run the " "training in a federated fashion." msgstr "" +"本教程将向您展示如何使用 Flower 构建现有 JAX 工作负载的联合版本。我们将使用 " +"JAX 在 scikit-learn 数据集上训练线性回归模型。我们将采用与 \"PyTorch - " +"从集中到联合 `_ 演练 \"类似的示例结构。首先,我们根据 \"使用 JAX " +"的线性回归 `_ 教程 \"构建集中式训练方法" +"。然后,我们在集中式训练代码的基础上以联合方式运行训练。" #: ../../source/example-jax-from-centralized-to-federated.rst:10 #: ../../source/tutorial-quickstart-jax.rst:16 @@ -1970,11 +2159,13 @@ msgid "" "Before we start building our JAX example, we need install the packages " ":code:`jax`, :code:`jaxlib`, :code:`scikit-learn`, and :code:`flwr`:" msgstr "" +"在开始构建 JAX 示例之前,我们需要安装软件包 " +":code:`jax`、:code:`jaxlib`、:code:`scikit-learn` 和 :code:`flwr`:" #: ../../source/example-jax-from-centralized-to-federated.rst:18 #: ../../source/tutorial-quickstart-jax.rst:24 msgid "Linear Regression with JAX" -msgstr "" +msgstr "使用 JAX 进行线性回归" #: ../../source/example-jax-from-centralized-to-federated.rst:20 #: ../../source/tutorial-quickstart-jax.rst:26 @@ -1984,6 +2175,9 @@ msgid "" "explanation of what's going on then have a look at the official `JAX " "documentation `_." msgstr "" +"首先,我们将简要介绍基于 :code:`Linear Regression` " +"模型的集中式训练代码。如果您想获得更深入的解释,请参阅官方的 `JAX 文档 " +"`_。" #: ../../source/example-jax-from-centralized-to-federated.rst:23 #: ../../source/tutorial-quickstart-jax.rst:29 @@ -1997,20 +2191,27 @@ msgid "" "not yet import the :code:`flwr` package for federated learning. This will" " be done later." msgstr "" +"让我们创建一个名为 :code:`jax_training.py` " +"的新文件,其中包含传统(集中式)线性回归训练所需的所有组件。首先,需要导入 " +"JAX 包 :code:`jax` 和 :code:`jaxlib`。此外,我们还需要导入 :code:`sklearn`," +"因为我们使用 :code:`make_regression` 创建数据集,并使用 " +":code:`train_test_split` 将数据集拆分成训练集和测试集。你可以看到," +"我们还没有导入用于联合学习的 :code:`flwr` 软件包。这将在稍后完成。" #: ../../source/example-jax-from-centralized-to-federated.rst:37 #: ../../source/tutorial-quickstart-jax.rst:43 msgid "" "The :code:`load_data()` function loads the mentioned training and test " "sets." -msgstr "" +msgstr ":code:`load_data()` 函数会加载上述训练集和测试集。" #: ../../source/example-jax-from-centralized-to-federated.rst:47 #: ../../source/tutorial-quickstart-jax.rst:53 msgid "" "The model architecture (a very simple :code:`Linear Regression` model) is" " defined in :code:`load_model()`." 
-msgstr "" +msgstr "模型结构(一个非常简单的 :code:` 线性回归模型)在 :code:`load_model()` " +"中定义。" #: ../../source/example-jax-from-centralized-to-federated.rst:59 #: ../../source/tutorial-quickstart-jax.rst:65 @@ -2021,6 +2222,10 @@ msgid "" " is separate since JAX takes derivatives with a :code:`grad()` function " "(defined in the :code:`main()` function and called in :code:`train()`)." msgstr "" +"现在,我们需要定义训练(函数 " +":code:`train()`),它循环遍历训练集,并测量每批训练示例的损失(函数 " +":code:`loss_fn()`)。由于 JAX 使用 :code:`grad()` 函数(在 :code:`main()` " +"函数中定义,并在 :code:`train()` 中调用)提取导数,因此损失函数是独立的。" #: ../../source/example-jax-from-centralized-to-federated.rst:77 #: ../../source/tutorial-quickstart-jax.rst:83 @@ -2028,7 +2233,8 @@ msgid "" "The evaluation of the model is defined in the function " ":code:`evaluation()`. The function takes all test examples and measures " "the loss of the linear regression model." -msgstr "" +msgstr "模型的评估在函数 :code:`evaluation()` " +"中定义。该函数获取所有测试示例,并测量线性回归模型的损失。" #: ../../source/example-jax-from-centralized-to-federated.rst:88 #: ../../source/tutorial-quickstart-jax.rst:94 @@ -2038,11 +2244,14 @@ msgid "" "As already mentioned, the :code:`jax.grad()` function is defined in " ":code:`main()` and passed to :code:`train()`." msgstr "" +"在定义了数据加载、模型架构、训练和评估之后,我们就可以把所有东西放在一起," +"使用 JAX 训练我们的模型了。如前所述,:code:`jax.grad()` 函数在 :code:`main()`" +" 中定义,并传递给 :code:`train()`。" #: ../../source/example-jax-from-centralized-to-federated.rst:105 #: ../../source/tutorial-quickstart-jax.rst:111 msgid "You can now run your (centralized) JAX linear regression workload:" -msgstr "" +msgstr "现在您可以运行(集中式)JAX 线性回归工作负载:" #: ../../source/example-jax-from-centralized-to-federated.rst:111 #: ../../source/tutorial-quickstart-jax.rst:117 @@ -2051,11 +2260,13 @@ msgid "" "Let's take the next step and use what we've built to create a simple " "federated learning system consisting of one server and two clients." msgstr "" +"到目前为止,如果你以前使用过 JAX,就会对这一切感到相当熟悉。下一步,让我们利" +"用已构建的内容创建一个由一个服务器和两个客户端组成的简单联合学习系统。" #: ../../source/example-jax-from-centralized-to-federated.rst:115 #: ../../source/tutorial-quickstart-jax.rst:121 msgid "JAX meets Flower" -msgstr "" +msgstr "JAX 遇见Flower" #: ../../source/example-jax-from-centralized-to-federated.rst:117 #: ../../source/tutorial-quickstart-jax.rst:123 @@ -2069,6 +2280,11 @@ msgid "" "parameter updates. This describes one round of the federated learning " "process, and we repeat this for multiple rounds." msgstr "" +"联合现有工作负载的概念始终是相同的,也很容易理解。我们必须启动一个*服务器*," +"然后对连接到*服务器*的*客户端*使用 :code:`jax_training.py`中的代码。服务器*向" +"客户端发送模型参数。客户端*运行训练并更新参数。更新后的参数被发回*服务器,*服" +"务器对所有收到的参数更新进行平均。以上描述的是一轮联合学习过程,我们将重复进" +"行多轮学习。" #: ../../source/example-jax-from-centralized-to-federated.rst:123 #: ../../source/example-mxnet-walk-through.rst:204 @@ -2080,13 +2296,16 @@ msgid "" ":code:`flwr`. Next, we use the :code:`start_server` function to start a " "server and tell it to perform three rounds of federated learning." msgstr "" +"我们的示例包括一个*服务器*和两个*客户端*。让我们先设置 :code:`server." 
+"py`。服务器*需要导入 Flower 软件包 :code:`flwr`。接下来,我们使用 " +":code:`start_server` 函数启动服务器,并告诉它执行三轮联合学习。" #: ../../source/example-jax-from-centralized-to-federated.rst:133 #: ../../source/example-mxnet-walk-through.rst:214 #: ../../source/example-pytorch-from-centralized-to-federated.rst:191 #: ../../source/tutorial-quickstart-jax.rst:139 msgid "We can already start the *server*:" -msgstr "" +msgstr "我们已经可以启动*服务器*了:" #: ../../source/example-jax-from-centralized-to-federated.rst:139 #: ../../source/tutorial-quickstart-jax.rst:145 @@ -2096,6 +2315,10 @@ msgid "" " *client* needs to import :code:`flwr`, but also :code:`jax` and " ":code:`jaxlib` to update the parameters on our JAX model:" msgstr "" +"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 " +":code:`jax_training.py` 中定义的 JAX 训练为基础。我们的 *client* 需要导入 " +":code:`flwr`,还需要导入 :code:`jax` 和 :code:`jaxlib` 以更新 JAX " +"模型的参数:" #: ../../source/example-jax-from-centralized-to-federated.rst:154 #: ../../source/tutorial-quickstart-jax.rst:160 @@ -2111,12 +2334,19 @@ msgid "" "parameters, one method for training the model, and one method for testing" " the model:" msgstr "" +"实现 Flower *client*基本上意味着实现 :code:`flwr.client.Client` 或 " +":code:`flwr.client.NumPyClient` 的子类。我们的实现将基于 :code:`flwr.client." +"NumPyClient`,并将其命名为 :code:`FlowerClient`。如果使用具有良好 NumPy " +"互操作性的框架(如 JAX),:code:`NumPyClient` 比 " +":code:`Client`更容易实现,因为它避免了一些必要的模板。:code:`FlowerClient` 需" +"要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型" +":" #: ../../source/example-jax-from-centralized-to-federated.rst:161 #: ../../source/example-mxnet-walk-through.rst:242 #: ../../source/tutorial-quickstart-jax.rst:167 msgid ":code:`set_parameters (optional)`" -msgstr "" +msgstr "代码:\"set_parameters(可选)\"" #: ../../source/example-jax-from-centralized-to-federated.rst:160 #: ../../source/example-mxnet-walk-through.rst:241 @@ -2125,12 +2355,12 @@ msgstr "" msgid "" "set the model parameters on the local model that are received from the " "server" -msgstr "" +msgstr "在本地模型上设置从服务器接收的模型参数" #: ../../source/example-jax-from-centralized-to-federated.rst:161 #: ../../source/tutorial-quickstart-jax.rst:167 msgid "transform parameters to NumPy :code:`ndarray`'s" -msgstr "" +msgstr "将参数转换为 NumPy :code:`ndarray`'s" #: ../../source/example-jax-from-centralized-to-federated.rst:162 #: ../../source/example-mxnet-walk-through.rst:243 @@ -2139,7 +2369,8 @@ msgstr "" msgid "" "loop over the list of model parameters received as NumPy " ":code:`ndarray`'s (think list of neural network layers)" -msgstr "" +msgstr "循环遍历以 NumPy :code:`ndarray`'s " +"形式接收的模型参数列表(可视为神经网络层列表)" #: ../../source/example-jax-from-centralized-to-federated.rst:163 #: ../../source/example-mxnet-walk-through.rst:244 @@ -2149,7 +2380,7 @@ msgstr "" #: ../../source/tutorial-quickstart-pytorch.rst:155 #: ../../source/tutorial-quickstart-scikitlearn.rst:108 msgid ":code:`get_parameters`" -msgstr "" +msgstr "代码:`get_parameters`(获取参数" #: ../../source/example-jax-from-centralized-to-federated.rst:164 #: ../../source/example-mxnet-walk-through.rst:245 @@ -2159,6 +2390,8 @@ msgid "" "get the model parameters and return them as a list of NumPy " ":code:`ndarray`'s (which is what :code:`flwr.client.NumPyClient` expects)" msgstr "" +"获取模型参数,并以 NumPy :code:`ndarray`的列表形式返回(这正是 :code:`flwr." 
+"client.NumPyClient`所期望的)" #: ../../source/example-jax-from-centralized-to-federated.rst:167 #: ../../source/example-mxnet-walk-through.rst:248 @@ -2168,7 +2401,7 @@ msgstr "" #: ../../source/tutorial-quickstart-pytorch.rst:161 #: ../../source/tutorial-quickstart-scikitlearn.rst:115 msgid ":code:`fit`" -msgstr "" +msgstr "代码:\"fit" #: ../../source/example-jax-from-centralized-to-federated.rst:166 #: ../../source/example-jax-from-centralized-to-federated.rst:170 @@ -2181,19 +2414,19 @@ msgstr "" msgid "" "update the parameters of the local model with the parameters received " "from the server" -msgstr "" +msgstr "用从服务器接收到的参数更新本地模型的参数" #: ../../source/example-jax-from-centralized-to-federated.rst:167 #: ../../source/example-mxnet-walk-through.rst:248 #: ../../source/example-pytorch-from-centralized-to-federated.rst:225 #: ../../source/tutorial-quickstart-jax.rst:173 msgid "train the model on the local training set" -msgstr "" +msgstr "在本地训练集上训练模型" #: ../../source/example-jax-from-centralized-to-federated.rst:168 #: ../../source/tutorial-quickstart-jax.rst:174 msgid "get the updated local model parameters and return them to the server" -msgstr "" +msgstr "获取更新后的本地模型参数并返回服务器" #: ../../source/example-jax-from-centralized-to-federated.rst:172 #: ../../source/example-mxnet-walk-through.rst:253 @@ -2203,19 +2436,19 @@ msgstr "" #: ../../source/tutorial-quickstart-pytorch.rst:164 #: ../../source/tutorial-quickstart-scikitlearn.rst:118 msgid ":code:`evaluate`" -msgstr "" +msgstr "代码:`评估" #: ../../source/example-jax-from-centralized-to-federated.rst:171 #: ../../source/example-mxnet-walk-through.rst:252 #: ../../source/example-pytorch-from-centralized-to-federated.rst:229 #: ../../source/tutorial-quickstart-jax.rst:177 msgid "evaluate the updated model on the local test set" -msgstr "" +msgstr "在本地测试集上评估更新后的模型" #: ../../source/example-jax-from-centralized-to-federated.rst:172 #: ../../source/tutorial-quickstart-jax.rst:178 msgid "return the local loss to the server" -msgstr "" +msgstr "向服务器返回本地损失" #: ../../source/example-jax-from-centralized-to-federated.rst:174 #: ../../source/tutorial-quickstart-jax.rst:180 @@ -2224,6 +2457,8 @@ msgid "" ":code:`DeviceArray` to :code:`NumPy ndarray` to make them compatible with" " `NumPyClient`." msgstr "" +"具有挑战性的部分是将 JAX 模型参数从 :code:`DeviceArray` 转换为 :code:`NumPy " +"ndarray`,使其与 `NumPyClient` 兼容。" #: ../../source/example-jax-from-centralized-to-federated.rst:176 #: ../../source/tutorial-quickstart-jax.rst:182 @@ -2236,18 +2471,23 @@ msgid "" "annotations to give you a better understanding of the data types that get" " passed around." msgstr "" +"两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前在 " +":code:`jax_training.py` 中定义的函数 :code:`train()` 和 " +":code:`evaluate()`。因此,我们在这里要做的就是通过 :code:`NumPyClient` " +"子类告诉 Flower 在训练和评估时要调用哪些已定义的函数。我们加入了类型注解,以" +"便让你更好地理解传递的数据类型。" #: ../../source/example-jax-from-centralized-to-federated.rst:245 #: ../../source/tutorial-quickstart-jax.rst:251 msgid "Having defined the federation process, we can run it." -msgstr "" +msgstr "定义了联合进程后,我们就可以运行它了。" #: ../../source/example-jax-from-centralized-to-federated.rst:268 #: ../../source/example-mxnet-walk-through.rst:347 #: ../../source/example-pytorch-from-centralized-to-federated.rst:301 #: ../../source/tutorial-quickstart-jax.rst:274 msgid "And that's it. 
You can now open two additional terminal windows and run" -msgstr "" +msgstr "就是这样。现在你可以打开另外两个终端窗口,运行" #: ../../source/example-jax-from-centralized-to-federated.rst:274 #: ../../source/tutorial-quickstart-jax.rst:280 @@ -2255,7 +2495,8 @@ msgid "" "in each window (make sure that the server is still running before you do " "so) and see your JAX project run federated learning across two clients. " "Congratulations!" -msgstr "" +msgstr "确保服务器仍在运行),然后就能看到你的 JAX " +"项目在两个客户端上运行联合学习了。恭喜您!" #: ../../source/example-jax-from-centralized-to-federated.rst:279 #: ../../source/tutorial-quickstart-jax.rst:285 @@ -2265,6 +2506,9 @@ msgid "" "/quickstart-jax>`_. Our example is somewhat over-simplified because both " "clients load the same dataset." msgstr "" +"此示例的源代码经过长期改进,可在此处找到: `Quickstart JAX `_。我们的示例有些过于简单,因为两个客户端都加载了相同的数据集。" #: ../../source/example-jax-from-centralized-to-federated.rst:282 #: ../../source/tutorial-quickstart-jax.rst:288 @@ -2272,11 +2516,12 @@ msgid "" "You're now prepared to explore this topic further. How about using a more" " sophisticated model or using a different dataset? How about adding more " "clients?" -msgstr "" +msgstr "现在,您已准备好进一步探讨这一主题。使用更复杂的模型或使用不同的数据集如何?" +"增加更多客户如何?" #: ../../source/example-mxnet-walk-through.rst:2 msgid "Example: MXNet - Run MXNet Federated" -msgstr "" +msgstr "示例: MXNet - 联合运行 MXNet" #: ../../source/example-mxnet-walk-through.rst:4 msgid "" @@ -2294,16 +2539,26 @@ msgid "" " tutorial. Then, we build upon the centralized training code to run the " "training in a federated fashion." msgstr "" +"本教程将向您展示如何使用 Flower 构建现有 MXNet 工作负载的联合版本。" +"我们将使用 MXNet 在 MNIST 数据集上训练一个序列模型。我们将采用与我们的 " +"\"PyTorch - 从集中到联合 `_ 演练 \"类似的示例结构。MXNet 和 " +"PyTorch 非常相似,\"此处 `_\"对 MXNet 和 PyTorch " +"进行了很好的比较。首先,我们根据 \"手写数字识别 `_ " +"教程 \"建立了一种集中式训练方法" +"。然后,我们在集中式训练代码的基础上,以联合方式运行训练。" #: ../../source/example-mxnet-walk-through.rst:10 msgid "" "Before we start setting up our MXNet example, we install the " ":code:`mxnet` and :code:`flwr` packages:" -msgstr "" +msgstr "在开始设置 MXNet 示例之前,我们先安装 :code:`mxnet` 和 :code:`flwr` 软件包:" #: ../../source/example-mxnet-walk-through.rst:19 msgid "MNIST Training with MXNet" -msgstr "" +msgstr "使用 MXNet 进行 MNIST 训练" #: ../../source/example-mxnet-walk-through.rst:21 msgid "" @@ -2312,6 +2567,9 @@ msgid "" " what's going on then have a look at the official `MXNet tutorial " "`_." msgstr "" +"首先,我们将简要介绍基于 :code:`Sequential` " +"模型的集中式训练代码。如果您想获得更深入的解释,请参阅官方的 `MXNet教程 " +"`_。" #: ../../source/example-mxnet-walk-through.rst:24 msgid "" @@ -2321,10 +2579,13 @@ msgid "" "that we do not yet import the :code:`flwr` package for federated " "learning. This will be done later." msgstr "" +"让我们创建一个名为:code:`mxnet_mnist.py`的新文件,其中包含传统(集中式)" +"MNIST 训练所需的所有组件。首先,需要导入 MXNet 包 :code:`mxnet`。您可以看到," +"我们尚未导入用于联合学习的 :code:`flwr` 包。这将在稍后完成。" #: ../../source/example-mxnet-walk-through.rst:42 msgid "The :code:`load_data()` function loads the MNIST training and test sets." -msgstr "" +msgstr ":code:`load_data()` 函数加载 MNIST 训练集和测试集。" #: ../../source/example-mxnet-walk-through.rst:57 msgid "" @@ -2332,20 +2593,24 @@ msgid "" "learning workload. The model architecture (a very simple " ":code:`Sequential` model) is defined in :code:`model()`." msgstr "" +"如前所述,我们将使用 MNIST 数据集进行机器学习。模型架构(一个非常简单的 " +":code:`Sequential` 模型)在 :code:`model()` 中定义。" #: ../../source/example-mxnet-walk-through.rst:70 msgid "" "We now need to define the training (function :code:`train()`) which loops" " over the training set and measures the loss for each batch of training " "examples." 
-msgstr "" +msgstr "现在,我们需要定义训练(函数 " +":code:`train()`),该函数在训练集上循环,并测量每批训练示例的损失。" #: ../../source/example-mxnet-walk-through.rst:123 msgid "" "The evaluation of the model is defined in function :code:`test()`. The " "function loops over all test samples and measures the loss and accuracy " "of the model based on the test dataset." -msgstr "" +msgstr "模型的评估在函数 :code:`test()` " +"中定义。该函数循环遍历所有测试样本,并根据测试数据集测量模型的损失和准确度。" #: ../../source/example-mxnet-walk-through.rst:158 msgid "" @@ -2354,10 +2619,13 @@ msgid "" "Note that the GPU/CPU device for the training and testing is defined " "within the :code:`ctx` (context)." msgstr "" +"在定义了数据加载、模型架构、训练和评估之后,我们就可以把所有东西放在一起,在 " +"MNIST 上训练我们的模型。请注意,用于训练和测试的 GPU/CPU 设备是在 " +":code:`ctx`(上下文)中定义的。" #: ../../source/example-mxnet-walk-through.rst:184 msgid "You can now run your (centralized) MXNet machine learning workload:" -msgstr "" +msgstr "现在,您可以运行(集中式)MXNet 机器学习工作负载:" #: ../../source/example-mxnet-walk-through.rst:190 msgid "" @@ -2366,10 +2634,13 @@ msgid "" "create a simple federated learning system consisting of one server and " "two clients." msgstr "" +"到目前为止,如果你以前使用过 MXNet(甚至 PyTorch),这一切看起来应该相当熟悉" +"。下一步,让我们利用已构建的内容创建一个由一个服务器和两个客户端组成的简单联" +"合学习系统。" #: ../../source/example-mxnet-walk-through.rst:194 msgid "MXNet meets Flower" -msgstr "" +msgstr "MXNet 遇见 Flower" #: ../../source/example-mxnet-walk-through.rst:196 msgid "" @@ -2380,6 +2651,10 @@ msgid "" "workloads. This section will show you how Flower can be used to federate " "our centralized MXNet workload." msgstr "" +"迄今为止,由于 MXNet 不支持联合学习,因此无法轻松地将 MXNet " +"工作负载用于联合学习。由于 Flower " +"与底层机器学习框架完全无关,因此它可用于联合任意机器学习工作负载。" +"本节将向你展示如何使用 Flower 联合我们的集中式 MXNet 工作负载。" #: ../../source/example-mxnet-walk-through.rst:198 msgid "" @@ -2392,6 +2667,11 @@ msgid "" "parameter updates. This describes one round of the federated learning " "process and we repeat this for multiple rounds." msgstr "" +"联合现有工作负载的概念始终是相同的,也很容易理解。我们必须启动一个*服务器*," +"然后对连接到*服务器*的*客户端*使用 :code:`mxnet_mnist.py`中的代码。服务器*向" +"客户端*发送模型参数。客户端*运行训练并更新参数。更新后的参数被发回*服务器,*" +"服务器会对所有收到的参数更新进行平均。以上描述的是一轮联合学习过程,我们将重" +"复进行多轮学习。" #: ../../source/example-mxnet-walk-through.rst:220 msgid "" @@ -2400,6 +2680,10 @@ msgid "" "Our *client* needs to import :code:`flwr`, but also :code:`mxnet` to " "update the parameters on our MXNet model:" msgstr "" +"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 " +":code:`mxnet_mnist.py` 中定义的 MXNet 训练为基础。我们的 *client* " +"不仅需要导入 :code:`flwr`,还需要导入 :code:`mxnet`,以更新 MXNet " +"模型的参数:" #: ../../source/example-mxnet-walk-through.rst:235 msgid "" @@ -2414,26 +2698,35 @@ msgid "" "parameters, one method for training the model, and one method for testing" " the model:" msgstr "" +"实现 Flower *client*基本上意味着实现 :code:`flwr.client.Client` 或 " +":code:`flwr.client.NumPyClient` 的子类。我们的实现将基于 :code:`flwr.client." 
+"NumPyClient`,并将其命名为 :code:`MNISTClient`。如果使用具有良好 NumPy " +"互操作性的框架(如 PyTorch 或 MXNet),:code:`NumPyClient` 比 " +":code:`Client`更容易实现,因为它避免了一些必要的模板。:code:`MNISTClient` 需" +"要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一个用于测试模型" +":" #: ../../source/example-mxnet-walk-through.rst:242 msgid "transform MXNet :code:`NDArray`'s to NumPy :code:`ndarray`'s" -msgstr "" +msgstr "将 MXNet :code:`NDArray`'s 转换为 NumPy :code:`ndarray`'s" #: ../../source/example-mxnet-walk-through.rst:249 #: ../../source/example-pytorch-from-centralized-to-federated.rst:226 msgid "get the updated local model weights and return them to the server" -msgstr "" +msgstr "获取更新后的本地模型权重并返回给服务器" #: ../../source/example-mxnet-walk-through.rst:253 #: ../../source/example-pytorch-from-centralized-to-federated.rst:230 msgid "return the local loss and accuracy to the server" -msgstr "" +msgstr "向服务器返回本地损耗和精确度" #: ../../source/example-mxnet-walk-through.rst:255 msgid "" "The challenging part is to transform the MXNet parameters from " ":code:`NDArray` to :code:`NumPy Arrays` to make it readable for Flower." msgstr "" +"具有挑战性的部分是将 MXNet 参数从 :code:`NDArray` 转换为 :code:`NumPy Arrays`" +" 以便 Flower 可以读取。" #: ../../source/example-mxnet-walk-through.rst:257 msgid "" @@ -2445,6 +2738,10 @@ msgid "" "annotations to give you a better understanding of the data types that get" " passed around." msgstr "" +"两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前在 " +":code:`mxnet_mnist.py` 中定义的函数 :code:`train()` 和 :code:`test()`。因此," +"我们在这里要做的就是通过 :code:`NumPyClient` 子类告诉 Flower 在训练和评估时要" +"调用哪些已定义的函数。我们加入了类型注解,以便让你更好地理解传递的数据类型。" #: ../../source/example-mxnet-walk-through.rst:319 msgid "" @@ -2452,13 +2749,16 @@ msgid "" " we can put everything together and train our :code:`Sequential` model on" " MNIST." msgstr "" +"在定义了数据加载、模型架构、训练和评估之后,我们就可以将所有内容整合在一起," +"在 MNIST 上训练我们的 :code:`Sequential` 模型。" #: ../../source/example-mxnet-walk-through.rst:353 msgid "" "in each window (make sure that the server is still running before you do " "so) and see your MXNet project run federated learning across two clients." " Congratulations!" -msgstr "" +msgstr "在每个窗口中查看 (确保服务器仍在运行),然后就能看到 MXNet " +"项目在两个客户端上运行联合学习。恭喜您!" #: ../../source/example-mxnet-walk-through.rst:358 msgid "" @@ -2470,10 +2770,15 @@ msgid "" " further. How about using a CNN or using a different dataset? How about " "adding more clients?" msgstr "" +"此示例的完整源代码:\"MXNet: From Centralized To Federated (Code) " +"`_。当然,我们的示例有些过于简单,因为两个客户端都加载了完全相同的" +"数据集,这并不现实。现在,您已经准备好进一步探讨这一主题了。使用 CNN " +"或使用不同的数据集如何?添加更多客户端如何?" #: ../../source/example-pytorch-from-centralized-to-federated.rst:2 msgid "Example: PyTorch - From Centralized To Federated" -msgstr "" +msgstr "实例: PyTorch - 从集中到联合" #: ../../source/example-pytorch-from-centralized-to-federated.rst:4 msgid "" @@ -2486,6 +2791,11 @@ msgid "" "tutorial. Then, we build upon the centralized training code to run the " "training in a federated fashion." msgstr "" +"本教程将向您展示如何使用 Flower 构建现有机器学习工作负载的联合版本。我们使用 " +"PyTorch 在 CIFAR-10 数据集上训练一个卷积神经网络。首先,我们基于 \"Deep " +"Learning with PyTorch `_\"教程,采用集中式训练方法介绍了这项机器学习任务。然" +"后,我们在集中式训练代码的基础上以联盟方式运行训练。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:12 msgid "" @@ -2494,6 +2804,9 @@ msgid "" "look at the official `PyTorch tutorial " "`_." msgstr "" +"我们首先简要介绍一下集中式 CNN 训练代码。如果您想获得更深入的解释,请参阅 " +"PyTorch 官方教程 `_。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:15 msgid "" @@ -2504,6 +2817,10 @@ msgid "" "federated learning. 
You can keep all these imports as they are even when " "we add the federated learning components at a later point." msgstr "" +"让我们创建一个名为 :code:`cifar.py` 的新文件,其中包含 CIFAR-10 " +"传统(集中)培训所需的所有组件。首先,需要导入所有必需的软件包(如 " +":code:`torch` 和 :code:`torchvision`)。您可以看到,我们没有导入任何用于联合" +"学习的软件包。即使在以后添加联合学习组件时,也可以保留所有这些导入。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:32 msgid "" @@ -2511,12 +2828,17 @@ msgid "" "learning workload. The model architecture (a very simple Convolutional " "Neural Network) is defined in :code:`class Net()`." msgstr "" +"如前所述,我们将使用 CIFAR-10 " +"数据集进行机器学习。模型架构(一个非常简单的卷积神经网络)在 :code:`class " +"Net()` 中定义。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:56 msgid "" "The :code:`load_data()` function loads the CIFAR-10 training and test " "sets. The :code:`transform` normalized the data after loading." msgstr "" +":code:`load_data()` 函数加载 CIFAR-10 " +"训练集和测试集。加载数据后,:code:`transform`函数对数据进行了归一化处理。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:74 msgid "" @@ -2524,19 +2846,23 @@ msgid "" " over the training set, measures the loss, backpropagates it, and then " "takes one optimizer step for each batch of training examples." msgstr "" +"现在,我们需要定义训练(函数 :code:`train()`),该函数在训练集上循环、测量损" +"失、反向传播损失,然后为每批训练示例执行一个优化步骤。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:76 msgid "" "The evaluation of the model is defined in the function :code:`test()`. " "The function loops over all test samples and measures the loss of the " "model based on the test dataset." -msgstr "" +msgstr "模型的评估在函数 :code:`test()` " +"中定义。该函数循环遍历所有测试样本,并根据测试数据集测量模型的损失。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:136 msgid "" "Having defined the data loading, model architecture, training, and " "evaluation we can put everything together and train our CNN on CIFAR-10." -msgstr "" +msgstr "在确定了数据加载、模型架构、训练和评估之后,我们就可以将所有内容整合在一起," +"在 CIFAR-10 上训练我们的 CNN。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:163 msgid "" @@ -2545,6 +2871,9 @@ msgid "" "simple federated learning system consisting of one server and two " "clients." msgstr "" +"到目前为止,如果你以前用过 PyTorch,这一切看起来应该相当熟悉。让我们进行下一" +"步,利用我们所构建的内容创建一个由一个服务器和两个客户端组成的简单联合学习系" +"统。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:169 msgid "" @@ -2556,12 +2885,16 @@ msgid "" "a federated fashion, then you'd have to change most of your code and set " "everything up from scratch. This can be a considerable effort." msgstr "" +"上一节讨论的简单机器学习项目在单一数据集(CIFAR-10)上训练模型,我们称之为集" +"中学习。如上一节所示,集中学习的概念可能为大多数人所熟知,而且很多人以前都使" +"用过。通常情况下,如果要以联合方式运行机器学习工作负载,就必须更改大部分代码" +",并从头开始设置一切。这可能是一个相当大的工作量。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:173 msgid "" "However, with Flower you can evolve your pre-existing code into a " "federated learning setup without the need for a major rewrite." -msgstr "" +msgstr "不过,有了 Flower,你可以将已有的代码演化成联合学习设置,而无需进行重大重写。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:175 msgid "" @@ -2573,6 +2906,10 @@ msgid "" "parameter updates. This describes one round of the federated learning " "process and we repeat this for multiple rounds." 
msgstr "" +"这个概念很容易理解。我们必须启动一个*服务器*,然后对连接到*服务器*的*客户端*" +"使用 :code:`cifar.py`中的代码。服务器*向客户端发送模型参数。客户端*运行训练并" +"更新参数。更新后的参数被发回*服务器,*服务器会对所有收到的参数更新进行平均。" +"以上描述的是一轮联合学习过程,我们将重复进行多轮学习。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:197 msgid "" @@ -2581,6 +2918,9 @@ msgid "" "Our *client* needs to import :code:`flwr`, but also :code:`torch` to " "update the paramters on our PyTorch model:" msgstr "" +"最后,我们将在 :code:`client.py` 中定义我们的 *client* 逻辑,并以之前在 " +":code:`cifar.py` 中定义的集中式训练为基础。我们的 *client* 不仅需要导入 " +":code:`flwr`,还需要导入 :code:`torch`,以更新 PyTorch 模型的参数:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:213 msgid "" @@ -2595,10 +2935,17 @@ msgid "" "getting/setting model parameters, one method for training the model, and " "one method for testing the model:" msgstr "" +"实现 Flower *client*基本上意味着实现 :code:`flwr.client.Client` 或 " +":code:`flwr.client.NumPyClient` 的子类。我们的实现将基于 :code:`flwr.client." +"NumPyClient`,并将其命名为 :code:`CifarClient`。如果使用具有良好 NumPy " +"互操作性的框架(如 PyTorch 或 TensorFlow/Keras),:code:`NumPyClient`" +"的实现比 :code:`Client`略微容易一些,因为它避免了一些原本需要的模板。:code:`C" +"ifarClient` 需要实现四个方法,两个用于获取/设置模型参数,一个用于训练模型,一" +"个用于测试模型:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:219 msgid ":code:`set_parameters`" -msgstr "" +msgstr ":code:`set_parameters`" #: ../../source/example-pytorch-from-centralized-to-federated.rst:232 msgid "" @@ -2610,6 +2957,10 @@ msgid "" "annotations to give you a better understanding of the data types that get" " passed around." msgstr "" +"两个 :code:`NumPyClient` 方法 :code:`fit` 和 :code:`evaluate` 使用了之前在 " +":code:`cifar.py` 中定义的函数 :code:`train()` 和 :code:`test()`。因此," +"我们在这里要做的就是通过 :code:`NumPyClient` 子类告诉 Flower 在训练和评估时要" +"调用哪些已定义的函数。我们加入了类型注解,以便让你更好地理解传递的数据类型。" #: ../../source/example-pytorch-from-centralized-to-federated.rst:280 msgid "" @@ -2619,13 +2970,18 @@ msgid "" "with the function :code:`fl.client.start_numpy_client()` by pointing it " "at the same IP adress we used in :code:`server.py`:" msgstr "" +"剩下要做的就是定义一个加载模型和数据的函数,创建一个 :code:`CifarClient` " +"并启动该客户端。使用 :code:`cifar.py` 加载数据和模型。使用函数 :code:`fl." +"client.start_numpy_client()` 启动 :code:`CifarClient`,将其指向我们在 " +":code:`server.py` 中使用的相同 IP 地址:" #: ../../source/example-pytorch-from-centralized-to-federated.rst:307 msgid "" "in each window (make sure that the server is running before you do so) " "and see your (previously centralized) PyTorch project run federated " "learning across two clients. Congratulations!" -msgstr "" +msgstr "在各窗口查看(做之前确保服务器正在运行),然后就能看到你的 PyTorch " +"项目(之前是集中式的)在两个客户端上运行联合学习。恭喜你!" #: ../../source/example-pytorch-from-centralized-to-federated.rst:312 msgid "" @@ -2637,16 +2993,22 @@ msgid "" " further. How about using different subsets of CIFAR-10 on each client? " "How about adding more clients?" msgstr "" +"本示例的完整源代码:`PyTorch: 从集中到联合(代码) `_。当然,我" +"们的示例有些过于简单,因为两个客户端都加载了完全相同的数据集,这并不现实。现" +"在,您已经准备好进一步探讨这一主题了。在每个客户端使用不同的 CIFAR-10 " +"子集如何?增加更多客户端如何?" #: ../../source/example-walkthrough-pytorch-mnist.rst:2 msgid "Example: Walk-Through PyTorch & MNIST" -msgstr "" +msgstr "实例: PyTorch 和 MNIST 演练" #: ../../source/example-walkthrough-pytorch-mnist.rst:4 msgid "" "In this tutorial we will learn, how to train a Convolutional Neural " "Network on MNIST using Flower and PyTorch." 
-msgstr "" +msgstr "在本教程中,我们将学习如何使用 Flower 和 PyTorch 在 MNIST " +"上训练卷积神经网络。" #: ../../source/example-walkthrough-pytorch-mnist.rst:6 #: ../../source/tutorial-quickstart-mxnet.rst:14 @@ -2655,7 +3017,7 @@ msgstr "" msgid "" "Our example consists of one *server* and two *clients* all having the " "same model." -msgstr "" +msgstr "我们的例子包括一个*服务器*和两个*客户端*,它们都有相同的模型。" #: ../../source/example-walkthrough-pytorch-mnist.rst:8 #: ../../source/tutorial-quickstart-pytorch.rst:19 @@ -2666,23 +3028,28 @@ msgid "" "Finally, the *server* sends this improved version of the model back to " "each *client*. A complete cycle of weight updates is called a *round*." msgstr "" +"*客户*负责根据其本地数据集为模型生成单独的权重更新。然后,这些更新会被发送到*" +"服务器,由*服务器汇总后生成一个更好的模型。最后,*服务器*将改进后的模型发送回" +"每个*客户端*。一个完整的权重更新周期称为一个*轮*。" #: ../../source/example-walkthrough-pytorch-mnist.rst:12 #: ../../source/tutorial-quickstart-pytorch.rst:23 msgid "" "Now that we have a rough idea of what is going on, let's get started. We " "first need to install Flower. You can do this by running :" -msgstr "" +msgstr "现在,我们已经有了一个大致的概念,让我们开始吧。首先,我们需要安装 Flower。" +"可以通过运行 :" #: ../../source/example-walkthrough-pytorch-mnist.rst:18 msgid "" "Since we want to use PyTorch to solve a computer vision task, let's go " "ahead an install PyTorch and the **torchvision** library:" -msgstr "" +msgstr "既然我们想用 PyTorch 解决计算机视觉任务,那就先安装 PyTorch 和 " +"**torchvision** 库吧:" #: ../../source/example-walkthrough-pytorch-mnist.rst:26 msgid "Ready... Set... Train!" -msgstr "" +msgstr "准备...设置...训练!" #: ../../source/example-walkthrough-pytorch-mnist.rst:28 msgid "" @@ -2695,26 +3062,33 @@ msgid "" "namely *run-server.sh*, and *run-clients.sh*. Don't be afraid to look " "inside, they are simple enough =)." msgstr "" +"现在我们已经安装了所有的依赖项,让我们用两个客户端和一个服务器来运行一个简单" +"的分布式训练。我们的训练过程和网络架构基于 PyTorch 的 \"Basic MNIST Example " +"`_\"。这将让你看到用 " +"Flower " +"封装你的代码并以联合方式开始训练是多么容易。我们为您提供了两个辅助脚本,即 " +"*run-server.sh* 和 *run-clients.sh*。别害怕,它们很简单 =)。" #: ../../source/example-walkthrough-pytorch-mnist.rst:31 msgid "" "Go ahead and launch on a terminal the *run-server.sh* script first as " "follows:" -msgstr "" +msgstr "首先在终端上启动 *run-server.sh* 脚本,如下所示:" #: ../../source/example-walkthrough-pytorch-mnist.rst:38 msgid "Now that the server is up and running, go ahead and launch the clients." -msgstr "" +msgstr "现在服务器已经启动并运行,请继续启动客户端。" #: ../../source/example-walkthrough-pytorch-mnist.rst:45 msgid "" "Et voilà! You should be seeing the training procedure and, after a few " "iterations, the test accuracy for each client." -msgstr "" +msgstr "然后就可以了!你应该能看到训练过程,以及经过几次反复后,每个客户的测试准确率" +"。" #: ../../source/example-walkthrough-pytorch-mnist.rst:66 msgid "Now, let's see what is really happening inside." -msgstr "" +msgstr "现在,让我们看看里面到底发生了什么。" #: ../../source/example-walkthrough-pytorch-mnist.rst:69 #: ../../source/tutorial-quickstart-ios.rst:129 @@ -2724,13 +3098,14 @@ msgstr "" #: ../../source/tutorial-quickstart-tensorflow.rst:98 #: ../../source/tutorial-quickstart-xgboost.rst:306 msgid "Flower Server" -msgstr "" +msgstr "Flower 服务器" #: ../../source/example-walkthrough-pytorch-mnist.rst:71 msgid "" "Inside the server helper script *run-server.sh* you will find the " "following code that basically runs the :code:`server.py`" -msgstr "" +msgstr "在服务器辅助脚本 *run-server.sh* 中,你可以找到以下代码," +"这些代码基本上都是运行 :code:`server.py` 的代码" #: ../../source/example-walkthrough-pytorch-mnist.rst:78 msgid "" @@ -2741,6 +3116,11 @@ msgid "" "leave all the configuration possibilities at their default values, as " "seen below." 
msgstr "" +"我们可以再深入一点,看到 :code:`server.py` " +"只是启动了一个服务器,该服务器将协调三轮训练。Flower " +"服务器是非常可定制的,但对于简单的工作负载,我们可以使用 :ref:`start_server " +"`函数启动服务器,并将所有可能的配置保留为默认值,如下所示。" #: ../../source/example-walkthrough-pytorch-mnist.rst:89 #: ../../source/tutorial-quickstart-ios.rst:34 @@ -2750,23 +3130,24 @@ msgstr "" #: ../../source/tutorial-quickstart-tensorflow.rst:29 #: ../../source/tutorial-quickstart-xgboost.rst:52 msgid "Flower Client" -msgstr "" +msgstr "Flower 客户端" #: ../../source/example-walkthrough-pytorch-mnist.rst:91 msgid "" "Next, let's take a look at the *run-clients.sh* file. You will see that " "it contains the main loop that starts a set of *clients*." -msgstr "" +msgstr "接下来,让我们看看 *run-clients.sh* 文件。你会看到它包含了启动一组 *clients* " +"的主循环。" #: ../../source/example-walkthrough-pytorch-mnist.rst:100 msgid "" "**cid**: is the client ID. It is an integer that uniquely identifies " "client identifier." -msgstr "" +msgstr "**cid**:是客户 ID。它是一个整数,可唯一标识客户标识符。" #: ../../source/example-walkthrough-pytorch-mnist.rst:101 msgid "**sever_address**: String that identifies IP and port of the server." -msgstr "" +msgstr "**sever_address**: 标识服务器 IP 和端口的字符串。" #: ../../source/example-walkthrough-pytorch-mnist.rst:102 msgid "" @@ -2775,6 +3156,9 @@ msgid "" "partition the original MNIST dataset to make sure that every client is " "working on unique subsets of both *training* and *test* sets." msgstr "" +"**nb_clients**: 这定义了正在创建的客户端数量。客户端并不需要这一信息," +"但它有助于我们对原始 MNIST 数据集进行分区,以确保每个客户端都在 *training* " +"和 *test* 集的独特子集上工作。" #: ../../source/example-walkthrough-pytorch-mnist.rst:104 msgid "" @@ -2788,6 +3172,13 @@ msgid "" "DataLoaders, the number of epochs in each round, and which device we want" " to use for training (CPU or GPU)." msgstr "" +"我们可以再次深入 :code:`flwr_example/quickstart-pytorch/client.py`。查看 " +":code:`main` 函数开头的参数解析代码后,你会发现一个对 :code:`mnist.load_data`" +" 的调用。该函数负责分割原始 MNIST 数据集(*training* 和 *test*)," +"并为每个数据集返回一个 :code:`torch.utils.data.DataLoader` s。然后," +"我们实例化一个 :code:`PytorchMNISTClient` 对象,其中包含我们的客户端 ID、" +"我们的 DataLoader、每一轮中的历时数,以及我们希望用于训练的设备(CPU 或 " +"GPU)。" #: ../../source/example-walkthrough-pytorch-mnist.rst:119 msgid "" @@ -2795,22 +3186,26 @@ msgid "" ":code:`fl.client.start_client` along with the server's address as the " "training process begins." msgstr "" +"当训练过程开始时,:code:`PytorchMNISTClient` " +"对象会连同服务器地址一起最终传递给 :code:`fl.client.start_client`。" #: ../../source/example-walkthrough-pytorch-mnist.rst:123 msgid "A Closer Look" -msgstr "" +msgstr "近距离观察" #: ../../source/example-walkthrough-pytorch-mnist.rst:125 msgid "" "Now, let's look closely into the :code:`PytorchMNISTClient` inside " ":code:`flwr_example.quickstart-pytorch.mnist` and see what it is doing:" msgstr "" +"现在,让我们仔细研究一下 :code:`flwr_example.quickstart-pytorch.mnist` 中的 " +":code:`PytorchMNISTClient`,看看它在做什么:" #: ../../source/example-walkthrough-pytorch-mnist.rst:226 msgid "" "The first thing to notice is that :code:`PytorchMNISTClient` instantiates" " a CNN model inside its constructor" -msgstr "" +msgstr "首先要注意的是 :code:`PytorchMNISTClient` 在其构造函数中实例化了一个 CNN 模型" #: ../../source/example-walkthrough-pytorch-mnist.rst:244 msgid "" @@ -2818,6 +3213,9 @@ msgid "" "and it is reproduced below. It is the same network found in `Basic MNIST " "Example `_." 
msgstr "" +"CNN 的代码可在 :code:`quickstart-pytorch.mnist` 下找到,现复制如下。它与 " +"\"Basic MNIST Example `_\"中的网络相同。" #: ../../source/example-walkthrough-pytorch-mnist.rst:290 msgid "" @@ -2825,6 +3223,8 @@ msgid "" "inherits from the :code:`fl.client.Client`, and hence it must implement " "the following methods:" msgstr "" +"第二件要注意的事是 :code:`PytorchMNISTClient` 类继承自 :code:`fl.client." +"Client`,因此它必须实现以下方法:" #: ../../source/example-walkthrough-pytorch-mnist.rst:315 msgid "" @@ -2833,12 +3233,15 @@ msgid "" ":code:`train` function and that :code:`evaluate` calls a :code:`test`: " "function." msgstr "" +"将抽象类与其派生类 :code:`PytorchMNISTClient` 进行比较时,您会发现 " +":code:`fit` 调用了一个 :code:`train` 函数,而 :code:`evaluate` 则调用了一个 " +":code:`test`: 函数。" #: ../../source/example-walkthrough-pytorch-mnist.rst:317 msgid "" "These functions can both be found inside the same :code:`quickstart-" "pytorch.mnist` module:" -msgstr "" +msgstr "这些函数都可以在同一个 :code:`quickstart-pytorch.mnist` 模块中找到:" #: ../../source/example-walkthrough-pytorch-mnist.rst:437 msgid "" @@ -2849,10 +3252,14 @@ msgid "" "still work flawlessly. As a matter of fact, why not try and modify the " "code to an example of your liking?" msgstr "" +"请注意,这些函数封装了常规的训练和测试循环,并为 :code:`fit` 和 " +":code:`evaluate` 提供了每轮的最终统计数据。您可以用自定义的训练和测试循环来替" +"代它们,并改变网络结构,整个示例仍然可以完美运行。事实上,为什么不按照自己的" +"喜好修改代码呢?" #: ../../source/example-walkthrough-pytorch-mnist.rst:444 msgid "Give It a Try" -msgstr "" +msgstr "试试看" #: ../../source/example-walkthrough-pytorch-mnist.rst:445 msgid "" @@ -2862,36 +3269,41 @@ msgid "" "a few things you could try on your own and get more experience with " "Flower:" msgstr "" +"通过上面的快速入门代码描述,你将对 Flower 中*客户端*和*服务器*的工作方式、如" +"何运行一个简单的实验以及客户端封装器的内部结构有一个很好的了解。下面是一些你" +"可以自己尝试的东西,以获得更多使用 Flower 的经验:" #: ../../source/example-walkthrough-pytorch-mnist.rst:448 msgid "" "Try and change :code:`PytorchMNISTClient` so it can accept different " "architectures." -msgstr "" +msgstr "尝试修改 :code:`PytorchMNISTClient`,使其可以接受不同的架构。" #: ../../source/example-walkthrough-pytorch-mnist.rst:449 msgid "Modify the :code:`train` function so that it accepts different optimizers" -msgstr "" +msgstr "修改 :code:`train` 函数,使其接受不同的优化器" #: ../../source/example-walkthrough-pytorch-mnist.rst:450 msgid "" "Modify the :code:`test` function so that it proves not only the top-1 " "(regular accuracy) but also the top-5 accuracy?" -msgstr "" +msgstr "修改 :code:`test` 函数,使其不仅能证明前 1 名(常规精确度),还能证明前 5 " +"名的精确度?" #: ../../source/example-walkthrough-pytorch-mnist.rst:451 msgid "" "Go larger! Try to adapt the code to larger images and datasets. Why not " "try training on ImageNet with a ResNet-50?" -msgstr "" +msgstr "变大!尝试让代码适应更大的图像和数据集。为什么不尝试使用 ResNet-50 在 " +"ImageNet 上进行训练呢?" #: ../../source/example-walkthrough-pytorch-mnist.rst:453 msgid "You are ready now. Enjoy learning in a federated way!" -msgstr "" +msgstr "您现在已经准备就绪。享受联合学习的乐趣!" #: ../../source/explanation-differential-privacy.rst:2 msgid "Differential privacy" -msgstr "" +msgstr "差别隐私" #: ../../source/explanation-differential-privacy.rst:4 msgid "" @@ -2900,43 +3312,49 @@ msgid "" "training pipelines defined in any of the various ML frameworks that " "Flower is compatible with." msgstr "" +"Flower 提供了差分隐私 (DP) 封装类,可将 DP-FedAvg 提供的核心 DP " +"保证轻松集成到 Flower 兼容的各种 ML 框架中定义的训练管道中。" #: ../../source/explanation-differential-privacy.rst:7 msgid "" "Please note that these components are still experimental, the correct " "configuration of DP for a specific task is still an unsolved problem." 
-msgstr "" +msgstr "请注意,这些组件仍处于试验阶段,如何为特定任务正确配置 DP " +"仍是一个尚未解决的问题。" #: ../../source/explanation-differential-privacy.rst:10 msgid "" "The name DP-FedAvg is misleading since it can be applied on top of any FL" " algorithm that conforms to the general structure prescribed by the " "FedOpt family of algorithms." -msgstr "" +msgstr "DP-FedAvg 这个名称容易引起误解,因为它可以应用于任何符合 FedOpt " +"系列算法规定的一般结构的 FL 算法之上。" #: ../../source/explanation-differential-privacy.rst:13 msgid "DP-FedAvg" -msgstr "" +msgstr "DP-FedAvg" #: ../../source/explanation-differential-privacy.rst:15 msgid "" "DP-FedAvg, originally proposed by McMahan et al. [mcmahan]_ and extended " "by Andrew et al. [andrew]_, is essentially FedAvg with the following " "modifications." -msgstr "" +msgstr "DP-FedAvg 最初由麦克马汉等人[mcmahan]_提出,并由安德鲁等人[andrew]_加以扩展。" #: ../../source/explanation-differential-privacy.rst:17 msgid "" "**Clipping** : The influence of each client's update is bounded by " "clipping it. This is achieved by enforcing a cap on the L2 norm of the " "update, scaling it down if needed." -msgstr "" +msgstr "**裁剪** : 每个客户端更新的影响力都会受到限制。具体做法是对更新的 L2 " +"准则设置上限,必要时将其缩减。" #: ../../source/explanation-differential-privacy.rst:18 msgid "" "**Noising** : Gaussian noise, calibrated to the clipping threshold, is " "added to the average computed at the server." -msgstr "" +msgstr "**噪声** : " +"在服务器计算出的平均值中加入高斯噪声,该噪声根据剪切阈值进行校准。" #: ../../source/explanation-differential-privacy.rst:20 msgid "" @@ -2945,10 +3363,13 @@ msgid "" "approach [andrew]_ that continuously adjusts the clipping threshold to " "track a prespecified quantile of the update norm distribution." msgstr "" +"事实证明,更新准则的分布会随着任务的不同而变化,并随着训练的进展而演变。因此" +",我们采用了一种自适应方法 " +"[andrew]_,该方法会不断调整剪切阈值,以跟踪更新准则分布的预设量化值。" #: ../../source/explanation-differential-privacy.rst:23 msgid "Simplifying Assumptions" -msgstr "" +msgstr "简化假设" #: ../../source/explanation-differential-privacy.rst:25 msgid "" @@ -2957,12 +3378,16 @@ msgid "" ":math:`(\\epsilon, \\delta)` guarantees the user has in mind when " "configuring the setup." msgstr "" +"我们提出(并试图执行)了一系列必须满足的假设," +"以确保训练过程真正实现用户在配置设置时所想到的 :math:`(\\epsilon,\\delta)` " +"保证。" #: ../../source/explanation-differential-privacy.rst:27 msgid "" "**Fixed-size subsampling** :Fixed-size subsamples of the clients must be " "taken at each round, as opposed to variable-sized Poisson subsamples." -msgstr "" +msgstr "** 固定大小的子样本** " +":与可变大小的泊松子样本相比,每轮必须抽取固定大小的客户子样本。" #: ../../source/explanation-differential-privacy.rst:28 msgid "" @@ -2970,14 +3395,16 @@ msgid "" "weighted equally in the aggregate to eliminate the requirement for the " "server to know in advance the sum of the weights of all clients available" " for selection." -msgstr "" +msgstr "**非加权平均**: " +"所有客户的贡献必须加权相等,这样服务器就不需要事先知道所有客户的权重总和。" #: ../../source/explanation-differential-privacy.rst:29 msgid "" "**No client failures** : The set of available clients must stay constant " "across all rounds of training. In other words, clients cannot drop out or" " fail." -msgstr "" +msgstr "**没有客户失败** : " +"在各轮培训中,可用客户的数量必须保持不变。换句话说,客户不能退出或失败。" #: ../../source/explanation-differential-privacy.rst:31 msgid "" @@ -2985,17 +3412,18 @@ msgid "" "associated with calibrating the noise to the clipping threshold while the" " third one is required to comply with the assumptions of the privacy " "analysis." -msgstr "" +msgstr "前两种方法有助于消除将噪声校准为削波阈值所带来的诸多复杂问题,而第三种方法则" +"需要符合隐私分析的假设。" #: ../../source/explanation-differential-privacy.rst:34 msgid "" "These restrictions are in line with constraints imposed by Andrew et al. 
" "[andrew]_." -msgstr "" +msgstr "这些限制与 Andrew 等人[andrew]_所施加的限制一致。" #: ../../source/explanation-differential-privacy.rst:37 msgid "Customizable Responsibility for Noise injection" -msgstr "" +msgstr "可定制的噪声注入责任" #: ../../source/explanation-differential-privacy.rst:38 msgid "" @@ -3007,6 +3435,10 @@ msgid "" "aggregating the noisy updates is equivalent to the explicit addition of " "noise to the non-noisy aggregate at the server." msgstr "" +"与其他在服务器上添加噪声的实现方法不同,您可以配置噪声注入的位置,以便更好地" +"匹配您的威胁模型。我们为用户提供了设置训练的灵活性,使每个客户端都能独立地为" +"剪切更新添加少量噪声,这样,只需聚合噪声更新,就相当于在服务器上为非噪声聚合" +"明确添加噪声。" #: ../../source/explanation-differential-privacy.rst:41 msgid "" @@ -3016,10 +3448,14 @@ msgid "" "simple maths to show that this is equivalent to each client adding noise " "with scale :math:`\\sigma_\\Delta/\\sqrt{m}`." msgstr "" +"准确地说,如果我们让 :math:`m` 为每轮采样的客户端数量,:math:`\\sigma_\\Delta" +"` 为需要添加到模型更新总和中的总高斯噪声的规模,我们就可以用简单的数学方法证" +"明,这相当于每个客户端都添加了规模为 :math:`\\sigma_\\Delta/\\sqrt{m}` " +"的噪声。" #: ../../source/explanation-differential-privacy.rst:44 msgid "Wrapper-based approach" -msgstr "" +msgstr "基于封装的方法" #: ../../source/explanation-differential-privacy.rst:46 msgid "" @@ -3035,10 +3471,17 @@ msgid "" "classes every time a new class implementing :code:`Strategy` or " ":code:`NumPyClient` is defined." msgstr "" +"在现有工作负载中引入 DP 可以被认为是在其周围增加了一层额外的安全性。受此启发" +",我们提供了额外的服务器端和客户端逻辑,分别作为 :code:`Strategy` 和 " +":code:`NumPyClient` 抽象类实例的封装器,使训练过程具有不同的私有性。" +"这种基于封装器的方法的优点是可以很容易地与将来可能会有人贡献给 Flower " +"库的其他封装器(例如用于安全聚合的封装器)进行组合。使用继承可能会比较繁琐," +"因为每次定义实现 :code:`Strategy` 或 :code:`NumPyClient` " +"的新类时,都需要创建新的子类。" #: ../../source/explanation-differential-privacy.rst:49 msgid "Server-side logic" -msgstr "" +msgstr "服务器端逻辑" #: ../../source/explanation-differential-privacy.rst:51 msgid "" @@ -3054,10 +3497,17 @@ msgid "" "parameter :code:`server_side_noising`, which, as the name suggests, " "determines where noising is to be performed." msgstr "" +"我们的第一版解决方案是定义一个装饰器,其构造函数接受一个布尔值变量,表示是否" +"启用自适应剪裁。我们很快意识到,这样会使其 :code:`__init__()` 函数中与自适应" +"裁剪超参数相对应的变量变得杂乱无章,而这些变量在自适应裁剪被禁用时将保持未使" +"用状态。要实现更简洁的功能,可以将该功能拆分为两个装饰器,即 " +":code:`DPFedAvgFixed` 和 :code:`DPFedAvgAdaptive`,后者是前者的子类。" +"这两个类的构造函数都接受一个布尔参数 " +":code:`server_side_noising`,顾名思义,它决定在哪里执行噪声。" #: ../../source/explanation-differential-privacy.rst:54 msgid "DPFedAvgFixed" -msgstr "" +msgstr "DPFedAvgFixed" #: ../../source/explanation-differential-privacy.rst:56 msgid "" @@ -3066,6 +3516,8 @@ msgid "" "captured with the help of wrapper logic for just the following two " "methods of the :code:`Strategy` abstract class." msgstr "" +"只需对 :code:`Strategy` 抽象类的以下两个方法进行封装逻辑,就能完全捕获 DP-" +"FedAvg 原始版本(即执行固定剪裁的版本)所需的服务器端功能。" #: ../../source/explanation-differential-privacy.rst:58 msgid "" @@ -3078,6 +3530,13 @@ msgid "" "entails *post*-processing of the results returned by the wrappee's " "implementation of :code:`configure_fit()`." msgstr "" +":code:`configure_fit()`:由封装的 :code:`Strategy` " +"发送给每个客户端的配置字典需要增加一个等于剪切阈值的附加值(在 .NET " +"Framework 2.0 中键入),如果 " +":code:`server_side_noising=true`,还需要增加一个等于高斯噪声比例的附加值: " +"如果 :code:`server_side_noising=true`,则需要在客户端添加另一个与高斯噪声规模" +"相等的数值(关键字为 :code:`dpfedavg_noise_stddev`)。这就需要对被包装者实现 " +":code:`configure_fit()` 时返回的结果进行*后*处理。" #: ../../source/explanation-differential-privacy.rst:59 msgid "" @@ -3094,13 +3553,22 @@ msgid "" "*pre*-processing of the arguments to this method before passing them on " "to the wrappee's implementation of :code:`aggregate_fit()`." 
msgstr "" +"代码:\"aggregate_fit()\": 我们会检查是否有任何取样客户端在本轮超时前退出或" +"未能上传更新。在这种情况下,我们需要中止当前一轮,丢弃已收到的任何成功更新," +"然后继续下一轮。另一方面,如果所有客户端都成功响应,我们就必须通过拦截 " +":code:`FitRes` 的 :code:`parameters` 字段并将其设置为 " +"1,强制以不加权的方式平均更新。此外,如果 :code:`server_side_noising=true`," +"每次更新都会受到一定量的噪声扰动,其扰动量相当于启用客户端噪声时的扰动量。 " +"这就需要在将本方法的参数传递给被包装者的 :code:`aggregate_fit()` " +"实现之前,对参数进行*预*处理。" #: ../../source/explanation-differential-privacy.rst:62 msgid "" "We can't directly change the aggregation function of the wrapped strategy" " to force it to add noise to the aggregate, hence we simulate client-side" " noising to implement server-side noising." -msgstr "" +msgstr "我们无法直接改变封装策略的聚合函数,迫使它在聚合中添加噪声,因此我们模拟客户" +"端噪声来实现服务器端噪声。" #: ../../source/explanation-differential-privacy.rst:64 msgid "" @@ -3114,10 +3582,14 @@ msgid "" "required to calculate the amount of noise that must be added to each " "individual update, either by the server or the clients." msgstr "" +"这些变化被整合到一个名为 :code:`DPFedAvgFixed` 的类中,其构造函数接受被装饰的" +"策略、剪切阈值和每轮采样的客户数作为必选参数。用户需要指定剪切阈值,因为更新" +"规范的数量级在很大程度上取决于正在训练的模型,提供默认值会产生误导。每轮采样" +"的客户数是计算服务器或客户在每次更新时必须添加的噪音量所必需的。" #: ../../source/explanation-differential-privacy.rst:67 msgid "DPFedAvgAdaptive" -msgstr "" +msgstr "DPFedAvgAdaptive" #: ../../source/explanation-differential-privacy.rst:69 msgid "" @@ -3126,6 +3598,8 @@ msgid "" ":code:`DPFedAvgFixed`. It overrides the above-mentioned methods to do the" " following." msgstr "" +"自适应剪裁所需的附加功能在 :code:`DPFedAvgAdaptive` 中提供,它是 " +":code:`DPFedAvgFixed` 的子类。它重写了上述方法,以实现以下功能。" #: ../../source/explanation-differential-privacy.rst:71 msgid "" @@ -3135,6 +3609,10 @@ msgid "" "interprets as an instruction to include an indicator bit (1 if update " "norm <= clipping threshold, 0 otherwise) in the results returned by it." msgstr "" +":code:`configure_fit()`:它截取由 :code:`super.configure_fit()` 返回的 " +"config dict,并在其中添加键值对 :code:`dpfedavg_adaptive_clip_enabled:True\"" +",客户端将其解释为在返回结果中包含一个指示位(如果更新规范 <= 剪裁阈值,则为 " +"1,否则为 0)的指令。" #: ../../source/explanation-differential-privacy.rst:73 msgid "" @@ -3143,10 +3621,12 @@ msgid "" " a procedure which adjusts the clipping threshold on the basis of the " "indicator bits received from the sampled clients." msgstr "" +":code:`aggregate_fit()`:在调用:code:`super.aggregate_fit()`后,再调用:code:`" +"__update_clip_norm__()`,该过程根据从采样客户端接收到的指示位调整削波阈值。" #: ../../source/explanation-differential-privacy.rst:77 msgid "Client-side logic" -msgstr "" +msgstr "客户端逻辑" #: ../../source/explanation-differential-privacy.rst:79 msgid "" @@ -3159,12 +3639,16 @@ msgid "" " work if either (or both) of the following keys are also present in the " "dict." msgstr "" +"客户端所需的功能完全可以通过 :code:`NumPyClient` 抽象类的 :code:`fit()` 方法" +"的封装逻辑来实现。准确地说,我们需要对封装客户端计算的更新进行*后处理,以便在" +"必要时将其剪切到服务器作为配置字典的一部分提供的阈值。除此之外,如果配置字典" +"中还存在以下任一(或两个)键,客户端可能还需要执行一些额外的工作。" #: ../../source/explanation-differential-privacy.rst:81 msgid "" ":code:`dpfedavg_noise_stddev` : Generate and add the specified amount of " "noise to the clipped update." -msgstr "" +msgstr "code:`dpfedavg_noise_stddev`:生成并在剪切更新中添加指定数量的噪声。" #: ../../source/explanation-differential-privacy.rst:82 msgid "" @@ -3172,10 +3656,12 @@ msgid "" ":code:`FitRes` object being returned to the server with an indicator bit," " calculated as described earlier." 
msgstr "" +":code:`dpfedavg_adaptive_clip_enabled`:在返回给服务器的 :code:`FitRes` " +"对象中的度量值 dict 中增加一个指标位,计算方法如前所述。" #: ../../source/explanation-differential-privacy.rst:86 msgid "Performing the :math:`(\\epsilon, \\delta)` analysis" -msgstr "" +msgstr "进行 :math:`(epsilon, \\delta)` 分析" #: ../../source/explanation-differential-privacy.rst:88 msgid "" @@ -3184,12 +3670,16 @@ msgid "" ":math:`\\epsilon` value this would result in for a particular " ":math:`\\delta`, the following script may be used." msgstr "" +"假设您已经训练了 :math:`n` 轮,采样分数为 :math:`q`,噪声乘数为 :math:`z`。" +"为了计算特定 :math:`\\delta` 的 :math:`epsilon` 值,可以使用下面的脚本。" #: ../../source/explanation-differential-privacy.rst:98 msgid "" "McMahan, H. Brendan, et al. \"Learning differentially private recurrent " "language models.\" arXiv preprint arXiv:1710.06963 (2017)." msgstr "" +"McMahan, H. Brendan, et al. \"Learning differentially private recurrent " +"language models.\" arXiv preprint arXiv:1710.06963 (2017)." #: ../../source/explanation-differential-privacy.rst:100 msgid "" @@ -3197,26 +3687,30 @@ msgid "" "clipping.\" Advances in Neural Information Processing Systems 34 (2021): " "17455-17466." msgstr "" +"Andrew, Galen, et al. \"Differentially private learning with adaptive " +"clipping.\" Advances in Neural Information Processing Systems 34 (2021): " +"17455-17466." #: ../../source/explanation-federated-evaluation.rst:2 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:292 msgid "Federated evaluation" -msgstr "" +msgstr "联邦评估" #: ../../source/explanation-federated-evaluation.rst:4 msgid "" "There are two main approaches to evaluating models in federated learning " "systems: centralized (or server-side) evaluation and federated (or " "client-side) evaluation." -msgstr "" +msgstr "评估联合学习系统中的模型主要有两种方法:集中(或服务器端)评估和联合(或客户" +"端)评估。" #: ../../source/explanation-federated-evaluation.rst:8 msgid "Centralized Evaluation" -msgstr "" +msgstr "集中评估" #: ../../source/explanation-federated-evaluation.rst:11 msgid "Built-In Strategies" -msgstr "" +msgstr "内置策略" #: ../../source/explanation-federated-evaluation.rst:13 msgid "" @@ -3224,11 +3718,12 @@ msgid "" "evaluation function during initialization. An evaluation function is any " "function that can take the current global model parameters as input and " "return evaluation results:" -msgstr "" +msgstr "所有内置策略都通过在初始化过程中提供一个评估函数来支持集中评估。评估函数是任" +"何可以将当前全局模型参数作为输入并返回评估结果的函数:" #: ../../source/explanation-federated-evaluation.rst:58 msgid "Custom Strategies" -msgstr "" +msgstr "定制策略" #: ../../source/explanation-federated-evaluation.rst:60 msgid "" @@ -3238,30 +3733,33 @@ msgid "" ":code:`evaluate` after parameter aggregation and before federated " "evaluation (see next paragraph)." msgstr "" +":code:`Strategy` 抽象提供了一个名为 :code:`evaluate` " +"的方法,可直接用于评估当前的全局模型参数。" +"当前的服务器实现在参数聚合后和联合评估前调用 :code:`evaluate`(见下段)。" #: ../../source/explanation-federated-evaluation.rst:65 msgid "Federated Evaluation" -msgstr "" +msgstr "联邦评估" #: ../../source/explanation-federated-evaluation.rst:68 msgid "Implementing Federated Evaluation" -msgstr "" +msgstr "实施联邦评估" #: ../../source/explanation-federated-evaluation.rst:70 msgid "" "Client-side evaluation happens in the :code:`Client.evaluate` method and " "can be configured from the server side." 
-msgstr "" +msgstr "客户端评估在 :code:`Client.evaluate` 方法中进行,并可从服务器端进行配置。" #: ../../source/explanation-federated-evaluation.rst:101 msgid "Configuring Federated Evaluation" -msgstr "" +msgstr "配置联邦评估" #: ../../source/explanation-federated-evaluation.rst:103 msgid "" "Federated evaluation can be configured from the server side. Built-in " "strategies support the following arguments:" -msgstr "" +msgstr "联邦评估可从服务器端进行配置。内置策略支持以下参数:" #: ../../source/explanation-federated-evaluation.rst:105 msgid "" @@ -3272,6 +3770,10 @@ msgid "" "for evaluation. If :code:`fraction_evaluate` is set to :code:`0.0`, " "federated evaluation will be disabled." msgstr "" +":code:`fraction_evaluate`: :code:`float`,定义被选中进行评估的客户端的分数。" +"如果 :code:`fraction_evaluate` 设置为 :code:`0.1`,并且 :code:`100` " +"客户端连接到服务器,那么 :code:`10` 将被随机选中进行评估。如果 " +":code:`fraction_evaluate` 设置为 :code:`0.0`,联合评估将被禁用。" #: ../../source/explanation-federated-evaluation.rst:106 msgid "" @@ -3281,6 +3783,10 @@ msgid "" ":code:`100` clients are connected to the server, then :code:`20` clients " "will be selected for evaluation." msgstr "" +":code:`min_evaluate_clients`:一个 :code:`int`:需要评估的客户的最小数量。" +"如果 :code:`fraction_evaluate` 设置为 :code:`0." +"1`,:code:`min_evaluate_clients` 设置为 20,并且 :code:`100` " +"客户端已连接到服务器,那么 :code:`20` 客户端将被选中进行评估。" #: ../../source/explanation-federated-evaluation.rst:107 msgid "" @@ -3291,6 +3797,10 @@ msgid "" "will wait until more clients are connected before it continues to sample " "clients for evaluation." msgstr "" +":code:`min_available_clients`:一个 " +":code:`int`,定义了在一轮联合评估开始之前,需要连接到服务器的最小客户端数量。" +"如果连接到服务器的客户少于 :code:`min_available_clients`,服务器将等待更多客" +"户连接后,才继续采样客户进行评估。" #: ../../source/explanation-federated-evaluation.rst:108 msgid "" @@ -3300,21 +3810,25 @@ msgid "" "client-side evaluation from the server side, for example, to configure " "the number of validation steps performed." msgstr "" +"code:`on_evaluate_config_fn`:返回配置字典的函数,该字典将发送给选定的客户端" +"。该函数将在每一轮中被调用,并提供了一种方便的方法来从服务器端自定义客户端评" +"估,例如,配置执行的验证步骤数。" #: ../../source/explanation-federated-evaluation.rst:135 msgid "Evaluating Local Model Updates During Training" -msgstr "" +msgstr "评估训练期间的本地模型更新" #: ../../source/explanation-federated-evaluation.rst:137 msgid "" "Model parameters can also be evaluated during training. " ":code:`Client.fit` can return arbitrary evaluation results as a " "dictionary:" -msgstr "" +msgstr "模型参数也可在训练过程中进行评估。 :code:`Client." 
+"fit`可以字典形式返回任意评估结果:" #: ../../source/explanation-federated-evaluation.rst:177 msgid "Full Code Example" -msgstr "" +msgstr "完整代码示例" #: ../../source/explanation-federated-evaluation.rst:179 msgid "" @@ -3323,79 +3837,82 @@ msgid "" "be applied to workloads implemented in any other framework): " "https://github.com/adap/flower/tree/main/examples/advanced-tensorflow" msgstr "" +"有关同时使用集中评估和联合评估的完整代码示例,请参阅 *Advanced TensorFlow " +"Example*(同样的方法也可应用于在任何其他框架中实施的工作负载): " +"https://github.com/adap/flower/tree/main/examples/advanced-tensorflow" #: ../../source/fed/0000-20200102-fed-template.md:10 msgid "FED Template" -msgstr "" +msgstr "FED 模板" #: ../../source/fed/0000-20200102-fed-template.md:12 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:12 msgid "Table of Contents" -msgstr "" +msgstr "目录" #: ../../source/fed/0000-20200102-fed-template.md:14 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:14 msgid "[Table of Contents](#table-of-contents)" -msgstr "" +msgstr "[目录](#table-of-contents)" #: ../../source/fed/0000-20200102-fed-template.md:15 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:15 msgid "[Summary](#summary)" -msgstr "" +msgstr "[Summary](#summary)" #: ../../source/fed/0000-20200102-fed-template.md:16 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:16 msgid "[Motivation](#motivation)" -msgstr "" +msgstr "[Motivation](#motivation)" #: ../../source/fed/0000-20200102-fed-template.md:17 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:17 msgid "[Goals](#goals)" -msgstr "" +msgstr "[Goals](#goals)" #: ../../source/fed/0000-20200102-fed-template.md:18 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:18 msgid "[Non-Goals](#non-goals)" -msgstr "" +msgstr "[Non-Goals](#non-goals)" #: ../../source/fed/0000-20200102-fed-template.md:19 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:19 msgid "[Proposal](#proposal)" -msgstr "" +msgstr "[Proposal](#proposal)" #: ../../source/fed/0000-20200102-fed-template.md:20 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:23 msgid "[Drawbacks](#drawbacks)" -msgstr "" +msgstr "[Drawbacks](#drawbacks)" #: ../../source/fed/0000-20200102-fed-template.md:21 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:24 msgid "[Alternatives Considered](#alternatives-considered)" -msgstr "" +msgstr "[Alternatives Considered](#alternatives-considered)" #: ../../source/fed/0000-20200102-fed-template.md:22 msgid "[Appendix](#appendix)" -msgstr "" +msgstr "[Appendix](#appendix)" #: ../../source/fed/0000-20200102-fed-template.md:24 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:28 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:76 msgid "Summary" -msgstr "" +msgstr "内容摘要" #: ../../source/fed/0000-20200102-fed-template.md:26 msgid "\\[TODO - sentence 1: summary of the problem\\]" -msgstr "" +msgstr "\\[TODO - sentence 1: summary of the problem\\]" #: ../../source/fed/0000-20200102-fed-template.md:28 msgid "\\[TODO - sentence 2: summary of the solution\\]" -msgstr "" +msgstr "\\[TODO - sentence 2: summary of the solution\\]" #: ../../source/fed/0000-20200102-fed-template.md:30 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:47 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:77 msgid "Motivation" -msgstr "" +msgstr "动机" #: ../../source/fed/0000-20200102-fed-template.md:32 #: ../../source/fed/0000-20200102-fed-template.md:36 @@ -3405,133 +3922,142 @@ msgstr "" #: ../../source/fed/0000-20200102-fed-template.md:54 #: 
../../source/fed/0000-20200102-fed-template.md:58 msgid "\\[TODO\\]" -msgstr "" +msgstr "\\[TODO\\]" #: ../../source/fed/0000-20200102-fed-template.md:34 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:53 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:78 msgid "Goals" -msgstr "" +msgstr "目标" #: ../../source/fed/0000-20200102-fed-template.md:38 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:59 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:79 msgid "Non-Goals" -msgstr "" +msgstr "非目标" #: ../../source/fed/0000-20200102-fed-template.md:42 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:65 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:80 msgid "Proposal" -msgstr "" +msgstr "提案" #: ../../source/fed/0000-20200102-fed-template.md:46 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:85 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:129 msgid "Drawbacks" -msgstr "" +msgstr "缺点" #: ../../source/fed/0000-20200102-fed-template.md:50 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:86 #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:135 msgid "Alternatives Considered" -msgstr "" +msgstr "曾考虑的替代方案" #: ../../source/fed/0000-20200102-fed-template.md:52 +#, fuzzy msgid "\\[Alternative 1\\]" -msgstr "" +msgstr "\\[Alternative 1\\]" #: ../../source/fed/0000-20200102-fed-template.md:56 +#, fuzzy msgid "\\[Alternative 2\\]" -msgstr "" +msgstr "\\[Alternative 2\\]" #: ../../source/fed/0000-20200102-fed-template.md:60 msgid "Appendix" -msgstr "" +msgstr "附录" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:10 msgid "Flower Enhancement Doc" -msgstr "" +msgstr "Flower 增效文件" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:20 +#, fuzzy msgid "[Enhancement Doc Template](#enhancement-doc-template)" -msgstr "" +msgstr "[Enhancement Doc Template](#enhancement-doc-template)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:21 +#, fuzzy msgid "[Metadata](#metadata)" -msgstr "" +msgstr "[Metadata](#metadata)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:22 +#, fuzzy msgid "[Workflow](#workflow)" -msgstr "" +msgstr "[Workflow](#workflow)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:25 +#, fuzzy msgid "[GitHub Issues](#github-issues)" -msgstr "" +msgstr "[GitHub Issues](#github-issues)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:26 +#, fuzzy msgid "[Google Docs](#google-docs)" -msgstr "" +msgstr "[Google Docs](#google-docs)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:30 msgid "A Flower Enhancement is a standardized development process to" -msgstr "" +msgstr "增强 Flower 功能是一个标准化的开发流程,目的是" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:32 msgid "provide a common structure for proposing larger changes" -msgstr "" +msgstr "为提出更大规模的变革提供一个共同的结构" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:33 msgid "ensure that the motivation for a change is clear" -msgstr "" +msgstr "确保变革的动机明确" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:34 msgid "persist project information in a version control system" -msgstr "" +msgstr "将项目信息保存在版本控制系统中" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:35 msgid "document the motivation for impactful user-facing changes" -msgstr "" +msgstr "记录面向用户的具有影响力的变革的动机" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:36 msgid "reserve GitHub issues for tracking work in flight" -msgstr "" +msgstr "保留 GitHub 问题,用于跟踪飞行中的工作" #: 
../../source/fed/0001-20220311-flower-enhancement-doc.md:37 msgid "" "ensure community participants can successfully drive changes to " "completion across one or more releases while stakeholders are adequately " "represented throughout the process" -msgstr "" +msgstr "确保社区参与者能够成功推动变革,完成一个或多个版本,同时利益相关者在整个过程" +"中得到充分代表" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:39 msgid "Hence, an Enhancement Doc combines aspects of" -msgstr "" +msgstr "因此,\"增强文件 \"将以下方面结合起来" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:41 msgid "a feature, and effort-tracking document" -msgstr "" +msgstr "功能,以及努力跟踪文件" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:42 msgid "a product requirements document" -msgstr "" +msgstr "产品要求文件" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:43 msgid "a design document" -msgstr "" +msgstr "设计文件" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:45 msgid "" "into one file, which is created incrementally in collaboration with the " "community." -msgstr "" +msgstr "该文件是与社区合作逐步创建的。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:49 msgid "" "For far-fetching changes or features proposed to Flower, an abstraction " "beyond a single GitHub issue or pull request is required to understand " "and communicate upcoming changes to the project." -msgstr "" +msgstr "对于向 Flower 提出的远期变更或功能,需要一个超越单个 GitHub " +"问题或拉请求的抽象概念,以了解和沟通项目即将发生的变更。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:51 msgid "" @@ -3540,6 +4066,9 @@ msgid "" "video calls, and hallway conversations into a well-tracked artifact, this" " process aims to enhance communication and discoverability." msgstr "" +"这一流程的目的是减少我们社区中 \"部落知识 \"的数量。通过将决策从 Slack 线程、" +"视频通话和走廊对话转移到一个跟踪良好的人工制品中,该流程旨在加强沟通和可发现" +"性。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:55 msgid "" @@ -3547,7 +4076,8 @@ msgid "" " process. If an enhancement would be described in either written or " "verbal communication to anyone besides the author or developer, then " "consider creating an Enhancement Doc." -msgstr "" +msgstr "任何较大的、面向用户的增强都应遵循增强流程。如果要以书面或口头形式向作者或开" +"发人员以外的任何人描述增强功能,则应考虑创建增强文档。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:57 msgid "" @@ -3556,6 +4086,8 @@ msgid "" "also be communicated widely. The Enhancement process is suited for this " "even if it will have zero impact on the typical user or operator." msgstr "" +"同样,任何会对开发社区的大部分人产生影响的技术工作(重构、重大架构变更)也应" +"广泛传播。即使对典型用户或操作员的影响为零,改进流程也适用于这种情况。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:61 msgid "" @@ -3564,118 +4096,124 @@ msgid "" "adding new Federated Learning algorithms, as these only add features " "without changing how Flower works or is used." msgstr "" +"对于小的改动和添加,通过 \"增强 \"程序既耗时又没有必要" +"。例如,这包括添加新的联合学习算法,因为这只会增加功能,而不会改变 \"Flower " +"\"的工作或使用方式。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:63 msgid "" "Enhancements are different from feature requests, as they are already " "providing a laid-out path for implementation and are championed by " "members of the community." -msgstr "" +msgstr "增强功能与功能请求不同,因为它们已经提供了实施路径,并得到了社区成员的支持。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:67 msgid "" "An Enhancement is captured in a Markdown file that follows a defined " "template and a workflow to review and store enhancement docs for " "reference — the Enhancement Doc." 
-msgstr "" +msgstr "增强功能被记录在一个 Markdown 文件中,该文件遵循已定义的模板和工作流程,用于" +"审查和存储增强功能文档(即增强功能文档)以供参考。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:69 msgid "Enhancement Doc Template" -msgstr "" +msgstr "增强文档模板" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:71 msgid "" "Each enhancement doc is provided as a Markdown file having the following " "structure" -msgstr "" +msgstr "每个改进文档都以 Markdown 文件的形式提供,其结构如下" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:73 msgid "Metadata (as [described below](#metadata) in form of a YAML preamble)" -msgstr "" +msgstr "元数据([如下所述](#metadata) 以 YAML 前言的形式出现)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:74 msgid "Title (same as in metadata)" -msgstr "" +msgstr "标题(与元数据中的标题相同)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:75 msgid "Table of Contents (if needed)" -msgstr "" +msgstr "目录(如有需要)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:81 msgid "Notes/Constraints/Caveats (optional)" -msgstr "" +msgstr "注意事项/限制/注意事项(可选)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:82 msgid "Design Details (optional)" -msgstr "" +msgstr "设计细节(可选)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:83 msgid "Graduation Criteria" -msgstr "" +msgstr "毕业标准" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:84 msgid "Upgrade/Downgrade Strategy (if applicable)" -msgstr "" +msgstr "升级/降级策略(如适用)" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:88 msgid "As a reference, this document follows the above structure." -msgstr "" +msgstr "作为参考,本文件采用上述结构。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:90 msgid "Metadata" -msgstr "" +msgstr "元数据" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:92 msgid "" "**fed-number** (Required) The `fed-number` of the last Flower Enhancement" " Doc + 1. With this number, it becomes easy to reference other proposals." -msgstr "" +msgstr "**fed-number**(必填)上一个鲜花增强文件的 \"fed-number \"" +"+1。有了这个编号,就很容易参考其他提案。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:94 msgid "**title** (Required) The title of the proposal in plain language." -msgstr "" +msgstr "**标题** (必填)用简明语言写出提案的标题。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:96 msgid "" "**status** (Required) The current status of the proposal. See " "[workflow](#workflow) for the possible states." -msgstr "" +msgstr "**status** (必填)提案的当前状态。有关可能的状态,请参阅 " +"[工作流程](#workflow)。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:98 msgid "" "**authors** (Required) A list of authors of the proposal. This is simply " "the GitHub ID." -msgstr "" +msgstr "**作者**(必填) 提案的作者列表。这只是 GitHub ID。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:100 msgid "" "**creation-date** (Required) The date that the proposal was first " "submitted in a PR." -msgstr "" +msgstr "**创建日期**(必填) 建议书在 PR 中首次提交的日期。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:102 msgid "" "**last-updated** (Optional) The date that the proposal was last changed " "significantly." -msgstr "" +msgstr "**最后更新** (可选)提案最后一次重大修改的日期。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:104 msgid "" "**see-also** (Optional) A list of other proposals that are relevant to " "this one." -msgstr "" +msgstr "**另见** (可选)与本提案相关的其他提案清单。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:106 msgid "**replaces** (Optional) A list of proposals that this one replaces." 
-msgstr "" +msgstr "**取代**(可选) 这份提案所取代的提案列表。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:108 msgid "**superseded-by** (Optional) A list of proposals that this one supersedes." -msgstr "" +msgstr "**被取代者** (可选) 此提案取代的提案列表。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:111 msgid "Workflow" -msgstr "" +msgstr "工作流程" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:113 msgid "" @@ -3683,7 +4221,8 @@ msgid "" "pitched in the community. As such, it needs a champion, usually the " "author, who shepherds the enhancement. This person also has to find " "committers to Flower willing to review the proposal." -msgstr "" +msgstr "形成增强功能的想法应该已经在社区中讨论过或提出过。因此,它需要一个支持者(通" +"常是作者)来引导增强。这个人还必须找到愿意审核提案的提交者。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:115 msgid "" @@ -3693,6 +4232,10 @@ msgid "" "state as part of a pull request. Discussions are done as part of the pull" " request review." msgstr "" +"新的增强功能以 `NNNN-YYYYMMDD-enhancement-title.md` 的文件名签入,其中 `NNNN`" +" 是花朵增强文档的编号,并将其转入 `enhancements`。作为拉取请求的一部分," +"所有增强功能都从 `provisional` " +"状态开始。讨论是作为拉取请求审查的一部分进行的。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:117 msgid "" @@ -3702,65 +4245,71 @@ msgid "" "enhancement as part of their description. After the implementation is " "done, the proposal status is changed to `implemented`." msgstr "" +"一旦增强功能通过审核和批准,其状态就会变为 \"可实施\"。实际的实施工作将在单独" +"的拉取请求中完成。这些拉取请求应在其描述中提及相应的增强功能。实施完成后," +"提案状态将更改为 \"已实施\"。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:119 msgid "" "Under certain conditions, other states are possible. An Enhancement has " "the following states:" -msgstr "" +msgstr "在某些条件下,还可能出现其他状态。增强 \"具有以下状态:" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:121 msgid "" "`provisional`: The enhancement has been proposed and is actively being " "defined. This is the starting state while the proposal is being fleshed " "out and actively defined and discussed." -msgstr "" +msgstr "暂定\": 已提出改进建议并正在积极定义。这是在提案得到充实、积极定义和讨论时的" +"起始状态。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:122 msgid "`implementable`: The enhancement has been reviewed and approved." -msgstr "" +msgstr "可实施\": 增强功能已审核通过。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:123 msgid "" "`implemented`: The enhancement has been implemented and is no longer " "actively changed." -msgstr "" +msgstr "已实施`: 增强功能已实施,不再主动更改。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:124 msgid "`deferred`: The enhancement is proposed but not actively being worked on." -msgstr "" +msgstr "\"推迟\": 已提出改进建议,但尚未积极开展工作。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:125 msgid "" "`rejected`: The authors and reviewers have decided that this enhancement " "is not moving forward." -msgstr "" +msgstr "拒绝\": 作者和审稿人已决定不再推进该增强功能。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:126 msgid "`withdrawn`: The authors have withdrawn the enhancement." -msgstr "" +msgstr "撤回\": 作者已撤回增强功能。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:127 msgid "`replaced`: The enhancement has been replaced by a new enhancement." -msgstr "" +msgstr "`已替换`: 增强功能已被新的增强功能取代。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:131 msgid "" "Adding an additional process to the ones already provided by GitHub " "(Issues and Pull Requests) adds more complexity and can be a barrier for " "potential first-time contributors." 
-msgstr "" +msgstr "在 GitHub 已提供的流程(问题和拉取请求)之外再增加一个流程,会增加复杂性,并" +"可能成为潜在首次贡献者的障碍。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:133 msgid "" "Expanding the proposal template beyond the single-sentence description " "currently required in the features issue template may be a heavy burden " "for non-native English speakers." -msgstr "" +msgstr "对于英语非母语者来说,将提案模板扩展到目前要求的单句描述之外可能是一个沉重的" +"负担。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:137 msgid "GitHub Issues" -msgstr "" +msgstr "GitHub 问题" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:139 msgid "" @@ -3772,10 +4321,14 @@ msgid "" "parts of the doc. Managing these multiple discussions can be confusing " "when using GitHub Issues." msgstr "" +"使用 GitHub Issues 进行此类改进是可行的。例如,我们可以使用标签来区分和过滤这" +"些问题。主要的问题在于讨论和审查增强功能: GitHub 问题只有一个评论线程。而增" +"强功能通常会同时有多个讨论线程,针对文档的不同部分。在使用 GitHub " +"问题时,管理这些多重讨论会很混乱。" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:141 msgid "Google Docs" -msgstr "" +msgstr "谷歌文档" #: ../../source/fed/0001-20220311-flower-enhancement-doc.md:143 msgid "" @@ -3786,24 +4339,27 @@ msgid "" "proposals as part of Flower's repository, the potential for missing links" " is much higher." msgstr "" +"谷歌文档允许多线程讨论。但是,由于谷歌文档是在项目之外托管的,因此需要注意它" +"们是否能被社区发现。我们必须管理所有提案的链接列表,并提供给社区使用。与作为 " +"Flower 资源库一部分的提案相比,丢失链接的可能性要大得多。" #: ../../source/fed/index.md:1 msgid "FED - Flower Enhancement Doc" -msgstr "" +msgstr "FED - Flower 增强文件" #: ../../source/how-to-aggregate-evaluation-results.rst:2 msgid "Aggregate evaluation results" -msgstr "" +msgstr "总体评估结果" #: ../../source/how-to-aggregate-evaluation-results.rst:4 msgid "" "The Flower server does not prescribe a way to aggregate evaluation " "results, but it enables the user to fully customize result aggregation." -msgstr "" +msgstr "Flower 服务器没有规定汇总评估结果的方法,但用户可以完全自定义结果汇总。" #: ../../source/how-to-aggregate-evaluation-results.rst:8 msgid "Aggregate Custom Evaluation Results" -msgstr "" +msgstr "自定义评估结果汇总" #: ../../source/how-to-aggregate-evaluation-results.rst:10 msgid "" @@ -3811,16 +4367,18 @@ msgid "" " custom evaluation results coming from individual clients. Clients can " "return custom metrics to the server by returning a dictionary:" msgstr "" +"同样的 :code:`Strategy` 定制方法也可用于汇总来自单个客户端的自定义评估结果。" +"客户端可以通过返回字典的方式向服务器返回自定义指标:" #: ../../source/how-to-aggregate-evaluation-results.rst:36 msgid "" "The server can then use a customized strategy to aggregate the metrics " "provided in these dictionaries:" -msgstr "" +msgstr "然后,服务器可以使用定制的策略来汇总这些字典中提供的指标:" #: ../../source/how-to-configure-clients.rst:2 msgid "Configure clients" -msgstr "" +msgstr "配置客户端" #: ../../source/how-to-configure-clients.rst:4 msgid "" @@ -3828,11 +4386,12 @@ msgid "" "clients. Configuration values can be used for various purposes. They are," " for example, a popular way to control client-side hyperparameters from " "the server." -msgstr "" +msgstr "除了模型参数,Flower 还可以向客户端发送配置值。配置值有多种用途。例如,它们是" +"一种从服务器控制客户端超参数的常用方法。" #: ../../source/how-to-configure-clients.rst:7 msgid "Configuration values" -msgstr "" +msgstr "配置值" #: ../../source/how-to-configure-clients.rst:9 msgid "" @@ -3841,6 +4400,9 @@ msgid "" "float), ``int``, or ``str`` (or equivalent types in different languages)." 
" Here is an example of a configuration dictionary in Python:" msgstr "" +"配置值以字典的形式表示,字典的键为 ``str``,值的类型为 " +"``bool``、``bytes``、``double``(64 位精度浮点型)、``int``或 " +"``str`(或不同语言中的等效类型)。下面是一个 Python 配置字典的示例:" #: ../../source/how-to-configure-clients.rst:20 msgid "" @@ -3848,6 +4410,8 @@ msgid "" "short) to their ProtoBuf representation, transports them to the client " "using gRPC, and then deserializes them back to Python dictionaries." msgstr "" +"Flower 将这些配置字典(简称 *config dict*)序列化为 ProtoBuf 表示形式,使用 " +"gRPC 将其传输到客户端,然后再反序列化为 Python 字典。" #: ../../source/how-to-configure-clients.rst:24 msgid "" @@ -3857,6 +4421,9 @@ msgid "" " by converting them to one of the supported value types (and converting " "them back on the client-side)." msgstr "" +"目前,还不支持在配置字典中直接发送作为值的集合类型(例如,`Set``, `List`, `Ma" +"p``)。有几种变通方法可将集合转换为受支持的值类型之一(并在客户端将其转换回)" +",从而将集合作为值发送。" #: ../../source/how-to-configure-clients.rst:26 msgid "" @@ -3864,11 +4431,12 @@ msgid "" "string, then send the JSON string using the configuration dictionary, and" " then convert the JSON string back to a list of floating-point numbers on" " the client." -msgstr "" +msgstr "例如,可以将浮点数列表转换为 JSON 字符串,然后使用配置字典发送 JSON 字符串," +"再在客户端将 JSON 字符串转换回浮点数列表。" #: ../../source/how-to-configure-clients.rst:30 msgid "Configuration through built-in strategies" -msgstr "" +msgstr "通过内置策略进行配置" #: ../../source/how-to-configure-clients.rst:32 msgid "" @@ -3879,6 +4447,9 @@ msgid "" "the current round. It then forwards the configuration dictionary to all " "the clients selected during that round." msgstr "" +"向客户端发送配置值的最简单方法是使用内置策略,如 :code:`FedAvg`。内置策略支持" +"所谓的配置函数。配置函数是内置策略调用的函数,用于获取当前回合的配置字典。然" +"后,它会将配置字典转发给该轮中选择的所有客户端。" #: ../../source/how-to-configure-clients.rst:34 msgid "" @@ -3887,17 +4458,21 @@ msgid "" "federated learning, and (c) the number of epochs to train on the client-" "side. Our configuration function could look like this:" msgstr "" +"让我们从一个简单的例子开始。想象一下,我们想要发送(a)客户端应该使用的批次大" +"小,(b)当前联合学习的全局轮次,以及(c)客户端训练的历元数。我们的配置函数" +"可以是这样的:" #: ../../source/how-to-configure-clients.rst:47 msgid "" "To make the built-in strategies use this function, we can pass it to " "``FedAvg`` during initialization using the parameter " ":code:`on_fit_config_fn`:" -msgstr "" +msgstr "为了让内置策略使用这个函数,我们可以在初始化时使用参数 " +":code:`on_fit_config_fn` 将它传递给 ``FedAvg`` :" #: ../../source/how-to-configure-clients.rst:56 msgid "One the client side, we receive the configuration dictionary in ``fit``:" -msgstr "" +msgstr "在客户端,我们在 ``fit`` 中接收配置字典:" #: ../../source/how-to-configure-clients.rst:67 msgid "" @@ -3906,6 +4481,9 @@ msgid "" " send different configuration values to `evaluate` (for example, to use a" " different batch size)." msgstr "" +"还有一个 `on_evaluate_config_fn` " +"用于配置评估,其工作方式相同。它们是不同的函数,因为可能需要向 `evaluate` " +"发送不同的配置值(例如,使用不同的批量大小)。" #: ../../source/how-to-configure-clients.rst:69 msgid "" @@ -3916,20 +4494,24 @@ msgid "" "hyperparameter schedule, for example, to increase the number of local " "epochs during later rounds, we could do the following:" msgstr "" +"内置策略每轮(即每次运行 `Strategy.configure_fit` 或 `Strategy." +"configure_evaluate` 时)都会调用此函数。每轮调用 `on_evaluate_config_fn` 允许" +"我们在连续几轮中改变配置指令。例如,如果我们想实现一个超参数时间表,以增加后" +"几轮的本地历时次数,我们可以这样做:" #: ../../source/how-to-configure-clients.rst:82 msgid "The :code:`FedAvg` strategy will call this function *every round*." 
-msgstr "" +msgstr "代码:`FedAvg`策略每轮*都会调用该函数。" #: ../../source/how-to-configure-clients.rst:85 msgid "Configuring individual clients" -msgstr "" +msgstr "配置个人客户" #: ../../source/how-to-configure-clients.rst:87 msgid "" "In some cases, it is necessary to send different configuration values to " "different clients." -msgstr "" +msgstr "在某些情况下,有必要向不同的客户端发送不同的配置值。" #: ../../source/how-to-configure-clients.rst:89 msgid "" @@ -3942,17 +4524,24 @@ msgid "" "other clients in this round to not receive this \"special\" config " "value):" msgstr "" +"这可以通过定制现有策略或 \"从头开始实施一个定制策略 `_\"来实现" +"。下面是一个无厘头的例子,它通过在*单个客户端的配置指令(config " +"dict)中添加自定义的``\"hello\": \"world\"" +"``配置键/值对添加到*单个客户端*(仅列表中的第一个客户端," +"本轮中的其他客户端不会收到此 \"特殊 \"配置值)的配置 dict 中:" #: ../../source/how-to-configure-logging.rst:2 msgid "Configure logging" -msgstr "" +msgstr "配置日志记录" #: ../../source/how-to-configure-logging.rst:4 msgid "" "The Flower logger keeps track of all core events that take place in " "federated learning workloads. It presents information by default " "following a standard message format:" -msgstr "" +msgstr "Flower 日志记录器会跟踪联合学习工作负载中发生的所有核心事件。它默认按照标准信" +"息格式提供信息:" #: ../../source/how-to-configure-logging.rst:13 msgid "" @@ -3961,10 +4550,12 @@ msgid "" "took place from, as well as the log message itself. In this way, the " "logger would typically display information on your terminal as follows:" msgstr "" +"包含的相关信息包括:日志信息级别(例如 :code:`INFO`、:code:`DEBUG`)、时间戳" +"、日志记录的行以及日志信息本身。这样,日志记录器通常会在终端上显示如下信息:" #: ../../source/how-to-configure-logging.rst:34 msgid "Saving log to file" -msgstr "" +msgstr "将日志保存到文件" #: ../../source/how-to-configure-logging.rst:36 msgid "" @@ -3978,6 +4569,12 @@ msgid "" "`_" " function. For example:" msgstr "" +"默认情况下,Flower 日志会输出到启动联合学习工作负载的终端。这既适用于基于 " +"gRPC 的联合(即执行 :code:`fl.server.start_server` 时),也适用于使用 " +":code:`VirtualClientEngine` 时(即执行 :code:`fl.simulation.start_simulation`" +" 时)。在某些情况下,您可能希望将此日志保存到磁盘。为此,您可以调用 `fl." +"common.logger.configure() `_ 函数。例如:" #: ../../source/how-to-configure-logging.rst:53 msgid "" @@ -3986,27 +4583,31 @@ msgid "" "you are running the code from. If we inspect we see the log above is also" " recorded but prefixing with :code:`identifier` each line:" msgstr "" +"通过上述操作,Flower 会将您在终端上看到的日志记录到 :code:`log.txt`。该文件将" +"创建在运行代码的同一目录下。如果我们检查一下,就会发现上面的日志也被记录了下" +"来,但每一行都以 :code:`identifier` 作为前缀:" #: ../../source/how-to-configure-logging.rst:74 msgid "Log your own messages" -msgstr "" +msgstr "记录自己的信息" #: ../../source/how-to-configure-logging.rst:76 msgid "" "You might expand the information shown by default with the Flower logger " "by adding more messages relevant to your application. You can achieve " "this easily as follows." -msgstr "" +msgstr "您可以通过添加更多与应用程序相关的信息来扩展 Flower " +"日志记录器默认显示的信息。您可以通过以下方法轻松实现这一目标。" #: ../../source/how-to-configure-logging.rst:102 msgid "" "In this way your logger will show, in addition to the default messages, " "the ones introduced by the clients as specified above." -msgstr "" +msgstr "这样,除默认信息外,您的日志记录器还将显示由客户引入的信息,如上文所述。" #: ../../source/how-to-configure-logging.rst:128 msgid "Log to a remote service" -msgstr "" +msgstr "登录远程服务" #: ../../source/how-to-configure-logging.rst:130 msgid "" @@ -4020,16 +4621,23 @@ msgid "" ":code:`HTTPHandler` should you whish to backup or analyze the logs " "somewhere else." 
msgstr "" +"此外,:code:`fl.common.logger.configure`函数还允许指定主机,通过本地 Python " +":code:`logging.handler.HTTPHandler`,向该主机推送日志(通过 :code:`POST`)。" +"在基于 :code:`gRPC` 的联合学习工作负载中,这是一个特别有用的功能,否则从所有" +"实体(即服务器和客户端)收集日志可能会很麻烦。请注意,在 Flower " +"模拟中,服务器会自动显示所有日志。如果希望在其他地方备份或分析日志,仍可指定 " +":code:`HTTPHandler`。" #: ../../source/how-to-enable-ssl-connections.rst:2 msgid "Enable SSL connections" -msgstr "" +msgstr "启用 SSL 连接" #: ../../source/how-to-enable-ssl-connections.rst:4 msgid "" "This guide describes how to a SSL-enabled secure Flower server can be " "started and how a Flower client can establish a secure connections to it." -msgstr "" +msgstr "本指南介绍如何启动启用 SSL 的安全 Flower 服务器,以及 Flower " +"客户端如何与其建立安全连接。" #: ../../source/how-to-enable-ssl-connections.rst:7 msgid "" @@ -4037,6 +4645,8 @@ msgid "" "`here `_." msgstr "" +"有关安全连接的完整代码示例,请参见 `_ 。" #: ../../source/how-to-enable-ssl-connections.rst:10 msgid "" @@ -4045,10 +4655,12 @@ msgid "" "descriptive on how. Stick to this guide for a deeper introduction to the " "topic." msgstr "" +"代码示例附带的 README.md 文件将解释如何启动它。虽然它已经启用了 " +"SSL,但对如何启用可能描述较少。请参考本指南,了解更深入的相关介绍。" #: ../../source/how-to-enable-ssl-connections.rst:16 msgid "Certificates" -msgstr "" +msgstr "证书" #: ../../source/how-to-enable-ssl-connections.rst:18 msgid "" @@ -4058,16 +4670,19 @@ msgid "" "to ask you to run the script in :code:`examples/advanced-" "tensorflow/certificates/generate.sh`" msgstr "" +"使用支持 SSL 的连接需要向服务器和客户端传递证书。在本指南中,我们将生成自签名" +"证书。由于这可能会变得相当复杂,我们将要求你运行 :code:`examples/" +"advanced-tensorflow/certificates/generate.sh` 中的脚本" #: ../../source/how-to-enable-ssl-connections.rst:23 msgid "with the following command sequence:" -msgstr "" +msgstr "使用以下命令序列:" #: ../../source/how-to-enable-ssl-connections.rst:30 msgid "" "This will generate the certificates in :code:`examples/advanced-" "tensorflow/.cache/certificates`." -msgstr "" +msgstr "这将在 :code:`examples/advanced-tensorflow/.cache/certificates` 中生成证书。" #: ../../source/how-to-enable-ssl-connections.rst:32 msgid "" @@ -4076,23 +4691,24 @@ msgid "" "complete for production environments. Please refer to other sources " "regarding the issue of correctly generating certificates for production " "environments." -msgstr "" +msgstr "本示例中生成 SSL 证书的方法可作为一种启发和起点,但不应被视为生产环境的完整方" +"法。有关在生产环境中正确生成证书的问题,请参考其他资料。" #: ../../source/how-to-enable-ssl-connections.rst:36 msgid "" "In case you are a researcher you might be just fine using the self-signed" " certificates generated using the scripts which are part of this guide." -msgstr "" +msgstr "如果你是一名研究人员,使用本指南中的脚本生成的自签名证书就可以了。" #: ../../source/how-to-enable-ssl-connections.rst:41 msgid "Server" -msgstr "" +msgstr "服务器" #: ../../source/how-to-enable-ssl-connections.rst:43 msgid "" "We are now going to show how to write a sever which uses the previously " "generated scripts." -msgstr "" +msgstr "现在,我们将展示如何编写一个使用先前生成的脚本的 sever。" #: ../../source/how-to-enable-ssl-connections.rst:61 msgid "" @@ -4101,18 +4717,21 @@ msgid "" "those files into byte strings, which is the data type " ":code:`start_server` expects." 
msgstr "" +"在提供证书时,服务器希望得到由三个证书组成的元组。 :code:`Path` " +"可用于轻松地将这些文件的内容读取为字节字符串,这就是 :code:`start_server` " +"期望的数据类型。" #: ../../source/how-to-enable-ssl-connections.rst:65 #: ../../source/how-to-upgrade-to-flower-1.0.rst:37 #: ../../source/ref-api-flwr.rst:15 msgid "Client" -msgstr "" +msgstr "客户端" #: ../../source/how-to-enable-ssl-connections.rst:67 msgid "" "We are now going to show how to write a client which uses the previously " "generated scripts:" -msgstr "" +msgstr "现在我们将演示如何编写一个客户端,使用之前生成的脚本:" #: ../../source/how-to-enable-ssl-connections.rst:84 msgid "" @@ -4120,40 +4739,44 @@ msgid "" "encoded root certificates as a byte string. We are again using " ":code:`Path` to simplify reading those as byte strings." msgstr "" +"当设置 :code:`root_certificates` 时,客户端希望 PEM " +"编码的根证书是字节字符串。我们再次使用 :code:`Path` " +"来简化以字节字符串形式读取证书的过程。" #: ../../source/how-to-enable-ssl-connections.rst:89 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:287 msgid "Conclusion" -msgstr "" +msgstr "结论" #: ../../source/how-to-enable-ssl-connections.rst:91 msgid "" "You should now have learned how to generate self-signed certificates " "using the given script, start a SSL-enabled server, and have a client " "establish a secure connection to it." -msgstr "" +msgstr "现在,你应该已经学会了如何使用给定的脚本生成自签名证书、启动启用 SSL " +"的服务器并让客户端与其建立安全连接。" #: ../../source/how-to-enable-ssl-connections.rst:96 msgid "Additional resources" -msgstr "" +msgstr "补充资源" #: ../../source/how-to-enable-ssl-connections.rst:98 msgid "" "These additional sources might be relevant if you would like to dive " "deeper into the topic of certificates:" -msgstr "" +msgstr "如果您想更深入地了解证书主题,这些额外的资料来源可能与您相关:" #: ../../source/how-to-enable-ssl-connections.rst:100 msgid "`Let's Encrypt `_" -msgstr "" +msgstr "让我们加密 `_" #: ../../source/how-to-enable-ssl-connections.rst:101 msgid "`certbot `_" -msgstr "" +msgstr "`certbot `_" #: ../../source/how-to-implement-strategies.rst:2 msgid "Implement strategies" -msgstr "" +msgstr "实施策略" #: ../../source/how-to-implement-strategies.rst:4 msgid "" @@ -4164,10 +4787,13 @@ msgid "" "evaluate models. Flower provides a few built-in strategies which are " "based on the same API described below." msgstr "" +"策略抽象可以实现完全定制的策略。策略基本上就是在服务器上运行的联合学习算法。" +"策略决定如何对客户端进行采样、如何配置客户端进行训练、如何聚合更新以及如何评" +"估模型。Flower 提供了一些内置策略,这些策略基于下文所述的相同 API。" #: ../../source/how-to-implement-strategies.rst:11 msgid "The :code:`Strategy` abstraction" -msgstr "" +msgstr "代码:\"策略 \"抽象" #: ../../source/how-to-implement-strategies.rst:13 msgid "" @@ -4177,12 +4803,14 @@ msgid "" "implementations have the exact same capabilities at their disposal as " "built-in ones." msgstr "" +"所有策略实现均源自抽象基类 :code:`flwr.server.strategy.Strategy`,包括内置实" +"现和第三方实现。这意味着自定义策略实现与内置实现具有完全相同的功能。" #: ../../source/how-to-implement-strategies.rst:18 msgid "" "The strategy abstraction defines a few abstract methods that need to be " "implemented:" -msgstr "" +msgstr "策略抽象定义了一些需要实现的抽象方法:" #: ../../source/how-to-implement-strategies.rst:75 msgid "" @@ -4190,18 +4818,20 @@ msgid "" "from the abstract base class :code:`Strategy`) that implements for the " "previously shown abstract methods:" msgstr "" +"创建一个新策略意味着要实现一个新的 :code:`class`(从抽象基类 :code:`Strategy`" +" 派生),该类要实现前面显示的抽象方法:" #: ../../source/how-to-implement-strategies.rst:100 msgid "The Flower server calls these methods in the following order:" -msgstr "" +msgstr "Flower 服务器按以下顺序调用这些方法:" #: ../../source/how-to-implement-strategies.rst:177 msgid "The following sections describe each of those methods in more detail." 
-msgstr "" +msgstr "下文将详细介绍每种方法。" #: ../../source/how-to-implement-strategies.rst:180 msgid "The :code:`initialize_parameters` method" -msgstr "" +msgstr "代码: \"初始化参数 \"方法" #: ../../source/how-to-implement-strategies.rst:182 msgid "" @@ -4209,13 +4839,17 @@ msgid "" "of an execution. It is responsible for providing the initial global model" " parameters in a serialized form (i.e., as a :code:`Parameters` object)." msgstr "" +":code:`initialize_parameters` " +"只调用一次,即在执行开始时。它负责以序列化形式(即 :code:`Parameters` " +"对象)提供初始全局模型参数。" #: ../../source/how-to-implement-strategies.rst:184 msgid "" "Built-in strategies return user-provided initial parameters. The " "following example shows how initial parameters can be passed to " ":code:`FedAvg`:" -msgstr "" +msgstr "内置策略会返回用户提供的初始参数。下面的示例展示了如何将初始参数传递给 " +":code:`FedAvg`:" #: ../../source/how-to-implement-strategies.rst:209 msgid "" @@ -4228,6 +4862,11 @@ msgid "" "useful for prototyping. In practice, it is recommended to always use " "server-side parameter initialization." msgstr "" +"Flower 服务器将调用 :code:`initialize_parameters`,返回传给 " +":code:`initial_parameters` 的参数或 :code:`None`。如果 " +":code:`initialize_parameters` 没有返回任何参数(即 :code:`None`),服务器将随" +"机选择一个客户端并要求其提供参数。这只是一个方便的功能,在实际应用中并不推荐" +"使用,但在原型开发中可能很有用。在实践中,建议始终使用服务器端参数初始化。" #: ../../source/how-to-implement-strategies.rst:213 msgid "" @@ -4237,10 +4876,14 @@ msgid "" "approaches, for example, to fine-tune a pre-trained model using federated" " learning." msgstr "" +"服务器端参数初始化是一种强大的机制。例如,它可以用来从先前保存的检查点恢复训" +"练。它也是实现混合方法所需的基本能力,例如,使用联合学习对预先训练好的模型进" +"行微调。" #: ../../source/how-to-implement-strategies.rst:216 +#, fuzzy msgid "The :code:`configure_fit` method" -msgstr "" +msgstr "代码:`configure_fit`方法" #: ../../source/how-to-implement-strategies.rst:218 msgid "" @@ -4249,13 +4892,17 @@ msgid "" "round means selecting clients and deciding what instructions to send to " "these clients. The signature of :code:`configure_fit` makes this clear:" msgstr "" +":code:`configure_fit` 负责配置即将开始的一轮训练。配置*在这里是什么意思?配置" +"一轮训练意味着选择客户并决定向这些客户发送什么指令。:code:`configure_fit` " +"的签名说明了这一点:" #: ../../source/how-to-implement-strategies.rst:231 msgid "" "The return value is a list of tuples, each representing the instructions " "that will be sent to a particular client. Strategy implementations " "usually perform the following steps in :code:`configure_fit`:" -msgstr "" +msgstr "返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 " +":code:`configure_fit` 中执行以下步骤:" #: ../../source/how-to-implement-strategies.rst:233 #: ../../source/how-to-implement-strategies.rst:280 @@ -4263,12 +4910,16 @@ msgid "" "Use the :code:`client_manager` to randomly sample all (or a subset of) " "available clients (each represented as a :code:`ClientProxy` object)" msgstr "" +"使用 :code:`client_manager` 随机抽样所有(或部分)可用客户端(" +"每个客户端都表示为 :code:`ClientProxy` 对象)" #: ../../source/how-to-implement-strategies.rst:234 msgid "" "Pair each :code:`ClientProxy` with the same :code:`FitIns` holding the " "current global model :code:`parameters` and :code:`config` dict" msgstr "" +"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 " +":code:`config` dict 的 :code:`FitIns` 配对" #: ../../source/how-to-implement-strategies.rst:236 msgid "" @@ -4277,6 +4928,9 @@ msgid "" "in a round if the corresponding :code:`ClientProxy` is included in the " "the list returned from :code:`configure_fit`." 
msgstr "" +"更复杂的实现可以使用 :code:`configure_fit` 来实现自定义的客户端选择逻辑。" +"只有当相应的 :code:`ClientProxy` 包含在 :code:`configure_fit` " +"返回的列表中时,客户端才会参加比赛。" #: ../../source/how-to-implement-strategies.rst:240 msgid "" @@ -4287,17 +4941,21 @@ msgid "" "different hyperparameters on different clients (via the :code:`config` " "dict)." msgstr "" +"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向" +"每个客户端发送不同的指令。这使得自定义策略成为可能,例如在不同的客户端上训练" +"不同的模型,或在不同的客户端上使用不同的超参数(通过 :code:`config` dict)。" #: ../../source/how-to-implement-strategies.rst:243 msgid "The :code:`aggregate_fit` method" -msgstr "" +msgstr ":code:`aggregate_fit` 方法" #: ../../source/how-to-implement-strategies.rst:245 msgid "" ":code:`aggregate_fit` is responsible for aggregating the results returned" " by the clients that were selected and asked to train in " ":code:`configure_fit`." -msgstr "" +msgstr ":code:`aggregate_fit` 负责汇总在 :code:`configure_fit` " +"中选择并要求训练的客户端所返回的结果。" #: ../../source/how-to-implement-strategies.rst:258 msgid "" @@ -4306,6 +4964,9 @@ msgid "" ":code:`configure_fit`). :code:`aggregate_fit` therefore receives a list " "of :code:`results`, but also a list of :code:`failures`." msgstr "" +"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 " +":code:`configure_fit`)的所有客户端获得结果。因此 :code:`aggregate_fit` " +"会收到 :code:`results` 的列表,但也会收到 :code:`failures` 的列表。" #: ../../source/how-to-implement-strategies.rst:260 msgid "" @@ -4314,10 +4975,14 @@ msgid "" " optional because :code:`aggregate_fit` might decide that the results " "provided are not sufficient for aggregation (e.g., too many failures)." msgstr "" +":code:`aggregate_fit` 返回一个可选的 :code:`Parameters` " +"对象和一个聚合度量的字典。:code:`Parameters` 返回值是可选的,因为 " +":code:`aggregate_fit` " +"可能会认为所提供的结果不足以进行聚合(例如,失败次数过多)。" #: ../../source/how-to-implement-strategies.rst:263 msgid "The :code:`configure_evaluate` method" -msgstr "" +msgstr "代码:\"配置评估 \"方法" #: ../../source/how-to-implement-strategies.rst:265 msgid "" @@ -4327,6 +4992,9 @@ msgid "" "instructions to send to these clients. The signature of " ":code:`configure_evaluate` makes this clear:" msgstr "" +":code:`configure_evaluate` 负责配置下一轮评估。配置*在这里是什么意思?配置一" +"轮评估意味着选择客户端并决定向这些客户端发送什么指令。:code:`configure_evalua" +"te` 的签名说明了这一点:" #: ../../source/how-to-implement-strategies.rst:278 msgid "" @@ -4334,12 +5002,16 @@ msgid "" "that will be sent to a particular client. Strategy implementations " "usually perform the following steps in :code:`configure_evaluate`:" msgstr "" +"返回值是一个元组列表,每个元组代表将发送到特定客户端的指令。策略实现通常在 " +":code:`configure_evaluate` 中执行以下步骤:" #: ../../source/how-to-implement-strategies.rst:281 msgid "" "Pair each :code:`ClientProxy` with the same :code:`EvaluateIns` holding " "the current global model :code:`parameters` and :code:`config` dict" msgstr "" +"将每个 :code:`ClientProxy` 与持有当前全局模型 :code:`parameters` 和 " +":code:`config` dict 的 :code:`EvaluateIns` 配对" #: ../../source/how-to-implement-strategies.rst:283 msgid "" @@ -4348,6 +5020,9 @@ msgid "" "in a round if the corresponding :code:`ClientProxy` is included in the " "the list returned from :code:`configure_evaluate`." msgstr "" +"更复杂的实现可以使用 :code:`configure_evaluate` " +"来实现自定义的客户端选择逻辑。只有当相应的 :code:`ClientProxy` 包含在 " +":code:`configure_evaluate` 返回的列表中时,客户端才会参与一轮比赛。" #: ../../source/how-to-implement-strategies.rst:287 msgid "" @@ -4358,10 +5033,13 @@ msgid "" "different hyperparameters on different clients (via the :code:`config` " "dict)." 
msgstr "" +"该返回值的结构为用户提供了很大的灵活性。由于指令是按客户端定义的,因此可以向" +"每个客户端发送不同的指令。这使得自定义策略可以在不同客户端上评估不同的模型," +"或在不同客户端上使用不同的超参数(通过 :code:`config` dict)。" #: ../../source/how-to-implement-strategies.rst:291 msgid "The :code:`aggregate_evaluate` method" -msgstr "" +msgstr "代码 :`aggregate_evaluate` 方法" #: ../../source/how-to-implement-strategies.rst:293 msgid "" @@ -4369,6 +5047,8 @@ msgid "" "returned by the clients that were selected and asked to evaluate in " ":code:`configure_evaluate`." msgstr "" +":code:`aggregate_evaluate` 负责汇总在 :code:`configure_evaluate` " +"中选择并要求评估的客户端返回的结果。" #: ../../source/how-to-implement-strategies.rst:306 msgid "" @@ -4377,6 +5057,10 @@ msgid "" ":code:`configure_evaluate`). :code:`aggregate_evaluate` therefore " "receives a list of :code:`results`, but also a list of :code:`failures`." msgstr "" +"当然,失败是有可能发生的,因此无法保证服务器会从它发送指令(通过 " +":code:`configure_evaluate`)的所有客户端获得结果。因此, " +":code:`aggregate_evaluate` 会接收 :code:`results` 的列表,但也会接收 " +":code:`failures` 的列表。" #: ../../source/how-to-implement-strategies.rst:308 msgid "" @@ -4385,10 +5069,14 @@ msgid "" "optional because :code:`aggregate_evaluate` might decide that the results" " provided are not sufficient for aggregation (e.g., too many failures)." msgstr "" +":code:`aggregate_evaluate` 返回一个可选的 " +":code:`float`(损失)和一个聚合指标字典。:code:`float` 返回值是可选的,因为 " +":code:`aggregate_evaluate` " +"可能会认为所提供的结果不足以进行聚合(例如,失败次数过多)。" #: ../../source/how-to-implement-strategies.rst:311 msgid "The :code:`evaluate` method" -msgstr "" +msgstr "代码:\"评估 \"方法" #: ../../source/how-to-implement-strategies.rst:313 msgid "" @@ -4397,6 +5085,9 @@ msgid "" ":code:`configure_evaluate`/:code:`aggregate_evaluate` enables strategies " "to perform both servers-side and client-side (federated) evaluation." msgstr "" +":code:`evaluate` 负责在服务器端评估模型参数。除了 " +":code:`configure_evaluate`/:code:`aggregate_evaluate` 之外,:code:`evaluate` " +"可以使策略同时执行服务器端和客户端(联合)评估。" #: ../../source/how-to-implement-strategies.rst:323 msgid "" @@ -4405,82 +5096,88 @@ msgid "" ":code:`evaluate` method might not complete successfully (e.g., it might " "fail to load the server-side evaluation data)." msgstr "" +"返回值也是可选的,因为策略可能不需要执行服务器端评估,或者因为用户定义的 " +":code:`evaluate` " +"方法可能无法成功完成(例如,它可能无法加载服务器端评估数据)。" #: ../../source/how-to-install-flower.rst:2 msgid "Install Flower" -msgstr "" +msgstr "安装Flower" #: ../../source/how-to-install-flower.rst:6 msgid "Python version" -msgstr "" +msgstr "Python 版本" #: ../../source/how-to-install-flower.rst:12 msgid "Install stable release" -msgstr "" +msgstr "安装稳定版" #: ../../source/how-to-install-flower.rst:14 msgid "" "Stable releases are available on `PyPI " "`_::" -msgstr "" +msgstr "稳定版本可在 `PyPI `_::" #: ../../source/how-to-install-flower.rst:18 msgid "" "For simulations that use the Virtual Client Engine, ``flwr`` should be " "installed with the ``simulation`` extra::" -msgstr "" +msgstr "对于使用虚拟客户端引擎的模拟,`flwr`` 应与`simulation`` 额外:一起安装:" #: ../../source/how-to-install-flower.rst:24 msgid "Verify installation" -msgstr "" +msgstr "验证安装" #: ../../source/how-to-install-flower.rst:26 msgid "" "The following command can be used to verfiy if Flower was successfully " "installed. 
If everything worked, it should print the version of Flower to" " the command line::" -msgstr "" +msgstr "可以使用以下命令来验证 Flower 是否安装成功。如果一切正常,它将在命令行中打印 " +"Flower 的版本::" #: ../../source/how-to-install-flower.rst:33 msgid "Advanced installation options" -msgstr "" +msgstr "高级安装选项" #: ../../source/how-to-install-flower.rst:36 msgid "Install pre-release" -msgstr "" +msgstr "安装预发布版本" #: ../../source/how-to-install-flower.rst:38 msgid "" "New (possibly unstable) versions of Flower are sometimes available as " "pre-release versions (alpha, beta, release candidate) before the stable " "release happens::" -msgstr "" +msgstr "在稳定版发布之前,Flower 的新版本(可能是不稳定版)有时会作为预发布版本(alph" +"a、beta、候选发布版本)提供::" #: ../../source/how-to-install-flower.rst:42 msgid "" "For simulations that use the Virtual Client Engine, ``flwr`` pre-releases" " should be installed with the ``simulation`` extra::" -msgstr "" +msgstr "对于使用虚拟客户端引擎的模拟,`flwr``预发行版应与`simulation``额外:一起安装" +":" #: ../../source/how-to-install-flower.rst:47 msgid "Install nightly release" -msgstr "" +msgstr "安装夜间版本" #: ../../source/how-to-install-flower.rst:49 msgid "" "The latest (potentially unstable) changes in Flower are available as " "nightly releases::" -msgstr "" +msgstr "Flower 中最新(可能不稳定)的更改以夜间发布的形式提供::" #: ../../source/how-to-install-flower.rst:53 msgid "" "For simulations that use the Virtual Client Engine, ``flwr-nightly`` " "should be installed with the ``simulation`` extra::" -msgstr "" +msgstr "对于使用虚拟客户端引擎的模拟,`flwr-nightly`应与`simulation`一起安装:" #: ../../source/how-to-monitor-simulation.rst:2 msgid "Monitor simulation" -msgstr "" +msgstr "监控模拟" #: ../../source/how-to-monitor-simulation.rst:4 msgid "" @@ -4490,16 +5187,20 @@ msgid "" "constrain the total usage. Insights from resource consumption can help " "you make smarter decisions and speed up the execution time." msgstr "" +"Flower 允许您在运行模拟时监控系统资源。此外,Flower 仿真引擎功能强大,能让您" +"决定如何按客户端方式分配资源并限制总使用量。从资源消耗中获得的洞察力可以帮助" +"您做出更明智的决策,并加快执行时间。" #: ../../source/how-to-monitor-simulation.rst:6 msgid "" "The specific instructions assume you are using macOS and have the " "`Homebrew `_ package manager installed." -msgstr "" +msgstr "具体说明假定你使用的是 macOS,并且安装了 `Homebrew `_ " +"软件包管理器。" #: ../../source/how-to-monitor-simulation.rst:10 msgid "Downloads" -msgstr "" +msgstr "下载" #: ../../source/how-to-monitor-simulation.rst:16 msgid "" @@ -4508,88 +5209,93 @@ msgid "" "collected data. They are both well integrated with `Ray " "`_ which Flower uses under the hood." msgstr "" +"`Prometheus `_ 用于收集数据,而 `Grafana " +"`_ 则能让你将收集到的数据可视化。它们都与 Flower " +"在引擎盖下使用的 `Ray `_ 紧密集成。" #: ../../source/how-to-monitor-simulation.rst:18 msgid "" "Overwrite the configuration files (depending on your device, it might be " "installed on a different path)." -msgstr "" +msgstr "重写配置文件(根据设备的不同,可能安装在不同的路径上)。" #: ../../source/how-to-monitor-simulation.rst:20 msgid "If you are on an M1 Mac, it should be:" -msgstr "" +msgstr "如果你使用的是 M1 Mac,应该是这样:" #: ../../source/how-to-monitor-simulation.rst:27 msgid "On the previous generation Intel Mac devices, it should be:" -msgstr "" +msgstr "在上一代英特尔 Mac 设备上,应该是这样:" #: ../../source/how-to-monitor-simulation.rst:34 msgid "" "Open the respective configuration files and change them. Depending on " "your device, use one of the two following commands:" -msgstr "" +msgstr "打开相应的配置文件并修改它们。根据设备情况,使用以下两个命令之一:" #: ../../source/how-to-monitor-simulation.rst:44 msgid "" "and then delete all the text in the file and paste a new Prometheus " "config you see below. 
You may adjust the time intervals to your " "requirements:" -msgstr "" +msgstr "然后删除文件中的所有文本,粘贴一个新的 Prometheus " +"配置文件,如下所示。您可以根据需要调整时间间隔:" #: ../../source/how-to-monitor-simulation.rst:59 msgid "" "Now after you have edited the Prometheus configuration, do the same with " "the Grafana configuration files. Open those using one of the following " "commands as before:" -msgstr "" +msgstr "编辑完 Prometheus 配置后,请对 Grafana " +"配置文件执行同样的操作。与之前一样,使用以下命令之一打开这些文件:" #: ../../source/how-to-monitor-simulation.rst:69 msgid "" "Your terminal editor should open and allow you to apply the following " "configuration as before." -msgstr "" +msgstr "您的终端编辑器应该会打开,并允许您像之前一样应用以下配置。" #: ../../source/how-to-monitor-simulation.rst:84 msgid "" "Congratulations, you just downloaded all the necessary software needed " "for metrics tracking. Now, let’s start it." -msgstr "" +msgstr "恭喜您,您刚刚下载了指标跟踪所需的所有软件。现在,让我们开始吧。" #: ../../source/how-to-monitor-simulation.rst:88 msgid "Tracking metrics" -msgstr "" +msgstr "跟踪指标" #: ../../source/how-to-monitor-simulation.rst:90 msgid "" "Before running your Flower simulation, you have to start the monitoring " "tools you have just installed and configured." -msgstr "" +msgstr "在运行 Flower 模拟之前,您必须启动刚刚安装和配置的监控工具。" #: ../../source/how-to-monitor-simulation.rst:97 msgid "" "Please include the following argument in your Python code when starting a" " simulation." -msgstr "" +msgstr "开始模拟时,请在 Python 代码中加入以下参数。" #: ../../source/how-to-monitor-simulation.rst:108 msgid "Now, you are ready to start your workload." -msgstr "" +msgstr "现在,您可以开始工作了。" #: ../../source/how-to-monitor-simulation.rst:110 msgid "" "Shortly after the simulation starts, you should see the following logs in" " your terminal:" -msgstr "" +msgstr "模拟启动后不久,您就会在终端中看到以下日志:" #: ../../source/how-to-monitor-simulation.rst:117 msgid "You can look at everything at ``_ ." -msgstr "" +msgstr "您可以在 ``_ 查看所有内容。" #: ../../source/how-to-monitor-simulation.rst:119 msgid "" "It's a Ray Dashboard. You can navigate to Metrics (on the left panel, the" " lowest option)." -msgstr "" +msgstr "这是一个 Ray Dashboard。您可以导航到 \"度量标准\"(左侧面板,最低选项)。" #: ../../source/how-to-monitor-simulation.rst:121 msgid "" @@ -4599,6 +5305,9 @@ msgid "" "can only use Grafana to explore the metrics. You can start Grafana by " "going to ``http://localhost:3000/``." msgstr "" +"或者,您也可以点击右上角的 \"在 Grafana 中查看\",在 Grafana " +"中查看它们。请注意,Ray 仪表盘只能在模拟期间访问。模拟结束后,您只能使用 " +"Grafana 浏览指标。您可以访问 ``http://localhost:3000/``启动 Grafana。" #: ../../source/how-to-monitor-simulation.rst:123 msgid "" @@ -4606,16 +5315,19 @@ msgid "" "important as they will otherwise block, for example port :code:`3000` on " "your machine as long as they are running." msgstr "" +"完成可视化后,请停止 Prometheus 和 " +"Grafana。这一点很重要,否则只要它们在运行,就会阻塞机器上的端口 " +":code:`3000`。" #: ../../source/how-to-monitor-simulation.rst:132 msgid "Resource allocation" -msgstr "" +msgstr "资源分配" #: ../../source/how-to-monitor-simulation.rst:134 msgid "" "You must understand how the Ray library works to efficiently allocate " "system resources to simulation clients on your own." -msgstr "" +msgstr "您必须了解 Ray 库是如何工作的,才能有效地为自己的仿真客户端分配系统资源。" #: ../../source/how-to-monitor-simulation.rst:136 msgid "" @@ -4626,27 +5338,32 @@ msgid "" "You will learn more about that in the later part of this blog. 
You can " "check the system resources by running the following:" msgstr "" +"最初,模拟(由 Ray 在引擎盖下处理)默认使用系统上的所有可用资源启动,并在客户" +"端之间共享。但这并不意味着它会将资源平均分配给所有客户端,也不意味着模型训练" +"会在所有客户端同时进行。您将在本博客的后半部分了解到更多相关信息。您可以运行" +"以下命令检查系统资源:" #: ../../source/how-to-monitor-simulation.rst:143 msgid "In Google Colab, the result you see might be similar to this:" -msgstr "" +msgstr "在 Google Colab 中,您看到的结果可能与此类似:" #: ../../source/how-to-monitor-simulation.rst:155 msgid "" "However, you can overwrite the defaults. When starting a simulation, do " "the following (you don't need to overwrite all of them):" -msgstr "" +msgstr "不过,您可以覆盖默认值。开始模拟时,请执行以下操作(不必全部覆盖):" #: ../../source/how-to-monitor-simulation.rst:175 msgid "Let’s also specify the resource for a single client." -msgstr "" +msgstr "我们还可以为单个客户指定资源。" #: ../../source/how-to-monitor-simulation.rst:205 msgid "" "Now comes the crucial part. Ray will start a new client only when it has " "all the required resources (such that they run in parallel) when the " "resources allow." -msgstr "" +msgstr "现在到了关键部分。只有在资源允许的情况下,Ray " +"才会在拥有所有所需资源(如并行运行)时启动新客户端。" #: ../../source/how-to-monitor-simulation.rst:207 msgid "" @@ -4657,66 +5374,78 @@ msgid "" ":code:`client_num_gpus = 2`, the simulation wouldn't start (even if you " "had 2 GPUs but decided to set 1 in :code:`ray_init_args`)." msgstr "" +"在上面的示例中,将只运行一个客户端,因此您的客户端不会并发运行。设置 :code:`" +"client_num_gpus = 0.5` 将允许运行两个客户端,从而使它们能够并发运行。请注意," +"所需的资源不要超过可用资源。如果您指定 :code:`client_num_gpus = " +"2`,模拟将无法启动(即使您有 2 个 GPU,但决定在 :code:`ray_init_args` " +"中设置为 1)。" #: ../../source/how-to-monitor-simulation.rst:212 ../../source/ref-faq.rst:2 msgid "FAQ" -msgstr "" +msgstr "常见问题" #: ../../source/how-to-monitor-simulation.rst:214 msgid "Q: I don't see any metrics logged." -msgstr "" +msgstr "问:我没有看到任何指标记录。" #: ../../source/how-to-monitor-simulation.rst:216 msgid "" "A: The timeframe might not be properly set. The setting is in the top " "right corner (\"Last 30 minutes\" by default). Please change the " "timeframe to reflect the period when the simulation was running." -msgstr "" +msgstr "答:时间范围可能没有正确设置。设置在右上角(默认为 \"最后 30 分钟\"" +")。请更改时间框架,以反映模拟运行的时间段。" #: ../../source/how-to-monitor-simulation.rst:218 msgid "" "Q: I see “Grafana server not detected. Please make sure the Grafana " "server is running and refresh this page” after going to the Metrics tab " "in Ray Dashboard." -msgstr "" +msgstr "问:我看到 \"未检测到 Grafana 服务器。请确保 Grafana " +"服务器正在运行并刷新此页面\"。" #: ../../source/how-to-monitor-simulation.rst:220 msgid "" "A: You probably don't have Grafana running. Please check the running " "services" -msgstr "" +msgstr "答:您可能没有运行 Grafana。请检查正在运行的服务" #: ../../source/how-to-monitor-simulation.rst:226 msgid "" "Q: I see \"This site can't be reached\" when going to " "``_." -msgstr "" +msgstr "问:在访问 ``_时,我看到 \"无法访问该网站\"。" #: ../../source/how-to-monitor-simulation.rst:228 msgid "" "A: Either the simulation has already finished, or you still need to start" " Prometheus." -msgstr "" +msgstr "答:要么模拟已经完成,要么您还需要启动普罗米修斯。" #: ../../source/how-to-monitor-simulation.rst:232 msgid "Resources" -msgstr "" +msgstr "资源" #: ../../source/how-to-monitor-simulation.rst:234 +#, fuzzy msgid "" "Ray Dashboard: ``_" msgstr "" +"Ray Dashboard: ``_" #: ../../source/how-to-monitor-simulation.rst:236 +#, fuzzy msgid "" "Ray Metrics: ``_" msgstr "" +"Ray Metrics: ``_" #: ../../source/how-to-run-simulations.rst:2 msgid "Run simulations" -msgstr "" +msgstr "运行模拟" #: ../../source/how-to-run-simulations.rst:8 msgid "" @@ -4733,6 +5462,13 @@ msgid "" "`_ or " "VCE." 
msgstr "" +"模拟联合学习工作负载可用于多种用例:您可能希望在大量客户端上运行您的工作负载" +",但无需采购、配置和管理大量物理设备;" +"您可能希望在您可以访问的计算系统上尽可能快地运行您的 FL 工作负载,而无需经过" +"复杂的设置过程;您可能希望在不同数据和系统异构性、客户端可用性、隐私预算等不" +"同水平的场景中验证您的算法。这些都是模拟 FL 工作负载有意义的一些用例。Flower " +"可以通过其 \"虚拟客户端引擎\"(VirtualClientEngine)_或 VCE 来适应这些情况。" #: ../../source/how-to-run-simulations.rst:10 msgid "" @@ -4745,6 +5481,12 @@ msgid "" "and therefore behave in an identical way. In addition to that, clients " "managed by the :code:`VirtualClientEngine` are:" msgstr "" +"代码:`VirtualClientEngine`调度、启动和管理`虚拟`客户端。这些客户端与 \"非虚拟" +" \"客户端(即通过命令 \"flwr.client.start_numpy_client `_\"启动的客户端)完全相同,它们可以通过创建一个继承自 " +"\"flwr.client.NumPyClient `_\" " +"的类来配置,因此行为方式也完全相同。除此之外,由 :code:`VirtualClientEngine` " +"管理的客户端还包括:" #: ../../source/how-to-run-simulations.rst:12 msgid "" @@ -4754,13 +5496,17 @@ msgid "" "parallelism of your Flower FL simulation. The fewer the resources per " "client, the more clients can run concurrently on the same hardware." msgstr "" +"资源感知:这意味着每个客户端都会分配到系统中的一部分计算和内存。作为用户,您" +"可以在模拟开始时对其进行控制,从而控制 Flower FL " +"模拟的并行程度。每个客户端的资源越少,在同一硬件上并发运行的客户端就越多。" #: ../../source/how-to-run-simulations.rst:13 msgid "" "self-managed: this means that you as a user do not need to launch clients" " manually, instead this gets delegated to :code:`VirtualClientEngine`'s " "internals." -msgstr "" +msgstr "自管理:这意味着用户无需手动启动客户端,而是由 :code:`VirtualClientEngine` " +"的内部人员负责。" #: ../../source/how-to-run-simulations.rst:14 msgid "" @@ -4770,6 +5516,9 @@ msgid "" " releasing the resources it was assigned and allowing in this way other " "clients to participate." msgstr "" +"ephemeral(短暂的):这意味着客户端只有在 FL 进程中需要它时才会被实体化(" +"例如执行 `fit() `_ " +")。之后该对象将被销毁,释放分配给它的资源,并允许其他客户端以这种方式参与。" #: ../../source/how-to-run-simulations.rst:16 msgid "" @@ -4779,10 +5528,14 @@ msgid "" "of `Actors `_ to " "spawn `virtual` clients and run their workload." msgstr "" +"代码:`VirtualClientEngine`使用`Ray `_来实现`虚拟`客户端,这是一个用于可扩展 Python 工作负载的开源框架。特别地," +"Flower 的 :code:`VirtualClientEngine` 使用 `Actors `_ 来生成 `virtual` 客户端并运行它们的工作负载。" #: ../../source/how-to-run-simulations.rst:20 msgid "Launch your Flower simulation" -msgstr "" +msgstr "启动 Flower 模拟" #: ../../source/how-to-run-simulations.rst:22 msgid "" @@ -4793,10 +5546,14 @@ msgid "" "flwr.html#flwr.simulation.start_simulation>`_ and a minimal example looks" " as follows:" msgstr "" +"运行 Flower 仿真仍然需要定义客户端类、策略以及下载和加载(可能还需要分割)数" +"据集的实用程序。在完成这些工作后,就可以使用 \"start_simulation `_\" " +"来启动模拟了,一个最简单的示例如下:" #: ../../source/how-to-run-simulations.rst:44 msgid "VirtualClientEngine resources" -msgstr "" +msgstr "虚拟客户端引擎资源" #: ../../source/how-to-run-simulations.rst:45 msgid "" @@ -4811,10 +5568,18 @@ msgid "" " documentation. Do not set :code:`ray_init_args` if you want the VCE to " "use all your system's CPUs and GPUs." msgstr "" +"默认情况下,VCE 可以访问所有系统资源(即所有 CPU、所有 GPU 等)," +"因为这也是启动 Ray " +"时的默认行为。不过,在某些设置中,您可能希望限制有多少系统资源用于仿真。" +"您可以通过 :code:`ray_init_args` 输入到 :code:`start_simulation` " +"的参数来做到这一点,VCE 会在内部将该参数传递给 Ray 的 :code:`ray.init` " +"命令。有关您可以配置的设置的完整列表,请查看 `ray.init `_ 文档。如果希望 VCE " +"使用系统中所有的 CPU 和 GPU,请不要设置 :code:`ray_init_args`。" #: ../../source/how-to-run-simulations.rst:62 msgid "Assigning client resources" -msgstr "" +msgstr "分配客户资源" #: ../../source/how-to-run-simulations.rst:63 msgid "" @@ -4822,6 +5587,9 @@ msgid "" " nothing else) to each virtual client. This means that if your system has" " 10 cores, that many virtual clients can be concurrently running." 
msgstr "" +"默认情况下,:code:`VirtualClientEngine` 会为每个虚拟客户端分配一个 CPU " +"内核(不分配其他任何内核)。这意味着,如果系统有 10 " +"个内核,那么可以同时运行这么多虚拟客户端。" #: ../../source/how-to-run-simulations.rst:65 msgid "" @@ -4833,20 +5601,25 @@ msgid "" " Two keys are internally used by Ray to schedule and spawn workloads (in " "our case Flower clients):" msgstr "" +"通常情况下,您可能希望根据 FL " +"工作负载的复杂性(即计算和内存占用)来调整分配给客户端的资源。" +"您可以在启动仿真时将参数 `client_resources` 设置为 `start_simulation `_ 。Ray " +"内部使用两个键来调度和生成工作负载(在我们的例子中是 Flower 客户端):" #: ../../source/how-to-run-simulations.rst:67 msgid ":code:`num_cpus` indicates the number of CPU cores a client would get." -msgstr "" +msgstr ":code:`num_cpus` 表示客户端将获得的 CPU 内核数量。" #: ../../source/how-to-run-simulations.rst:68 msgid "" ":code:`num_gpus` indicates the **ratio** of GPU memory a client gets " "assigned." -msgstr "" +msgstr ":code:`num_gpus` 表示分配给客户端的 GPU 内存的***比例。" #: ../../source/how-to-run-simulations.rst:70 msgid "Let's see a few examples:" -msgstr "" +msgstr "让我们来看几个例子:" #: ../../source/how-to-run-simulations.rst:89 msgid "" @@ -4860,6 +5633,12 @@ msgid "" "simulating a client sampled by the strategy) and then will execute them " "in a resource-aware manner in batches of 8." msgstr "" +"虽然 :code:`client_resources` 可用来控制 FL 模拟的并发程度,但这并不能阻止您" +"在同一轮模拟中运行几十、几百甚至上千个客户端,并拥有数量级更多的 \"休眠\"" +"(即不参与一轮模拟)客户端。比方说,您希望每轮有 100 个客户端," +"但您的系统只能同时容纳 8 个客户端。:code:`VirtualClientEngine` 将安排运行 " +"100 " +"个作业(每个作业模拟策略采样的一个客户端),然后以资源感知的方式分批执行。" #: ../../source/how-to-run-simulations.rst:91 msgid "" @@ -4868,10 +5647,13 @@ msgid "" "look at the `Ray documentation `_." msgstr "" +"要了解资源如何用于调度 FL 客户端以及如何定义自定义资源的所有复杂细节,请查看 " +"`Ray 文档 `_。" #: ../../source/how-to-run-simulations.rst:94 msgid "Simulation examples" -msgstr "" +msgstr "模拟示例" #: ../../source/how-to-run-simulations.rst:96 msgid "" @@ -4879,6 +5661,9 @@ msgid "" "Tensorflow/Keras and PyTorch are provided in the `Flower repository " "`_. You can run them on Google Colab too:" msgstr "" +"在 Tensorflow/Keras 和 PyTorch 中进行 Flower " +"仿真的几个可随时运行的完整示例已在 `Flower 存储库 `_ 中提供。您也可以在 Google Colab 上运行它们:" #: ../../source/how-to-run-simulations.rst:98 msgid "" @@ -4886,6 +5671,8 @@ msgid "" "`_: 100 clients collaboratively train a MLP model on MNIST." msgstr "" +"Tensorflow/Keras仿真 `_:100个客户端在MNIST上协作训练一个MLP模型。" #: ../../source/how-to-run-simulations.rst:99 msgid "" @@ -4893,10 +5680,12 @@ msgid "" "/simulation-pytorch>`_: 100 clients collaboratively train a CNN model on " "MNIST." msgstr "" +"PyTorch 仿真 `_:100 个客户端在 MNIST 上协作训练一个 CNN 模型。" #: ../../source/how-to-run-simulations.rst:104 msgid "Multi-node Flower simulations" -msgstr "" +msgstr "多节点 Flower 模拟" #: ../../source/how-to-run-simulations.rst:106 msgid "" @@ -4904,20 +5693,24 @@ msgid "" "across multiple compute nodes. Before starting your multi-node simulation" " ensure that you:" msgstr "" +"Flower 的 :code:`VirtualClientEngine` 允许您在多个计算节点上运行 FL " +"仿真。在开始多节点模拟之前,请确保:" #: ../../source/how-to-run-simulations.rst:108 msgid "Have the same Python environment in all nodes." -msgstr "" +msgstr "所有节点都有相同的 Python 环境。" #: ../../source/how-to-run-simulations.rst:109 msgid "Have a copy of your code (e.g. your entire repo) in all nodes." 
-msgstr "" +msgstr "在所有节点上都有一份代码副本(例如整个软件包)。" #: ../../source/how-to-run-simulations.rst:110 msgid "" "Have a copy of your dataset in all nodes (more about this in " ":ref:`simulation considerations `)" msgstr "" +"在所有节点中都有一份数据集副本(更多相关信息请参阅 :ref:`模拟注意事项" +"`)" #: ../../source/how-to-run-simulations.rst:111 msgid "" @@ -4925,6 +5718,9 @@ msgid "" "`_ so the " ":code:`VirtualClientEngine` attaches to a running Ray instance." msgstr "" +"将 :code:`ray_init_args={\"address\"=\"auto\"}`传递给 `start_simulation `_ ,这样 " +":code:`VirtualClientEngine`就会连接到正在运行的 Ray 实例。" #: ../../source/how-to-run-simulations.rst:112 msgid "" @@ -4932,6 +5728,8 @@ msgid "" "--head`. This command will print a few lines, one of which indicates how " "to attach other nodes to the head node." msgstr "" +"在头部节点上启动 Ray:在终端上输入 :code:`raystart--" +"head`。该命令将打印几行,其中一行说明如何将其他节点连接到头部节点。" #: ../../source/how-to-run-simulations.rst:113 msgid "" @@ -4939,29 +5737,33 @@ msgid "" "starting the head and execute it on terminal of a new node: for example " ":code:`ray start --address='192.168.1.132:6379'`" msgstr "" +"将其他节点附加到头部节点:复制启动头部后显示的命令,并在新节点的终端上执行:" +"例如 :code:`ray start --address='192.168.1.132:6379'`" #: ../../source/how-to-run-simulations.rst:115 msgid "" "With all the above done, you can run your code from the head node as you " "would if the simulation was running on a single node." -msgstr "" +msgstr "完成上述所有操作后,您就可以在头部节点上运行代码了,就像在单个节点上运行模拟" +"一样。" #: ../../source/how-to-run-simulations.rst:117 msgid "" "Once your simulation is finished, if you'd like to dismantle your cluster" " you simply need to run the command :code:`ray stop` in each node's " "terminal (including the head node)." -msgstr "" +msgstr "模拟结束后,如果要拆除集群,只需在每个节点(包括头部节点)的终端运行 :code:`" +"ray stop` 命令即可。" #: ../../source/how-to-run-simulations.rst:120 msgid "Multi-node simulation good-to-know" -msgstr "" +msgstr "了解多节点模拟" #: ../../source/how-to-run-simulations.rst:122 msgid "" "Here we list a few interesting functionality when running multi-node FL " "simulations:" -msgstr "" +msgstr "在此,我们列举了运行多节点 FL 模拟时的一些有趣功能:" #: ../../source/how-to-run-simulations.rst:124 msgid "" @@ -4969,6 +5771,8 @@ msgid "" " well as the total resources available to the " ":code:`VirtualClientEngine`." msgstr "" +"使用 :code:`ray status` 查看连接到头部节点的所有节点,以及 " +":code:`VirtualClientEngine` 可用的总资源。" #: ../../source/how-to-run-simulations.rst:126 msgid "" @@ -4981,16 +5785,23 @@ msgid "" "gpus=` in any :code:`ray start` command (including " "when starting the head)" msgstr "" +"将新节点附加到头部节点时,头部节点将可见其所有资源(即所有 CPU 和 GPU)。" +"这意味着 :code:`VirtualClientEngine` 可以调度尽可能多的 \"虚拟 " +"\"客户端来运行该节点" +"。在某些设置中,您可能希望将某些资源排除在模拟之外。为此,您可以在任何 " +":code:`ray start` 命令(包括启动头部时)中添加 `--num-" +"cpus=`和/或 `--num-gpus=`" #: ../../source/how-to-run-simulations.rst:132 msgid "Considerations for simulations" -msgstr "" +msgstr "模拟的注意事项" #: ../../source/how-to-run-simulations.rst:135 msgid "" "We are actively working on these fronts so to make it trivial to run any " "FL workload with Flower simulation." -msgstr "" +msgstr "我们正在积极开展这些方面的工作,以便使 FL 工作负载与 Flower " +"仿真的运行变得轻而易举。" #: ../../source/how-to-run-simulations.rst:138 msgid "" @@ -5002,10 +5813,15 @@ msgid "" " mind when designing your FL pipeline with Flower. We also highlight a " "couple of current limitations in our implementation." 
msgstr "" +"当前的 VCE 允许您在模拟模式下运行 Federated Learning " +"工作负载,无论您是在个人笔记本电脑上建立简单的场景原型,还是要在多个高性能 " +"GPU 节点上训练复杂的 FL 管道。虽然我们为 VCE 增加了更多的功能," +"但以下几点强调了在使用 Flower 设计 FL " +"管道时需要注意的一些事项。我们还强调了我们的实现中目前存在的一些局限性。" #: ../../source/how-to-run-simulations.rst:141 msgid "GPU resources" -msgstr "" +msgstr "GPU 资源" #: ../../source/how-to-run-simulations.rst:143 msgid "" @@ -5013,6 +5829,8 @@ msgid "" ":code:`num_gpus` in :code:`client_resources`. This being said, Ray (used " "internally by the VCE) is by default:" msgstr "" +"VCE 会为指定 :code:`client_resources` 中 :code:`num_gpus` 关键字的客户端分配 " +"GPU 内存份额。也就是说,Ray(VCE 内部使用)是默认的:" #: ../../source/how-to-run-simulations.rst:146 msgid "" @@ -5021,12 +5839,16 @@ msgid "" "different (e.g. 32GB and 8GB) VRAM amounts, they both would run 2 clients" " concurrently." msgstr "" +"不知道 GPU 上可用的总 VRAM。这意味着,如果您设置 :code:`num_gpus=0." +"5`,而系统中有两个不同(如 32GB 和 8GB)VRAM 的 GPU,它们都将同时运行 2 " +"个客户端。" #: ../../source/how-to-run-simulations.rst:147 msgid "" "not aware of other unrelated (i.e. not created by the VCE) workloads are " "running on the GPU. Two takeaways from this are:" -msgstr "" +msgstr "不知道 GPU 上正在运行其他无关(即不是由 VCE " +"创建)的工作负载。从中可以得到以下两点启示:" #: ../../source/how-to-run-simulations.rst:149 msgid "" @@ -5034,6 +5856,8 @@ msgid "" "aggregation (by instance when making use of the `evaluate method `_)" msgstr "" +"您的 Flower 服务器可能需要 GPU 来评估聚合后的 \"全局模型\"(例如在使用 " +"\"评估方法\"`_时)" #: ../../source/how-to-run-simulations.rst:150 msgid "" @@ -5042,6 +5866,8 @@ msgid "" ":code:`CUDA_VISIBLE_DEVICES=\"\"` when launching your " "experiment." msgstr "" +"如果您想在同一台机器上运行多个独立的 Flower 仿真,则需要在启动实验时使用 " +":code:`CUDA_VISIBLE_DEVICES=\"\"` 屏蔽 GPU。" #: ../../source/how-to-run-simulations.rst:153 msgid "" @@ -5050,10 +5876,12 @@ msgid "" "situation of client using more VRAM than the ratio specified when " "starting the simulation." msgstr "" +"此外,传递给 :code:`client_resources` 的 GPU 资源限制并不是 \"强制 \"的" +"(即可以超出),这可能导致客户端使用的 VRAM 超过启动模拟时指定的比例。" #: ../../source/how-to-run-simulations.rst:156 msgid "TensorFlow with GPUs" -msgstr "" +msgstr "使用 GPU 的 TensorFlow" #: ../../source/how-to-run-simulations.rst:158 msgid "" @@ -5066,6 +5894,12 @@ msgid "" "default behavior by `enabling memory growth " "`_." msgstr "" +"在 TensorFlow `_ 中使用 GPU 时," +"几乎所有进程可见的 GPU 内存都将被映射。TensorFlow " +"这样做是出于优化目的。然而,在 FL 模拟等设置中,我们希望将 GPU 分割成多个 " +"\"虚拟 \"客户端,这并不是一个理想的机制。幸运的是,我们可以通过 " +"\"启用内存增长 `_\"来禁用这一默认行为。" #: ../../source/how-to-run-simulations.rst:160 msgid "" @@ -5076,6 +5910,10 @@ msgid "" "In this case, to enable GPU growth for TF workloads. It would look as " "follows:" msgstr "" +"这需要在主进程(也就是服务器运行的地方)和 VCE 创建的每个角色中完成。通过 " +":code:`actor_kwargs`,我们可以传递保留关键字`\"on_actor_init_fn\"" +"`,以指定在角色初始化时执行的函数。在本例中,为了使 TF 工作负载的 GPU " +"增长。它看起来如下:" #: ../../source/how-to-run-simulations.rst:179 msgid "" @@ -5083,10 +5921,12 @@ msgid "" "`_ example." msgstr "" +"这正是 \"Tensorflow/Keras 仿真 `_\"示例中使用的机制。" #: ../../source/how-to-run-simulations.rst:183 msgid "Multi-node setups" -msgstr "" +msgstr "多节点设置" #: ../../source/how-to-run-simulations.rst:185 msgid "" @@ -5100,6 +5940,11 @@ msgid "" "nodes or a dataset serving mechanism (e.g. using nfs, a database) to " "circumvent data duplication." 
msgstr "" +"VCE 目前不提供控制特定 \"虚拟 \"客户端在哪个节点上执行的方法。换句话说,如果" +"不止一个节点拥有客户端运行所需的资源,那么这些节点中的任何一个都可能被调度到" +"客户端工作负载上。在 FL 进程的稍后阶段(即在另一轮中),同一客户端可以由不同" +"的节点执行。根据客户访问数据集的方式,这可能需要在所有节点上复制所有数据集分" +"区,或采用数据集服务机制(如使用 nfs 或数据库)来避免数据重复。" #: ../../source/how-to-run-simulations.rst:187 msgid "" @@ -5111,21 +5956,26 @@ msgid "" " above also since, in some way, the client's dataset could be seen as a " "type of `state`." msgstr "" +"根据定义,虚拟客户端是 \"无状态 \"的,因为它们具有短暂性。客户机状态可以作为 " +"Flower 客户机类的一部分来实现,但用户需要确保将其保存到持久存储(如数据库、磁" +"盘)中,而且无论客户机在哪个节点上运行,都能在以后检索到。这也与上述观点有关" +",因为在某种程度上,客户端的数据集可以被视为一种 \"状态\"。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:2 msgid "Save and load model checkpoints" -msgstr "" +msgstr "保存和加载模型检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:4 msgid "" "Flower does not automatically save model updates on the server-side. This" " how-to guide describes the steps to save (and load) model checkpoints in" " Flower." -msgstr "" +msgstr "Flower 不会在服务器端自动保存模型更新。本指南将介绍在 Flower " +"中保存(和加载)模型检查点的步骤。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:8 msgid "Model checkpointing" -msgstr "" +msgstr "模型检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:10 msgid "" @@ -5140,10 +5990,16 @@ msgid "" " before it returns those aggregated weights to the caller (i.e., the " "server):" msgstr "" +"模型更新可通过自定义 :code:`Strategy` 方法在服务器端持久化。实现自定义策略始" +"终是一种选择,但在许多情况下,简单地自定义现有策略可能更方便。" +"下面的代码示例定义了一个新的 :code:`SaveModelStrategy`,它自定义了现有的内置 " +":code:`FedAvg` 策略。特别是,它通过调用基类(:code:`FedAvg`)中的 " +":code:`aggregate_fit` 来定制 :code:`aggregate_fit`。然后继续保存返回的(聚合" +")权重,然后再将这些聚合权重返回给调用者(即服务器):" #: ../../source/how-to-save-and-load-model-checkpoints.rst:47 msgid "Save and load PyTorch checkpoints" -msgstr "" +msgstr "保存和加载 PyTorch 检查点" #: ../../source/how-to-save-and-load-model-checkpoints.rst:49 msgid "" @@ -5154,17 +6010,22 @@ msgid "" "transformed into the PyTorch ``state_dict`` following the ``OrderedDict``" " class structure." msgstr "" +"与前面的例子类似,但多了几个步骤,我们将展示如何存储一个 PyTorch 检查点," +"我们将使用 ``torch.save`` 函数。首先,``aggregate_fit`` 返回一个 " +"``Parameters`` 对象,它必须被转换成一个 NumPy ``ndarray`` 的列表," +"然后这些对象按照 ``OrderedDict`` 类结构被转换成 PyTorch `state_dict` 对象。" #: ../../source/how-to-save-and-load-model-checkpoints.rst:85 msgid "" "To load your progress, you simply append the following lines to your " "code. Note that this will iterate over all saved checkpoints and load the" " latest one:" -msgstr "" +msgstr "要加载进度,只需在代码中添加以下几行。请注意,这将遍历所有已保存的检查点,并" +"加载最新的检查点:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:2 msgid "Upgrade to Flower 1.0" -msgstr "" +msgstr "升级至 Flower 1.0" #: ../../source/how-to-upgrade-to-flower-1.0.rst:4 msgid "" @@ -5173,32 +6034,35 @@ msgid "" "series releases), there are a few breaking changes that make it necessary" " to change the code of existing 0.x-series projects." msgstr "" +"Flower 1.0 正式发布。除了新功能,Flower 1.0 还为未来的发展奠定了稳定的基础。" +"与 Flower 0.19(以及其他 0.x 系列版本)相比,有一些破坏性改动需要修改现有 " +"0.x 系列项目的代码。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:8 msgid "Install update" -msgstr "" +msgstr "安装更新" #: ../../source/how-to-upgrade-to-flower-1.0.rst:10 msgid "" "Here's how to update an existing installation to Flower 1.0 using either " "pip or Poetry:" -msgstr "" +msgstr "下面介绍如何使用 pip 或 Poetry 将现有安装更新到 Flower 1.0:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:12 msgid "pip: add ``-U`` when installing." -msgstr "" +msgstr "pip: 安装时添加 ``-U``." 
#: ../../source/how-to-upgrade-to-flower-1.0.rst:14
msgid ""
"``python -m pip install -U flwr`` (when using ``start_server`` and "
"``start_client``)"
-msgstr ""
+msgstr "``python -m pip install -U flwr``(当使用 ``start_server`` 和 ``start_client`` 时)"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:15
msgid ""
"``python -m pip install -U flwr[simulation]`` (when using "
"``start_simulation``)"
-msgstr ""
+msgstr "``python -m pip install -U flwr[simulation]``(当使用 ``start_simulation`` 时)"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:17
msgid ""
@@ -5206,40 +6070,45 @@ msgid ""
"reinstall (don't forget to delete ``poetry.lock`` via ``rm poetry.lock`` "
"before running ``poetry install``)."
msgstr ""
+"Poetry:更新 ``pyproject.toml`` 中的 ``flwr`` 依赖关系,然后重新安装(运行 "
+"``poetry install`` 前,别忘了通过 ``rm poetry.lock`` 删除 ``poetry.lock``)。"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:19
msgid "``flwr = \"^1.0.0\"`` (when using ``start_server`` and ``start_client``)"
-msgstr ""
+msgstr "``flwr = \"^1.0.0\"``(当使用 ``start_server`` 和 ``start_client`` 时)"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:20
msgid ""
"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] }`` (when "
"using ``start_simulation``)"
msgstr ""
+"``flwr = { version = \"^1.0.0\", extras = [\"simulation\"] "
+"}``(当使用 ``start_simulation`` 时)"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:24
msgid "Required changes"
-msgstr ""
+msgstr "所需变更"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:26
msgid "The following breaking changes require manual updates."
-msgstr ""
+msgstr "以下更改需要手动更新。"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:29
msgid "General"
-msgstr ""
+msgstr "一般情况"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:31
msgid ""
"Pass all arguments as keyword arguments (not as positional arguments). "
"Here's an example:"
-msgstr ""
+msgstr "将所有参数作为关键字参数传递(而不是位置参数)。下面是一个例子:"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:33
msgid ""
"Flower 0.19 (positional arguments): ``start_client(\"127.0.0.1:8080\", "
"FlowerClient())``"
msgstr ""
+"Flower 0.19 (位置参数): ``start_client(\"127.0.0.1:8080\", FlowerClient())``"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:34
msgid ""
@@ -5247,47 +6116,62 @@ msgid ""
"``start_client(server_address=\"127.0.0.1:8080\", "
"client=FlowerClient())``"
msgstr ""
+"Flower 1.0(关键字参数): ``start_client(server_address=\"127.0.0.1:8080\", "
+"client=FlowerClient())``"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:39
msgid ""
"Subclasses of ``NumPyClient``: change ``def get_parameters(self):``` to "
"``def get_parameters(self, config):``"
msgstr ""
+"``NumPyClient`` 的子类:将 ``def get_parameters(self):`` 改为 ``def "
+"get_parameters(self, config):``"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:40
msgid ""
"Subclasses of ``Client``: change ``def get_parameters(self):``` to ``def "
"get_parameters(self, ins: GetParametersIns):``"
msgstr ""
+"``Client`` 的子类:将 ``def get_parameters(self):`` 改为 ``def "
+"get_parameters(self, ins: GetParametersIns):``"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:43
msgid "Strategies / ``start_server`` / ``start_simulation``"
-msgstr ""
+msgstr "策略 / ``start_server`` / ``start_simulation``"

#: ../../source/how-to-upgrade-to-flower-1.0.rst:45
msgid ""
"Pass ``ServerConfig`` (instead of a dictionary) to ``start_server`` and "
"``start_simulation``. 
Here's an example:" msgstr "" +"向 ``start_server`` 和 ``start_simulation` 传递 ``ServerConfig``(而不是 " +"dictionary)。下面是一个例子:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:47 msgid "" "Flower 0.19: ``start_server(..., config={\"num_rounds\": 3, " "\"round_timeout\": 600.0}, ...)``" msgstr "" +"Flower 0.19: ``start_server(..., config={\"num_rounds\": 3, \"round_timeout\"" +": 600.0}, ...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:48 +#, fuzzy msgid "" "Flower 1.0: ``start_server(..., " "config=flwr.server.ServerConfig(num_rounds=3, round_timeout=600.0), " "...)``" msgstr "" +"Flower 1.0: ``start_server(..., config=flwr.server.ServerConfig(" +"num_rounds=3, round_timeout=600.0), ...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:50 msgid "" "Replace ``num_rounds=1`` in ``start_simulation`` with the new " "``config=ServerConfig(...)`` (see previous item)" msgstr "" +"将`start_simulation``中的`num_rounds=1``替换为新的`config=ServerConfig(...)`" +"(参见前一项)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:51 msgid "" @@ -5296,18 +6180,23 @@ msgid "" " configuring the strategy to sample all clients for evaluation after the " "last round of training." msgstr "" +"删除调用 ``start_server`` 时的 ``force_final_distributed_eval` 参数。可以通过" +"配置策略,在最后一轮训练后对所有客户端进行抽样评估,从而启用对所有客户端的分" +"布式评估。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:52 msgid "Rename parameter/ndarray conversion functions:" -msgstr "" +msgstr "重命名参数/数组转换函数:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:54 +#, fuzzy msgid "``parameters_to_weights`` --> ``parameters_to_ndarrays``" -msgstr "" +msgstr "``parameters_to_weights`` --> ``parameters_to_ndarrays``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:55 +#, fuzzy msgid "``weights_to_parameters`` --> ``ndarrays_to_parameters``" -msgstr "" +msgstr "``weights_to_parameters`` --> ``ndarrays_to_parameters``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:57 msgid "" @@ -5318,22 +6207,28 @@ msgid "" "without passing a strategy instance) should now manually initialize " "FedAvg with ``fraction_fit`` and ``fraction_evaluate`` set to ``0.1``." msgstr "" +"策略初始化:如果策略依赖于 ``fraction_fit`` 和 ``fraction_evaluate`` " +"的默认值,请手动将 ``fraction_fit`` 和 ``fraction_evaluate`` 设置为 ``0." +"1``。未手动创建策略的项目(调用 ``start_server` 或 ``start_simulation` " +"时未传递策略实例)现在应手动初始化 FedAvg,并将 `fraction_fit` 和 " +"`fraction_evaluate` 设为 `0.1``。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:58 msgid "Rename built-in strategy parameters (e.g., ``FedAvg``):" -msgstr "" +msgstr "重命名内置策略参数(例如,`FedAvg``):" #: ../../source/how-to-upgrade-to-flower-1.0.rst:60 +#, fuzzy msgid "``fraction_eval`` --> ``fraction_evaluate``" -msgstr "" +msgstr "``fraction_eval`` --> ``fraction_evaluate``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:61 msgid "``min_eval_clients`` --> ``min_evaluate_clients``" -msgstr "" +msgstr "``min_eval_clients`` --> ``min_evaluate_clients``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:62 msgid "``eval_fn`` --> ``evaluate_fn``" -msgstr "" +msgstr "``eval_fn`` --> ``evaluate_fn``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:64 msgid "" @@ -5341,27 +6236,36 @@ msgid "" "functions, for example, ``configure_fit``, ``aggregate_fit``, " "``configure_evaluate``, ``aggregate_evaluate``, and ``evaluate_fn``." 
msgstr "" +"将 `rnd` 更名为 `server_round`。这会影响多个方法和函数,例如 ``configure_fit`" +"`、``aggregate_fit``、``configure_evaluate``、`aggregate_evaluate`` 和 " +"``evaluate_fn``。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:65 msgid "Add ``server_round`` and ``config`` to ``evaluate_fn``:" -msgstr "" +msgstr "在 ``evaluate_fn` 中添加 ``server_round` 和 ``config`:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:67 +#, fuzzy msgid "" "Flower 0.19: ``def evaluate(parameters: NDArrays) -> " "Optional[Tuple[float, Dict[str, Scalar]]]:``" msgstr "" +"Flower 0.19: ``def evaluate(parameters: NDArrays) -> Optional[Tuple[float, " +"Dict[str, Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:68 +#, fuzzy msgid "" "Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, " "config: Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, " "Scalar]]]:``" msgstr "" +"Flower 1.0: ``def evaluate(server_round: int, parameters: NDArrays, config: " +"Dict[str, Scalar]) -> Optional[Tuple[float, Dict[str, Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:71 msgid "Custom strategies" -msgstr "" +msgstr "定制战略" #: ../../source/how-to-upgrade-to-flower-1.0.rst:73 msgid "" @@ -5371,34 +6275,43 @@ msgid "" "``List[Union[Tuple[ClientProxy, EvaluateRes], BaseException]]`` (in " "``aggregate_evaluate``)" msgstr "" +"参数``failures``的类型已从``List[BaseException]``变为``List[Union[Tuple[" +"ClientProxy, FitRes], " +"BaseException]]``(在``agregate_fit``中)和``List[Union[Tuple[ClientProxy, " +"EvaluateRes], BaseException]]``(在``agregate_evaluate``中)" #: ../../source/how-to-upgrade-to-flower-1.0.rst:74 msgid "" "The ``Strategy`` method ``evaluate`` now receives the current round of " "federated learning/evaluation as the first parameter:" -msgstr "" +msgstr "策略 \"方法 \"评估 \"现在会接收当前一轮联合学习/评估作为第一个参数:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:76 msgid "" "Flower 0.19: ``def evaluate(self, parameters: Parameters) -> " "Optional[Tuple[float, Dict[str, Scalar]]]:``" msgstr "" +"Flower 0.19: ``def evaluate(self, parameters: 参数) -> Optional[Tuple[float, " +"Dict[str, Scalar]]]:```" #: ../../source/how-to-upgrade-to-flower-1.0.rst:77 +#, fuzzy msgid "" "Flower 1.0: ``def evaluate(self, server_round: int, parameters: " "Parameters) -> Optional[Tuple[float, Dict[str, Scalar]]]:``" msgstr "" +"Flower 1.0: ``def evaluate(self, server_round: int, parameters: Parameters) -" +"> Optional[Tuple[float, Dict[str, Scalar]]]:``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:80 msgid "Optional improvements" -msgstr "" +msgstr "可选的改进措施" #: ../../source/how-to-upgrade-to-flower-1.0.rst:82 msgid "" "Along with the necessary changes above, there are a number of potential " "improvements that just became possible:" -msgstr "" +msgstr "除了上述必要的改动之外,还有一些潜在的改进措施刚刚成为可能:" #: ../../source/how-to-upgrade-to-flower-1.0.rst:84 msgid "" @@ -5406,6 +6319,9 @@ msgid "" "``NumPyClient``. If you, for example, use server-side evaluation, then " "empy placeholder implementations of ``evaluate`` are no longer necessary." msgstr "" +"删除 ``Client`` 或 ``NumPyClient`` 子类中的 \"占位符 \"方法" +"。例如,如果你使用服务器端评估,那么就不再需要``evaluate``的 \"repy占位符 " +"\"实现。" #: ../../source/how-to-upgrade-to-flower-1.0.rst:85 msgid "" @@ -5413,10 +6329,12 @@ msgid "" "``start_simulation(..., config=flwr.server.ServerConfig(num_rounds=3, " "round_timeout=600.0), ...)``" msgstr "" +"通过 ``start_simulation`` 配置循环超时: ``start_simulation(..., config=flwr." 
+"server.ServerConfig(num_rounds=3, round_timeout=600.0), ...)``" #: ../../source/how-to-upgrade-to-flower-1.0.rst:89 msgid "Further help" -msgstr "" +msgstr "更多帮助" #: ../../source/how-to-upgrade-to-flower-1.0.rst:91 msgid "" @@ -5426,65 +6344,72 @@ msgid "" "API. If there are further questionsm, `join the Flower Slack " "`_ and use the channgel ``#questions``." msgstr "" +"大多数官方的 \"Flower 代码示例 `_ 已经更新到 Flower 1.0,它们可以作为使用 Flower 1.0 API " +"的参考。如果还有其他问题,请加入 Flower Slack `_ 并使用 \"#questions``\"。" #: ../../source/how-to-use-strategies.rst:2 msgid "Use strategies" -msgstr "" +msgstr "使用策略" #: ../../source/how-to-use-strategies.rst:4 msgid "" "Flower allows full customization of the learning process through the " ":code:`Strategy` abstraction. A number of built-in strategies are " "provided in the core framework." -msgstr "" +msgstr "Flower 允许通过 :code:`Strategy` " +"抽象对学习过程进行完全定制。核心框架中提供了许多内置策略。" #: ../../source/how-to-use-strategies.rst:6 msgid "" "There are three ways to customize the way Flower orchestrates the " "learning process on the server side:" -msgstr "" +msgstr "有三种方法可以自定义 Flower 在服务器端协调学习过程的方式:" #: ../../source/how-to-use-strategies.rst:8 msgid "Use an existing strategy, for example, :code:`FedAvg`" -msgstr "" +msgstr "使用现有策略,例如 :code:`FedAvg`" #: ../../source/how-to-use-strategies.rst:9 #: ../../source/how-to-use-strategies.rst:40 msgid "Customize an existing strategy with callback functions" -msgstr "" +msgstr "使用回调函数定制现有策略" #: ../../source/how-to-use-strategies.rst:10 #: ../../source/how-to-use-strategies.rst:87 msgid "Implement a novel strategy" -msgstr "" +msgstr "实施新颖战略" #: ../../source/how-to-use-strategies.rst:14 msgid "Use an existing strategy" -msgstr "" +msgstr "使用现有战略" #: ../../source/how-to-use-strategies.rst:16 msgid "" "Flower comes with a number of popular federated learning strategies " "built-in. A built-in strategy can be instantiated as follows:" -msgstr "" +msgstr "Flower 内置了许多流行的联合学习策略。内置策略的实例化方法如下:" #: ../../source/how-to-use-strategies.rst:25 msgid "" "This creates a strategy with all parameters left at their default values " "and passes it to the :code:`start_server` function. It is usually " "recommended to adjust a few parameters during instantiation:" -msgstr "" +msgstr "这会创建一个所有参数都保持默认值的策略,并将其传递给 :code:`start_server` " +"函数。通常建议在实例化过程中调整一些参数:" #: ../../source/how-to-use-strategies.rst:42 msgid "" "Existing strategies provide several ways to customize their behaviour. " "Callback functions allow strategies to call user-provided code during " "execution." -msgstr "" +msgstr "现有的策略提供了多种自定义行为的方法。回调函数允许策略在执行过程中调用用户提" +"供的代码。" #: ../../source/how-to-use-strategies.rst:45 msgid "Configuring client fit and client evaluate" -msgstr "" +msgstr "配置客户匹配和客户评估" #: ../../source/how-to-use-strategies.rst:47 msgid "" @@ -5496,6 +6421,10 @@ msgid "" " and :code:`client.evaluate` functions during each round of federated " "learning." msgstr "" +"服务器可以通过向 :code:`on_fit_config_fn` 提供一个函数,在每一轮向客户端传递" +"新的配置值。提供的函数将被策略调用,并且必须返回一个配置键值对的字典,该字典" +"将被发送到客户端。在每一轮联合学习期间,它必须返回一个任意配置值 dictionary " +":code:`client.fit`和 :code:`client.evaluate`函数。" #: ../../source/how-to-use-strategies.rst:75 msgid "" @@ -5505,6 +6434,9 @@ msgid "" "the dictionary returned by the :code:`on_fit_config_fn` in its own " ":code:`client.fit()` function." 
msgstr "" +":code:`on_fit_config_fn`可用于将任意配置值从服务器传递到客户端,并在每一轮诗" +"意地改变这些值,例如,调整学习率。客户端将在自己的 :code:`client.fit()` " +"函数中接收 :code:`on_fit_config_fn` 返回的字典。" #: ../../source/how-to-use-strategies.rst:78 msgid "" @@ -5512,16 +6444,18 @@ msgid "" ":code:`on_evaluate_config_fn` to customize the configuration sent to " ":code:`client.evaluate()`" msgstr "" +"与 :code:`on_fit_config_fn` 类似,还有 :code:`on_evaluate_config_fn` " +"用于定制发送到 :code:`client.evaluate()` 的配置" #: ../../source/how-to-use-strategies.rst:81 msgid "Configuring server-side evaluation" -msgstr "" +msgstr "配置服务器端评估" #: ../../source/how-to-use-strategies.rst:83 msgid "" "Server-side evaluation can be enabled by passing an evaluation function " "to :code:`evaluate_fn`." -msgstr "" +msgstr "服务器端评估可通过向 :code:`evaluate_fn` 传递评估函数来启用。" #: ../../source/how-to-use-strategies.rst:89 msgid "" @@ -5529,85 +6463,89 @@ msgid "" "the most flexibility. Read the `Implementing Strategies `_ guide to learn more." msgstr "" +"编写完全自定义的策略涉及的内容较多,但灵活性最高。阅读 \"实施策略\"_ 指南,了解更多信息。" #: ../../source/index.rst:34 msgid "Tutorial" -msgstr "" +msgstr "教程" #: ../../source/index.rst:44 msgid "Quickstart tutorials" -msgstr "" +msgstr "快速入门教程" #: ../../source/index.rst:75 ../../source/index.rst:79 msgid "How-to guides" -msgstr "" +msgstr "操作指南" #: ../../source/index.rst:95 msgid "Legacy example guides" -msgstr "" +msgstr "旧版指南范例" #: ../../source/index.rst:106 ../../source/index.rst:110 msgid "Explanations" -msgstr "" +msgstr "说明" #: ../../source/index.rst:122 msgid "API reference" -msgstr "" +msgstr "应用程序接口参考" #: ../../source/index.rst:129 msgid "Reference docs" -msgstr "" +msgstr "参考文档" #: ../../source/index.rst:145 msgid "Contributor tutorials" -msgstr "" +msgstr "贡献者教程" #: ../../source/index.rst:152 msgid "Contributor how-to guides" -msgstr "" +msgstr "投稿指南" #: ../../source/index.rst:164 msgid "Contributor explanations" -msgstr "" +msgstr "贡献者解释" #: ../../source/index.rst:170 msgid "Contributor references" -msgstr "" +msgstr "贡献者参考资料" #: ../../source/index.rst:-1 msgid "" "Check out the documentation of the main Flower Framework enabling easy " "Python development for Federated Learning." -msgstr "" +msgstr "查看主 Flower Framework 的文档,轻松实现联合学习的 Python 开发。" #: ../../source/index.rst:2 msgid "Flower Framework Documentation" -msgstr "" +msgstr "Flower 框架文档" #: ../../source/index.rst:7 msgid "" "Welcome to Flower's documentation. `Flower `_ is a " "friendly federated learning framework." -msgstr "" +msgstr "欢迎访问 Flower 文档。`Flower `_ " +"是一个友好的联合学习框架。" #: ../../source/index.rst:11 msgid "Join the Flower Community" -msgstr "" +msgstr "加入 Flower 社区" #: ../../source/index.rst:13 msgid "" "The Flower Community is growing quickly - we're a friendly group of " "researchers, engineers, students, professionals, academics, and other " "enthusiasts." -msgstr "" +msgstr "Flower 社区发展迅速--我们是一个由研究人员、工程师、学生、专业人士、学者和其他" +"爱好者组成的友好团体。" #: ../../source/index.rst:15 msgid "Join us on Slack" -msgstr "" +msgstr "在 Slack 上加入我们" #: ../../source/index.rst:23 msgid "Flower Framework" -msgstr "" +msgstr "Flower 框架" #: ../../source/index.rst:25 msgid "" @@ -5616,18 +6554,22 @@ msgid "" "setting. One of Flower's design goals was to make this simple. Read on to" " learn more." msgstr "" +"该用户指南面向希望使用 Flower " +"将现有机器学习工作负载引入联合环境的研究人员和开发人员。Flower " +"的设计目标之一就是让这一切变得简单。请继续阅读,了解更多信息。" #: ../../source/index.rst:30 msgid "Tutorials" -msgstr "" +msgstr "教程" #: ../../source/index.rst:32 msgid "" "A learning-oriented series of federated learning tutorials, the best " "place to start." 
-msgstr "" +msgstr "以学习为导向的联合学习教程系列,最好的起点。" #: ../../source/index.rst:62 +#, fuzzy msgid "" "QUICKSTART TUTORIALS: :doc:`PyTorch ` | " ":doc:`TensorFlow ` | :doc:`🤗 Transformers" @@ -5639,81 +6581,93 @@ msgid "" "` | :doc:`Android ` | :doc:`iOS `" msgstr "" +"QUICKSTART TUTORIALS: :doc:`PyTorch ` | :doc:`" +"TensorFlow ` | :doc:`🤗 Transformers " +"` | :doc:`JAX ` | " +":doc:`Pandas ` | :doc:`fastai ` | :doc:`PyTorch Lightning ` | :doc:`MXNet ` | :doc:`scikit-learn " +"` | :doc:`XGBoost ` | :doc:`Android ` | :doc:`iOS " +"`" #: ../../source/index.rst:64 msgid "We also made video tutorials for PyTorch:" -msgstr "" +msgstr "我们还为 PyTorch 制作了视频教程:" #: ../../source/index.rst:69 msgid "And TensorFlow:" -msgstr "" +msgstr "还有 TensorFlow:" #: ../../source/index.rst:77 msgid "" "Problem-oriented how-to guides show step-by-step how to achieve a " "specific goal." -msgstr "" +msgstr "以问题为导向的 \"如何做 \"指南逐步展示如何实现特定目标。" #: ../../source/index.rst:108 msgid "" "Understanding-oriented concept guides explain and discuss key topics and " "underlying ideas behind Flower and collaborative AI." -msgstr "" +msgstr "以理解为导向的概念指南解释并讨论了花朵和协作式人工智能背后的关键主题和基本思" +"想。" #: ../../source/index.rst:118 msgid "References" -msgstr "" +msgstr "参考资料" #: ../../source/index.rst:120 msgid "Information-oriented API reference and other reference material." -msgstr "" +msgstr "以信息为导向的 API 参考资料和其他参考资料。" #: ../../source/index.rst:140 msgid "Contributor docs" -msgstr "" +msgstr "投稿文档" #: ../../source/index.rst:142 msgid "" "The Flower community welcomes contributions. The following docs are " "intended to help along the way." -msgstr "" +msgstr "Flower 社区欢迎您的贡献。以下文档旨在为您提供帮助。" #: ../../source/ref-api-cli.rst:2 msgid "Flower CLI reference" -msgstr "" +msgstr "Flower CLI 参考" #: ../../source/ref-api-cli.rst:7 msgid "flower-server" -msgstr "" +msgstr "Flower 服务器" #: ../../source/ref-api-cli.rst:17 +#, fuzzy msgid "flower-driver-api" -msgstr "" +msgstr "flower-driver-api" #: ../../source/ref-api-cli.rst:27 +#, fuzzy msgid "flower-fleet-api" -msgstr "" +msgstr "flower-fleet-api" #: ../../source/ref-api-flwr.rst:2 msgid "flwr (Python API reference)" -msgstr "" +msgstr "flwr(Python API 参考)" #: ../../source/ref-api-flwr.rst:8 msgid "client" -msgstr "" +msgstr "客户端" #: flwr.client:1 of msgid "Flower client." -msgstr "" +msgstr "Flower 客户端。" #: flwr.client.client.Client:1 of msgid "Abstract base class for Flower clients." -msgstr "" +msgstr "Flower 客户端的抽象基类。" #: flwr.client.client.Client.evaluate:1 #: flwr.client.numpy_client.NumPyClient.evaluate:1 of msgid "Evaluate the provided parameters using the locally held dataset." -msgstr "" +msgstr "使用本地数据集评估所提供的参数。" #: flwr.client.app.start_client flwr.client.app.start_numpy_client #: flwr.client.client.Client.evaluate flwr.client.client.Client.fit @@ -5745,14 +6699,15 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.initialize_parameters #: flwr.simulation.app.start_simulation of msgid "Parameters" -msgstr "" +msgstr "参数" #: flwr.client.client.Client.evaluate:3 of msgid "" "The evaluation instructions containing (global) model parameters received" " from the server and a dictionary of configuration values used to " "customize the local evaluation process." 
-msgstr "" +msgstr "评估指令包含从服务器接收的(全局)模型参数,以及用于定制本地评估流程的配置值" +"字典。" #: flwr.client.client.Client.evaluate flwr.client.client.Client.fit #: flwr.client.client.Client.get_parameters @@ -5772,13 +6727,14 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.initialize_parameters #: flwr.simulation.app.start_simulation of msgid "Returns" -msgstr "" +msgstr "返回" #: flwr.client.client.Client.evaluate:8 of msgid "" "The evaluation result containing the loss on the local dataset and other " "details such as the number of local data examples used for evaluation." -msgstr "" +msgstr "评估结果包含本地数据集上的损失和其他详细信息,如用于评估的本地数据示例的数量" +"。" #: flwr.client.client.Client.evaluate flwr.client.client.Client.fit #: flwr.client.client.Client.get_parameters @@ -5796,65 +6752,66 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.initialize_parameters #: flwr.simulation.app.start_simulation of msgid "Return type" -msgstr "" +msgstr "返回类型" #: flwr.client.client.Client.fit:1 of msgid "Refine the provided parameters using the locally held dataset." -msgstr "" +msgstr "利用本地数据集完善所提供的参数。" #: flwr.client.client.Client.fit:3 of msgid "" "The training instructions containing (global) model parameters received " "from the server and a dictionary of configuration values used to " "customize the local training process." -msgstr "" +msgstr "训练指令,包含从服务器接收的(全局)模型参数,以及用于定制本地训练过程的配置" +"值字典。" #: flwr.client.client.Client.fit:8 of msgid "" "The training result containing updated parameters and other details such " "as the number of local training examples used for training." -msgstr "" +msgstr "训练结果包含更新的参数和其他详细信息,如用于训练的本地训练示例的数量。" #: flwr.client.client.Client.get_parameters:1 #: flwr.client.numpy_client.NumPyClient.get_parameters:1 of msgid "Return the current local model parameters." -msgstr "" +msgstr "返回当前本地模型参数。" #: flwr.client.client.Client.get_parameters:3 of msgid "" "The get parameters instructions received from the server containing a " "dictionary of configuration values." -msgstr "" +msgstr "从服务器接收的获取参数指令包含配置值字典。" #: flwr.client.client.Client.get_parameters:7 of msgid "The current local model parameters." -msgstr "" +msgstr "当前的本地模型参数。" #: flwr.client.client.Client.get_properties:1 of msgid "Return set of client's properties." -msgstr "" +msgstr "返回客户端的属性集。" #: flwr.client.client.Client.get_properties:3 of msgid "" "The get properties instructions received from the server containing a " "dictionary of configuration values." -msgstr "" +msgstr "从服务器接收的获取属性指令包含配置值字典。" #: flwr.client.client.Client.get_properties:7 of msgid "The current client properties." -msgstr "" +msgstr "当前客户端属性。" #: flwr.client.client.Client.to_client:1 of msgid "Return client (itself)." -msgstr "" +msgstr "返回客户端(本身)。" #: ../../source/ref-api-flwr.rst:24 msgid "start_client" -msgstr "" +msgstr "启动客户端" #: flwr.client.app.start_client:1 of msgid "Start a Flower client node which connects to a Flower server." -msgstr "" +msgstr "启动一个 Flower 客户节点,连接到 Flower 服务器。" #: flwr.client.app.start_client:3 flwr.client.app.start_numpy_client:3 of msgid "" @@ -5862,20 +6819,22 @@ msgid "" "same machine on port 8080, then `server_address` would be " "`\"[::]:8080\"`." msgstr "" +"服务器的 IPv4 或 IPv6 地址。如果 Flower 服务器在同一台机器上运行,端口为 " +"8080,则`server_address`应为`\"[::]:8080\"`。" #: flwr.client.app.start_client:7 of msgid "..." -msgstr "" +msgstr "..." #: flwr.client.app.start_client:9 of msgid "A callable that instantiates a Client. 
(default: None)" -msgstr "" +msgstr "用于实例化客户端的可调用程序。(默认值:无)" #: flwr.client.app.start_client:11 of msgid "" "An implementation of the abstract base class `flwr.client.Client` " "(default: None)" -msgstr "" +msgstr "抽象基类 `flwr.client.Client` 的实现(默认值:无)" #: flwr.client.app.start_client:14 flwr.client.app.start_numpy_client:9 of msgid "" @@ -5886,13 +6845,18 @@ msgid "" "`flwr.server.start_server`), otherwise it will not know about the " "increased limit and block larger messages." msgstr "" +"可与 Flower 服务器交换的 gRPC 信息的最大长度。默认值对大多数模型都足够了。训" +"练超大模型的用户可能需要增加该值。请注意,Flower 服务器需要以相同的值启动(" +"请参阅 `flwr.server." +"start_server`),否则它将不知道增加的限制并阻止更大的消息。" #: flwr.client.app.start_client:21 flwr.client.app.start_numpy_client:16 of msgid "" "The PEM-encoded root certificates as a byte string or a path string. If " "provided, a secure connection using the certificates will be established " "to an SSL-enabled Flower server." -msgstr "" +msgstr "字节字符串或路径字符串形式的 PEM 编码根证书。如果提供,将使用这些证书与启用 " +"SSL 的 Flower 服务器建立安全连接。" #: flwr.client.app.start_client:25 flwr.client.app.start_numpy_client:20 of msgid "" @@ -5900,27 +6864,30 @@ msgid "" "bidirectional streaming - 'grpc-rere': gRPC, request-response " "(experimental) - 'rest': HTTP (experimental)" msgstr "" +"配置传输层。允许值 - grpc-bidi\":gRPC,双向流 - \"grpc-rere\"" +":gRPC,请求-响应(实验性) - \"rest\":HTTP(实验性) - \"grpc-rere\"" +":gRPC,双向流 HTTP(试验性)" #: flwr.client.app.start_client:32 flwr.client.app.start_numpy_client:27 #: flwr.server.app.start_server:41 of msgid "Examples" -msgstr "" +msgstr "实例" #: flwr.client.app.start_client:33 of msgid "Starting a gRPC client with an insecure server connection:" -msgstr "" +msgstr "使用不安全的服务器连接启动 gRPC 客户端:" #: flwr.client.app.start_client:43 flwr.client.app.start_numpy_client:35 of msgid "Starting an SSL-enabled gRPC client:" -msgstr "" +msgstr "启动支持 SSL 的 gRPC 客户端:" #: ../../source/ref-api-flwr.rst:32 msgid "NumPyClient" -msgstr "" +msgstr "NumPyClient" #: flwr.client.numpy_client.NumPyClient:1 of msgid "Abstract base class for Flower clients using NumPy." -msgstr "" +msgstr "使用 NumPy 的 Flower 客户端的抽象基类。" #: flwr.client.numpy_client.NumPyClient.evaluate:3 #: flwr.client.numpy_client.NumPyClient.fit:3 @@ -5930,7 +6897,7 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.configure_fit:5 #: flwr.server.strategy.strategy.Strategy.evaluate:8 of msgid "The current (global) model parameters." -msgstr "" +msgstr "当前(全局)模型参数。" #: flwr.client.numpy_client.NumPyClient.evaluate:5 of msgid "" @@ -5938,7 +6905,8 @@ msgid "" "on the client. It can be used to communicate arbitrary values from the " "server to the client, for example, to influence the number of examples " "used for evaluation." -msgstr "" +msgstr "允许服务器影响客户端评估的配置参数。它可用于将任意值从服务器传送到客户端,例" +"如,影响用于评估的示例数量。" #: flwr.client.numpy_client.NumPyClient.evaluate:11 of msgid "" @@ -5948,16 +6916,20 @@ msgid "" "arbitrary string keys to values of type bool, bytes, float, int, or " "str. It can be used to communicate arbitrary values back to the server." msgstr "" +"**loss** (*float*) -- 模型在本地数据集上的评估损失。**num_examples** (*int*) " +"-- 用于评估的示例数量。**metrics** (*Dict[str, Scalar]*) -- " +"将任意字符串键映射到 bool、bytes、float、int 或 str " +"类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.evaluate:11 of msgid "" "**loss** (*float*) -- The evaluation loss of the model on the local " "dataset." -msgstr "" +msgstr "**loss** (*float*) -- 模型在本地数据集上的评估损失。" #: flwr.client.numpy_client.NumPyClient.evaluate:12 of msgid "**num_examples** (*int*) -- The number of examples used for evaluation." 
-msgstr "" +msgstr "**num_examples** (*int*) -- 用于评估的示例数量。" #: flwr.client.numpy_client.NumPyClient.evaluate:13 #: flwr.client.numpy_client.NumPyClient.fit:13 of @@ -5966,6 +6938,8 @@ msgid "" "string keys to values of type bool, bytes, float, int, or str. It can be " "used to communicate arbitrary values back to the server." msgstr "" +"**metrics** (*Dict[str, Scalar]*) -- 将任意字符串键映射到 " +"bool、bytes、float、int 或 str 类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.evaluate:19 of msgid "" @@ -5973,10 +6947,12 @@ msgid "" "format (int, float, float, Dict[str, Scalar]) have been deprecated and " "removed since Flower 0.19." msgstr "" +"自 Flower 0.19 起,之前的返回类型格式(int、float、float)和扩展格式(int、fl" +"oat、float、Dict[str, Scalar])已被弃用和移除。" #: flwr.client.numpy_client.NumPyClient.fit:1 of msgid "Train the provided parameters using the locally held dataset." -msgstr "" +msgstr "使用本地数据集训练所提供的参数。" #: flwr.client.numpy_client.NumPyClient.fit:5 of msgid "" @@ -5984,7 +6960,8 @@ msgid "" "the client. It can be used to communicate arbitrary values from the " "server to the client, for example, to set the number of (local) training " "epochs." -msgstr "" +msgstr "允许服务器影响客户端训练的配置参数。它可用于将任意值从服务器传送到客户端,例" +"如设置(本地)训练历元数。" #: flwr.client.numpy_client.NumPyClient.fit:11 of msgid "" @@ -5994,36 +6971,40 @@ msgid "" "string keys to values of type bool, bytes, float, int, or str. It can " "be used to communicate arbitrary values back to the server." msgstr "" +"**parameters** (*NDArrays*) -- 本地更新的模型参数。**num_examples** (*int*) " +"-- 用于训练的示例数量。**metrics** (*Dict[str, Scalar]*) -- " +"将任意字符串键映射到 bool、bytes、float、int 或 str " +"类型值的字典。它可用于将任意值传回服务器。" #: flwr.client.numpy_client.NumPyClient.fit:11 of msgid "**parameters** (*NDArrays*) -- The locally updated model parameters." -msgstr "" +msgstr "**parameters** (*NDArrays*) -- 本地更新的模型参数。" #: flwr.client.numpy_client.NumPyClient.fit:12 of msgid "**num_examples** (*int*) -- The number of examples used for training." -msgstr "" +msgstr "**num_examples** (*int*) -- 用于训练的示例数量。" #: flwr.client.numpy_client.NumPyClient.get_parameters:3 of msgid "" "Configuration parameters requested by the server. This can be used to " "tell the client which parameters are needed along with some Scalar " "attributes." -msgstr "" +msgstr "服务器请求的配置参数。这可以用来告诉客户端需要哪些参数以及一些标量属性。" #: flwr.client.numpy_client.NumPyClient.get_parameters:8 of msgid "**parameters** -- The local model parameters as a list of NumPy ndarrays." -msgstr "" +msgstr "**parameters** -- NumPy ndarrays 的本地模型参数列表。" #: flwr.client.numpy_client.NumPyClient.get_properties:1 of msgid "Return a client's set of properties." -msgstr "" +msgstr "返回客户端的属性集。" #: flwr.client.numpy_client.NumPyClient.get_properties:3 of msgid "" "Configuration parameters requested by the server. This can be used to " "tell the client which properties are needed along with some Scalar " "attributes." -msgstr "" +msgstr "服务器请求的配置参数。这可以用来告诉客户端需要哪些属性以及一些标量属性。" #: flwr.client.numpy_client.NumPyClient.get_properties:8 of msgid "" @@ -6031,34 +7012,36 @@ msgid "" " type bool, bytes, float, int, or str. It can be used to communicate " "arbitrary property values back to the server." msgstr "" +"**properties** -- 将任意字符串键映射到 bool、bytes、float、int 或 str " +"类型值的字典。它可用于将任意属性值传回服务器。" #: flwr.client.numpy_client.NumPyClient.to_client:1 of msgid "Convert to object to Client type and return it." 
-msgstr "" +msgstr "将对象转换为客户类型并返回。" #: ../../source/ref-api-flwr.rst:41 msgid "start_numpy_client" -msgstr "" +msgstr "启动_numpy_客户端" #: flwr.client.app.start_numpy_client:1 of msgid "Start a Flower NumPyClient which connects to a gRPC server." -msgstr "" +msgstr "启动 Flower NumPyClient,连接到 gRPC 服务器。" #: flwr.client.app.start_numpy_client:7 of msgid "An implementation of the abstract base class `flwr.client.NumPyClient`." -msgstr "" +msgstr "抽象基类 `flwr.client.NumPyClient` 的实现。" #: flwr.client.app.start_numpy_client:28 of msgid "Starting a client with an insecure server connection:" -msgstr "" +msgstr "使用不安全的服务器连接启动客户端:" #: ../../source/ref-api-flwr.rst:49 msgid "start_simulation" -msgstr "" +msgstr "开始模拟" #: flwr.simulation.app.start_simulation:1 of msgid "Start a Ray-based Flower simulation server." -msgstr "" +msgstr "启动基于 Ray 的花朵模拟服务器。" #: flwr.simulation.app.start_simulation:3 of msgid "" @@ -6072,12 +7055,18 @@ msgid "" "`client_fn` or the call to any of the client methods (e.g., load " "evaluation data in the `evaluate` method itself)." msgstr "" +"创建客户端实例的函数。该函数必须接受一个名为 `cid` 的 `str` 参数。" +"它应返回一个 Client 类型的客户端实例。请注意,创建的客户端实例是短暂的,通常" +"在调用一个方法后就会被销毁。由于客户机实例不是长期存在的,它们不应试图在方法" +"调用时携带状态。实例所需的任何状态(模型、数据集、超参数......)都应在调用 " +"`client_fn` 或任何客户端方法(例如,在 `evaluate` " +"方法中加载评估数据)时(重新)创建。" #: flwr.simulation.app.start_simulation:13 of msgid "" "The total number of clients in this simulation. This must be set if " "`clients_ids` is not set and vice-versa." -msgstr "" +msgstr "本次模拟的客户总数。如果未设置 `clients_ids`,则必须设置该参数,反之亦然。" #: flwr.simulation.app.start_simulation:16 of msgid "" @@ -6085,6 +7074,9 @@ msgid "" " is not set. Setting both `num_clients` and `clients_ids` with " "`len(clients_ids)` not equal to `num_clients` generates an error." msgstr "" +"列出每个客户的 `client_id`。只有在未设置 `num_clients` 时才需要这样做。同时设" +"置`num_clients`和`clients_ids`,且`len(clients_ids)`不等于`num_clients`,会产" +"生错误。" #: flwr.simulation.app.start_simulation:20 of msgid "" @@ -6093,18 +7085,23 @@ msgid "" "caused by `num_gpus`, as well as using custom resources, please consult " "the Ray documentation." msgstr "" +"\"num_gpus\": 0.0}` 单个客户端的 CPU 和 GPU 资源。支持的键值为 `num_cpus` " +"和 `num_gpus`。要了解 `num_gpus` 所导致的 GPU " +"利用率,以及使用自定义资源的情况,请查阅 Ray 文档。" #: flwr.simulation.app.start_simulation:25 of msgid "" "An implementation of the abstract base class `flwr.server.Server`. If no " "instance is provided, then `start_server` will create one." -msgstr "" +msgstr "抽象基类 `flwr.server.Server`的实现。如果没有提供实例,`start_server` " +"将创建一个。" #: flwr.server.app.start_server:9 flwr.simulation.app.start_simulation:28 of msgid "" "Currently supported values are `num_rounds` (int, default: 1) and " "`round_timeout` in seconds (float, default: None)." -msgstr "" +msgstr "目前支持的值有:`num_rounds`(int,默认值:1)和以秒为单位的`round_timeout`(" +"float,默认值:无)。" #: flwr.simulation.app.start_simulation:31 of msgid "" @@ -6112,6 +7109,8 @@ msgid "" "no strategy is provided, then `start_server` will use " "`flwr.server.strategy.FedAvg`." msgstr "" +"抽象基类 `flwr.server.strategy` 的实现。如果没有提供策略,`start_server` " +"将使用 `flwr.server.strategy.FedAvg`。" #: flwr.simulation.app.start_simulation:35 of msgid "" @@ -6119,6 +7118,9 @@ msgid "" " If no implementation is provided, then `start_simulation` will use " "`flwr.server.client_manager.SimpleClientManager`." msgstr "" +"抽象基类 `flwr.server.ClientManager` " +"的实现。如果没有提供实现,`start_simulation` 将使用 `flwr.server." 
+"client_manager.SimpleClientManager`。" #: flwr.simulation.app.start_simulation:39 of msgid "" @@ -6129,6 +7131,10 @@ msgid "" "(ray_init_args={}) to prevent any arguments from being passed to " "ray.init." msgstr "" +"可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 " +"None(默认值),则将使用以下默认参数初始化 Ray: { \"ignore_reinit_error\": " +"True, \"include_dashboard\": False } 可以使用空字典(ray_init_args={})" +"来防止向 ray.init 传递任何参数。" #: flwr.simulation.app.start_simulation:39 of msgid "" @@ -6136,35 +7142,39 @@ msgid "" "ray_init_args is None (the default), Ray will be initialized with the " "following default args:" msgstr "" +"可选字典,包含调用 `ray.init` 时的参数。如果 ray_init_args 为 " +"None(默认值),则将使用以下默认参数初始化 Ray:" #: flwr.simulation.app.start_simulation:43 of msgid "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" -msgstr "" +msgstr "{ \"ignore_reinit_error\": True, \"include_dashboard\": False }" #: flwr.simulation.app.start_simulation:45 of msgid "" "An empty dictionary can be used (ray_init_args={}) to prevent any " "arguments from being passed to ray.init." -msgstr "" +msgstr "可以使用空字典 (ray_init_args={}) 来防止向 ray.init 传递任何参数。" #: flwr.simulation.app.start_simulation:48 of msgid "" "Set to True to prevent `ray.shutdown()` in case " "`ray.is_initialized()=True`." -msgstr "" +msgstr "设为 True 可在 `ray.is_initialized()=True` 情况下阻止 `ray.shutdown()` 。" #: flwr.simulation.app.start_simulation:50 of msgid "" "Optionally specify the type of actor to use. The actor object, which " "persists throughout the simulation, will be the process in charge of " "running the clients' jobs (i.e. their `fit()` method)." -msgstr "" +msgstr "可选择指定要使用的角色类型。角色对象将在整个模拟过程中持续存在,它将是负责运" +"行客户端作业(即其 `fit()`方法)的进程。" #: flwr.simulation.app.start_simulation:54 of msgid "" "If you want to create your own Actor classes, you might need to pass some" " input argument. You can use this dictionary for such purpose." -msgstr "" +msgstr "如果您想创建自己的 Actor " +"类,可能需要传递一些输入参数。为此,您可以使用本字典。" #: flwr.simulation.app.start_simulation:57 of msgid "" @@ -6176,36 +7186,42 @@ msgid "" " For all details, please refer to the Ray documentation: " "https://docs.ray.io/en/latest/ray-core/scheduling/index.html" msgstr "" +"(默认:\"DEFAULT\")可选字符串(\"DEFAULT \"或 \"SPREAD\"),供 VCE 选择将行" +"为体放置在哪个节点上。如果你是需要更多控制权的高级用户,可以使用低级调度策略" +"将角色固定到特定计算节点(例如,通过 " +"NodeAffinitySchedulingStrategy)。请注意,这是一项高级功能。有关详细信息," +"请参阅 Ray 文档:https://docs.ray.io/en/latest/ray-core/scheduling/index.html" #: flwr.simulation.app.start_simulation:66 of msgid "**hist** -- Object containing metrics from training." -msgstr "" +msgstr "**hist** -- 包含训练指标的对象。" #: ../../source/ref-api-flwr.rst:57 msgid "server" -msgstr "" +msgstr "服务器" #: flwr.server:1 of msgid "Flower server." -msgstr "" +msgstr "Flower 服务器。" #: ../../source/ref-api-flwr.rst:65 msgid "server.start_server" -msgstr "" +msgstr "server.start_server" #: flwr.server.app.start_server:1 of msgid "Start a Flower server using the gRPC transport layer." -msgstr "" +msgstr "使用 gRPC 传输层启动 Flower 服务器。" #: flwr.server.app.start_server:3 of msgid "The IPv4 or IPv6 address of the server. Defaults to `\"[::]:8080\"`." -msgstr "" +msgstr "服务器的 IPv4 或 IPv6 地址。默认为 `\"[::]:8080\"。" #: flwr.server.app.start_server:5 of msgid "" "A server implementation, either `flwr.server.Server` or a subclass " "thereof. If no instance is provided, then `start_server` will create one." 
-msgstr "" +msgstr "服务器实现,可以是 `flwr.server.Server` " +"或其子类。如果没有提供实例,`start_server` 将创建一个。" #: flwr.server.app.start_server:12 of msgid "" @@ -6213,6 +7229,9 @@ msgid "" "`flwr.server.strategy.Strategy`. If no strategy is provided, then " "`start_server` will use `flwr.server.strategy.FedAvg`." msgstr "" +"抽象基类 `flwr.server.strategy.Strategy` " +"的实现。如果没有提供策略,`start_server` 将使用 `flwr.server.strategy." +"FedAvg`。" #: flwr.server.app.start_server:16 of msgid "" @@ -6220,6 +7239,8 @@ msgid "" " If no implementation is provided, then `start_server` will use " "`flwr.server.client_manager.SimpleClientManager`." msgstr "" +"抽象基类 `flwr.server.ClientManager` 的实现。如果没有提供实现,`start_server`" +" 将使用 `flwr.server.client_manager.SimpleClientManager`。" #: flwr.server.app.start_server:21 of msgid "" @@ -6230,6 +7251,10 @@ msgid "" "`flwr.client.start_client`), otherwise clients will not know about the " "increased limit and block larger messages." msgstr "" +"可与 Flower 客户端交换的 gRPC 消息的最大长度。默认值对大多数模型都足够了。训" +"练超大模型的用户可能需要增加该值。请注意,Flower 客户端需要以相同的值启动(" +"请参阅 `flwr.client." +"start_client`),否则客户端将不知道已增加的限制并阻止更大的消息。" #: flwr.server.app.start_server:28 of msgid "" @@ -6238,57 +7263,61 @@ msgid "" "bytes elements in the following order: * CA certificate. * " "server certificate. * server private key." msgstr "" +"包含根证书、服务器证书和私钥的元组,用于启动启用 SSL " +"的安全服务器。元组应按以下顺序包含三个字节元素: * CA 证书。 * " +"服务器证书。 * 服务器私钥。" #: flwr.server.app.start_server:28 of msgid "" "Tuple containing root certificate, server certificate, and private key to" " start a secure SSL-enabled server. The tuple is expected to have three " "bytes elements in the following order:" -msgstr "" +msgstr "包含根证书、服务器证书和私钥的元组,用于启动启用 SSL " +"的安全服务器。元组应按以下顺序包含三个字节元素:" #: flwr.server.app.start_server:32 of msgid "CA certificate." -msgstr "" +msgstr "CA 证书。" #: flwr.server.app.start_server:33 of msgid "server certificate." -msgstr "" +msgstr "服务器证书。" #: flwr.server.app.start_server:34 of msgid "server private key." -msgstr "" +msgstr "服务器私人密钥。" #: flwr.server.app.start_server:37 of msgid "**hist** -- Object containing training and evaluation metrics." -msgstr "" +msgstr "**hist** -- 包含训练和评估指标的对象。" #: flwr.server.app.start_server:42 of msgid "Starting an insecure server:" -msgstr "" +msgstr "启动不安全的服务器:" #: flwr.server.app.start_server:46 of msgid "Starting an SSL-enabled server:" -msgstr "" +msgstr "启动支持 SSL 的服务器:" #: ../../source/ref-api-flwr.rst:73 msgid "server.strategy" -msgstr "" +msgstr "服务器策略" #: flwr.server.strategy:1 of msgid "Contains the strategy abstraction and different implementations." -msgstr "" +msgstr "包含策略抽象和不同的实现方法。" #: ../../source/ref-api-flwr.rst:81 msgid "server.strategy.Strategy" -msgstr "" +msgstr "server.strategy.Strategy" #: flwr.server.strategy.strategy.Strategy:1 of msgid "Abstract base class for server strategy implementations." -msgstr "" +msgstr "服务器策略实现的抽象基类。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:1 of msgid "Aggregate evaluation results." -msgstr "" +msgstr "综合评估结果。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:3 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:6 @@ -6298,7 +7327,7 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.configure_fit:3 #: flwr.server.strategy.strategy.Strategy.evaluate:6 of msgid "The current round of federated learning." -msgstr "" +msgstr "本轮联合学习。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:5 of msgid "" @@ -6309,21 +7338,25 @@ msgid "" "drop out and not submit a result. 
For each client that did not submit an " "update, there should be an `Exception` in `failures`." msgstr "" +"从先前选定和配置的客户端进行的成功更新。每一对\"(ClientProxy, FitRes)" +"\"都是来自先前选定客户端的一次成功更新。但并非所有先前选定的客户机都一定包含" +"在此列表中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端,`fail" +"ures`中都应该有一个`Exception`。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:13 #: flwr.server.strategy.strategy.Strategy.aggregate_fit:13 of msgid "Exceptions that occurred while the server was waiting for client updates." -msgstr "" +msgstr "服务器等待客户端更新时发生的异常。" #: flwr.server.strategy.strategy.Strategy.aggregate_evaluate:16 of msgid "" "**aggregation_result** -- The aggregated evaluation result. Aggregation " "typically uses some variant of a weighted average." -msgstr "" +msgstr "**aggregation_result** -- 汇总的评估结果。聚合通常使用某种加权平均值。" #: flwr.server.strategy.strategy.Strategy.aggregate_fit:1 of msgid "Aggregate training results." -msgstr "" +msgstr "汇总培训结果。" #: flwr.server.strategy.strategy.Strategy.aggregate_fit:5 of msgid "" @@ -6334,6 +7367,10 @@ msgid "" "drop out and not submit a result. For each client that did not submit an " "update, there should be an `Exception` in `failures`." msgstr "" +"来自先前选定和配置的客户端的成功更新。每一对\"(ClientProxy、FitRes)" +"\"都构成先前选定的客户端之一的一次成功更新。但并非所有先前选定的客户机都一定" +"包含在此列表中:客户机可能会退出,不提交结果。对于每个没有提交更新的客户端," +"\"失败 \"中都应该有一个 \"异常\"。" #: flwr.server.strategy.strategy.Strategy.aggregate_fit:17 of msgid "" @@ -6345,12 +7382,16 @@ msgid "" "the updates received in this round are discarded, and the global model " "parameters remain the same." msgstr "" +"**parameters** -- 如果返回参数,那么服务器将把这些参数作为新的全局模型参数(" +"即用本方法返回的参数替换之前的参数)。如果返回 \"无\"(例如,因为只有失败而没" +"有可行的结果),那么服务器将不再更新之前的模型参数,本轮收到的更新将被丢弃," +"全局模型参数保持不变。" #: flwr.server.strategy.fedavg.FedAvg.configure_evaluate:1 #: flwr.server.strategy.qfedavg.QFedAvg.configure_evaluate:1 #: flwr.server.strategy.strategy.Strategy.configure_evaluate:1 of msgid "Configure the next round of evaluation." -msgstr "" +msgstr "配置下一轮评估。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:7 #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:10 @@ -6358,7 +7399,7 @@ msgstr "" #: flwr.server.strategy.strategy.Strategy.configure_fit:7 #: flwr.server.strategy.strategy.Strategy.initialize_parameters:3 of msgid "The client manager which holds all currently connected clients." -msgstr "" +msgstr "客户端管理器,用于管理当前连接的所有客户端。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_evaluate:10 #: flwr.server.strategy.strategy.Strategy.configure_evaluate:10 of @@ -6369,6 +7410,9 @@ msgid "" "list, it means that this `ClientProxy` will not participate in the next " "round of federated evaluation." msgstr "" +"**evaluate_configuration** -- 一个元组列表。列表中的每个元组都标识了一个`客户" +"代理'和该特定`客户代理'的`评估Ins'。如果某个特定的 `ClientProxy` " +"未包含在此列表中,则表示该 `ClientProxy` 将不参与下一轮联合评估。" #: flwr.server.strategy.dpfedavg_adaptive.DPFedAvgAdaptive.configure_fit:1 #: flwr.server.strategy.fedavg.FedAvg.configure_fit:1 @@ -6376,7 +7420,7 @@ msgstr "" #: flwr.server.strategy.qfedavg.QFedAvg.configure_fit:1 #: flwr.server.strategy.strategy.Strategy.configure_fit:1 of msgid "Configure the next round of training." -msgstr "" +msgstr "配置下一轮训练。" #: flwr.server.strategy.dpfedavg_fixed.DPFedAvgFixed.configure_fit:13 #: flwr.server.strategy.strategy.Strategy.configure_fit:10 of @@ -6387,53 +7431,57 @@ msgid "" "list, it means that this `ClientProxy` will not participate in the next " "round of federated learning." 
msgstr "" +"**fit_configuration** -- 一个元组列表。列表中的每个元组都标识了一个`客户代理'" +"和该特定`客户代理'的`FitIns'。如果某个特定的`客户代理'不在此列表中,则表示该`" +"客户代理'将不参加下一轮联合学习。" #: flwr.server.strategy.strategy.Strategy.evaluate:1 of msgid "Evaluate the current model parameters." -msgstr "" +msgstr "评估当前的模型参数。" #: flwr.server.strategy.strategy.Strategy.evaluate:3 of msgid "" "This function can be used to perform centralized (i.e., server-side) " "evaluation of model parameters." -msgstr "" +msgstr "该函数可用于对模型参数进行集中(即服务器端)评估。" #: flwr.server.strategy.strategy.Strategy.evaluate:11 of msgid "" "**evaluation_result** -- The evaluation result, usually a Tuple " "containing loss and a dictionary containing task-specific metrics (e.g., " "accuracy)." -msgstr "" +msgstr "**evaluation_result** -- 评估结果,通常是一个元组,包含损失和一个字典,字典中" +"包含特定任务的指标(如准确率)。" #: flwr.server.strategy.strategy.Strategy.initialize_parameters:1 of msgid "Initialize the (global) model parameters." -msgstr "" +msgstr "初始化(全局)模型参数。" #: flwr.server.strategy.strategy.Strategy.initialize_parameters:6 of msgid "" "**parameters** -- If parameters are returned, then the server will treat " "these as the initial global model parameters." -msgstr "" +msgstr "**parameters** -- 如果返回参数,服务器将把这些参数视为初始全局模型参数。" #: ../../source/ref-api-flwr.rst:90 msgid "server.strategy.FedAvg" -msgstr "" +msgstr "server.strategy.FedAvg" #: flwr.server.strategy.fedavg.FedAvg:1 of msgid "Configurable FedAvg strategy implementation." -msgstr "" +msgstr "可配置的 FedAvg 战略实施。" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:1 #: flwr.server.strategy.fedavg.FedAvg.__init__:1 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:1 of msgid "Federated Averaging strategy." -msgstr "" +msgstr "联邦平均战略。" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:3 #: flwr.server.strategy.fedavg.FedAvg.__init__:3 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:3 of msgid "Implementation based on https://arxiv.org/abs/1602.05629" -msgstr "" +msgstr "实施基于 https://arxiv.org/abs/1602.05629" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:5 #: flwr.server.strategy.fedavg.FedAvg.__init__:5 @@ -6444,6 +7492,8 @@ msgid "" "larger than `fraction_fit * available_clients`, `min_fit_clients` will " "still be sampled. Defaults to 1.0." msgstr "" +"训练过程中使用的客户端比例。如果 `min_fit_clients` 大于 `fraction_fit * " +"available_clients`,则仍会对 `min_fit_clients` 进行采样。默认为 1.0。" #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:9 #: flwr.server.strategy.fedavg.FedAvg.__init__:9 @@ -6455,6 +7505,9 @@ msgid "" "available_clients`, `min_evaluate_clients` will still be sampled. " "Defaults to 1.0." msgstr "" +"验证过程中使用的客户端的分数。如果 `min_evaluate_clients` 大于 `" +"fraction_evaluate * available_clients`,则仍会对 `min_evaluate_clients` " +"进行采样。默认为 1.0。" #: flwr.server.strategy.bulyan.Bulyan.__init__:9 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:13 @@ -6469,7 +7522,7 @@ msgstr "" #: flwr.server.strategy.krum.Krum.__init__:7 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:13 of msgid "Minimum number of clients used during training. Defaults to 2." -msgstr "" +msgstr "培训期间使用的最少客户数。默认为 2。" #: flwr.server.strategy.bulyan.Bulyan.__init__:11 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:15 @@ -6484,7 +7537,7 @@ msgstr "" #: flwr.server.strategy.krum.Krum.__init__:9 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:15 of msgid "Minimum number of clients used during validation. Defaults to 2." 
-msgstr ""
+msgstr "验证过程中使用的最少客户端数量。默认为 2。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:13
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:17
@@ -6499,7 +7552,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:11
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:17 of
 msgid "Minimum number of total clients in the system. Defaults to 2."
-msgstr ""
+msgstr "系统中客户端总数的最小值。默认为 2。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:17
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:19
@@ -6514,7 +7567,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:18
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:19 of
 msgid "Optional function used for validation. Defaults to None."
-msgstr ""
+msgstr "用于验证的可选函数。默认为 None。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:19
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:21
@@ -6529,7 +7582,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:20
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:21 of
 msgid "Function used to configure training. Defaults to None."
-msgstr ""
+msgstr "用于配置训练的函数。默认为 None。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:21
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:23
@@ -6544,7 +7597,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:22
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:23 of
 msgid "Function used to configure validation. Defaults to None."
-msgstr ""
+msgstr "用于配置验证的函数。默认为 None。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:23
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:25
@@ -6559,7 +7612,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:24
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:25 of
 msgid "Whether or not accept rounds containing failures. Defaults to True."
-msgstr ""
+msgstr "是否接受包含失败的轮次。默认为 True。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:25
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:27
@@ -6574,7 +7627,7 @@ msgstr ""
 #: flwr.server.strategy.krum.Krum.__init__:26
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:27 of
 msgid "Initial global model parameters."
-msgstr ""
+msgstr "初始全局模型参数。"

 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:29
 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.__init__:31
@@ -6593,13 +7646,13 @@ msgstr ""
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:29
 #: flwr.server.strategy.qfedavg.QFedAvg.__init__:31 of
 msgid "Metrics aggregation function, optional."
-msgstr ""
+msgstr "指标聚合函数,可选。"

 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_evaluate:1
 #: flwr.server.strategy.fedavg.FedAvg.aggregate_evaluate:1
 #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_evaluate:1 of
 msgid "Aggregate evaluation losses using weighted average."
-msgstr ""
+msgstr "使用加权平均法汇总评估损失。"

 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg.aggregate_fit:1
 #: flwr.server.strategy.fedadagrad.FedAdagrad.aggregate_fit:1
@@ -6610,109 +7663,111 @@ msgstr ""
 #: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1
 #: flwr.server.strategy.fedprox.FedProx.aggregate_fit:1
 #: flwr.server.strategy.fedyogi.FedYogi.aggregate_fit:1
 #: flwr.server.strategy.qfedavg.QFedAvg.aggregate_fit:1 of
 msgid "Aggregate fit results using weighted average."
-msgstr ""
+msgstr "使用加权平均法汇总拟合结果。"

 #: flwr.server.strategy.fedavg.FedAvg.evaluate:1
 #: flwr.server.strategy.fedxgb_nn_avg.FedXgbNnAvg.evaluate:1 of
 msgid "Evaluate model parameters using an evaluation function."
-msgstr ""
+msgstr "使用评估函数评估模型参数。"

 #: flwr.server.strategy.fedavg.FedAvg.initialize_parameters:1
 #: flwr.server.strategy.fedavgm.FedAvgM.initialize_parameters:1 of
 msgid "Initialize global model parameters."
-msgstr ""
+msgstr "初始化全局模型参数。"

 #: flwr.server.strategy.fedavg.FedAvg.num_evaluation_clients:1
 #: flwr.server.strategy.qfedavg.QFedAvg.num_evaluation_clients:1 of
 msgid "Use a fraction of available clients for evaluation."
-msgstr ""
+msgstr "使用部分可用客户端进行评估。"

 #: flwr.server.strategy.fedavg.FedAvg.num_fit_clients:1
 #: flwr.server.strategy.qfedavg.QFedAvg.num_fit_clients:1 of
 msgid "Return the sample size and the required number of available clients."
-msgstr ""
+msgstr "返回样本大小和所需的可用客户端数量。"

 #: ../../source/ref-api-flwr.rst:101
 msgid "server.strategy.FedAvgM"
-msgstr ""
+msgstr "server.strategy.FedAvgM"

 #: flwr.server.strategy.fedavgm.FedAvgM:1
 #: flwr.server.strategy.fedmedian.FedMedian:1 of
 msgid "Configurable FedAvg with Momentum strategy implementation."
-msgstr ""
+msgstr "可配置的带动量 FedAvg 策略实现。"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:1 of
 msgid "Federated Averaging with Momentum strategy."
-msgstr ""
+msgstr "带动量的联邦平均策略。"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:3 of
 msgid "Implementation based on https://arxiv.org/pdf/1909.06335.pdf"
-msgstr ""
+msgstr "实现基于 https://arxiv.org/pdf/1909.06335.pdf"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:5
 #: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.__init__:3
 #: flwr.server.strategy.krum.Krum.__init__:3 of
 msgid "Fraction of clients used during training. Defaults to 0.1."
-msgstr ""
+msgstr "训练期间使用的客户端比例。默认为 0.1。"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:7
 #: flwr.server.strategy.fedtrimmedavg.FedTrimmedAvg.__init__:5
 #: flwr.server.strategy.krum.Krum.__init__:5 of
 msgid "Fraction of clients used during validation. Defaults to 0.1."
-msgstr ""
+msgstr "验证过程中使用的客户端比例。默认为 0.1。"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:25 of
 msgid ""
 "Server-side learning rate used in server-side optimization. Defaults to "
 "1.0."
-msgstr ""
+msgstr "服务器端优化中使用的服务器端学习率。默认为 1.0。"

 #: flwr.server.strategy.fedavgm.FedAvgM.__init__:28 of
 msgid "Server-side momentum factor used for FedAvgM. Defaults to 0.0."
-msgstr ""
+msgstr "用于 FedAvgM 的服务器端动量因子。默认为 0.0。"

 #: ../../source/ref-api-flwr.rst:112
 msgid "server.strategy.FedMedian"
-msgstr ""
+msgstr "server.strategy.FedMedian"

 #: flwr.server.strategy.fedmedian.FedMedian.aggregate_fit:1 of
 msgid "Aggregate fit results using median."
-msgstr ""
+msgstr "使用中位数汇总拟合结果。"

 #: ../../source/ref-api-flwr.rst:122
 msgid "server.strategy.QFedAvg"
-msgstr ""
+msgstr "server.strategy.QFedAvg"

 #: flwr.server.strategy.qfedavg.QFedAvg:1 of
 msgid "Configurable QFedAvg strategy implementation."
-msgstr ""
+msgstr "可配置的 QFedAvg 策略实现。"

 #: ../../source/ref-api-flwr.rst:133
+#, fuzzy
 msgid "server.strategy.FaultTolerantFedAvg"
-msgstr ""
+msgstr "server.strategy.FaultTolerantFedAvg"

 #: flwr.server.strategy.fault_tolerant_fedavg.FaultTolerantFedAvg:1 of
 msgid "Configurable fault-tolerant FedAvg strategy implementation."
-msgstr ""
+msgstr "可配置的容错 FedAvg 策略实现。"

 #: ../../source/ref-api-flwr.rst:144
+#, fuzzy
 msgid "server.strategy.FedOpt"
-msgstr ""
+msgstr "server.strategy.FedOpt"

 #: flwr.server.strategy.fedopt.FedOpt:1 of
 msgid "Configurable FedAdagrad strategy implementation."
-msgstr ""
+msgstr "可配置的 FedAdagrad 策略实现。"

 #: flwr.server.strategy.fedopt.FedOpt.__init__:1 of
 msgid "Federated Optim strategy interface."
-msgstr ""
+msgstr "Federated Optim 策略接口。"

 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:3
 #: flwr.server.strategy.fedadam.FedAdam.__init__:3
 #: flwr.server.strategy.fedopt.FedOpt.__init__:3
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:3 of
 msgid "Implementation based on https://arxiv.org/abs/2003.00295v5"
-msgstr ""
+msgstr "实现基于 https://arxiv.org/abs/2003.00295v5"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:5
 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:5
@@ -6720,7 +7775,7 @@ msgstr ""
 #: flwr.server.strategy.fedopt.FedOpt.__init__:5
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:5 of
 msgid "Fraction of clients used during training. Defaults to 1.0."
-msgstr ""
+msgstr "训练期间使用的客户端比例。默认为 1.0。"

 #: flwr.server.strategy.bulyan.Bulyan.__init__:7
 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:7
@@ -6728,85 +7783,89 @@ msgstr ""
 #: flwr.server.strategy.fedopt.FedOpt.__init__:7
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:7 of
 msgid "Fraction of clients used during validation. Defaults to 1.0."
-msgstr ""
+msgstr "验证过程中使用的客户端比例。默认为 1.0。"

 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:29
 #: flwr.server.strategy.fedadam.FedAdam.__init__:29
 #: flwr.server.strategy.fedopt.FedOpt.__init__:29
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:29 of
 msgid "Server-side learning rate. Defaults to 1e-1."
-msgstr ""
+msgstr "服务器端学习率。默认为 1e-1。"

 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:31
 #: flwr.server.strategy.fedadam.FedAdam.__init__:31
 #: flwr.server.strategy.fedopt.FedOpt.__init__:31
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:31 of
 msgid "Client-side learning rate. Defaults to 1e-1."
-msgstr ""
+msgstr "客户端学习率。默认为 1e-1。"

 #: flwr.server.strategy.fedopt.FedOpt.__init__:33 of
 msgid "Momentum parameter. Defaults to 0.0."
-msgstr ""
+msgstr "动量参数。默认为 0.0。"

 #: flwr.server.strategy.fedopt.FedOpt.__init__:35 of
 msgid "Second moment parameter. Defaults to 0.0."
-msgstr ""
+msgstr "二阶矩参数。默认为 0.0。"

 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:33
 #: flwr.server.strategy.fedadam.FedAdam.__init__:37
 #: flwr.server.strategy.fedopt.FedOpt.__init__:37
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:37 of
 msgid "Controls the algorithm's degree of adaptability. Defaults to 1e-9."
-msgstr ""
+msgstr "控制算法的自适应程度。默认为 1e-9。"

 #: ../../source/ref-api-flwr.rst:155
+#, fuzzy
 msgid "server.strategy.FedProx"
-msgstr ""
+msgstr "server.strategy.FedProx"

 #: flwr.server.strategy.fedprox.FedProx:1 of
 msgid "Configurable FedProx strategy implementation."
-msgstr ""
+msgstr "可配置的 FedProx 策略实现。"

 #: flwr.server.strategy.fedprox.FedProx.__init__:1 of
 msgid "Federated Optimization strategy."
-msgstr ""
+msgstr "联邦优化策略。"

 #: flwr.server.strategy.fedprox.FedProx.__init__:3 of
 msgid "Implementation based on https://arxiv.org/abs/1812.06127"
-msgstr ""
+msgstr "实现基于 https://arxiv.org/abs/1812.06127"

 #: flwr.server.strategy.fedprox.FedProx.__init__:5 of
 msgid ""
 "The strategy in itself will not be different than FedAvg, the client "
 "needs to be adjusted. A proximal term needs to be added to the loss "
 "function during the training:"
-msgstr ""
+msgstr "该策略本身与 FedAvg 并无不同,需要调整的是客户端。在训练过程中,需要"
+"在损失函数中添加一个近端项:"

 #: flwr.server.strategy.fedprox.FedProx.__init__:9 of
 msgid ""
 "\\\\frac{\\\\mu}{2} || w - w^t ||^2\n"
 "\n"
 msgstr ""
+"\\\\frac{\\\\mu}{2} || w - w^t ||^2\n"
+"\n"

 #: flwr.server.strategy.fedprox.FedProx.__init__:12 of
 msgid ""
 "Where $w^t$ are the global parameters and $w$ are the local weights the "
 "function will be optimized with."
-msgstr ""
+msgstr "其中,$w^t$ 是全局参数,$w$ 是函数优化时所用的本地权重。"

 #: flwr.server.strategy.fedprox.FedProx.__init__:15 of
 msgid "In PyTorch, for example, the loss would go from:"
-msgstr ""
+msgstr "例如,在 PyTorch 中,损失将从:"

 #: flwr.server.strategy.fedprox.FedProx.__init__:21 of
 msgid "To:"
-msgstr ""
+msgstr "改为:"

 #: flwr.server.strategy.fedprox.FedProx.__init__:30 of
 msgid ""
 "With `global_params` being a copy of the parameters before the training "
 "takes place."
-msgstr ""
+msgstr "其中 `global_params` 是训练开始前的参数副本。"

 #: flwr.server.strategy.fedprox.FedProx.__init__:65 of
 msgid ""
@@ -6815,37 +7874,41 @@ msgid ""
 "regularization will be used (that is, the client parameters will need to "
 "be closer to the server parameters during training)."
 msgstr ""
+"优化中使用的近端项权重。0.0 使该策略等同于 FedAvg,系数越大,使用的正则化就越"
+"多(也就是说,在训练过程中,客户端参数需要更接近服务器参数)。"

 #: flwr.server.strategy.fedprox.FedProx.configure_fit:3 of
 msgid "Sends the proximal factor mu to the clients"
-msgstr ""
+msgstr "向客户端发送近端因子 mu"

 #: ../../source/ref-api-flwr.rst:166
+#, fuzzy
 msgid "server.strategy.FedAdagrad"
-msgstr ""
+msgstr "server.strategy.FedAdagrad"

 #: flwr.server.strategy.fedadagrad.FedAdagrad:1 of
 msgid "FedAdagrad strategy - Adaptive Federated Optimization using Adagrad."
-msgstr ""
+msgstr "FedAdagrad 策略 - 使用 Adagrad 进行自适应联邦优化。"

 #: flwr.server.strategy.fedadagrad.FedAdagrad:3
 #: flwr.server.strategy.fedadam.FedAdam:3
 #: flwr.server.strategy.fedyogi.FedYogi:5 of
 msgid "Paper: https://arxiv.org/abs/2003.00295"
-msgstr ""
+msgstr "论文: https://arxiv.org/abs/2003.00295"

 #: flwr.server.strategy.fedadagrad.FedAdagrad.__init__:1
 #: flwr.server.strategy.fedadam.FedAdam.__init__:1 of
 msgid "Federated learning strategy using Adagrad on server-side."
-msgstr ""
+msgstr "在服务器端使用 Adagrad 的联邦学习策略。"

 #: ../../source/ref-api-flwr.rst:177
+#, fuzzy
 msgid "server.strategy.FedAdam"
-msgstr ""
+msgstr "server.strategy.FedAdam"

 #: flwr.server.strategy.fedadam.FedAdam:1 of
 msgid "FedAdam - Adaptive Federated Optimization using Adam."
-msgstr ""
+msgstr "FedAdam - 使用 Adam 进行自适应联邦优化。"

 #: flwr.server.strategy.fedadam.FedAdam.__init__:33
 #: flwr.server.strategy.fedyogi.FedYogi.__init__:33 of