From 59f29325d5699654cde3a2f483d4697efc41cda0 Mon Sep 17 00:00:00 2001 From: Yan Gao Date: Wed, 10 Jan 2024 19:15:53 +0000 Subject: [PATCH] Translated using Weblate (Chinese (Simplified)) Currently translated at 95.2% (2018 of 2119 strings) Translation: Flower Docs/Framework Translate-URL: https://hosted.weblate.org/projects/flower-docs/framework/zh_Hans/ --- .../zh_Hans/LC_MESSAGES/framework-docs.po | 645 ++++++++++++++---- 1 file changed, 516 insertions(+), 129 deletions(-) diff --git a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po index 50125eeb379f..4f3d1341efdd 100644 --- a/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po +++ b/doc/locales/zh_Hans/LC_MESSAGES/framework-docs.po @@ -8,7 +8,7 @@ msgstr "" "Project-Id-Version: Flower main\n" "Report-Msgid-Bugs-To: \n" "POT-Creation-Date: 2023-11-23 18:31+0100\n" -"PO-Revision-Date: 2023-12-18 20:08+0000\n" +"PO-Revision-Date: 2024-01-11 23:06+0000\n" "Last-Translator: Yan Gao \n" "Language-Team: Chinese (Simplified) \n" @@ -17,7 +17,7 @@ msgstr "" "Content-Type: text/plain; charset=utf-8\n" "Content-Transfer-Encoding: 8bit\n" "Plural-Forms: nplurals=1; plural=0;\n" -"X-Generator: Weblate 5.3\n" +"X-Generator: Weblate 5.4-dev\n" "Generated-By: Babel 2.13.1\n" #: ../../source/contributor-explanation-architecture.rst:2 @@ -14647,17 +14647,22 @@ msgid "" " a lot of flexibility that we didn't have before, but we'll also have to " "do a few things the we didn't have to do before." msgstr "" +"在本笔记中,我们将重温 ``NumPyClient`` 并引入一个用于构建客户端的新基类," +"简单命名为 ``Client``。在本教程的前几部分中,我们的客户端基于``NumPyClient``" +",这是一个方便类,可以让我们轻松地与具有良好 NumPy " +"互操作性的机器学习库协同工作。有了 ``Client``,我们获得了很多以前没有的灵活性" +",但我们也必须做一些以前不需要做的事情。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:18 msgid "" "Let's go deeper and see what it takes to move from ``NumPyClient`` to " "``Client``!" -msgstr "" +msgstr "让我们深入了解一下从 ``NumPyClient`` 到 ``Client`` 的过程!" 
#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:30 #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:29 msgid "Step 0: Preparation" -msgstr "" +msgstr "步骤 0:准备工作" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:117 msgid "" "Now, let's load the CIFAR-10 training and test set, partition them into " "ten smaller datasets (each split into training and validation set), and " "wrap everything in their own ``DataLoader``." msgstr "" +"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成十个较小的数据集(每个" +"数据集又分为训练集和验证集),并将所有数据都封装在各自的 ``DataLoader`` 中。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:259 msgid "Step 1: Revisiting NumPyClient" -msgstr "" +msgstr "步骤 1:重温 NumPyClient" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:261 msgid "" "So far, we've implemented our client by subclassing " "``flwr.client.NumPyClient``. The three methods we implemented are " "``get_parameters``, ``fit``, and ``evaluate``. Finally, we wrap the " "creation of instances of this class in a function called ``client_fn``:" msgstr "" +"到目前为止,我们通过子类化 ``flwr.client.NumPyClient`` " +"实现了我们的客户端。我们实现了三个方法:``get_parameters``、``fit`` " +"和 ``evaluate``。最后,我们用一个名为 ``client_fn`` 的函数来创建该类的实例:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:309 msgid "" "We've seen this before, there's nothing new so far. The only *tiny* " "difference compared to the previous notebook is naming, we've changed " "``FlowerClient`` to ``FlowerNumPyClient`` and ``client_fn`` to " "``numpyclient_fn``. Let's run it to see the output we get:" msgstr "" +"我们以前见过这种情况,目前没有什么新东西。与之前的笔记本相比,唯一*微小*的不同" +"是命名,我们把 ``FlowerClient`` 改成了 ``FlowerNumPyClient``,把 ``client_fn`` " +"改成了 ``numpyclient_fn``。让我们运行它看看输出结果:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:339 msgid "" "This works as expected, two clients are training for three rounds of " "federated learning." -msgstr "" +msgstr "结果不出所料,两个客户端正在进行三轮联合学习训练。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:341 msgid "" "Let's dive a little bit deeper and discuss how Flower executes this " "simulation. Whenever a client is selected to do some work, " "``start_simulation`` calls the function ``numpyclient_fn`` to create an " "instance of our ``FlowerNumPyClient`` (along with loading the model and " "the data)." 
msgstr "" +"让我们再深入一点,讨论一下 Flower " +"是如何执行模拟的。每当一个客户端被选中进行工作时,`start_simulation`` " +"就会调用函数 `numpyclient_fn` 来创建我们的 ``FlowerNumPyClient`` " +"实例(同时加载模型和数据)。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:343 msgid "" @@ -14711,34 +14728,41 @@ msgid "" "``Client``'s. ``NumPyClient`` is just a convenience abstraction built on " "top of ``Client``." msgstr "" +"但令人惊讶的部分也许就在这里: Flower 实际上并不直接使用 " +"``FlowerNumPyClient`` 对象。相反,它封装了该对象,使其看起来像 ``flwr.client." +"Client`` 的子类,而不是 ``flwr.client.NumPyClient``。事实上,Flower " +"核心框架不知道如何处理 ``NumPyClient``,它只知道如何处理 " +"``Client``。``NumPyClient`` 只是建立在``Client``之上的方便抽象。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:345 msgid "" "Instead of building on top of ``NumPyClient``, we can directly build on " "top of ``Client``." -msgstr "" +msgstr "与其在 ``NumPyClient`` 上构建,我们可以直接在 ``Client`` 上构建。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:357 msgid "Step 2: Moving from ``NumPyClient`` to ``Client``" -msgstr "" +msgstr "步骤 2:从 ``NumPyClient`` 移至 ``Client``" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:359 msgid "" "Let's try to do the same thing using ``Client`` instead of " "``NumPyClient``." -msgstr "" +msgstr "让我们尝试使用 ``Client`` 代替 ``NumPyClient`` 做同样的事情。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:465 msgid "" "Before we discuss the code in more detail, let's try to run it! Gotta " "make sure our new ``Client``-based client works, right?" -msgstr "" +msgstr "在详细讨论代码之前,让我们试着运行它!必须确保我们基于 ``Client`` " +"的新客户端能正常运行,对吗?" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:490 msgid "" "That's it, we're now using ``Client``. It probably looks similar to what " "we've done with ``NumPyClient``. So what's the difference?" -msgstr "" +msgstr "就是这样,我们现在开始使用 ``Client``。它看起来可能与我们使用 ``NumPyClient``" +" 所做的类似。那么有什么不同呢?" 
#: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:492 msgid "" @@ -14755,6 +14779,14 @@ msgid "" "back to the server, which (finally!) deserializes them again in order to " "aggregate them with the updates received from other clients." msgstr "" +"首先,它的代码更多。但为什么呢?区别在于 ``Client`` " +"希望我们处理参数的序列化和反序列化。Flower 要想通过网络发送参数," +"最终需要将这些参数转化为 ``bytes``。把参数(例如 NumPy 的 " +"``ndarray``)变成原始字节叫做序列化。将原始字节转换成更有用的东西(如 NumPy " +"``ndarray``)称为反序列化。Flower 需要同时做这两件事:它需要在服务器端序列化" +"参数并将其发送到客户端,客户端需要反序列化参数以便将其用于本地训练,然后再次" +"序列化更新后的参数并将其发送回服务器,服务器(最后!)再次反序列化参数以便将" +"其与从其他客户端接收到的更新汇总在一起。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:495 msgid "" "The only **real** difference between Client and NumPyClient is that " "NumPyClient takes care of serialization and deserialization for you. It " "can do so because it expects you to return parameters as NumPy ndarray's," " and it knows how to handle these. This makes working with machine " "learning libraries that have good NumPy support (most of them) a breeze." msgstr "" +"``Client`` 与 ``NumPyClient`` 之间唯一的**真正**区别在于,``NumPyClient`` " +"会为你处理序列化和反序列化。``NumPyClient`` 之所以能做到这一点," +"是因为它预计你会以 NumPy ndarray 的形式返回参数,而且它知道如何处理这些参数。" +"这使得与具有良好 NumPy 支持的机器学习库(大多数)一起工作变得轻而易举。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:497 msgid "" "There is one more difference regarding the API: all methods in ``Client``" " take exactly one argument (e.g., ``FitIns`` in ``Client.fit``) and " "return exactly one value (e.g., ``FitRes`` in ``Client.fit``). The " "methods in ``NumPyClient`` on the other hand have multiple arguments " "(e.g., ``parameters`` and ``config`` in ``NumPyClient.fit``) and multiple" " return values (e.g., ``parameters``, ``num_example``, and ``metrics`` in" " ``NumPyClient.fit``) if there are multiple things to handle. These " "``*Ins`` and ``*Res`` objects in ``Client`` wrap all the individual " "values you're used to from ``NumPyClient``." msgstr "" +"在 API 方面,有一个主要区别:``Client`` " +"中的所有方法都只接受一个参数(例如,``Client.fit`` 中的 " +"``FitIns``),并只返回一个值(例如,``Client.fit`` 中的 ``FitRes``)。另一方" +"面,``NumPyClient`` 中的方法有多个参数(例如,``NumPyClient." +"fit`` 中的 ``parameters`` 和 ``config``)和多个返回值(例如,``NumPyClient." +"fit`` 中的 ``parameters``、``num_example`` 和 ``metrics``)。``Client`` " +"中的这些 ``*Ins`` 和 ``*Res`` 对象封装了你在 ``NumPyClient`` " +"中习惯使用的所有单个值。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:510 msgid "Step 3: Custom serialization" -msgstr "" +msgstr "步骤 3:自定义序列化" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:512 msgid "" "Here we will explore how to implement custom serialization with a simple " "example." 
-msgstr "" +msgstr "下面我们将通过一个简单的示例来探讨如何实现自定义序列化。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:514 msgid "" @@ -14796,6 +14840,9 @@ msgid "" "object. This is very useful for network communication. Indeed, without " "serialization, you could not just a Python object through the internet." msgstr "" +"首先,什么是序列化?序列化只是将对象转换为原始字节的过程,同样重要的是,反序" +"列化是将原始字节转换回对象的过程。这对网络通信非常有用。事实上,如果没有序列" +"化,你就无法通过互联网传输一个 Python 对象。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:516 msgid "" @@ -14804,6 +14851,9 @@ msgid "" "server. This means that serialization is an essential part of Federated " "Learning." msgstr "" +"通过在客户端和服务器之间来回发送 Python " +"对象,联合学习在很大程度上依赖于互联网通信进行训练。这意味着序列化是 " +"Federated Learning 的重要组成部分。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:518 msgid "" @@ -14815,10 +14865,14 @@ msgid "" "entries), converting them to a sparse matrix can greatly improve their " "bytesize." msgstr "" +"在下面的章节中,我们将编写一个基本示例,在发送包含参数的 ``ndarray`` s 之前," +"我们将首先把 ``ndarray`` 转换为稀疏矩阵,而不是发送序列化版本。这种技术可以用" +"来节省带宽,因为在某些情况下,模型的权重是稀疏的(包含许多 0 " +"条目),将它们转换成稀疏矩阵可以大大提高它们的字节数。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:521 msgid "Our custom serialization/deserialization functions" -msgstr "" +msgstr "我们的定制序列化/反序列化功能" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:523 msgid "" @@ -14826,16 +14880,19 @@ msgid "" "especially in ``ndarray_to_sparse_bytes`` for serialization and " "``sparse_bytes_to_ndarray`` for deserialization." msgstr "" +"这才是真正的序列化/反序列化,尤其是在用于序列化的 " +"``ndarray_too_sparse_bytes`` 和用于反序列化的 ``sparse_bytes_too_ndarray`` " +"中。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:525 msgid "" "Note that we imported the ``scipy.sparse`` library in order to convert " "our arrays." 
-msgstr "" +msgstr "请注意,为了转换数组,我们导入了 ``scipy.sparse`` 库。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:613 msgid "Client-side" -msgstr "" +msgstr "客户端" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:615 msgid "" @@ -14843,6 +14900,8 @@ msgid "" "parameters, we will just have to call our custom functions in our " "``flwr.client.Client``." msgstr "" +"为了能够将我们的 ``ndarray``s 序列化为稀疏参数,我们只需在 ``flwr.client." +"Client`` 中调用我们的自定义函数。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:617 msgid "" @@ -14850,6 +14909,8 @@ msgid "" "from our network using our custom ``ndarrays_to_sparse_parameters`` " "defined above." msgstr "" +"事实上,在 `get_parameters` 中,我们需要使用上文定义的自定义 " +"`ndarrays_too_sparse_parameters` 序列化从网络中获取的参数。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:619 msgid "" @@ -14858,16 +14919,19 @@ msgid "" "need to serialize our local results with " "``ndarrays_to_sparse_parameters``." msgstr "" +"在 ``fit`` 中,我们首先需要使用自定义的 ``sparse_parameters_too_ndarrays`` " +"反序列化来自服务器的参数,然后使用 ``ndarrays_too_sparse_parameters`` " +"序列化本地结果。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:621 msgid "" "In ``evaluate``, we will only need to deserialize the global parameters " "with our custom function." -msgstr "" +msgstr "在 ``evaluate`` 中,我们只需要用自定义函数反序列化全局参数。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:725 msgid "Server-side" -msgstr "" +msgstr "服务器端" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:727 msgid "" @@ -14877,24 +14941,27 @@ msgid "" " functions of the strategy will be inherited from the super class " "``FedAvg``." 
msgstr "" +"在本例中,我们将只使用 ``FedAvg`` 作为策略。要改变这里的序列化和反序列化," +"我们只需重新实现 ``FedAvg`` 的 ``evaluate`` 和 ``aggregate_fit`` 函数。" +"策略的其他函数将从超类 ``FedAvg`` 继承。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:729 msgid "As you can see only one line as change in ``evaluate``:" -msgstr "" +msgstr "正如你所看到的,``evaluate``中只修改了一行:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:735 msgid "" "And for ``aggregate_fit``, we will first deserialize every result we " "received:" -msgstr "" +msgstr "而对于 ``aggregate_fit``,我们将首先反序列化收到的每个结果:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:744 msgid "And then serialize the aggregated result:" -msgstr "" +msgstr "然后将汇总结果序列化:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:903 msgid "We can now run our custom serialization example!" -msgstr "" +msgstr "现在我们可以运行自定义序列化示例!" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:934 msgid "" @@ -14906,6 +14973,10 @@ msgid "" "possible in ``NumPyClient``. In order to do so, it requires us to handle " "parameter serialization and deserialization ourselves." msgstr "" +"在本部分教程中,我们已经了解了如何通过子类化 ``NumPyClient`` 或 ``Client`` " +"来构建客户端。NumPyClient \"是一个方便的抽象,可以让我们更容易地与具有良好Num" +"Py互操作性的机器学习库一起工作。Client``是一个更灵活的抽象,允许我们做一些在`" +"NumPyClient``中做不到的事情。为此,它要求我们自己处理参数序列化和反序列化。" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:952 msgid "" @@ -14914,37 +14985,39 @@ msgid "" "documentation. 
There are many topics we didn't cover in the tutorial, we " "recommend the following resources:" msgstr "" +"这是 Flower 教程的最后一部分(暂时!),恭喜你!你现在已经具备了理解其余文档" +"的能力。本教程还有许多内容没有涉及,我们推荐您参考以下资源:" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:954 msgid "`Read Flower Docs `__" -msgstr "" +msgstr "`阅读 Flower 文档 `__" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:955 msgid "" "`Check out Flower Code Examples " "`__" -msgstr "" +msgstr "`查看 Flower 代码示例 `__" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:956 msgid "" "`Use Flower Baselines for your research " "`__" -msgstr "" +msgstr "`使用 Flower Baselines 进行研究 `__" #: ../../source/tutorial-series-customize-the-client-pytorch.ipynb:957 msgid "" "`Watch Flower Summit 2023 videos `__" -msgstr "" +msgstr "`观看 Flower Summit 2023 视频 `__" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:9 msgid "Get started with Flower" -msgstr "" +msgstr "开始使用 Flower" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:11 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:11 msgid "Welcome to the Flower federated learning tutorial!" -msgstr "" +msgstr "欢迎阅读 Flower 联合学习教程!" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:13 msgid "" "In this notebook, we'll build a federated learning system using Flower " "and PyTorch. In part 1, we use PyTorch for the model training pipeline " "and data loading. In part 2, we continue to federate the PyTorch-based " "pipeline using Flower." msgstr "" +"在本笔记本中,我们将使用 Flower 和 PyTorch " +"构建一个联合学习系统。在第一部分中,我们使用 PyTorch " +"进行模型训练和数据加载。在第二部分中,我们将继续使用 Flower 将基于 PyTorch " +"的管道联邦化。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:17 msgid "Let's get stated!" -msgstr "" +msgstr "让我们开始吧!" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:31 msgid "" "Before we begin with any actual code, let's make sure that we have " "everything we need." 
-msgstr "" +msgstr "在开始编写实际代码之前,让我们先确保我们已经准备好了所需的一切。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:45 msgid "" "Next, we install the necessary packages for PyTorch (``torch`` and " "``torchvision``) and Flower (``flwr``):" -msgstr "" +msgstr "接下来,我们为 PyTorch(`torch`` 和`torchvision``)和 " +"Flower(`flwr`)安装必要的软件包:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:117 msgid "Loading the data" -msgstr "" +msgstr "加载数据" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:119 msgid "" @@ -14982,6 +15060,9 @@ msgid "" "CIFAR-10 dataset. CIFAR-10 can be used to train image classifiers that " "distinguish between images from ten different classes:" msgstr "" +"联邦学习可应用于不同领域的多种不同类型任务。在本教程中,我们将通过在流行的 " +"CIFAR-10 数据集上训练一个简单的卷积神经网络 (CNN) 来介绍联合学习。CIFAR-10 " +"可用于训练图像分类器,以区分来自十个不同类别的图像:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:150 msgid "" @@ -14993,13 +15074,18 @@ msgid "" "splitting because each organization already has their own data (so the " "data is naturally partitioned)." msgstr "" +"我们通过将原始 CIFAR-10 数据集拆分成多个分区来模拟来自多个组织的多个数据集(" +"也称为联合学习中的 \"跨分区 \"设置)。每个分区代表一个组织的数据。我们这样做" +"纯粹是为了实验目的,在现实世界中不需要拆分数据,因为每个组织都已经有了自己的" +"数据(所以数据是自然分区的)。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:152 msgid "" "Each organization will act as a client in the federated learning system. 
" "So having ten organizations participate in a federation means having ten " "clients connected to the federated learning server:" -msgstr "" +msgstr "每个组织都将充当联合学习系统中的客户端。因此,有十个组织参与联邦学习,就意味" +"着有十个客户端连接到联邦学习服务器:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:172 msgid "" @@ -15008,6 +15094,9 @@ msgid "" "wrap the resulting partitions by creating a PyTorch ``DataLoader`` for " "each of them:" msgstr "" +"现在,让我们加载 CIFAR-10 训练集和测试集,将它们分割成 10 " +"个较小的数据集(每个数据集又分为训练集和验证集),并通过为每个数据集创建 " +"PyTorch ``DataLoader`` 来包装由此产生的分割集:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:222 msgid "" @@ -15020,12 +15109,18 @@ msgid "" "federated learning systems have their data naturally distributed across " "multiple partitions." msgstr "" +"现在,我们有一个包含十个训练集和十个验证集(`trainloaders`` " +"和`valloaders``)的列表,代表十个不同组织的数据。每对 " +"``trainloader``/``valloader`` 都包含 4500 个训练示例和 500 个验证示例。" +"还有一个单独的 ``测试加载器``(我们没有拆分测试集)。同样,这只有在构建研究或" +"教育系统时才有必要,实际的联合学习系统的数据自然分布在多个分区中。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:225 msgid "" "Let's take a look at the first batch of images and labels in the first " "training set (i.e., ``trainloaders[0]``) before we move on:" -msgstr "" +msgstr "在继续之前,让我们先看看第一个训练集中的第一批图像和标签(即 " +"``trainloaders[0]``):" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:264 msgid "" @@ -15035,10 +15130,13 @@ msgid "" "we've seen above). If you run the cell again, you should see another " "batch of images." msgstr "" +"上面的输出显示了来自十个 \"trainloader \"列表中第一个 \"trainloader " +"\"的随机图像。它还打印了与每幅图像相关的标签(即我们上面看到的十个可能标签之" +"一)。如果您再次运行该单元,应该会看到另一批图像。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:276 msgid "Step 1: Centralized Training with PyTorch" -msgstr "" +msgstr "步骤 1:使用 PyTorch 进行集中培训" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:287 msgid "" @@ -15049,10 +15147,15 @@ msgid "" "MINUTE BLITZ " "`__." 
msgstr "" +"接下来,我们将使用 PyTorch 来定义一个简单的卷积神经网络。本介绍假定您对 " +"PyTorch 有基本的了解,因此不会详细介绍与 PyTorch 相关的内容。" +"如果你想更深入地了解 PyTorch,我们推荐你阅读 \"DEEP LEARNING WITH PYTORCH: " +"a 60 minute blitz `__。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:299 msgid "Defining the model" -msgstr "" +msgstr "确定模式" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:301 msgid "" @@ -15060,14 +15163,17 @@ msgid "" "`__:" msgstr "" +"我们使用 PyTorch 教程 `__ 中描述的简单 " +"CNN:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:338 msgid "Let's continue with the usual training and test functions:" -msgstr "" +msgstr "让我们继续进行常规的训练和测试功能:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:398 msgid "Training the model" -msgstr "" +msgstr "训练模型" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:400 msgid "" @@ -15078,6 +15184,10 @@ msgid "" "learning projects today: each organization has their own data and trains " "models only on this internal data:" msgstr "" +"现在我们拥有了所需的所有基本构件:数据集、模型、训练函数和测试函数。让我们把" +"它们放在一起,在我们其中一个组织的数据集(``trainloaders[0]``)上训练模型。这" +"模拟了当今大多数机器学习项目的实际情况:每个组织都有自己的数据,并且只在这些" +"内部数据上训练模型:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:430 msgid "" @@ -15087,10 +15197,13 @@ msgid "" "intent was just to show a simplistic centralized training pipeline that " "sets the stage for what comes next - federated learning!" msgstr "" +"在我们的 CIFAR-10 分片上对简单 CNN 进行 5 个历元的训练后,测试集的准确率应为 " +"41%,这并不理想,但同时对本教程而言也并不重要。我们只是想展示一个简单的集中式" +"训练管道,为接下来的联合学习做好铺垫!" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:442 msgid "Step 2: Federated Learning with Flower" -msgstr "" +msgstr "步骤 2:与 Flower 联合学习" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:444 msgid "" @@ -15100,10 +15213,13 @@ msgid "" "multiple organizations and where we train a model over these " "organizations using federated learning." 
msgstr "" +"步骤 1 演示了一个简单的集中式训练管道。所有数据都在一个地方(即一个 " +"\"trainloader \"和一个 \"valloader\")。接下来,我们将模拟在多个组织中拥有多" +"个数据集的情况,并使用联合学习在这些组织中训练一个模型。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:456 msgid "Updating model parameters" -msgstr "" +msgstr "更新模型参数" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:458 msgid "" @@ -15115,6 +15231,10 @@ msgid "" "it sends just the gradients back to the server, not the full model " "parameters)." msgstr "" +"在联邦学习中,服务器将全局模型参数发送给客户端,客户端根据从服务器接收到的参" +"数更新本地模型。然后,客户端根据本地数据对模型进行训练(在本地更改模型参数)" +",并将更新/更改后的模型参数发回服务器(或者,客户端只将梯度参数发回服务器,而" +"不是全部模型参数)。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:460 msgid "" @@ -15123,6 +15243,9 @@ msgid "" " local model: ``set_parameters`` and ``get_parameters``. The following " "two functions do just that for the PyTorch model above." msgstr "" +"我们需要两个辅助函数,用从服务器接收到的参数更新本地模型,并从本地模型获取更" +"新后的模型参数: set_parameters```和`get_parameters``。" +"下面两个函数就是为上面的 PyTorch 模型做这些工作的。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:462 msgid "" @@ -15132,10 +15255,14 @@ msgid "" " The parameter tensors are then converted to/from a list of NumPy " "ndarray's (which Flower knows how to serialize/deserialize):" msgstr "" +"在这里,如何工作的细节并不重要(如果你想了解更多,请随时查阅 PyTorch " +"文档)。本质上,我们使用 ``state_dict`` 访问 PyTorch " +"模型参数张量。然后,参数张量会被转换成/转换成 NumPy ndarray 列表(Flower " +"知道如何序列化/反序列化):" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:490 msgid "Implementing a Flower client" -msgstr "" +msgstr "实施 Flower 客户端" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:492 msgid "" @@ -15146,6 +15273,10 @@ msgid "" "``NumPyClient`` in this tutorial because it is easier to implement and " "requires us to write less boilerplate." 
msgstr "" +"说完这些,让我们进入有趣的部分。联合学习系统由一个服务器和多个客户端组成。在 " +"Flower 中,我们通过实现 ``flwr.client.Client`` 或 ``flwr.client.NumPyClient``" +" 的子类来创建客户端。在本教程中,我们使用``NumPyClient``,因为它更容易实现," +"需要我们编写的模板也更少。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:494 msgid "" @@ -15153,24 +15284,28 @@ msgid "" "``flwr.client.NumPyClient`` and implement the three methods " "``get_parameters``, ``fit``, and ``evaluate``:" msgstr "" +"为实现 Flower 客户端,我们创建了 ``flwr.client.NumPyClient`` 的子类," +"并实现了 ``get_parameters``、``fit`` 和``evaluate`` 三个方法:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:496 msgid "``get_parameters``: Return the current local model parameters" -msgstr "" +msgstr "`get_parameters``: 返回当前本地模型参数" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:497 msgid "" "``fit``: Receive model parameters from the server, train the model " "parameters on the local data, and return the (updated) model parameters " "to the server" -msgstr "" +msgstr "`fit``: 从服务器接收模型参数,在本地数据上训练模型参数,并将(更新的)模型参" +"数返回服务器" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:498 msgid "" "``evaluate``: Receive model parameters from the server, evaluate the " "model parameters on the local data, and return the evaluation result to " "the server" -msgstr "" +msgstr "`评估``: " +"从服务器接收模型参数,在本地数据上评估模型参数,并将评估结果返回服务器" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:500 msgid "" @@ -15178,6 +15313,8 @@ msgid "" "components for model training and evaluation. Let's see a simple Flower " "client implementation that brings everything together:" msgstr "" +"我们提到,我们的客户端将使用之前定义的 PyTorch 组件进行模型训练和评估。" +"让我们来看看一个简单的 Flower 客户端实现,它将一切都整合在一起:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:537 msgid "" @@ -15193,10 +15330,18 @@ msgid "" " particular client for training (and ``FlowerClient.evaluate`` for " "evaluation)." 
msgstr "" +"我们的类 ``FlowerClient`` 定义了本地训练/评估的执行方式,并允许 Flower 通过 " +"``fit`` 和 ``evaluate`` 调用本地训练/评估。每个 ``FlowerClient`` 实例都代表联" +"合学习系统中的*单个客户端*。联合学习系统有多个客户端(否则就没有什么可联合的" +"),因此每个客户端都将由自己的 ``FlowerClient`` " +"实例来代表。例如,如果我们的工作负载中有三个客户端,那么我们就会有三个 " +"``FlowerClient`` 实例。当服务器选择特定客户端进行训练时,Flower " +"会调用相应实例上的 ``FlowerClient.fit`` (评估时调用 ``FlowerClient." +"evaluate``)。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:541 msgid "Using the Virtual Client Engine" -msgstr "" +msgstr "使用虚拟客户端引擎" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:543 msgid "" @@ -15208,6 +15353,11 @@ msgid "" "exhaust the available memory resources, even if only a subset of these " "clients participates in a single round of federated learning." msgstr "" +"在本笔记本中,我们要模拟一个联合学习系统,在一台机器上有 10 个客户端。" +"这意味着服务器和所有 10 个客户端都将位于一台机器上,并共享 CPU、GPU " +"和内存等资源。有 10 个客户端就意味着内存中有 10 个 ``FlowerClient`` 实例。在" +"单台机器上这样做会很快耗尽可用的内存资源,即使这些客户端中只有一个子集参与了" +"一轮联合学习。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:545 msgid "" @@ -15224,10 +15374,18 @@ msgid "" "be used, for example, to load different local data partitions for " "different clients, as can be seen below:" msgstr "" +"除了服务器和客户端在多台机器上运行的常规功能外,Flower " +"还提供了特殊的模拟功能,即只有在训练或评估实际需要时才创建 ``FlowerClient`` " +"实例。为了让 Flower 框架能在必要时创建客户端,我们需要实现一个名为 " +"``client_fn`` 的函数,它能按需创建一个 ``FlowerClient`` 实例。每当 Flower " +"需要一个特定的客户端实例来调用 ``fit`` 或 ``evaluate`` 时,它就会调用 " +"``client_fn``(这些实例在使用后通常会被丢弃,因此它们不应保留任何本地状态)。" +"客户端由一个客户端 ID 或简短的 ``cid`` 标识。例如,可以使用 ``cid`` " +"为不同的客户端加载不同的本地数据分区,如下所示:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:580 msgid "Starting the training" -msgstr "" +msgstr "开始训练" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:582 msgid "" @@ -15237,6 +15395,10 @@ msgid "" "``evaluate`` on one particular client. The last step is to start the " "actual simulation using ``flwr.simulation.start_simulation``." 
msgstr "" +"现在我们有了定义客户端训练/评估的类 ``FlowerClient`` 和允许 Flower " +"在需要调用某个客户端的 ``fit` 或 ``evaluate` 时创建 ``FlowerClient`` 实例的 " +"``client_fn` 类。最后一步是使用 ``flwr.simulation.start_simulation`` " +"启动实际模拟。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:584 msgid "" @@ -15247,6 +15409,9 @@ msgid "" "encapsulates the federated learning approach/algorithm, for example, " "*Federated Averaging* (FedAvg)." msgstr "" +"函数 ``start_simulation`` 接受许多参数,其中包括用于创建 ``FlowerClient`` " +"实例的 ``client_fn``、要模拟的客户端数量(``num_clients``)、联合学习轮数(``" +"num_rounds``)和策略。策略封装了联合学习方法/算法,例如*联合平均* (FedAvg)。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:586 msgid "" @@ -15257,14 +15422,18 @@ msgid "" "step is the actual call to ``start_simulation`` which - you guessed it - " "starts the simulation:" msgstr "" +"Flower 有许多内置策略,但我们也可以使用自己的策略实现来定制联合学习方法的几乎" +"所有方面。在本例中,我们使用内置的 ``FedAvg`` " +"实现,并使用一些基本参数对其进行定制。最后一步是实际调用 " +"``start_simulation``,你猜对了,就是开始模拟:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:628 msgid "Behind the scenes" -msgstr "" +msgstr "幕后花絮" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:630 msgid "So how does this work? How does Flower execute this simulation?" -msgstr "" +msgstr "那么它是如何工作的呢?Flower \"如何进行模拟?" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:632 #, python-format @@ -15275,6 +15444,10 @@ msgid "" "select 100% of the available clients (``fraction_fit=1.0``), so it goes " "ahead and selects 10 random clients (i.e., 100% of 10)." msgstr "" +"当我们调用 ``start_simulation`` 时,我们会告诉 Flower 有 10 " +"个客户(`num_clients=10``)。然后,Flower 会要求 ``FedAvg`` " +"策略选择客户。``FedAvg`` 知道它应该选择 100%的可用客户(``fraction_fit=1." 
+"0``),所以它会随机选择 10 个客户(即 10 的 100%)。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:634 msgid "" @@ -15284,16 +15457,21 @@ msgid "" "strategy aggregates those updates and returns the new global model, which" " then gets used in the next round of federated learning." msgstr "" +"然后,\"Flower \"会要求选定的 10 个客户端对模型进行训练。服务器收到客户端的模" +"型参数更新后,会将这些更新交给策略(*FedAvg*)进行汇总。策略会汇总这些更新并" +"返回新的全局模型,然后将其用于下一轮联合学习。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:646 msgid "Where's the accuracy?" -msgstr "" +msgstr "准确性体现在哪里?" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:648 msgid "" "You may have noticed that all metrics except for ``losses_distributed`` " "are empty. Where did the ``{\"accuracy\": float(accuracy)}`` go?" msgstr "" +"您可能已经注意到,除了 ``losses_distributed`` 以外,所有指标都是空的。{" +"\"准确度\": float(准确度)}``去哪儿了?" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:650 msgid "" @@ -15304,6 +15482,10 @@ msgid "" "metrics at all, so the framework does not (and can not) know how to " "handle these automatically." msgstr "" +"Flower 可以自动汇总单个客户端返回的损失,但无法对通用度量字典中的度量进行同样" +"的处理(即带有 \"准确度 \"键的度量字典)。度量值字典可以包含非常不同种类的度" +"量值,甚至包含根本不是度量值的键/值对,因此框架不知道(也无法知道)如何自动处" +"理这些度量值。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:652 msgid "" @@ -15314,18 +15496,24 @@ msgid "" " are ``fit_metrics_aggregation_fn`` and " "``evaluate_metrics_aggregation_fn``." 
msgstr "" +"作为用户,我们需要告诉框架如何处理/聚合这些自定义指标,为此,我们将指标聚合函" +"数传递给策略。然后,只要从客户端接收到拟合或评估指标,策略就会调用这些函数。" +"两个可能的函数是 ``fit_metrics_aggregation_fn`` 和 " +"``evaluate_metrics_aggregation_fn``。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:654 msgid "" "Let's create a simple weighted averaging function to aggregate the " "``accuracy`` metric we return from ``evaluate``:" -msgstr "" +msgstr "让我们创建一个简单的加权平均函数来汇总从 ``evaluate`` 返回的 ``accuracy`` " +"指标:" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:680 msgid "" "The only thing left to do is to tell the strategy to call this function " "whenever it receives evaluation metric dictionaries from the clients:" -msgstr "" +msgstr "剩下要做的就是告诉策略,每当它从客户端接收到评估度量字典时,都要调用这个函数" +":" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:717 msgid "" @@ -15334,6 +15522,9 @@ msgid "" " evaluation metrics and calculates a single ``accuracy`` metric across " "all clients on the server side." msgstr "" +"我们现在有了一个完整的系统,可以执行联合训练和联合评估。它使用 " +"``weighted_average`` 函数汇总自定义评估指标," +"并在服务器端计算所有客户端的单一 ``accuracy`` 指标。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:719 msgid "" @@ -15342,11 +15533,14 @@ msgid "" "centralized evaluation is being used. Part two of the Flower tutorial " "will cover centralized evaluation." msgstr "" +"其他两类指标(`losses_centralized`` 和 " +"`metrics_centralized`)仍然是空的,因为它们只适用于集中评估。Flower " +"教程的第二部分将介绍集中式评估。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:731 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:351 msgid "Final remarks" -msgstr "" +msgstr "结束语" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:733 msgid "" @@ -15357,6 +15551,11 @@ msgid "" " just CIFAR-10 images classification), for example NLP with Hugging Face " "Transformers or speech with SpeechBrain." 
msgstr "" +"恭喜你,你刚刚训练了一个由 10 个客户端组成的卷积神经网络!这样," +"你就了解了使用 Flower " +"进行联合学习的基础知识。你所看到的方法同样适用于其他机器学习框架(不只是 " +"PyTorch)和任务(不只是 CIFAR-10 图像分类),例如使用 Hugging Face " +"Transformers 的 NLP 或使用 SpeechBrain 的语音。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:735 msgid "" @@ -15365,6 +15564,9 @@ msgid "" "side? Or evaluate the aggregated model on the server side? We'll cover " "all this and more in the next tutorial." msgstr "" +"在下一个笔记本中,我们将介绍一些更先进的概念。想定制你的策略吗?在服务器端初" +"始化参数?或者在服务器端评估聚合模型?我们将在下一个教程中介绍所有这些内容以" +"及更多。" #: ../../source/tutorial-series-get-started-with-flower-pytorch.ipynb:753 msgid "" @@ -15373,10 +15575,13 @@ msgid "" "strategy-pytorch.html>`__ goes into more depth about strategies and all " "the advanced things you can build with them." msgstr "" +"Flower 联合学习教程 - 第 2 部分 `__ " +"更深入地介绍了策略以及可以使用策略构建的所有高级功能。" #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:9 msgid "Use a federated learning strategy" -msgstr "" +msgstr "使用联邦学习策略" #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:11 msgid "" @@ -15385,6 +15590,9 @@ msgid "" " Flower (`part 1 `__)." msgstr "" +"欢迎来到联合学习教程的下一部分。在本教程的前几部分,我们介绍了使用 PyTorch " +"和 Flower 进行联合学习(\"第 1 部分 `___\")。" #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:13 msgid "" @@ -15392,25 +15600,28 @@ msgid "" "we built in the introductory notebook (again, using `Flower " "`__ and `PyTorch `__)." msgstr "" +"在本笔记本中,我们将开始定制在入门笔记本中构建的联合学习系统(再次使用 `" +"Flower `__ 和 `PyTorch `__)。" #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:17 msgid "Let's move beyond FedAvg with Flower strategies!" -msgstr "" +msgstr "让我们超越 FedAvg,采用Flower策略!" 
#: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:309
 msgid "Strategy customization"
-msgstr ""
+msgstr "策略定制"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:311
 msgid ""
 "So far, everything should look familiar if you've worked through the "
 "introductory notebook. With that, we're ready to introduce a number of "
 "new features."
-msgstr ""
+msgstr "到目前为止,如果您已经阅读过入门笔记本,那么一切都应该很熟悉了。接下来,我们"
+"将介绍一些新功能。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:323
 msgid "Server-side parameter **initialization**"
-msgstr ""
+msgstr "服务器端参数 **初始化**"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:325
 msgid ""
@@ -15419,6 +15630,9 @@ msgid ""
 "over parameter initialization though. Flower therefore allows you to "
 "directly pass the initial parameters to the Strategy:"
 msgstr ""
+"默认情况下,Flower 会通过向一个随机客户端询问初始参数来初始化全局模型。但在许"
+"多情况下,我们需要对参数初始化进行更多控制。因此,Flower "
+"允许您直接将初始参数传递给策略:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:370
 msgid ""
@@ -15427,10 +15641,13 @@ msgid ""
 "closely, we can see that the logs do not show any calls to the "
 "``FlowerClient.get_parameters`` method."
 msgstr ""
+"向 ``FedAvg`` 策略传递 ``initial_parameters`` 可以防止 Flower "
+"向其中一个客户端询问初始参数。如果我们仔细观察,就会发现日志中没有显示对 "
+"``FlowerClient.get_parameters`` 方法的任何调用。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:382
 msgid "Starting with a customized strategy"
-msgstr ""
+msgstr "从定制策略开始"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:384
 msgid ""
@@ -15439,24 +15656,29 @@ msgid ""
 "``FlowerClient`` instances, the number of clients to simulate "
 "``num_clients``, the number of rounds ``num_rounds``, and the strategy."
msgstr ""
+"我们以前见过函数 ``start_simulation``。它接受许多参数,其中包括用于创建 "
+"``FlowerClient`` 实例的 ``client_fn``、要模拟的客户端数量 ``num_clients``、"
+"回合数 ``num_rounds`` 和策略。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:386
 msgid ""
 "The strategy encapsulates the federated learning approach/algorithm, for "
 "example, ``FedAvg`` or ``FedAdagrad``. Let's try to use a different "
 "strategy this time:"
-msgstr ""
+msgstr "该策略封装了联合学习方法/算法,例如 ``FedAvg`` 或 ``FedAdagrad``。这次让我们"
+"尝试使用不同的策略:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:424
 msgid "Server-side parameter **evaluation**"
-msgstr ""
+msgstr "服务器端参数 **评估**"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:426
 msgid ""
 "Flower can evaluate the aggregated model on the server-side or on the "
 "client-side. Client-side and server-side evaluation are similar in some "
 "ways, but different in others."
-msgstr ""
+msgstr "Flower 可以在服务器端或客户端评估聚合模型。客户端和服务器端评估在某些方面相似"
+",但也有不同之处。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:428
 msgid ""
@@ -15468,6 +15690,10 @@ msgid ""
 "model to clients. We're also fortunate in the sense that our entire "
 "evaluation dataset is available at all times."
 msgstr ""
+"**集中式评估**(或*服务器端评估*)在概念上很简单:它的工作方式与集中式机器学"
+"习中的评估方式相同。如果有一个服务器端数据集可用于评估目的,那就太好了。我们"
+"可以在每一轮训练后对新聚合的模型进行评估,而无需将模型发送给客户端。我们也很"
+"幸运,因为我们的整个评估数据集随时可用。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:430
 msgid ""
@@ -15484,6 +15710,13 @@ msgid ""
 "that are not stable, so even if we would not change the model, we'd see "
 "our evaluation results fluctuate over consecutive rounds."
msgstr ""
+"**联合评估**(或*客户端评估*)更为复杂,但也更为强大:它不需要集中的数据集,"
+"允许我们在更大的数据集上对模型进行评估,这通常会产生更真实的评估结果。事实上"
+",如果我们想得到有代表性的评估结果,很多情况下都需要使用**联合评估**。但是,"
+"这种能力是有代价的:一旦我们开始在客户端进行评估,我们就应该意识到,如果这些"
+"客户端并不总是可用,我们的评估数据集可能会在连续几轮学习中发生变化。此外,每"
+"个客户端所拥有的数据集也可能在连续几轮学习中发生变化。这可能会导致评估结果不"
+"稳定,因此即使我们不改变模型,也会看到评估结果在连续几轮中波动。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:433
 msgid ""
@@ -15491,10 +15724,12 @@ msgid ""
 "implementing the ``evaluate`` method in ``FlowerClient``). Now let's see "
 "how we can evaluate aggregated model parameters on the server-side:"
 msgstr ""
+"我们已经了解了联合评估如何在客户端工作(即通过在 ``FlowerClient`` 中实现 "
+"``evaluate`` 方法)。现在让我们看看如何在服务器端评估聚合模型参数:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:490
 msgid "Sending/receiving arbitrary values to/from clients"
-msgstr ""
+msgstr "向/从客户端发送/接收任意值"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:492
 msgid ""
@@ -15510,6 +15745,13 @@ msgid ""
 " it reads ``server_round`` and ``local_epochs`` and uses those values to "
 "improve the logging and configure the number of local training epochs:"
 msgstr ""
+"在某些情况下,我们希望从服务器端配置客户端的执行(训练、评估)。其中一个例子"
+"就是服务器要求客户端训练一定数量的本地历元。Flower "
+"提供了一种使用字典从服务器向客户端发送配置值的方法。让我们来看一个例子:"
+"客户端通过 ``fit`` 中的 ``config`` 参数从服务器接收配置值(``evaluate`` "
+"中也有 ``config`` 参数)。``fit`` 方法通过 ``config`` "
+"参数接收配置字典,然后从字典中读取值。在本例中,它读取了 ``server_round`` 和 "
+"``local_epochs``,并使用这些值来改进日志记录和配置本地训练历元的数量:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:546
 msgid ""
@@ -15519,12 +15761,15 @@ msgid ""
 "strategy, and the strategy calls this function for every round of "
 "federated learning:"
 msgstr ""
+"那么,如何将配置字典从服务器发送到客户端呢?内置的 Flower 策略(Flower "
+"Strategies)提供了这样的方法,其工作原理与服务器端评估类似。我们为策略"
+"提供一个函数,策略会在每一轮联合学习中调用这个函数:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:576
 msgid ""
 "Next, we'll just pass this function to the FedAvg strategy before "
 "starting the 
simulation:"
-msgstr ""
+msgstr "接下来,我们只需在开始模拟前将此函数传递给 FedAvg 策略即可:"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:613
 msgid ""
@@ -15534,6 +15779,9 @@ msgid ""
 " round of federated learning, and then for two epochs during the third "
 "round."
 msgstr ""
+"我们可以看到,客户端日志现在包含了当前的联合学习轮数(从 ``config`` 字典中读"
+"取)。我们还可以将本地训练配置为在第一轮和第二轮联合学习期间运行一个历元,然"
+"后在第三轮联合学习期间运行两个历元。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:615
 msgid ""
@@ -15543,16 +15791,20 @@ msgid ""
 "explicitly: our ``FlowerClient`` returns a dictionary containing a custom"
 " key/value pair as the third return value in ``evaluate``."
 msgstr ""
+"客户端还可以向服务器返回任意值。为此,它们会从 ``fit`` 和/或 ``evaluate`` "
+"返回一个字典。我们在本笔记本中看到并使用了这一概念,但并未明确提及:我们的 "
+"``FlowerClient`` 返回一个包含自定义键/值对的字典,作为 ``evaluate`` "
+"中的第三个返回值。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:627
 msgid "Scaling federated learning"
-msgstr ""
+msgstr "扩大联合学习的规模"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:629
 msgid ""
 "As a last step in this notebook, let's see how we can use Flower to "
 "experiment with a large number of clients."
-msgstr ""
+msgstr "作为本笔记本的最后一步,让我们看看如何使用 Flower 对大量客户端进行实验。"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:651
 #, python-format
 msgid ""
@@ -15567,6 +15819,12 @@ msgid ""
 "available clients (so 50 clients) will be selected for training each "
 "round:"
 msgstr ""
+"现在我们有 1000 个分区,每个分区有 45 个训练示例和 5 个验证示例。鉴于每个客户"
+"端上的训练示例数量较少,我们可能需要对模型进行更长时间的训练,"
+"因此我们将客户端配置为执行 3 "
+"个本地训练历元。我们还应该调整每轮训练中被选中的客户端的比例("
+"我们不希望每轮训练都有 1000 个客户端参与),因此我们将 ``fraction_fit`` "
+"调整为 ``0.05``,这意味着每轮训练只选中 5% 的可用客户端(即 50 个客户端):"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:699
 msgid ""
@@ -15575,6 +15833,9 @@ msgid ""
 "choosing a different strategy, and evaluating models on the server-side. "
 "That's quite a bit of flexibility with so little code, right?"
msgstr ""
+"在本笔记本中,我们看到了如何通过自定义策略、在服务器端初始化参数、选择不同的"
+"策略以及在服务器端评估模型来逐步增强我们的系统。用这么少的代码就能实现这么大"
+"的灵活性,不是吗?"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:701
 msgid ""
@@ -15584,6 +15845,10 @@ msgid ""
 "simulation using the Flower Virtual Client Engine and ran an experiment "
 "involving 1000 clients in the same workload - all in a Jupyter Notebook!"
 msgstr ""
+"在后面的章节中,我们看到了如何在服务器和客户端之间传递任意值,以完全自定义客"
+"户端执行。有了这种能力,我们使用 Flower "
+"虚拟客户端引擎构建了一个大规模的联合学习模拟,并在 Jupyter Notebook "
+"中进行了一次实验,在相同的工作负载中运行了 1000 个客户端!"

 #: ../../source/tutorial-series-use-a-federated-learning-strategy-pytorch.ipynb:719
 msgid ""
@@ -15592,10 +15857,13 @@ msgid ""
 "scratch-pytorch.html>`__ shows how to build a fully custom ``Strategy`` "
 "from scratch."
 msgstr ""
+"Flower 联合学习教程 - 第 3 部分 `__ "
+"展示了如何从头开始构建完全自定义的 ``Strategy``。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:9
 msgid "What is Federated Learning?"
-msgstr ""
+msgstr "什么是联合学习?"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:13
 msgid ""
@@ -15604,13 +15872,17 @@ msgid ""
 "parts of the tutorial, you will be able to build advanced federated "
 "learning systems that approach the current state of the art in the field."
 msgstr ""
+"在本教程中,你将了解什么是联合学习,用 Flower 搭建第一个系统,并逐步对其进行"
+"扩展。如果你能完成本教程的所有部分,你就能构建高级的联合学习系统,从而接近该"
+"领域当前的技术水平。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:15
 msgid ""
 "🧑‍🏫 This tutorial starts at zero and expects no familiarity with "
 "federated learning. Only a basic understanding of data science and Python"
 " programming is assumed."
-msgstr ""
+msgstr "🧑‍🏫 本教程从零开始,不要求熟悉联合学习。仅假定对数据科学和 Python "
+"编程有基本了解。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:17
 msgid ""
@@ -15620,71 +15892,80 @@ msgid ""
 "hear from you in the ``#introductions`` channel! And if anything is "
 "unclear, head over to the ``#questions`` channel."
msgstr "" +"`Star Flower on GitHub `__ ⭐️ 并加入 Slack " +"上的开源 Flower 社区,进行交流、提问并获得帮助: 加入 Slack `__ 🌼 我们希望在 ``#introductions`` " +"频道听到您的声音!如果有任何不清楚的地方,请访问 ``#questions`` 频道。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:19 msgid "Let's get started!" -msgstr "" +msgstr "让我们开始吧!" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:31 msgid "Classic machine learning" -msgstr "" +msgstr "经典机器学习" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:33 msgid "" "Before we begin to discuss federated learning, let us quickly recap how " "most machine learning works today." -msgstr "" +msgstr "在开始讨论联合学习之前,让我们先快速回顾一下目前大多数机器学习的工作原理。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:35 msgid "" "In machine learning, we have a model, and we have data. The model could " "be a neural network (as depicted here), or something else, like classical" " linear regression." -msgstr "" +msgstr "在机器学习中,我们有一个模型和数据。模型可以是一个神经网络(如图所示),也可" +"以是其他东西,比如经典的线性回归。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:41 +#, fuzzy msgid "|e1dd4b4129b040bea23a894266227080|" -msgstr "" +msgstr "|e1dd4b4129b040bea23a894266227080|" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:109 msgid "Model and data" -msgstr "" +msgstr "模型和数据" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:47 msgid "" "We train the model using the data to perform a useful task. A task could " "be to detect objects in images, transcribe an audio recording, or play a " "game like Go." 
-msgstr ""
+msgstr "我们使用数据来训练模型,以完成一项有用的任务。任务可以是检测图像中的物体、转"
+"录音频或玩围棋等游戏。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:53
 msgid "|c0d4cc6a442948dca8da40d2440068d9|"
-msgstr ""
+msgstr "|c0d4cc6a442948dca8da40d2440068d9|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:111
 msgid "Train model using data"
-msgstr ""
+msgstr "使用数据训练模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:59
 msgid ""
 "Now, in practice, the training data we work with doesn't originate on the"
 " machine we train the model on. It gets created somewhere else."
-msgstr ""
+msgstr "实际上,我们使用的训练数据并不来自我们训练模型的机器。它是在其他地方创建的。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:61
 msgid ""
 "It originates on a smartphone by the user interacting with an app, a car "
 "collecting sensor data, a laptop receiving input via the keyboard, or a "
 "smart speaker listening to someone trying to sing a song."
-msgstr ""
+msgstr "它源于智能手机上用户与应用程序的交互、汽车收集的传感器数据、笔记本电脑通过键"
+"盘接收的输入,或者智能扬声器听到的某人试唱的歌声。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:67
+#, fuzzy
 msgid "|174e1e4fa1f149a19bfbc8bc1126f46a|"
-msgstr ""
+msgstr "|174e1e4fa1f149a19bfbc8bc1126f46a|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:113
 msgid "Data on a phone"
-msgstr ""
+msgstr "手机上的数据"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:73
 msgid ""
@@ -15693,14 +15974,17 @@ msgid ""
 " the same app. But it could also be several organizations, all generating"
 " data for the same task."
msgstr "" +"值得一提的是,这个 \"其他地方 \"通常不只是一个地方,而是很多地方。它可能是多" +"个运行同一应用程序的设备。但也可能是多个组织,都在为同一任务生成数据。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:79 +#, fuzzy msgid "|4e021a3dc08249d2a89daa3ab03c2714|" -msgstr "" +msgstr "|4e021a3dc08249d2a89daa3ab03c2714|" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:115 msgid "Data is on many devices" -msgstr "" +msgstr "数据存在于多种设备中" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:85 msgid "" @@ -15709,33 +15993,38 @@ msgid "" "server. This server can be somewhere in a data center, or somewhere in " "the cloud." msgstr "" +"因此,要使用机器学习或任何类型的数据分析,过去使用的方法是在中央服务器上收集" +"所有数据。这个服务器可以在数据中心的某个地方,也可以在云端的某个地方。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:91 +#, fuzzy msgid "|e74a1d5ce7eb49688651f2167a59065b|" -msgstr "" +msgstr "|e74a1d5ce7eb49688651f2167a59065b|" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:117 msgid "Central data collection" -msgstr "" +msgstr "中央数据收集" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:97 msgid "" "Once all the data is collected in one place, we can finally use machine " "learning algorithms to train our model on the data. This is the machine " "learning approach that we've basically always relied on." -msgstr "" +msgstr "一旦所有数据都收集到一处,我们最终就可以使用机器学习算法在数据上训练我们的模" +"型。这就是我们基本上一直依赖的机器学习方法。" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:103 +#, fuzzy msgid "|eb29ec4c7aef4e93976795ed72df647e|" -msgstr "" +msgstr "|eb29ec4c7aef4e93976795ed72df647e|" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:119 msgid "Central model training" -msgstr "" +msgstr "中央模型训练" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:130 msgid "Challenges of classical machine learning" -msgstr "" +msgstr "经典机器学习面临的挑战" #: ../../source/tutorial-series-what-is-federated-learning.ipynb:132 msgid "" @@ -15744,36 +16033,42 @@ msgid "" "web traffic. 
Cases, where all the data is naturally available on a "
 "centralized server."
 msgstr ""
+"我们刚刚看到的经典机器学习方法可以在某些情况下使用。很好的例子包括对假日照片"
+"进行分类或分析网络流量。在这些案例中,所有数据自然都可以在中央服务器上获得。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:138
+#, fuzzy
 msgid "|c2f699d8ac484f5081721a6f1511f70d|"
-msgstr ""
+msgstr "|c2f699d8ac484f5081721a6f1511f70d|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:173
 msgid "Centralized possible"
-msgstr ""
+msgstr "可以集中"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:144
 msgid ""
 "But the approach can not be used in many other cases. Cases, where the "
 "data is not available on a centralized server, or cases where the data "
 "available on one server is not enough to train a good model."
-msgstr ""
+msgstr "但这种方法并不适用于许多其他情况。例如,中央服务器上没有数据,或者一台服务器"
+"上的数据不足以训练出一个好的模型。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:150
+#, fuzzy
 msgid "|cf42accdacbf4e5eb4fa0503108ba7a7|"
-msgstr ""
+msgstr "|cf42accdacbf4e5eb4fa0503108ba7a7|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:175
 msgid "Centralized impossible"
-msgstr ""
+msgstr "无法集中"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:156
 msgid ""
 "There are many reasons why the classic centralized machine learning "
 "approach does not work for a large number of highly important real-world "
 "use cases. Those reasons include:"
-msgstr ""
+msgstr "传统的集中式机器学习方法不适用于现实世界中大量极为重要的使用案例,原因有很多"
+"。这些原因包括:"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:158
 msgid ""
@@ -15787,6 +16082,12 @@ msgid ""
 " in different parts of the world, and their data is governed by different"
 " data protection regulations."
msgstr ""
+"**法规**: GDPR(欧洲)、CCPA(加利福尼亚)、PIPEDA(加拿大)、LGPD(巴西)、"
+"PDPL(阿根廷)、KVKK(土耳其)、POPI(南非)、FSS(俄罗斯)、CDPR(中国)、PD"
+"PB(印度)、PIPA(韩国)、APPI(日本)、PDP(印度尼西亚)、PDPA(新加坡)、AP"
+"P(澳大利亚)等法规保护敏感数据不被移动。事实上,这些法规有时甚至会阻止单个组"
+"织将自己的用户数据用于人工智能训练,因为这些用户生活在世界不同地区,他们的数"
+"据受不同的数据保护法规管辖。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:160
 msgid ""
@@ -15797,6 +16098,10 @@ msgid ""
 "company that developed that keyboard, do you? In fact, that use case was "
 "the reason federated learning was invented in the first place."
 msgstr ""
+"**用户偏好**: 除了法规之外,在一些使用案例中,用户只是希望数据永远不会离开他"
+"们的设备。如果你在手机的数字键盘上输入密码和信用卡信息,你不会希望这些密码最"
+"终出现在开发该键盘的公司的服务器上吧?事实上,这种用例正是联合学习发明的初衷"
+"。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:161
 msgid ""
@@ -15809,30 +16114,35 @@ msgid ""
 "incredibly powerful and exceedingly expensive infrastructure to process "
 "and store. And most of the data isn't even useful."
 msgstr ""
+"**数据量**: 有些传感器(如摄像头)产生的数据量很大,收集所有数据既不可行,也"
+"不经济(例如,由于带宽或通信效率的原因)。试想一下全国铁路服务,全国有数百个"
+"火车站。如果每个火车站都安装了许多安全摄像头,那么它们所产生的大量原始设备数"
+"据就需要功能强大且极其昂贵的基础设施来处理和存储。而大部分数据甚至都是无用的"
+"。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:164
 msgid "Examples where centralized machine learning does not work include:"
-msgstr ""
+msgstr "集中式机器学习不起作用的例子包括:"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:166
 msgid ""
 "Sensitive healthcare records from multiple hospitals to train cancer "
 "detection models"
-msgstr ""
+msgstr "用多家医院的敏感医疗记录训练癌症检测模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:167
 msgid ""
 "Financial information from different organizations to detect financial "
 "fraud"
-msgstr ""
+msgstr "来自不同组织的金融信息,用于检测金融欺诈"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:168
 msgid "Location data from your electric car to make better range prediction"
-msgstr ""
+msgstr "通过电动汽车的定位数据更好地预测续航里程"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:169
 msgid "End-to-end encrypted messages to train 
better auto-complete models"
-msgstr ""
+msgstr "端到端加密消息,用于训练更好的自动补全模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:171
 msgid ""
@@ -15844,10 +16154,15 @@ msgid ""
 "these cases to utilize private data? After all, these are all areas that "
 "would benefit significantly from recent advances in AI."
 msgstr ""
+"像 Brave 浏览器或 Signal "
+"信使这样的隐私增强系统的流行表明,用户关心隐私。事实上,他们会选择隐私增"
+"强版,而不是其他替代品(如果存在这种替代品的话)。但是,我们能做些什么来将机"
+"器学习和数据科学应用到这些情况中,以利用隐私数据呢?毕竟,这些领域都将从人工"
+"智能的最新进展中受益匪浅。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:186
 msgid "Federated learning"
-msgstr ""
+msgstr "联合学习"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:188
 msgid ""
@@ -15855,15 +16170,16 @@ msgid ""
 "learning on distributed data by moving the training to the data, instead "
 "of moving the data to the training. Here's the single-sentence "
 "explanation:"
-msgstr ""
+msgstr "联合学习简单地颠覆了这种方法。它通过将训练转移到数据上,而不是将数据转移到训"
+"练上,在分布式数据上实现机器学习。下面是一句话的解释:"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:190
 msgid "Central machine learning: move the data to the computation"
-msgstr ""
+msgstr "集中式机器学习:将数据移向计算"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:191
 msgid "Federated (machine) learning: move the computation to the data"
-msgstr ""
+msgstr "联合(机器)学习:将计算移向数据"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:193
 msgid ""
@@ -15878,41 +16194,49 @@ msgid ""
 "we discover more and more areas that can suddenly be reinvented because "
 "they now have access to vast amounts of previously inaccessible data."
 msgstr ""
+"这样,我们就能在以前不可能的领域使用机器学习(和其他数据科学方法)。现在,我"
+"们可以通过让不同的医院协同工作来训练优秀的医疗人工智能模型。我们可以通过在不"
+"同金融机构的数据上训练人工智能模型来解决金融欺诈问题。我们可以构建新颖的隐私"
+"增强型应用(如安全消息),其内置的人工智能比非隐私增强型应用更好。以上只是我"
+"想到的几个例子。随着联合学习的部署,我们会发现越来越多的领域可以突然重获新生"
+",因为它们现在可以访问大量以前无法访问的数据。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:196
 msgid ""
 "So how does federated learning work, exactly? 
Let's start with an "
 "intuitive explanation."
-msgstr ""
+msgstr "那么,联合学习究竟是如何运作的呢?让我们从直观的解释开始。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:199
 msgid "Federated learning in five steps"
-msgstr ""
+msgstr "联合学习的五个步骤"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:202
 msgid "Step 0: Initialize global model"
-msgstr ""
+msgstr "步骤 0:初始化全局模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:204
 msgid ""
 "We start by initializing the model on the server. This is exactly the "
 "same in classic centralized learning: we initialize the model parameters,"
 " either randomly or from a previously saved checkpoint."
-msgstr ""
+msgstr "我们首先在服务器上初始化模型。这与经典的集中式学习完全相同:我们随机或从先前"
+"保存的检查点初始化模型参数。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:210
+#, fuzzy
 msgid "|5ec8356bc2564fa09178b1ceed5beccc|"
-msgstr ""
+msgstr "|5ec8356bc2564fa09178b1ceed5beccc|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:307
 msgid "Initialize global model"
-msgstr ""
+msgstr "初始化全局模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:217
 msgid ""
 "Step 1: Send model to a number of connected organizations/devices (client"
 " nodes)"
-msgstr ""
+msgstr "步骤 1:将模型发送到多个连接的组织/设备(客户端节点)"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:219
 msgid ""
@@ -15923,20 +16247,25 @@ msgid ""
 " few of the connected nodes instead of all nodes. The reason for this is "
 "that selecting more and more client nodes has diminishing returns."
msgstr ""
+"接下来,我们会将全局模型的参数发送到连接的客户端节点(如智能手机等边缘设备或"
+"企业的服务器)。这是为了确保每个参与节点都使用相同的模型参数开始本地训练。我"
+"们通常只使用几个连接节点,而不是所有节点。这样做的原因是,选择越来越多的客户"
+"端节点会导致收益递减。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:225
+#, fuzzy
 msgid "|7c9329e97bd0430bad335ab605a897a7|"
-msgstr ""
+msgstr "|7c9329e97bd0430bad335ab605a897a7|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:309
 msgid "Send global model"
-msgstr ""
+msgstr "发送全局模型"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:232
 msgid ""
 "Step 2: Train model locally on the data of each organization/device "
 "(client node)"
-msgstr ""
+msgstr "步骤 2:在本地对每个机构/设备(客户端节点)的数据进行模型训练"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:234
 msgid ""
@@ -15947,18 +16276,23 @@ msgid ""
 "This could be as little as one epoch on the local data, or even just a "
 "few steps (mini-batches)."
 msgstr ""
+"现在,所有(选定的)客户端节点都有了最新版本的全局模型参数,它们开始进行本地"
+"训练。它们使用自己的本地数据集来训练自己的本地模型。它们不会一直训练到模型完"
+"全收敛为止,而只是训练一小段时间。这可能只是本地数据上的一个历元,甚至只是几"
+"个步骤(小批量)。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:240
+#, fuzzy
 msgid "|88002bbce1094ba1a83c9151df18f707|"
-msgstr ""
+msgstr "|88002bbce1094ba1a83c9151df18f707|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:311
 msgid "Train on local data"
-msgstr ""
+msgstr "在本地数据上训练"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:247
 msgid "Step 3: Return model updates back to the server"
-msgstr ""
+msgstr "步骤 3:将模型更新返回服务器"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:249
 msgid ""
@@ -15970,18 +16304,23 @@ msgid ""
 "parameters or just the gradients that were accumulated during local "
 "training."
msgstr ""
+"经过本地训练后,每个客户端节点的模型参数都与其最初收到的版本略有不同。参数之"
+"所以不同,是因为每个客户端节点的本地数据集中都有不同的示例。然后,客户端节点"
+"将这些模型更新发回服务器。它们发送的模型更新既可以是完整的模型参数,也可以只"
+"是本地训练过程中积累的梯度。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:255
+#, fuzzy
 msgid "|391766aee87c482c834c93f7c22225e2|"
-msgstr ""
+msgstr "|391766aee87c482c834c93f7c22225e2|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:313
 msgid "Send model updates"
-msgstr ""
+msgstr "发送模型更新"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:262
 msgid "Step 4: Aggregate model updates into a new global model"
-msgstr ""
+msgstr "步骤 4:将模型更新汇总到新的全局模型中"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:264
 msgid ""
@@ -15991,6 +16330,10 @@ msgid ""
 "But didn't we want to have one model that contains the learnings from the"
 " data of all 100 client nodes?"
 msgstr ""
+"服务器从选定的客户端节点接收模型更新。如果服务器选择了 100 个客户端节点,"
+"那么它现在就拥有 100 个略有不同的原始全局模型版本,每个版本都是根据一个客户端"
+"的本地数据训练出来的。难道我们不希望有一个从所有 100 "
+"个客户端节点的数据中学习而来的模型吗?"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:266
 msgid ""
@@ -16008,18 +16351,28 @@ msgid ""
 "weighting - each of the 10 examples would influence the global model ten "
 "times as much as each of the 100 examples."
msgstr ""
+"为了得到一个单一的模型,我们必须将从客户端节点收到的所有模型更新合并起来。这"
+"个过程称为*聚合*,有许多不同的方法。最基本的方法称为 *Federated Averaging* "
+"(`McMahan 等人,2016 `__),通常缩写为 *FedAvg*。*FedAvg* 采用 100 个模型更新,顾名思义,就"
+"是对它们进行平均。更准确地说,它取的是模型更新的*加权平均值*,根据每个客户端"
+"用于训练的示例数量进行加权。加权对于确保每个数据示例对生成的全局模型具有相同"
+"的\"影响\"非常重要。如果一个客户端有 10 个示例,而另一个客户端有 100 "
+"个示例,那么在不加权的情况下,前者的每个示例对全局模型的影响将是后者每个示例"
+"的 10 倍。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:273
+#, fuzzy
 msgid "|93b9a15bd27f4e91b40f642c253dfaac|"
-msgstr ""
+msgstr "|93b9a15bd27f4e91b40f642c253dfaac|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:315
 msgid "Aggregate model updates"
-msgstr ""
+msgstr "汇总模型更新"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:280
 msgid "Step 5: Repeat steps 1 to 4 until the model converges"
-msgstr ""
+msgstr "步骤 5:重复步骤 1 至 4,直至模型收敛"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:282
 msgid ""
@@ -16029,6 +16382,11 @@ msgid ""
 "updated models to the server (step 3), and the server then aggregates the"
 " model updates to get a new version of the global model (step 4)."
 msgstr ""
+"步骤 1 至 4 "
+"就是我们所说的单轮联合学习。全局模型参数被发送到参与的客户端节点(步骤 1"
+"),客户端节点对其本地数据进行训练(步骤 2"
+"),然后将更新后的模型发送到服务器(步骤 3"
+"),服务器汇总模型更新,得到新版本的全局模型(步骤 4)。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:284
 msgid ""
@@ -16040,6 +16398,10 @@ msgid ""
 "eventually arrive at a fully trained model that performs well across the "
 "data of all client nodes."
 msgstr ""
+"在一轮迭代中,每个参与迭代的客户端节点只训练一小段时间。这意味着,在聚合步骤"
+"(步骤 4)之后,我们的模型已经在所有参与的客户端节点的所有数据上训练过了,但"
+"只训练了一小会儿。然后,我们必须一次又一次地重复这一训练过程,最终得到一个经"
+"过全面训练的模型,该模型在所有客户端节点的数据上都表现良好。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:289
 msgid ""
@@ -16051,6 +16413,10 @@ msgid ""
 "aggregate model updates? How can we handle failing client nodes "
 "(stragglers)?"
 msgstr ""
+"恭喜你,现在你已经了解了联合学习的基础知识。当然,要讨论的内容还有很多,但以"
+"上只是联合学习的简要概述。在本教程的后半部分,我们将进行更详细的介绍。"
+"有趣的问题包括:我们如何选择最好的客户端节点参与下一轮学习?聚合模型更新的最"
+"佳方法是什么?如何处理失败的客户端节点(落伍者)?"
#: ../../source/tutorial-series-what-is-federated-learning.ipynb:294
 msgid ""
@@ -16060,10 +16426,13 @@ msgid ""
 "abbreviated as FE. In fact, federated evaluation is an integral part of "
 "most federated learning systems."
 msgstr ""
+"就像我们可以在不同客户端节点的分散数据上训练一个模型一样,我们也可以在这些数"
+"据上对模型进行评估,以获得有价值的指标。这就是所谓的联合评估,有时简称为 "
+"FE。事实上,联合评估是大多数联合学习系统不可或缺的一部分。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:297
 msgid "Federated analytics"
-msgstr ""
+msgstr "联合分析"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:299
 msgid ""
@@ -16076,10 +16445,14 @@ msgid ""
 "aggregation to prevent the server from seeing the results submitted by "
 "individual client nodes."
 msgstr ""
+"在很多情况下,机器学习并不是从数据中获取价值的必要条件。数据分析可以产生有价"
+"值的见解,但同样,往往没有足够的数据来获得明确的答案。人们患某种疾病的平均年"
+"龄是多少?联合分析可以通过多个客户端节点进行此类查询。它通常与安全聚合等其他"
+"隐私增强技术结合使用,以防止服务器看到单个客户端节点提交的结果。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:303
 msgid "Differential Privacy"
-msgstr ""
+msgstr "差分隐私"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:305
 msgid ""
@@ -16091,10 +16464,14 @@ msgid ""
 "distinguished or re-identified. This technique can be considered an "
 "optimization that provides a quantifiable privacy protection measure."
 msgstr ""
+"差分隐私(DP)经常在联合学习中被提及。这是一种在分析和共享统计数据时使用的隐"
+"私保护方法,可确保单个参与者的隐私。DP 通过在模型更新中添加统计噪声来实现这一"
+"目的,确保任何个体参与者的信息都无法被区分或重新识别。这种技术可被视为一种优"
+"化,提供了一种可量化的隐私保护措施。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:326
 msgid "Flower"
-msgstr ""
+msgstr "Flower"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:328
 msgid ""
@@ -16106,28 +16483,35 @@ msgid ""
 " federated learning, analytics, and evaluation. It allows the user to "
 "federate any workload, any ML framework, and any programming language."
msgstr ""
+"联合学习、联合评估和联合分析需要相应的基础设施来回移动机器学习模型,在本地数"
+"据上对其进行训练和评估,然后汇总更新的模型。Flower "
+"提供的基础设施正是以简单、可扩展和安全的方式实现这些目标的。简而言之,Flower "
+"为联合学习、分析和评估提供了一种统一的方法。它允许用户联合任何工作负载、任何 "
+"ML 框架和任何编程语言。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:334
+#, fuzzy
 msgid "|a23d9638f96342ef9d25209951e2d564|"
-msgstr ""
+msgstr "|a23d9638f96342ef9d25209951e2d564|"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:340
 msgid ""
 "Flower federated learning server and client nodes (car, scooter, personal"
 " computer, roomba, and phone)"
-msgstr ""
+msgstr "Flower 联合学习服务器和客户端节点(汽车、滑板车、个人电脑、roomba 和手机)"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:353
 msgid ""
 "Congratulations, you just learned the basics of federated learning and "
 "how it relates to the classic (centralized) machine learning!"
-msgstr ""
+msgstr "恭喜你,你刚刚了解了联合学习的基础知识,以及它与传统(集中式)机器学习的关系"
+"!"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:355
 msgid ""
 "In the next part of this tutorial, we are going to build a first "
 "federated learning system with Flower."
-msgstr ""
+msgstr "在本教程的下一部分,我们将用 Flower 建立第一个联合学习系统。"

 #: ../../source/tutorial-series-what-is-federated-learning.ipynb:373
 msgid ""
@@ -16136,3 +16520,6 @@ msgid ""
 "pytorch.html>`__ shows how to build a simple federated learning system "
 "with PyTorch and Flower."
 msgstr ""
+"Flower 联合学习教程 - 第 1 部分 `__ 展示了如何使用 PyTorch 和 Flower "
+"构建一个简单的联合学习系统。"