diff --git a/docs/advanced_examples/LoraMLP.ipynb b/docs/advanced_examples/LoraMLP.ipynb
index 529603215..8a5dfd169 100644
--- a/docs/advanced_examples/LoraMLP.ipynb
+++ b/docs/advanced_examples/LoraMLP.ipynb
@@ -10,7 +10,7 @@
     "\n",
     "The fine-tuning dataset and the trained LoRA weights are protected using encryption. Thus, training can be securely outsourced to a remote server without compromising any sensitive data.\n",
     "\n",
-    "The hybrid model approach is applied to fine-tuning: only the linear layers of the original model are outsourced to the server. The forward and backward passes on these layers are performed using encrypted activations and gradients. Meanwhile, the LoRA weights are kept by the client, which performs locally the forward and backward passes on the LoRA weights."
+    "The hybrid approach is applied to fine-tuning: only the linear layers of the original model are outsourced to the server. The forward and backward passes on these layers are performed using encrypted activations and gradients. Meanwhile, the LoRA weights are kept by the client, which performs locally the forward and backward passes on the LoRA weights."
    ]
   },
   {
@@ -249,7 +249,7 @@
    "cell_type": "markdown",
    "metadata": {},
    "source": [
-    "## Setup FHE fine-tuning with LoraTraining and HybridFHEModel"
+    "## Setup FHE fine-tuning with LoraTrainer"
    ]
   },
   {
@@ -431,7 +431,7 @@
     "\n",
     "lora_trainer.save_and_clear_private_info(path)\n",
     "\n",
-    "# At this point, the hybrid_model only contains the trainable parameters of the LoRA layers.\n",
+    "# At this point, the client's model only contains the trainable parameters of the LoRA layers.\n",
     "peft_model.print_trainable_parameters()"
    ]
   },
   {
@@ -446,7 +446,7 @@
     "**Key Takeaways:**\n",
     " \n",
     "- **Efficiency with LoRA:** While this example utilizes an MLP model with a relatively high proportion of LoRA weights due to its simplicity, the approach scales effectively to larger models like large language models (LLMs). In such cases, LoRA typically accounts for **less than one percent** of the total model parameters, ensuring minimal memory and computational overhead on the client side.\n",
-    "- **Scalability and Practicality:** The hybrid model approach demonstrated here is particularly beneficial for scenarios where client devices have limited resources. Memory heavy computations are offloaded to a secure server and the client handles only the lightweight LoRA adjustments locally."
+    "- **Scalability and Practicality:** The hybrid approach demonstrated here is particularly beneficial for scenarios where client devices have limited resources. Memory heavy computations are offloaded to a secure server and the client handles only the lightweight LoRA adjustments locally."
    ]
   }
 ],
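
For context on the `LoraTrainer` API that the renamed heading refers to, the sketch below outlines the client-side flow this notebook documents. Only `LoraTrainer`, `save_and_clear_private_info`, and `print_trainable_parameters` appear in the diff above; the import path, the constructor arguments, and the `compile`/`train` calls are assumptions about the Concrete ML API rather than code taken from the notebook.

```python
# Hedged sketch of the LoraTrainer flow referenced by the updated heading.
# Assumed (not taken from the diff): the import path, the LoraTrainer
# constructor arguments, and the compile()/train() calls. Taken from the
# diff: save_and_clear_private_info() and print_trainable_parameters().
import torch
from peft import LoraConfig, get_peft_model
from concrete.ml.torch.lora import LoraTrainer  # assumed import path


class SimpleMLP(torch.nn.Module):
    """Stand-in for the notebook's base model."""

    def __init__(self):
        super().__init__()
        self.fc1 = torch.nn.Linear(2, 64)
        self.act = torch.nn.ReLU()
        self.fc2 = torch.nn.Linear(64, 2)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))


# Attach LoRA adapters to the linear layers; these weights stay on the client.
peft_model = get_peft_model(SimpleMLP(), LoraConfig(target_modules=["fc1", "fc2"]))

optimizer = torch.optim.Adam(peft_model.parameters(), lr=1e-3)
loss_fn = torch.nn.CrossEntropyLoss()
lora_trainer = LoraTrainer(peft_model, optimizer=optimizer, loss_fn=loss_fn)  # assumed signature

# Synthetic calibration / training data. During training, the base model's
# linear layers run on the server over encrypted activations and gradients,
# while the LoRA updates happen locally on the client.
x, y = torch.randn(100, 2), torch.randint(0, 2, (100,))
train_loader = torch.utils.data.DataLoader(
    torch.utils.data.TensorDataset(x, y), batch_size=8
)
lora_trainer.compile((x, y), n_bits=8)           # assumed method and arguments
lora_trainer.train(train_loader, fhe="execute")  # assumed method and arguments

# From the diff: strip everything except the trainable LoRA parameters,
# then check what the client-side artifact still holds.
lora_trainer.save_and_clear_private_info("lora_mlp")
peft_model.print_trainable_parameters()
```

The two final calls correspond to the cell changed in the third hunk: once `save_and_clear_private_info` runs, only the trainable LoRA parameters remain on the client, which is what `print_trainable_parameters()` confirms.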