Commit 8014ec5

chore: re-word LoraMLP notebook
jfrery committed Dec 17, 2024
1 parent 825e4b6 commit 8014ec5
Showing 1 changed file with 4 additions and 4 deletions.
8 changes: 4 additions & 4 deletions docs/advanced_examples/LoraMLP.ipynb
@@ -10,7 +10,7 @@
"\n",
"The fine-tuning dataset and the trained LoRA weights are protected using encryption. Thus, training can be securely outsourced to a remote server without compromising any sensitive data.\n",
"\n",
"The hybrid model approach is applied to fine-tuning: only the linear layers of the original model are outsourced to the server. The forward and backward passes on these layers are performed using encrypted activations and gradients. Meanwhile, the LoRA weights are kept by the client, which performs locally the forward and backward passes on the LoRA weights."
"The hybrid approach is applied to fine-tuning: only the linear layers of the original model are outsourced to the server. The forward and backward passes on these layers are performed using encrypted activations and gradients. Meanwhile, the LoRA weights are kept by the client, which performs locally the forward and backward passes on the LoRA weights."
]
},
{
@@ -249,7 +249,7 @@
"cell_type": "markdown",
"metadata": {},
"source": [
"## Setup FHE fine-tuning with LoraTraining and HybridFHEModel"
"## Setup FHE fine-tuning with LoraTrainer"
]
},
{
@@ -431,7 +431,7 @@
"\n",
"lora_trainer.save_and_clear_private_info(path)\n",
"\n",
"# At this point, the hybrid_model only contains the trainable parameters of the LoRA layers.\n",
"# At this point, the client's model only contains the trainable parameters of the LoRA layers.\n",
"peft_model.print_trainable_parameters()"
]
},
@@ -446,7 +446,7 @@
"**Key Takeaways:**\n",
" \n",
"- **Efficiency with LoRA:** While this example utilizes an MLP model with a relatively high proportion of LoRA weights due to its simplicity, the approach scales effectively to larger models like large language models (LLMs). In such cases, LoRA typically accounts for **less than one percent** of the total model parameters, ensuring minimal memory and computational overhead on the client side.\n",
"- **Scalability and Practicality:** The hybrid model approach demonstrated here is particularly beneficial for scenarios where client devices have limited resources. Memory heavy computations are offloaded to a secure server and the client handles only the lightweight LoRA adjustments locally."
"- **Scalability and Practicality:** The hybrid approach demonstrated here is particularly beneficial for scenarios where client devices have limited resources. Memory heavy computations are offloaded to a secure server and the client handles only the lightweight LoRA adjustments locally."
]
}
],
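For context on the reworded cells: the notebook fine-tunes a small MLP with LoRA adapters, where the base model's linear layers run encrypted on a server while the LoRA adapters stay on the client. Below is a minimal sketch of only the client-side LoRA setup, using standard PyTorch and PEFT. The MLP shape, layer names, and LoRA hyperparameters are illustrative assumptions rather than the notebook's actual values, and the Concrete ML `LoraTrainer` is mentioned only in a comment because its constructor and training signature are not part of this diff.

```python
# Client-side LoRA setup sketched with standard PyTorch + PEFT.
# Model shape, layer names, and LoRA hyperparameters are illustrative assumptions,
# not the values used in docs/advanced_examples/LoraMLP.ipynb.
import torch.nn as nn
from peft import LoraConfig, get_peft_model


class SimpleMLP(nn.Module):
    """Toy MLP standing in for the notebook's base model."""

    def __init__(self, in_features=2, hidden=64, out_features=2):
        super().__init__()
        self.fc1 = nn.Linear(in_features, hidden)
        self.act = nn.ReLU()
        self.fc2 = nn.Linear(hidden, out_features)

    def forward(self, x):
        return self.fc2(self.act(self.fc1(x)))


base_model = SimpleMLP()

# Attach LoRA adapters to the linear layers; only these adapters will be trainable.
lora_config = LoraConfig(r=4, lora_alpha=8, target_modules=["fc1", "fc2"], bias="none")
peft_model = get_peft_model(base_model, lora_config)

# Same check as the diff's last code cell: only the LoRA parameters remain trainable,
# which is exactly what the client keeps and updates locally.
peft_model.print_trainable_parameters()

# The encrypted server side is handled by Concrete ML's LoraTrainer (the renamed
# section heading in this commit). Its constructor and training arguments are not
# shown in this diff, so they are omitted here; after training, the notebook calls
# lora_trainer.save_and_clear_private_info(path) to drop the server-side weights and
# keep only the LoRA adapters on the client.
```

Running the sketch prints PEFT's trainable-parameter summary, the same check the diff's final cell performs with `peft_model.print_trainable_parameters()`.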
