Update CPT documentation #2229

Open · wants to merge 33 commits into base: main

Commits (33):
- `92b9e1a` — added CPT model to peft (tsachiblau, Oct 22, 2024)
- `e54d380` — Merge branch 'huggingface:main' into main (tsachiblau, Oct 22, 2024)
- `023f071` — Merge branch 'huggingface:main' into main (tsachiblau, Oct 24, 2024)
- `54cddaf` — Merge branch 'huggingface:main' into main (tsachiblau, Oct 25, 2024)
- `2dfe70f` — Added arXiv link to the paper, integrated CPT into testing framework,… (tsachiblau, Oct 25, 2024)
- `ba4b115` — Merge branch 'huggingface:main' into main (tsachiblau, Oct 25, 2024)
- `f8c8317` — Merge branch 'huggingface:main' into main (tsachiblau, Oct 30, 2024)
- `bd2fc70` — config: Added config check in __post_init__. Removed redundant initia… (tsachiblau, Oct 30, 2024)
- `b01b214` — Merge branch 'main' of https://github.com/tsachiblau/peft_CPT (tsachiblau, Oct 30, 2024)
- `6ed1723` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 3, 2024)
- `77bb0b9` — tests: Updated test_cpt and testing_common as per the PR requirements. (tsachiblau, Nov 3, 2024)
- `dbcdedf` — Created cpt.md in package_regerence. Updated the prompting.md file. a… (tsachiblau, Nov 3, 2024)
- `f7138d4` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 5, 2024)
- `0a5fb20` — verifying that the model is causal LM (tsachiblau, Nov 5, 2024)
- `7206db5` — Changed CPTModel to CPTEmbedding (tsachiblau, Nov 5, 2024)
- `24b0af9` — merge with main branch (tsachiblau, Nov 5, 2024)
- `81ffa09` — make style (tsachiblau, Nov 7, 2024)
- `130ec76` — make style (tsachiblau, Nov 7, 2024)
- `70067d8` — make style (tsachiblau, Nov 7, 2024)
- `9397314` — make doc (tsachiblau, Nov 8, 2024)
- `249713c` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 10, 2024)
- `0a43473` — Removed redundant checks (tsachiblau, Nov 10, 2024)
- `144f042` — Fixed errors (tsachiblau, Nov 13, 2024)
- `97449da` — merge with peft (tsachiblau, Nov 13, 2024)
- `dacb400` — Minor code updates. (tsachiblau, Nov 13, 2024)
- `cc348a4` — Minor code updates. (tsachiblau, Nov 17, 2024)
- `79959d1` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 18, 2024)
- `7eea892` — Minor code updates. (tsachiblau, Nov 18, 2024)
- `6d625c0` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 21, 2024)
- `d120d13` — Merge branch 'huggingface:main' into main (tsachiblau, Nov 21, 2024)
- `9ae9939` — Update Doc (tsachiblau, Nov 21, 2024)
- `2fada31` — Update Doc (tsachiblau, Nov 21, 2024)
- `43260c7` — Merge remote-tracking branch 'origin/main' (tsachiblau, Nov 21, 2024)
docs/source/conceptual_guides/prompting.md (2 changes: 1 addition & 1 deletion)

@@ -90,4 +90,4 @@ In CPT, only specific context token embeddings are optimized, while the rest of
To prevent overfitting and maintain stability, CPT uses controlled perturbations to limit the allowed changes to context embeddings within a defined range.
Additionally, to address the phenomenon of recency bias—where examples near the end of the context tend to be prioritized over earlier ones—CPT applies a decay loss factor.

- Take a look at [Context-Aware Prompt Tuning for few-shot classification](../task_guides/cpt-few-shot-classification) for a step-by-step guide on how to train a model with CPT.
+ Take a look at [Example](../../../examples/cpt_finetuning/README.md) for a step-by-step guide on how to train a model with CPT.
> **Review comment (Member):** Hmm, I'm not sure if this link is going to work from the built docs. It's better if you link directly to the README, i.e. https://github.com/huggingface/peft/blob/main/examples/cpt_finetuning/README.md (of course, the link won't point anywhere right now, but after merging it will be valid).

docs/source/package_reference/cpt.md (5 changes: 5 additions & 0 deletions)

@@ -9,6 +9,8 @@ Unless required by applicable law or agreed to in writing, software distributed
an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the
specific language governing permissions and limitations under the License.
> **Review comment (Member):** Remove?

⚠️ Note that this file is in Markdown but contains specific syntax for our doc-builder (similar to MDX) that may not be
rendered properly in your Markdown viewer.
-->
@@ -21,6 +23,9 @@ The abstract from the paper is:

*Traditional fine-tuning is effective but computationally intensive, as it requires updating billions of parameters. CPT, inspired by ICL, PT, and adversarial attacks, refines context embeddings in a parameter-efficient manner. By optimizing context tokens and applying a controlled gradient descent, CPT achieves superior accuracy across various few-shot classification tasks, showing significant improvement over existing methods such as LoRA, PT, and ICL.*

Take a look at [Example](../../../examples/cpt_finetuning/README.md) for a step-by-step guide on how to train a model with CPT.
> **Review comment (Member):** Same argument about the link.



## CPTConfig

[[autodoc]] tuners.cpt.config.CPTConfig
Expand Down
examples/cpt_finetuning/README.md (68 changes: 68 additions & 0 deletions)

@@ -0,0 +1,68 @@

# Context-aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods
## Introduction ([Paper](https://arxiv.org/abs/2410.17222), [Code](https://github.com/tsachiblau/Context-aware-Prompt-Tuning-Advancing-In-Context-Learning-with-Adversarial-Methods), [Notebook](cpt_train_and_inference.ipynb), [Colab](https://colab.research.google.com/drive/1UhQDVhZ9bDlSk1551SuJV8tIUmlIayta?usp=sharing))
Large Language Models (LLMs) can perform few-shot learning using either optimization-based approaches or In-Context Learning (ICL). Optimization-based methods often suffer from overfitting, as they require updating a large number of parameters with limited data. In contrast, ICL avoids overfitting but typically underperforms compared to optimization-based methods and is highly sensitive to the selection, order, and format of demonstration examples.

To overcome these challenges, we introduce Context-aware Prompt Tuning (CPT), a method inspired by ICL, Prompt Tuning (PT), and adversarial attacks.
CPT builds on the ICL strategy of concatenating examples before the input, extending it by incorporating PT-like learning to refine the context embedding through iterative optimization, extracting deeper insights from the training examples. Our approach carefully modifies specific context tokens, considering the unique structure of the examples within the context.

In addition to updating the context with PT-like optimization, CPT draws inspiration from adversarial attacks, adjusting the input based on the labels present in the context while preserving the inherent value of the user-provided data.
To ensure robustness and stability during optimization, we employ a projected gradient descent algorithm, constraining token embeddings to remain close to their original values and safeguarding the quality of the context.
Our method has demonstrated superior accuracy across multiple classification tasks using various LLM models, outperforming existing baselines and effectively addressing the overfitting challenge in few-shot learning.
> **Review comment (Member), on lines +6 to +11:** In this section, you use a lot of "we" and "our". Let's try to word it in a more neutral way, as for the reader it could appear like "we" refers to the PEFT maintainers :) So use "The approach" instead of "Our approach" etc.



<div class="flex justify-center">
<img src="https://huggingface.co/datasets/huggingface/documentation-images/resolve/main/peft/cpt.png"/>
</div>
<small>CPT optimizing only specific token embeddings while keeping the rest of the model frozen <a href="https://huggingface.co/papers/2410.17222">(image source)</a>.</small>

---

## Dataset Creation and Collation for CPT

This document explains how to prepare datasets for **Context-Aware Prompt Tuning (CPT)** and how these steps align with the CPT paper.

---

### Template-Based Tokenization

#### Purpose
Templates define the structure of the input-output pairs, enabling the model to interpret the task within a unified context.

- **Input Templates**:
Templates such as `"input: {sentence}"` format the raw input sentences. The `{sentence}` placeholder is replaced with the actual input text.

- **Output Templates**:
Similarly, templates like `"output: {label}"` format the labels (`positive`, `negative`, etc.).

- **Separator Tokens**:
Separators are used to distinguish between different parts of the input (e.g., input text and labels) and between examples.
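
The template mechanics described above can be sketched in a few lines of plain Python. The exact template strings, separator token, and helper names below are illustrative assumptions, not the values used by PEFT's CPT implementation:

```python
# Illustrative template-based formatting for CPT-style few-shot contexts.
# INPUT_TEMPLATE, OUTPUT_TEMPLATE, and SEPARATOR are assumed values.

INPUT_TEMPLATE = "input: {sentence}"
OUTPUT_TEMPLATE = "output: {label}"
SEPARATOR = "\n"  # distinguishes input/output parts and whole examples


def render_example(sentence: str, label: str) -> str:
    """Format one training example with the input and output templates."""
    return (INPUT_TEMPLATE.format(sentence=sentence)
            + SEPARATOR
            + OUTPUT_TEMPLATE.format(label=label))


def render_context(examples, query_sentence: str) -> str:
    """Concatenate rendered few-shot examples before the query, ICL-style,
    ending with a bare output prefix for the model to complete."""
    context = SEPARATOR.join(render_example(s, l) for s, l in examples)
    return (context + SEPARATOR
            + INPUT_TEMPLATE.format(sentence=query_sentence)
            + SEPARATOR + "output:")


prompt = render_context(
    [("great movie", "positive"), ("dull plot", "negative")],
    "loved it",
)
print(prompt)
```

Because every example shares the same template, each token's position deterministically identifies its role (input, label, or separator), which is what makes the token-type mask described below possible.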

#### Paper Reference
- Refer to **Section 3.1** of the paper, where template-based tokenization is described as a critical step in structuring inputs for CPT.

#### How it Helps
Templates provide context-aware structure, ensuring the model does not overfit by utilizing structured input-output formats. Using cpt_tokens_type_mask, we gain fine-grained information about the roles of different tokens in the input-output structure. This enables the model to:
> **Suggested change (Member):**
> - Templates provide context-aware structure, ensuring the model does not overfit by utilizing structured input-output formats. Using cpt_tokens_type_mask, we gain fine-grained information about the roles of different tokens in the input-output structure. This enables the model to:
> + Templates provide context-aware structure, ensuring the model does not overfit by utilizing structured input-output formats. Using `cpt_tokens_type_mask`, we gain fine-grained information about the roles of different tokens in the input-output structure. This enables the model to:
1. Refrain from Updating Label Tokens: Prevent overfitting to label tokens by excluding their gradients during training.
2. Apply Different Projection Norms: Use type-specific projections for different parts of the input during Projected Gradient Descent (PGD), enhancing robustness and generalization.
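
Both behaviors can be illustrated with a small projected-gradient-descent step driven by a token-type mask. This is a minimal sketch under stated assumptions: the type codes and epsilon radii are invented for illustration, and real CPT operates on embedding vectors rather than the per-token scalars used here:

```python
# Sketch: a token-type mask gates which tokens move and by how much.
# Setting the label tokens' projection radius to 0.0 freezes them (point 1);
# each other type is clamped within its own radius around the original
# embedding, i.e. type-specific projection norms (point 2).

INPUT_TOKEN, SEP_TOKEN, LABEL_TOKEN = 1, 2, 3  # assumed type codes


def pgd_step(embeddings, grads, originals, token_types, lr=0.1, eps=None):
    """One projected-gradient step on per-token scalar 'embeddings'."""
    if eps is None:
        eps = {INPUT_TOKEN: 0.5, SEP_TOKEN: 0.1, LABEL_TOKEN: 0.0}
    updated = []
    for e, g, e0, t in zip(embeddings, grads, originals, token_types):
        e_new = e - lr * g                  # plain gradient step
        radius = eps[t]                     # type-specific projection radius
        # Project back into the ball of the token's type around its original value.
        e_new = max(e0 - radius, min(e0 + radius, e_new))
        updated.append(e_new)
    return updated


emb = [0.0, 0.0, 0.0]
out = pgd_step(emb, grads=[10.0, 10.0, 10.0], originals=emb,
               token_types=[INPUT_TOKEN, SEP_TOKEN, LABEL_TOKEN])
print(out)  # [-0.5, -0.1, 0.0]
```

Note how the large gradient (10.0) is clamped to each type's radius, and the label token does not move at all — the mask, not the optimizer, decides what can change.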


#### Paper Reference

These steps are directly informed by the principles outlined in the CPT paper, particularly in Sections **3.1**, **3.2**, and **3.3**.





> **Review comment (Member), on lines +55 to +58:** Remove.

## Citation
```bib
@article{blau2025cpt,
    title={Context-Aware Prompt Tuning: Advancing In-Context Learning with Adversarial Methods},
    author={Tsachi Blau and Moshe Kimhi and Yonatan Belinkov and Alexander Bronstein and Chaim Baskin},
    journal={arXiv preprint arXiv:2410.17222},
    year={2025}
}
```