Fixes to the Gemma fine-tuning Colab tutorial.
1. Correct a bug where only the first sequence in a batch was used in the loss calculation. Because the tutorial ultimately uses a batch size of 1, the bug went unnoticed, but it is worth fixing since readers may reuse this piece of code in their own projects. (A sketch of the corrected loss is shown below.)
2. Change the attention mask from purely causal to causal-with-prefix, i.e., a mask like the right diagram in Figure 3 of https://arxiv.org/pdf/1910.10683 rather than the center diagram. Using the prefix in the attention mask is more appropriate for the fine-tuning task used in the tutorial. (See the mask sketch after the loss example.)

PiperOrigin-RevId: 688741676
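A minimal sketch of the corrected loss, assuming the tutorial's loss function takes logits of shape [batch, seq_len, vocab], integer targets of shape [batch, seq_len], and a padding mask of the same shape. The name `masked_loss` and the use of the `optax` cross-entropy helper are illustrative, not necessarily the tutorial's exact code; the point is that the mean runs over every sequence in the batch.

```python
import jax.numpy as jnp
import optax

def masked_loss(logits, targets, mask):
    # Per-token cross entropy: shape [batch, seq_len].
    per_token = optax.softmax_cross_entropy_with_integer_labels(logits, targets)
    # Buggy variant (only the first sequence contributes):
    #   jnp.sum(per_token[0] * mask[0]) / jnp.sum(mask[0])
    # Fixed: average the masked loss over the *whole* batch.
    return jnp.sum(per_token * mask) / jnp.sum(mask)
```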
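And a sketch of the causal-with-prefix mask described in the second fix. True means "may attend": every position can see all prefix (input) tokens bidirectionally, while target tokens attend causally among themselves, matching the right diagram of Figure 3 in the T5 paper. The function name and the scalar `prefix_len` are illustrative assumptions; the actual colab would derive the prefix length from the tokenized input.

```python
import jax.numpy as jnp

def prefix_lm_mask(prefix_len: int, seq_len: int) -> jnp.ndarray:
    # Plain causal mask: position i may attend to positions j <= i.
    causal = jnp.tril(jnp.ones((seq_len, seq_len), dtype=bool))
    # Prefix columns are visible to every row, making attention
    # bidirectional within the prefix ("causal with prefix").
    prefix = jnp.arange(seq_len) < prefix_len  # [seq_len]
    return causal | prefix[None, :]
```

For example, `prefix_lm_mask(3, 5)` yields a 5x5 mask whose first three columns are fully visible, with strictly causal structure over the remaining (target) positions.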
Showing 3 changed files with 216 additions and 37 deletions.