
Commit

update readme
Guangxuan-Xiao committed Feb 9, 2023
1 parent ced9774 commit 7911676
Showing 2 changed files with 3 additions and 3 deletions.
6 changes: 3 additions & 3 deletions README.md
@@ -1,5 +1,5 @@
# Offsite-Tuning: Transfer Learning without Full Model [[paper]()]
- <img src="figures/overview.png" width="600">
+ <center><img src="figures/overview.png" width="600"></center>

## Abstract

@@ -38,7 +38,7 @@ In this repository, you will find all the necessary components to reproduce the
## Results

- Comparing existing fine-tuning approaches (top and middle) with Offsite-Tuning (bottom). (a) Traditionally, users send labeled data to model owners for fine-tuning, raising privacy concerns and incurring high computational costs. (b) Having the model owner send the full model to the data owner is impractical: it threatens the ownership of the proprietary model, and resource constraints make fine-tuning the huge foundation model unaffordable for users. (c) Offsite-Tuning offers a privacy-preserving and efficient alternative to traditional fine-tuning methods that require access to the full model weights.
- <img src="figures/paradigm.png" width="600">
+ <center><img src="figures/paradigm.png" width="600"></center>

- On 1-billion-scale language models, Offsite-Tuning (OT Plug-in) improves zero-shot (ZS) performance across all tasks, with only slight decreases compared to full fine-tuning (FT). A consistent performance gap is also observed between emulator fine-tuning and plug-in, indicating that offsite-tuning effectively preserves the privacy of the original proprietary model (users cannot use the emulator to achieve the same performance).
![lm_results](figures/lm_results.png)
@@ -48,7 +48,7 @@ In this repository, you will find all the necessary components to reproduce the


- Offsite-Tuning significantly increases the fine-tuning throughput and reduces the memory footprint compared to existing fine-tuning methods.
- <img src="figures/efficiency.png" width="600">
+ <center><img src="figures/efficiency.png" width="600"></center>

## Citation

Binary file modified figures/efficiency.png
