
Feature Request: Federated Learning Support in Huggingface Transformers #1298

Open
porteratzo opened this issue Jan 22, 2025 · 0 comments
Is your feature request related to a problem? Please describe.
The Huggingface Transformers library is widely used for centralized training thanks to its integration with many frameworks, its rich feature set, and its ease of use. However, it is not designed for federated learning and has several shortcomings for this task, such as persisting the scheduler and optimizer state between rounds and extracting parameter-efficient fine-tuning (PEFT) parameters. While these features are not difficult to implement, they require boilerplate code that could be avoided (see the sketch below).
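
For context, here is a minimal sketch (not project code) of the kind of per-round boilerplate currently needed when reusing `transformers.Trainer` across federated rounds; the `run_round` helper and its signature are hypothetical, only the `Trainer` methods and `peft` utility are existing APIs:

```python
# Hypothetical helper illustrating the per-round boilerplate.
from peft import get_peft_model_state_dict
from transformers import Trainer, TrainingArguments

def run_round(model, train_dataset, args: TrainingArguments,
              optimizer_state=None, scheduler_state=None):
    trainer = Trainer(model=model, args=args, train_dataset=train_dataset)
    # Build the optimizer/scheduler up front so that state from the previous
    # round can be restored (assumes args.max_steps is set explicitly).
    trainer.create_optimizer()
    trainer.create_scheduler(num_training_steps=args.max_steps,
                             optimizer=trainer.optimizer)
    if optimizer_state is not None:
        trainer.optimizer.load_state_dict(optimizer_state)
    if scheduler_state is not None:
        trainer.lr_scheduler.load_state_dict(scheduler_state)

    trainer.train()

    # Return only the PEFT adapter weights plus the state needed next round.
    return (get_peft_model_state_dict(trainer.model),
            trainer.optimizer.state_dict(),
            trainer.lr_scheduler.state_dict())
```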

Describe the solution you'd like
Introduce a new FederatedTrainer class that manages the optimizer, scheduler, and trainer state across federated learning rounds. This class should also handle the correct extraction of PEFT parameters, so that users who bring their own Transformers models no longer have to implement this boilerplate themselves.
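
As a rough illustration only, one possible shape for such a class (none of these methods exist yet; the names and semantics are placeholders for discussion):

```python
# Hypothetical interface sketch, not an existing API.
from peft import get_peft_model_state_dict, set_peft_model_state_dict
from transformers import Trainer

class FederatedTrainer(Trainer):
    """Trainer that keeps optimizer/scheduler/trainer state across FL rounds
    and exchanges only the PEFT adapter weights when a PEFT model is used."""

    def get_shareable_parameters(self):
        # Return only adapter weights for PEFT models, full weights otherwise.
        if hasattr(self.model, "peft_config"):
            return get_peft_model_state_dict(self.model)
        return self.model.state_dict()

    def set_shareable_parameters(self, state_dict):
        # Load the aggregated parameters received from the server.
        if hasattr(self.model, "peft_config"):
            set_peft_model_state_dict(self.model, state_dict)
        else:
            self.model.load_state_dict(state_dict, strict=False)

    def train_round(self):
        # Train for one round; because self.optimizer and self.lr_scheduler are
        # kept on the instance, their state carries over to the next call.
        return self.train()
```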
