Accelerate runner helper method #2746

Merged
merged 52 commits into develop from feature/OSSK-535-accelerate-helper-method on Jun 18, 2024

Conversation

avishniakov
Contributor

@avishniakov avishniakov commented Jun 4, 2024

Describe changes

Key changes

  • Adds zenml.integrations.huggingface.steps.run_with_accelerate, which makes it possible to run any step using Accelerate (the step itself must, of course, be written so that running it with Accelerate makes sense). The function is backed by a utility that wraps an arbitrary function into a Click CLI script, which most distributed training tools require. Since a CLI is quite limited, only str, int, float, bool, Path, and tuple parameters are accepted; this could be extended in the future.
    How it works:
from zenml.integrations.huggingface.steps import run_with_accelerate

@pipeline
def llm_peft_full_finetune():
    ...
    ft_model_dir = run_with_accelerate(finetune)(
        base_model_id=base_model_id,
        dataset_dir=datasets_dir,
        load_in_8bit=load_in_8bit,
    )
    ...

Running the same step without Accelerate would look like this:

@pipeline
def llm_peft_full_finetune():
    ...
    ft_model_dir = finetune(
        base_model_id=base_model_id,
        dataset_dir=datasets_dir,
        load_in_8bit=load_in_8bit,
    )
    ...
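The wrapper utility described above turns a typed Python function into a CLI entrypoint. As an illustration only (ZenML's actual helper is built on Click, and this sketch omits tuple support), a minimal stdlib version of the same idea could derive the CLI from the function's signature:

```python
import argparse
import inspect
from pathlib import Path


def make_cli(func):
    """Build an argparse-based CLI from a function's annotated signature.

    Illustrative sketch only: the real ZenML utility wraps functions with
    Click, but the principle is the same. Only a limited set of types can
    be supported, because CLI arguments arrive as strings.
    """
    supported = (str, int, float, bool, Path)
    parser = argparse.ArgumentParser(description=func.__doc__)
    for name, param in inspect.signature(func).parameters.items():
        annot = param.annotation
        if annot not in supported:
            raise TypeError(f"Unsupported CLI parameter type: {annot!r}")
        if annot is bool:
            # Booleans become flags rather than typed values.
            parser.add_argument(f"--{name}", action="store_true")
        else:
            parser.add_argument(f"--{name}", type=annot, required=True)

    def run(argv=None):
        args = parser.parse_args(argv)
        return func(**vars(args))

    return run
```

For example, a `finetune(base_model_id: str, load_in_8bit: bool)` function could then be invoked as `make_cli(finetune)(["--base_model_id", "...", "--load_in_8bit"])`, which is the shape a distributed launcher needs.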
  • Adds a helper method cleanup_gpu_memory to free GPU memory at the start of a step. Since it has a side effect on the whole environment, a mandatory force parameter must be passed as a safeguard.
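A hedged sketch of what such a guarded cleanup helper could look like (an illustration of the force-guard idea, not ZenML's actual implementation; the real signature and behavior may differ):

```python
import gc


def cleanup_gpu_memory(force: bool = False) -> None:
    """Free cached GPU memory at the start of a step.

    Illustrative sketch, not ZenML's actual implementation. Because
    releasing cached allocations affects the whole process, the caller
    must opt in explicitly with force=True.
    """
    if not force:
        raise ValueError(
            "cleanup_gpu_memory affects the global environment; "
            "pass force=True to confirm."
        )
    gc.collect()  # drop unreferenced Python-side objects first
    try:
        import torch

        torch.cuda.empty_cache()  # release cached CUDA allocations
    except ImportError:
        pass  # torch not installed; nothing GPU-side to clean
```

Requiring `force=True` makes the global side effect an explicit, deliberate choice at each call site rather than something a step can trigger accidentally.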

Companion PR: zenml-io/zenml-projects#102

Sample run: runs/5080958e-31f5-46b0-8e0b-fbf7086f1ff4 in Demo

Pre-requisites

Please ensure you have done the following:

  • I have read the CONTRIBUTING.md document.
  • If my change requires a change to docs, I have updated the documentation accordingly.
  • I have added tests to cover my changes.
  • I have based my new branch on develop and the open PR is targeting develop. If your branch wasn't based on develop read Contribution guide on rebasing branch to develop.
  • If my changes require changes to the dashboard, these changes are communicated/requested.

Types of changes

  • Bug fix (non-breaking change which fixes an issue)
  • New feature (non-breaking change which adds functionality)
  • Breaking change (fix or feature that would cause existing functionality to change)
  • Other (add details above)

Contributor

coderabbitai bot commented Jun 4, 2024

Important

Review skipped

Auto reviews are disabled on this repository.

Please check the settings in the CodeRabbit UI or the .coderabbit.yaml file in this repository. To trigger a single review, invoke the @coderabbitai review command.

You can disable this status message by setting the reviews.review_status to false in the CodeRabbit configuration file.


@github-actions github-actions bot added internal To filter out internal PRs and issues enhancement New feature or request labels Jun 4, 2024
Contributor

github-actions bot commented Jun 4, 2024

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

@avishniakov avishniakov changed the title [WIP] Accelerate runner helper method Accelerate runner helper method Jun 5, 2024
@avishniakov
Contributor Author

avishniakov commented Jun 5, 2024

@strickvl, I can see that the docs for finetuning LLMs are not quite ready. Shall I wait for you, or what is the best way to proceed with documenting these changes?

@avishniakov avishniakov requested review from strickvl and schustmi June 5, 2024 10:52
@strickvl
Contributor

strickvl commented Jun 5, 2024

@strickvl, I can see that the docs for finetuning LLMs are not quite ready. Shall I wait for you, or what is the best way to proceed with documenting these changes?

Could you maybe make the changes directly on the feature/gro-1047-docs branch, @avishniakov, and I'll touch them up? You can see I added a short section at the bottom of the docs/book/how-to/training-with-gpus/training-with-gpus.md page. Or just send something to me on Discord and I'll add it. I couldn't find or think of a good way to document this with code examples, but you're probably in a better position to do so.

Contributor

@strickvl strickvl left a comment


Just one tiny comment, otherwise this looks good to me. Is there any testing we can add for the function utils? I'm also wondering whether we should have an e2e test script that runs a simple (CPU-backed) pipeline in CI but using run_with_accelerate. I'm a bit worried by how quickly HF things can break, especially with something like Accelerate, so I'd like to get advance notice of API changes that way if possible.

Contributor

github-actions bot commented Jun 6, 2024

LLM Finetuning template updates in examples/llm_finetuning have been pushed.

Contributor

github-actions bot commented Jun 6, 2024

Quickstart template updates in examples/quickstart have been pushed.

Contributor

github-actions bot commented Jun 6, 2024

E2E template updates in examples/e2e have been pushed.

Contributor

github-actions bot commented Jun 6, 2024

NLP template updates in examples/e2e_nlp have been pushed.

Contributor

@strickvl strickvl left a comment


Thanks for adding the tests. This LGTM now, provided you get the tests passing (e.g., there's a security check failing at the moment).

@avishniakov
Contributor Author

@bcdurak, as discussed, this one is ready to be merged, but it may create more work for the Pydantic merge. Let me know when I can merge it.

@avishniakov avishniakov merged commit a64289f into develop Jun 18, 2024
71 checks passed
@avishniakov avishniakov deleted the feature/OSSK-535-accelerate-helper-method branch June 18, 2024 11:36
Labels
enhancement New feature or request internal To filter out internal PRs and issues run-slow-ci