Do the device_map more automatically for multi-GPUs #472

Open
wants to merge 1 commit into base: main
Conversation

@alexhegit commented Aug 17, 2023

The accelerate library provides functions to build the device_map automatically. This patch uses infer_auto_device_map() to replace the hard-coded mapping that splits the model by hand.

It works well with my two GPUs (6 GB RTX 3060 + 12 GB RTX 3060).

Signed-off-by: Alex He <[email protected]>
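
For context, here is a minimal sketch of how accelerate's infer_auto_device_map() can replace a hand-written device_map. The checkpoint name, the max_memory budgets, and the "GLMBlock" class name below are illustrative assumptions, not values taken from this patch:

```python
from transformers import AutoModel, AutoTokenizer
from accelerate import infer_auto_device_map, dispatch_model

# Illustrative checkpoint name; adjust to the model you are loading.
model_name = "THUDM/chatglm2-6b"

tokenizer = AutoTokenizer.from_pretrained(model_name, trust_remote_code=True)
model = AutoModel.from_pretrained(model_name, trust_remote_code=True).half()

# Let accelerate infer the per-layer placement instead of hard-coding it.
# max_memory values are example budgets for a 6 GB + 12 GB GPU pair;
# "GLMBlock" is assumed to be the transformer block class that should not be split.
device_map = infer_auto_device_map(
    model,
    max_memory={0: "5GiB", 1: "11GiB"},
    no_split_module_classes=["GLMBlock"],
)

# Place the weights on the GPUs according to the inferred map.
model = dispatch_model(model, device_map=device_map)
model.eval()
```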
@alexhegit (Author)

BTW: this also works for https://github.com/THUDM/ChatGLM-6B. I do not know whether I need to open a separate pull request for the ChatGLM-6B repo.
