merge: microsoft-main into main #62

Merged 112 commits into main from microsoft-main on Nov 12, 2024
Conversation

saforem2 (Member) commented:
This PR resolves the merge conflicts introduced in #35, which contains upstream changes from microsoft/Megatron-DeepSpeed.

saforem2 and others added 30 commits February 1, 2024 13:52
* Zero Bubble for Megatron-Deepspeed

* Remove schedule variant

* Integrate zbpp into Megatron

* clean up redundant lines

* Rename variables

* edit README

* Undo readme edit

* Minor change

* Add an example to example directory

---------

Co-authored-by: ufotalent <[email protected]>
Co-authored-by: Wan Xinyi <[email protected]>
* use split/squeeze instead of slice for performance

GPU may not show a perf difference, but HPU perf improves with this change

* add copyrights
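The split/squeeze-vs-slice commit above is the kind of change that is easiest to see in code. A minimal sketch of the idea, with NumPy standing in for the framework's tensor type and all variable names illustrative (the actual Megatron-DeepSpeed code operates on a fused QKV projection, but the shapes here are invented):

```python
import numpy as np

# A fused "QKV" tensor: [seq, batch, 3 * head_dim] (illustrative shapes).
mixed_qkv = np.arange(4 * 2 * 24, dtype=np.float64).reshape(4, 2, 24)

# Slice-based extraction: three separate strided views, one per tensor.
q_slice = mixed_qkv[..., 0:8]
k_slice = mixed_qkv[..., 8:16]
v_slice = mixed_qkv[..., 16:24]

# Split-based extraction: one call yields all three pieces at once,
# which some accelerator backends lower to a single cheaper op.
q, k, v = np.split(mixed_qkv, 3, axis=-1)

# Both approaches produce identical values.
assert np.array_equal(q, q_slice)
assert np.array_equal(v, v_slice)
```

The commit message suggests the two forms are numerically identical and differ only in how the backend executes them, which is why the change is safe on GPU even where it only helps HPU.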
Improve performance by keeping attention_mask on device and run ops further on device (microsoft#411)

* improve performance by keeping attention_mask on device and run ops further on device

* add copyrights
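The attention_mask commit above is about avoiding host/device round-trips: build the mask once where the compute runs and apply subsequent ops to it there. A hedged sketch of the pattern, with NumPy standing in for device tensors and the function name hypothetical:

```python
import numpy as np

def build_causal_mask(seq_len: int) -> np.ndarray:
    # Boolean mask that is True at positions a token must NOT attend to
    # (strictly above the diagonal). In the real code this tensor would
    # be created directly on the accelerator and kept resident there,
    # rather than being built on the host and copied over each step.
    return np.triu(np.ones((seq_len, seq_len), dtype=bool), k=1)

mask = build_causal_mask(4)
scores = np.zeros((4, 4))
# Apply the mask in place, on the same "device" the scores live on.
scores[mask] = -np.inf
```

The point of the change is not the mask itself but where it lives: every op that can run next to the data avoids a synchronizing host transfer.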
* improve RoPE perf by using cached sin/cos tensors

* add copyrights
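The RoPE commit above caches the rotary sin/cos tables instead of recomputing them each forward pass. A minimal sketch of that caching idea, assuming standard rotary-embedding math; the function name and the use of `functools.lru_cache` (rather than whatever buffer-caching the real code uses) are illustrative:

```python
from functools import lru_cache

import numpy as np

@lru_cache(maxsize=None)
def rope_sin_cos(seq_len: int, dim: int, base: float = 10000.0):
    # Standard RoPE frequency table: one inverse frequency per even
    # channel, outer-product with positions. Computing this is cheap
    # but doing it every forward pass adds up, so cache per shape.
    inv_freq = 1.0 / (base ** (np.arange(0, dim, 2) / dim))
    positions = np.arange(seq_len)
    freqs = np.outer(positions, inv_freq)  # [seq_len, dim // 2]
    return np.sin(freqs), np.cos(freqs)

sin1, cos1 = rope_sin_cos(8, 16)
sin2, cos2 = rope_sin_cos(8, 16)
# The second call returns the exact cached objects, not a recompute.
assert sin1 is sin2
```

Since the tables depend only on sequence length and head dimension, the cache hits on every step of a fixed-shape training run.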
* Extend test utilities to support more accelerators

* Add Intel Copyright
* Update arguments.py

* Update training.py

* Create profiler.py

* add copyrights

* Update profiler.py

* add copyrights

* Update help

* add copyrights
* Refine wandb logging function

* Address comments

* enable user to specify wandb local save dir

* Update and fix comments

* Update
… (microsoft#412)

* Update arguments.py

* Update training.py

* Update utils.py

* add copyrights

* add copyrights

* add copyrights

* Update arguments.py help

* Update arguments.py

* Update training.py

* Update utils.py

* Update arguments.py
…processing (microsoft#421)

* Update arguments.py

* Update tokenizer.py

* Update preprocess_data.py
* Update module.py

* Update preprocess_data.py

* add copyrights

* add copyrights

* Update tokenizer.py

* add copyrights
This PR adds a Llama universal checkpointing example to examples_deepspeed/universal_checkpointing.

It also includes changes to the README, some minor changes, and an update to the TensorBoard analysis script.
saforem2 and others added 26 commits October 14, 2024 23:28
[merge]: into `microsoft-main` $\leftarrow$ from `hzheng-data-fix`
@saforem2 requested a review from @hatanp on November 11, 2024.
@saforem2 (Member, Author) commented:

Should be good to go; I forgot I had put a blocker requiring a review before merging into main.

@hatanp (Collaborator) left a review comment:

112 Commits, 72 files changed, +7339 -2537
LGTM :-)
Runs well on Sunspot with a fresh install and train_aGPT_7B.sh.

@saforem2 merged commit c4de4d1 into main on Nov 12, 2024 (1 check passed).