
Add mark_step for encoder layers #650

Closed
Wants to merge 1 commit.

Conversation

@yisonzhu commented Dec 19, 2024

Coupled with [Use FusedSDPA for MllamaVisionSdpaAttention #620], this change resolves two issues that arise when running the Llama 3.2 vision model:

  1. GC failure when batch size > 1 on Gaudi3.
  2. Increased device memory consumption with Torch 2.5 compared to Torch 2.4.


@jkaniecki left a comment


LGTM

@yisonzhu yisonzhu requested a review from vivekgoe as a code owner December 25, 2024 04:42
@michalkuligowski force-pushed the dev/mark_step_for_encoder branch from c2f7554 to 5efc637 on January 7, 2025 09:18
@@ -135,7 +135,7 @@ def flatten(in_list):
return list(itertools.chain(*in_list))


-def get_decoder_layer_suffix(model_type):
+def get_target_layer_suffix(model_type) -> list[str]:


Please change to something like "get_target_layer_suffix_list" to reflect the return type

def modify_model_layers(module: torch.nn.Module,
                        suffix: list[str],
                        n=1,
                        counter=None):


Please change to something like "suffix_list" to reflect the underlying type
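For context, the mechanism under review can be sketched as follows. This is a hypothetical, simplified stand-in, not the PR's actual code: `Module` mimics only the `named_children()` part of `torch.nn.Module`, `mark_step()` stubs `habana_frameworks.torch.core.mark_step` (which flushes the accumulated lazy-mode graph on Gaudi), and the layer class names are invented for illustration.

```python
class Module:
    """Minimal stand-in for torch.nn.Module with named children."""
    def __init__(self, **children):
        self._children = dict(children)
        for name, child in children.items():
            setattr(self, name, child)

    def named_children(self):
        return self._children.items()

mark_step_calls = []

def mark_step():
    # Stub: on Gaudi this would cut and flush the lazy-mode graph.
    mark_step_calls.append(1)

def get_target_layer_suffix_list(model_type):
    # Decoder layers are always targeted; the PR adds encoder layers
    # for multimodal models such as mllama (Llama 3.2 vision).
    suffix_list = ["DecoderLayer"]
    if model_type == "mllama":
        suffix_list.append("EncoderLayer")
    return suffix_list

def modify_model_layers(module, suffix_list, n=1, counter=None):
    """Wrap forward() of every submodule whose class name ends with a
    given suffix so mark_step() runs after every n-th such layer."""
    if counter is None:
        counter = [0]
    for _, child in module.named_children():
        if any(type(child).__name__.endswith(s) for s in suffix_list):
            orig_forward = child.forward
            def wrapped(*args, _orig=orig_forward, **kwargs):
                counter[0] += 1
                out = _orig(*args, **kwargs)
                if counter[0] % n == 0:
                    mark_step()
                return out
            child.forward = wrapped
        modify_model_layers(child, suffix_list, n, counter)

# Illustrative layer classes whose names match the suffixes above.
class VisionEncoderLayer(Module):
    def forward(self, x):
        return x + 1

class TextDecoderLayer(Module):
    def forward(self, x):
        return x * 2

model = Module(
    encoder=Module(l0=VisionEncoderLayer(), l1=VisionEncoderLayer()),
    decoder=Module(l0=TextDecoderLayer()),
)
modify_model_layers(model, get_target_layer_suffix_list("mllama"))
model.encoder.l0.forward(1)
model.encoder.l1.forward(1)
model.decoder.l0.forward(3)
assert len(mark_step_calls) == 3  # one graph break per wrapped layer call
```

With `n=1` every wrapped encoder and decoder layer triggers a graph break, which is what keeps per-layer graphs small enough to compile with batch size > 1 and bounds lazy-graph memory growth.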

@yma11 commented Jan 8, 2025

Hi @michalkuligowski, thanks for your review, but I need to create a new PR #669 to address your comments. Can you help close this one and take a look at the new one?

@michalkuligowski michalkuligowski deleted the dev/mark_step_for_encoder branch January 8, 2025 09:02
michalkuligowski pushed a commit that referenced this pull request Jan 8, 2025
This is an updated version of #650.


Coupled with [Use FusedSDPA for MllamaVisionSdpaAttention #620], this change resolves two issues that arise when running the Llama 3.2 vision model:

1. GC failure when batch size > 1 on Gaudi3.
2. Increased device memory consumption with Torch 2.5 compared to Torch 2.4.

---------

Signed-off-by: yan ma <[email protected]>
Co-authored-by: yisonzhu <[email protected]>