
ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel is not supported. #2388

Open
samuellimabraz opened this issue Feb 19, 2025 · 1 comment

Comments

@samuellimabraz

Context

I'm fine-tuning the Qwen2.5-VL model with swift for data extraction using LoRA. I'm not sure what the correct way is to save and upload the adapter so that it can be reloaded correctly.
In short, I followed these steps:

# imports (module paths as of ms-swift 3.x; they may differ across versions)
import torch
from transformers import EarlyStoppingCallback
from swift.llm import get_model_tokenizer
from swift.trainers import Seq2SeqTrainer
from swift.tuners import Swift

# load model
model, processor = get_model_tokenizer(
    'Qwen/Qwen2.5-VL-3B-Instruct',
    torch_dtype=torch.bfloat16,
    use_hf=True,
    attn_impl="flash_attn",
)
# get lora config
...
model = Swift.prepare_model(model, lora_config)
# training config and run
...
trainer = Seq2SeqTrainer(
    model=model,
    args=training_args,
    data_collator=template.data_collator,
    train_dataset=train_dataset,
    eval_dataset=val_dataset,
    template=template,
    callbacks=[
        EarlyStoppingCallback(
            early_stopping_patience=6,
            early_stopping_threshold=0.001
        )
    ]
)
stats = trainer.train()
# push adapter
model.push_to_hub(f"tech4humans/{model_name}", private=True)

While debugging, I saw that the PEFT model was loaded as PeftModelForCausalLM.
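For reference, the elided LoRA step could restrict target_modules to leaf Linear layers of the language model only. A minimal sketch of the name filtering (the module names below are hypothetical, and the assumption is that the vision tower is registered under the "visual" prefix on Qwen2.5-VL):

```python
# Hypothetical module names, illustrating how to build an explicit
# target_modules list that contains only leaf Linear layers of the
# language model and skips everything under the vision tower ("visual.").
all_linear_names = [
    "model.layers.0.self_attn.q_proj",
    "model.layers.0.self_attn.k_proj",
    "model.layers.0.mlp.gate_proj",
    "visual.blocks.0.attn.qkv",
    "visual.merger.mlp.0",
]

# PEFT matches target_modules by name suffix, so deduplicated
# leaf names are enough.
target_modules = sorted(
    {name.rsplit(".", 1)[-1]
     for name in all_linear_names
     if not name.startswith("visual.")}
)
print(target_modules)  # → ['gate_proj', 'k_proj', 'q_proj']
```

The resulting list can then be passed as `target_modules` when building the LoRA config, so the adapter never targets the vision-tower container module.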

Problem

Afterwards, I tried to reload the adapter and got an error from PEFT:

from transformers import Qwen2_5_VLForConditionalGeneration
model = Qwen2_5_VLForConditionalGeneration.from_pretrained("Qwen/Qwen2.5-VL-3B-Instruct", device_map="auto") 
model.load_adapter("tech4humans/Qwen2.5-VL-3B-Instruct-r4-tuned")
/usr/local/lib/python3.10/dist-packages/peft/tuners/lora/model.py in _create_new_module(lora_config, adapter_name, target, **kwargs)
    345         if new_module is None:
    346             # no module could be matched
--> 347             raise ValueError(
    348                 f"Target module {target} is not supported. Currently, only the following modules are supported: "
    349                 "`torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, ".

ValueError: Target module Qwen2_5_VisionTransformerPretrainedModel(
  (patch_embed): Qwen2_5_VisionPatchEmbed(
    (proj): Conv3d(3, 1280, kernel_size=(2, 14, 14), stride=(2, 14, 14), bias=False)
  )
  (rotary_pos_emb): Qwen2_5_VisionRotaryEmbedding()
  (blocks): ModuleList(
    (0-31): 32 x Qwen2_5_VLVisionBlock(
      (norm1): Qwen2RMSNorm((1280,), eps=1e-06)
      (norm2): Qwen2RMSNorm((1280,), eps=1e-06)
      (attn): Qwen2_5_VLVisionSdpaAttention(
        (qkv): Linear(in_features=1280, out_features=3840, bias=True)
        (proj): Linear(in_features=1280, out_features=1280, bias=True)
      )
      (mlp): Qwen2_5_VLMLP(
        (gate_proj): Linear(in_features=1280, out_features=3420, bias=True)
        (up_proj): Linear(in_features=1280, out_features=3420, bias=True)
        (down_proj): Linear(in_features=3420, out_features=1280, bias=True)
        (act_fn): SiLU()
      )
    )
  )
  (merger): Qwen2_5_VLPatchMerger(
    (ln_q): Qwen2RMSNorm((1280,), eps=1e-06)
    (mlp): Sequential(
      (0): Linear(in_features=5120, out_features=5120, bias=True)
      (1): GELU(approximate='none')
      (2): Linear(in_features=5120, out_features=2048, bias=True)
    )
  )
) is not supported. Currently, only the following modules are supported: `torch.nn.Linear`, `torch.nn.Embedding`, `torch.nn.Conv1d`, `torch.nn.Conv2d`, `torch.nn.Conv3d`, `transformers.pytorch_utils.Conv1D`, `torch.nn.MultiheadAttention.`.
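The error happens because PEFT only wraps modules whose class is in its supported set (Linear, Embedding, Conv layers, etc.); when a target_modules pattern matches a container module such as the whole vision transformer, module creation raises exactly this ValueError. A toy re-implementation of that matching logic (the module tree below is a hypothetical slice, and the suffix-matching rule is a simplification of PEFT's actual behavior):

```python
# Module classes PEFT can replace with a LoRA layer (simplified set).
SUPPORTED = {"Linear", "Embedding", "Conv1d", "Conv2d", "Conv3d"}

# Toy slice of the Qwen2.5-VL module tree: name -> class name.
modules = {
    "visual": "Qwen2_5_VisionTransformerPretrainedModel",  # container, not a leaf
    "visual.blocks.0.attn.qkv": "Linear",
    "model.layers.0.self_attn.q_proj": "Linear",
}

def matches(name, targets):
    # Simplified version of PEFT's suffix matching on target_modules.
    return any(name == t or name.endswith("." + t) for t in targets)

def inject(targets):
    wrapped = []
    for name, cls in modules.items():
        if matches(name, targets):
            if cls not in SUPPORTED:
                raise ValueError(f"Target module {cls} is not supported.")
            wrapped.append(name)
    return wrapped

print(inject(["q_proj", "qkv"]))  # both resolve to Linear leaves: OK
try:
    inject(["visual"])            # matches the whole vision tower: raises
except ValueError as e:
    print(e)
```

So if the saved adapter config's target_modules ends up matching the name of the vision tower itself (rather than only its inner Linear layers), reloading fails even though training worked.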

System info

transformers 4.50.0.dev0
peft 0.14.1.dev0
ms-swift 3.2.0.dev0
Python 3.10.12
CUDA Version: 12.6

Am I missing something or doing something wrong? Any pointers would be appreciated. Thanks!
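One possible workaround sketch (an assumption, not a confirmed fix): if the pushed adapter_config.json contains a target_modules entry that names the vision-tower container, pruning that entry before reloading would avoid the match. Using only the standard library, and a hypothetical config where the offending entry is "visual":

```python
import json
import pathlib
import tempfile

# Hypothetical adapter_config.json; the key point is the "target_modules"
# entry ("visual") that would match the vision-tower container module.
cfg = {
    "peft_type": "LORA",
    "r": 4,
    "target_modules": ["q_proj", "k_proj", "v_proj", "visual"],
}

with tempfile.TemporaryDirectory() as d:
    path = pathlib.Path(d) / "adapter_config.json"
    path.write_text(json.dumps(cfg))

    # Drop any entry that names the vision tower (assumption: it is
    # registered as "visual" on Qwen2.5-VL), then write the config back.
    cfg2 = json.loads(path.read_text())
    cfg2["target_modules"] = [m for m in cfg2["target_modules"] if m != "visual"]
    path.write_text(json.dumps(cfg2))
    print(json.loads(path.read_text())["target_modules"])
```

After editing the downloaded adapter folder this way, `load_adapter` would only see the inner Linear layers; whether the adapter weights still line up depends on what was actually trained, so this is only a diagnostic sketch.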

@lsabrinax

Have you solved this problem? I am running into the same issue.
