
cannot import name '_MULTIMODAL_MODELS' from 'vllm.model_executor.models' #36

Open
andyluo7 opened this issue Oct 18, 2024 · 1 comment

@andyluo7

I tried to run the Aria video notebook with vLLM 0.6.3, but I got the following error. Can you check?

Load the Aria model & tokenizer with vLLM:

import os
os.environ["CUDA_VISIBLE_DEVICES"] = "0"

import requests
import torch
from PIL import Image

from transformers import AutoTokenizer
from vllm import LLM, ModelRegistry, SamplingParams
from vllm.model_executor.models import _MULTIMODAL_MODELS

from aria.vllm.aria import AriaForConditionalGeneration

ModelRegistry.register_model(
    "AriaForConditionalGeneration", AriaForConditionalGeneration
)
_MULTIMODAL_MODELS["AriaForConditionalGeneration"] = (
    "aria",
    "AriaForConditionalGeneration",
)

model_id_or_path = "rhymes-ai/Aria"

model = LLM(
    model=model_id_or_path,
    tokenizer=model_id_or_path,
    dtype="bfloat16",
    limit_mm_per_prompt={"image": 256},
    enforce_eager=True,
    trust_remote_code=True,
    max_model_len=38400,
    gpu_memory_utilization=0.84,
)

tokenizer = AutoTokenizer.from_pretrained(
    model_id_or_path, trust_remote_code=True, use_fast=False
)

/opt/conda/envs/py_3.9/lib/python3.9/site-packages/tqdm/auto.py:21: TqdmWarning: IProgress not found. Please update jupyter and ipywidgets. See https://ipywidgets.readthedocs.io/en/stable/user_install.html
from .autonotebook import tqdm as notebook_tqdm
WARNING 10-18 16:58:37 rocm.py:17] fork method is not supported by ROCm. VLLM_WORKER_MULTIPROC_METHOD is overridden to spawn instead.

ImportError Traceback (most recent call last)
Cell In[1], line 12
10 from transformers import AutoTokenizer
11 from vllm import LLM, ModelRegistry, SamplingParams
---> 12 from vllm.model_executor.models import _MULTIMODAL_MODELS
14 from aria.vllm.aria import AriaForConditionalGeneration
16 ModelRegistry.register_model(
17 "AriaForConditionalGeneration", AriaForConditionalGeneration
18 )

ImportError: cannot import name '_MULTIMODAL_MODELS' from 'vllm.model_executor.models' (/opt/conda/envs/py_3.9/lib/python3.9/site-packages/vllm/model_executor/models/__init__.py)

@aria-hacker
Collaborator

@andyluo7 It seems vLLM moved the registration-related code elsewhere in v0.6.3 with this commit, so you could try importing it from `vllm.model_executor.models.registry` instead. However, we only test Aria with vLLM v0.6.2, so I'm not sure whether there will be other compatibility issues.
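To keep the notebook working across both vLLM versions, one option is an import fallback that tries the new module path first and falls back to the old one. Below is a minimal sketch; `resolve_attr` is a hypothetical helper (not part of vLLM), and the two module paths are taken from this thread (pre- and post-v0.6.3 locations of `_MULTIMODAL_MODELS`):

```python
import importlib


def resolve_attr(module_names, attr):
    """Return `attr` from the first module in `module_names` that exposes it."""
    for name in module_names:
        try:
            mod = importlib.import_module(name)
        except ImportError:
            # Module path does not exist in this vLLM version; try the next one.
            continue
        if hasattr(mod, attr):
            return getattr(mod, attr)
    raise ImportError(f"{attr!r} not found in any of {module_names}")


# Hypothetical usage for this issue (requires vLLM installed):
# _MULTIMODAL_MODELS = resolve_attr(
#     [
#         "vllm.model_executor.models.registry",  # vLLM >= 0.6.3
#         "vllm.model_executor.models",           # vLLM <= 0.6.2
#     ],
#     "_MULTIMODAL_MODELS",
# )
```

Note the caveat above still applies: even if the import resolves, Aria is only tested against vLLM v0.6.2, so later versions may break in other ways.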

@xffxff xffxff added the vllm label Nov 11, 2024