
TypeError: unsupported operand type(s) for *: 'dict' and 'int' #1

Open
liuzmx opened this issue Dec 7, 2022 · 6 comments

liuzmx commented Dec 7, 2022

I am a beginner in natural language processing.

When I clone the repository, install the dependencies, and try to run the quickstart code from the README, the following error occurs. How should I solve it?

My Python version is 3.8.15, and the other dependencies match the versions in requirements.txt.

Traceback (most recent call last):
  File "quickstart.py", line 15, in <module>
    inputs = processor(text=["一只小狗在摇尾巴", "一只小猪在吃饭"], images=image, return_tensors="pt", padding=True)
  File "/home/liuzhiming/.miniconda3/envs/clip-chinese/lib/python3.8/site-packages/transformers/models/clip/processing_clip.py", line 85, in __call__
    image_features = self.feature_extractor(images, return_tensors=return_tensors, **kwargs)
  File "/home/liuzhiming/.miniconda3/envs/clip-chinese/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py", line 146, in __call__
    images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
  File "/home/liuzhiming/.miniconda3/envs/clip-chinese/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py", line 146, in <listcomp>
    images = [self.resize(image=image, size=self.size, resample=self.resample) for image in images]
  File "/home/liuzhiming/.miniconda3/envs/clip-chinese/lib/python3.8/site-packages/transformers/models/clip/feature_extraction_clip.py", line 207, in resize
    new_short, new_long = size, int(size * long / short)
TypeError: unsupported operand type(s) for *: 'dict' and 'int'

I printed the values of several parameters in resize (transformers/models/clip/feature_extraction_clip.py, line 207) and found that the type of size is dict, not int. The specific values are:

size:  {'shortest_edge': 224}
long=960
short=600
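A minimal sketch of what is going wrong (the helper names here are hypothetical, not from transformers): newer image-processor configs store size as a dict like {'shortest_edge': 224}, while the resize code in transformers 4.18.0 does arithmetic on it directly, hence the TypeError. Normalizing the value before the arithmetic avoids the crash:

```python
# Hypothetical helpers illustrating the failure mode; not the transformers API.
def shortest_edge(size):
    """Accept either the old int form or the new {'shortest_edge': N} dict form."""
    if isinstance(size, dict):
        return size["shortest_edge"]
    return size

def resize_dims(size, long, short):
    """Reproduce line 207's computation with the size value normalized first."""
    size = shortest_edge(size)
    return size, int(size * long / short)

# The exact values printed above:
print(resize_dims({"shortest_edge": 224}, 960, 600))  # (224, 358)
```

With a plain int size this would have worked; the dict form is what the old resize code cannot handle.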

mobi1019 commented Dec 8, 2022

I had the same issue; switching to transformers 4.25.1 solved it. Hope this helps!


liuzmx commented Dec 8, 2022

> I had the same issue; switching to transformers 4.25.1 solved it. Hope this helps!

Thanks! After I upgraded transformers from 4.18.0 to 4.25.1, the error was resolved.
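For anyone hitting this later, the fix reported in this thread is simply pinning the package (pip install transformers==4.25.1). A small version check like the sketch below (a hypothetical guard, not part of this repo) can make quickstart.py fail fast with a clear message on the older release:

```python
# Hypothetical guard: compare dotted version strings numerically.
def is_at_least(version, minimum):
    """True if `version` >= `minimum`, comparing each dotted component as an int."""
    to_tuple = lambda v: tuple(int(part) for part in v.split("."))
    return to_tuple(version) >= to_tuple(minimum)

# 4.25.1 is the version reported working in this thread; 4.18.0 crashes.
assert is_at_least("4.25.1", "4.25.1")
assert not is_at_least("4.18.0", "4.25.1")
```

In a real script you would pass transformers.__version__ as the first argument.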


zbbwss commented Apr 20, 2023

TypeError: unsupported operand type(s) for *: 'dict' and 'int'


zbbwss commented Apr 20, 2023

same question

@linwhitehat

> I had the same issue; switching to transformers 4.25.1 solved it. Hope this helps!
>
> Thanks! After I upgraded transformers from 4.18.0 to 4.25.1, the error was resolved.

Hello, I hit the error "texts.append(data['text']) KeyError: 'text'" after updating to 4.25.1. Did you encounter the same problem, and how did you solve it?
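A hedged guess at this second error (the function and field names below are hypothetical, since the repo's data-loading code is not shown here): KeyError: 'text' usually means some record in the training data lacks the 'text' field. Skipping or logging such records instead of indexing blindly is a common workaround:

```python
# Hypothetical data-loading sketch; the real loader in this repo may differ.
def collect_texts(records):
    """Gather the 'text' field from each record, skipping malformed entries."""
    texts = []
    for data in records:
        if "text" not in data:
            continue  # skip records without a 'text' key instead of crashing
        texts.append(data["text"])
    return texts

print(collect_texts([{"text": "一只小狗在摇尾巴"}, {"caption": "missing"}]))
```

If every record is missing 'text', the actual field name in the data file probably differs from what the loader expects, which is worth checking first.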

@zhousteven

> I had the same issue; switching to transformers 4.25.1 solved it. Hope this helps!
>
> Thanks! After I upgraded transformers from 4.18.0 to 4.25.1, the error was resolved.
>
> Hello, I hit the error "texts.append(data['text']) KeyError: 'text'" after updating to 4.25.1. Did you encounter the same problem, and how did you solve it?

same question!!!
