Issues: huggingface/transformers
#34869 · Setting tokenizer.pad_token_id = model.config.eos_token_id fails for Llama 3 · opened Nov 22, 2024 by dvrogozh
#34855 · Offline mode doesn't work with models that require trust_remote_code=True [bug] · opened Nov 21, 2024 by VladKha · 2 of 4 tasks
#34848 · QuantizedCache fails with latest versions of optimum-quanto [bug] · opened Nov 21, 2024 by SimJeg · 1 of 4 tasks
#34847 · model.dequantize() does not remove quantization_config from model.config, which results in an error when loading the model [bug, Good First Issue] · opened Nov 21, 2024 by konradkalita · 1 of 4 tasks
#34846 · Change inputs_embeds to input_embeds [Feature request] · opened Nov 21, 2024 by Luohongzhige
#34843 · When num_beams is set in GenerationConfig, the stop_strings parameter has no effect [bug] · opened Nov 21, 2024 by ZYM66 · 2 of 4 tasks
#34841 · Urgent: 429 Errors and Potential Rate Limiting on Model Access [bug] · opened Nov 21, 2024 by dnuanghsh · 2 of 4 tasks
#34840 · CVE-2024-11392/11393/11394 vulnerabilities [Feature request] · opened Nov 21, 2024 by JWYang0329
#34839 · Latest transformers has a TensorFlow and NumPy error [bug] · opened Nov 21, 2024 by sleepingcat4 · 1 of 4 tasks
#34835 · Setting attention_mask and past_key_values simultaneously causes an error [bug] · opened Nov 20, 2024 by MikeDean2367 · 4 tasks done
#34834 · HfArgumentParser error when using LoraConfig dataclass [bug] · opened Nov 20, 2024 by JoseSantosAMD · 2 of 4 tasks
#34830 · Data collator class type integrity is not intact throughout the runtime [bug] · opened Nov 20, 2024 by kmehant · 2 of 4 tasks
#34824 · Flash Attention 2 breaks with batch inference [bug, Multimodal, Vision] · opened Nov 20, 2024 by pspdada · 2 of 4 tasks
#34822 · Possible memory leak after evaluation when using use_liger_kernel [bug] · opened Nov 20, 2024 by upskyy · 1 of 4 tasks
#34820 · HW based PIL images not being handled in pretrained image processor [bug] · opened Nov 20, 2024 by alem-147 · 2 of 4 tasks
#34817 · Mamba2 torch_forward reduction dimension possibly incorrect? [bug] · opened Nov 19, 2024 by HanGuo97 · 1 of 4 tasks
#34811 · SeamlessM4TForTextToSpeech.generate not working if generation_config is passed [bug] · opened Nov 19, 2024 by ydshieh
#34809 · Flex attention + refactor [Feature request, Good Difficult Issue, PyTorch] · opened Nov 19, 2024 by ArthurZucker · 1 of 7 tasks
#34808 · The usage of the "forced_decoder_ids" parameter [Feature request] · opened Nov 19, 2024 by Alsace08
#34798 · Loading bigger models is very slow using AutoModelForCausalLM.from_pretrained [bug] · opened Nov 19, 2024 by vibhas-singh · 2 of 4 tasks