Describe the issue as clearly as possible:
When using outlines with the Llama 3.2 Vision model, simple regex pattern generation works, but JSON schema-based generation fails with index out of bounds errors.
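The vision.py referenced in the tracebacks is not included in the report. A minimal sketch of the kind of script that hits this, assuming the documented outlines transformers_vision workflow; the model id, image, prompt, and schema below are placeholders, not the reporter's actual code:

```python
# Hypothetical reproduction sketch -- model id, image, prompt, and schema are
# assumptions; the reporter's actual vision.py is not shown in the issue.
from PIL import Image
from transformers import MllamaForConditionalGeneration

from outlines import models, generate

model = models.transformers_vision(
    "meta-llama/Llama-3.2-11B-Vision-Instruct",  # assumed model id
    model_class=MllamaForConditionalGeneration,
)

image = Image.open("planet.jpg")  # placeholder image

# Regex-constrained generation works (matching the Pluto/Saturn outputs in the logs below).
regex_generator = generate.regex(model, r"Pluto|Saturn|Jupiter")
print(regex_generator("<|image|>Which planet is this?", [image]))

# JSON-schema-constrained generation builds the caption regex shown in the logs,
# then fails with the IndexError when the logits mask is applied.
schema = '{"type": "object", "properties": {"caption": {"type": "string"}}, "required": ["caption"]}'
json_generator = generate.json(model, schema)
out = json_generator("<|image|>Describe the image in one caption.", [image])
print(out)
```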
Error message:
On CPU:
> python vision.py ~/dottxt/outlines/debug
Loading checkpoint shards: 100%|██████████████████████████| 5/5 [00:42<00:00, 8.41s/it]
Pluto
\{[ ]?"caption"[ ]?:[ ]?"([^"\\\x00-\x1F\x7F-\x9F]|\\["\\])*"[ ]?\}
Traceback (most recent call last):
File "/home/cameron/dottxt/outlines/debug/vision.py", line 44, in<module>
out = generator(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/generate/api.py", line 556, in __call__
completions = self.model.generate(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/models/transformers_vision.py", line 56, in generate
generated_ids = self._generate_output_seq(prompts, inputs, **generation_kwargs)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/models/transformers.py", line 350, in _generate_output_seq
output_ids = self.model.generate(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/transformers/generation/utils.py", line 2215, in generate
result = self._sample(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/transformers/generation/utils.py", line 3223, in _sample
next_token_scores = logits_processor(input_ids, next_token_logits)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/transformers/generation/logits_process.py", line 104, in __call__
scores = processor(input_ids, scores)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/processors/base_logits_processor.py", line 78, in __call__
processed_logits = self.process_logits(input_ids, torch_logits)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/processors/structured.py", line 121, in process_logits
mask[batch_indices_concat, allowed_tokens_concat] = False
IndexError: index 128256 is out of bounds for dimension 1 with size 128256
On GPU:
> python vision.py ~/dottxt/outlines/debug
The model weights are not tied. Please use the `tie_weights` method before using the `infer_auto_device` function.
Loading checkpoint shards: 100%|██████████████████████████| 5/5 [00:12<00:00, 2.45s/it]
Saturn
\{[ ]?"caption"[ ]?:[ ]?"([^"\\\x00-\x1F\x7F-\x9F]|\\["\\])*"[ ]?\}
../aten/src/ATen/native/cuda/IndexKernel.cu:92: operator(): block: [169,0,0], thread: [12,0,0] Assertion `-sizes[i] <= index && index < sizes[i] && "index out of bounds"` failed.
Traceback (most recent call last):
File "/home/cameron/dottxt/outlines/debug/vision.py", line 44, in <module>
out = generator(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/generate/api.py", line 556, in __call__
completions = self.model.generate(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/models/transformers_vision.py", line 56, in generate
generated_ids = self._generate_output_seq(prompts, inputs, **generation_kwargs)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/outlines/models/transformers.py", line 350, in _generate_output_seq
output_ids = self.model.generate(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/torch/utils/_contextlib.py", line 116, in decorate_context
return func(*args, **kwargs)
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/transformers/generation/utils.py", line 2215, in generate
result = self._sample(
File "/home/cameron/dottxt/outlines/debug/.pyron/lib/python3.10/site-packages/transformers/generation/utils.py", line 3249, in _sample
next_tokens = torch.multinomial(probs, num_samples=1).squeeze(1)
RuntimeError: CUDA error: device-side assert triggered
CUDA kernel errors might be asynchronously reported at some other API call, so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1
Compile with `TORCH_USE_CUDA_DSA` to enable device-side assertions.
Also getting `index 146001 is out of bounds for dimension 1 with size 128256` with Qwen/Qwen2.5-7B-Instruct during JSON generation with this llm:
llm = Llama(model[1], tokenizer=llama_cpp.llama_tokenizer.LlamaHFTokenizer.from_pretrained(model[0]), n_ctx=4096, n_gpu_layers=-1, verbose=False)
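Both tracebacks and the Qwen report point at the same symptom: the structured-generation mask marks a token id as allowed that lies outside the logits tensor (id 128256 or 146001 against a logits dimension of 128256). A rough way to check whether the tokenizer defines more token ids than the model emits logits for; the model id here is an assumption:

```python
# Sanity-check sketch: compare the tokenizer's token ids against the model's
# configured vocabulary size. The model id is an assumption, not from the report.
from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Llama-3.2-11B-Vision-Instruct"  # assumed
tokenizer = AutoTokenizer.from_pretrained(model_id)
config = AutoConfig.from_pretrained(model_id)

# For multimodal configs the text vocabulary lives in config.text_config.
vocab_size = getattr(config, "text_config", config).vocab_size

print("len(tokenizer)        :", len(tokenizer))                  # includes added tokens such as <|image|>
print("configured vocab_size :", vocab_size)                      # compare with the logits dimension in the traceback (128256)
print("largest special id    :", max(tokenizer.all_special_ids))  # an id >= the logits dimension would explain the out-of-bounds index
```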
Context for the issue:
From HatterNoMad on the Outlines Discord.