Could you share your pip and conda list? When I run inference, I get:
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: false INTERNAL ASSERT FAILED at "/opt/conda/conda-bld/pytorch_1656352645774/work/torch/csrc/jit/codegen/cuda/executor_utils.cpp":1068, please report a bug to PyTorch. namespace CudaCodeGen {
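For the environment question above, a minimal sketch of the version info worth comparing (assumption: torch, its CUDA build, and transformers are the packages relevant to this failure; a plain `pip list` / `conda list` works just as well):

```python
# Sketch: print the package versions most relevant to this TorchScript/NVFuser error.
import torch
import transformers

print("torch:", torch.__version__)
print("torch CUDA build:", torch.version.cuda)
print("cuDNN:", torch.backends.cudnn.version())
print("transformers:", transformers.__version__)
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```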
Or this error:
This is an indication that codegen Failed for some reason.
To debug try disable codegen fallback path via setting the env variable export PYTORCH_NVFUSER_DISABLE=fallback
(Triggered internally at /opt/conda/conda-bld/pytorch_1656352645774/work/torch/csrc/jit/codegen/cuda/manager.cpp:329.)
attn_weights = upcast_masked_softmax(attn_weights, attention_mask, mask_value, unscale, softmax_dtype)
Write a Python code to count 1 to 10.
Traceback (most recent call last):
File "/WizardLM/WizardCoder/src/inference_wizardcoder.py", line 121, in
fire.Fire(main)
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/fire/core.py", line 141, in Fire
component_trace = _Fire(component, args, parsed_flag_args, context, name)
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/fire/core.py", line 475, in _Fire
component, remaining_args = _CallAndUpdateTrace(
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/fire/core.py", line 691, in _CallAndUpdateTrace
component = fn(*varargs, **kwargs)
File "/WizardLM/WizardCoder/src/inference_wizardcoder.py", line 110, in main
_output = evaluate(instruction, tokenizer, model)
File "/WizardLM/WizardCoder/src/inference_wizardcoder.py", line 47, in evaluate
generation_output = model.generate(
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context
return func(*args, **kwargs)
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/transformers/generation/utils.py", line 1515, in generate
return self.greedy_search(
File "/home/anaconda3/envs/WizardCorder/lib/python3.10/site-packages/transformers/generation/utils.py", line 2385, in greedy_search
next_tokens.tile(eos_token_id_tensor.shape[0], 1).ne(eos_token_id_tensor.unsqueeze(1)).prod(dim=0)
RuntimeError:
#define POS_INFINITY __int_as_float(0x7f800000)
#define INFINITY POS_INFINITY
#define NEG_INFINITY __int_as_float(0xff800000)
#define NAN __int_as_float(0x7fffffff)
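The warning above suggests disabling the codegen fallback so the underlying NVFuser failure is reported directly. A minimal sketch of that, together with the commonly reported workaround of switching the NVFuser JIT fuser off entirely (assumption: `torch._C._jit_set_nvfuser_enabled` is the private toggle available in the PyTorch 1.12 build shown in the paths above):

```python
import os

# As the warning suggests: disable the fallback path so the real codegen error
# is raised instead of being swallowed. Assumption: set before importing torch.
os.environ["PYTORCH_NVFUSER_DISABLE"] = "fallback"

import torch

# Workaround sketch (private API, assumption): disable NVFuser so TorchScript
# falls back to non-fused kernels instead of the failing CudaCodeGen path.
torch._C._jit_set_nvfuser_enabled(False)
```

If generation succeeds with the fuser disabled, that would point at NVFuser codegen in this particular torch/CUDA combination rather than at the model or the transformers version.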