```
/usr/local/lib/python3.10/dist-packages/triton/compiler/backends/cuda.py in make_llir(src, metadata, options, capability)
    171     add_external_libs(src, names, paths)
    172     # TritonGPU -> LLVM-IR
--> 173     ret = translate_triton_gpu_to_llvmir(src, capability, tma_infos, runtime.TARGET.NVVM)
    174     if len(tma_infos) > 0:
    175         metadata["tensormaps_info"] = parse_tma_info(tma_infos, metadata["ids_of_folded_args"])

IndexError: map::at
```

Environment: Colab notebook. Reproduction:
```
%cd hydra
!pip install -r requirements.txt
```

```python
import torch
from hydra import Hydra

batch, length, dim = 2, 64, 16
x = torch.randn(batch, length, dim).to("cuda")
model = Hydra(
    d_model=dim,             # Model dimension d_model
    d_state=64,              # SSM state expansion factor
    d_conv=7,                # Local non-causal convolution width
    expand=2,                # Block expansion factor
    use_mem_eff_path=False,  # Nightly release. Thanks to Alston Lo
    headdim=16,
).to("cuda")
y = model(x)
assert y.shape == x.shape
```
Hi, which Triton version is being used?
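For anyone hitting the same `IndexError: map::at`, the versions behind the original report are not stated in the issue, so as a sketch, the snippet below only shows one way to collect them from the Colab runtime (it assumes the standard `torch` and `triton` packages are importable):

```python
import torch
import triton

# Package and hardware versions relevant to the Triton compilation error above.
print("triton:", triton.__version__)
print("torch:", torch.__version__)
print("CUDA (torch build):", torch.version.cuda)
print("GPU:", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "none")
```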