[Bug] maximum recursion depth exceeded #3518
This issue is a duplicate of #3525.
Thank you for pointing that out @kebe7jun. We will quickly review your PR and let you know. cc @zhaochenyang20
Do you have a PR to fix this? @kebe7jun @sk2011-ship-it @jhinpan
@zhaochenyang20 I believe @kebe7jun's PR to fix this issue is #3519, awaiting review.
@jhinpan I will take a look. Thanks!
I have installed `datasets` and the issue still exists, so it does not seem to be a dependency problem.
I encountered the same problem. @zhaochenyang20 |
@zhaochenyang20 The merge failed.
@zwdgit There are too many PRs to merge. Please remind me. Thanks!
Is there any new progress? @zhaochenyang20 |
@issaccv I am trying to pass CI and merge it.
Checklist
Describe the bug
Maximum recursion depth triggered on exception exit.
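For context, a minimal sketch of how this class of error surfaces in Python (this is an illustrative toy, not the actual SGLang code path, which is addressed in #3519): any call chain that does not terminate before the interpreter's recursion budget is exhausted raises `RecursionError: maximum recursion depth exceeded`. When it is triggered during exception cleanup, the traceback appears "on exception exit" as reported here.

```python
import sys

def recurse(n):
    # Each call consumes one frame of the interpreter's recursion budget
    # (default limit is typically 1000; see sys.getrecursionlimit()).
    return recurse(n + 1)

try:
    recurse(0)
except RecursionError as e:
    # CPython raises RecursionError once the limit is exceeded.
    print(type(e).__name__)
```

Raising the limit with `sys.setrecursionlimit()` only masks bugs like this; the real fix is breaking the cyclic call (as the linked PR does).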
Reproduction
N/A
Environment
root@g1805:/sgl-workspace# python3 -m sglang.check_env
INFO 02-12 08:49:13 __init__.py:190] Automatically detected platform cuda.
Python: 3.10.12 (main, Jan 17 2025, 14:35:34) [GCC 11.4.0]
CUDA available: True
GPU 0,1,2,3,4,5,6,7: NVIDIA GeForce RTX 4090
GPU 0,1,2,3,4,5,6,7 Compute Capability: 8.9
CUDA_HOME: /usr/local/cuda
NVCC: Cuda compilation tools, release 12.4, V12.4.131
CUDA Driver Version: 550.78
PyTorch: 2.5.1+cu124
sgl_kernel: 0.0.3.post3
flashinfer: 0.2.0.post2+cu124torch2.5
triton: 3.1.0
transformers: 4.48.3
torchao: 0.8.0
numpy: 1.26.4
aiohttp: 3.11.12
fastapi: 0.115.8
hf_transfer: 0.1.9
huggingface_hub: 0.28.1
interegular: 0.3.3
modelscope: 1.22.3
orjson: 3.10.15
packaging: 24.2
psutil: 6.1.1
pydantic: 2.10.6
multipart: 0.0.20
zmq: 26.2.1
uvicorn: 0.34.0
uvloop: 0.21.0
vllm: 0.7.2
openai: 1.61.1
tiktoken: 0.8.0
anthropic: 0.45.2
decord: 0.6.0
NVIDIA Topology:
GPU0 GPU1 GPU2 GPU3 GPU4 GPU5 GPU6 GPU7 CPU Affinity NUMA Affinity GPU NUMA ID
GPU0 X PHB SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU1 PHB X SYS SYS SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU2 SYS SYS X PHB SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU3 SYS SYS PHB X SYS SYS SYS SYS 0-27,56-83 0 N/A
GPU4 SYS SYS SYS SYS X PHB SYS SYS 28-55,84-111 1 N/A
GPU5 SYS SYS SYS SYS PHB X SYS SYS 28-55,84-111 1 N/A
GPU6 SYS SYS SYS SYS SYS SYS X PHB 28-55,84-111 1 N/A
GPU7 SYS SYS SYS SYS SYS SYS PHB X 28-55,84-111 1 N/A
Legend:
X = Self
SYS = Connection traversing PCIe as well as the SMP interconnect between NUMA nodes (e.g., QPI/UPI)
NODE = Connection traversing PCIe as well as the interconnect between PCIe Host Bridges within a NUMA node
PHB = Connection traversing PCIe as well as a PCIe Host Bridge (typically the CPU)
PXB = Connection traversing multiple PCIe bridges (without traversing the PCIe Host Bridge)
PIX = Connection traversing at most a single PCIe bridge
NV# = Connection traversing a bonded set of # NVLinks
ulimit soft: 65535