
[V1][Log] Add max request concurrency log to V1 #12569

Merged

Conversation

@mgoin (Member) commented Jan 30, 2025

Ports the "Maximum concurrency for XXXX tokens per request: YYx" log over from V0.

VLLM_USE_V1=1 vllm serve neuralmagic/Meta-Llama-3.1-70B-Instruct-quantized.w8a8 --max-model-len 2048 --gpu-memory-utilization 0.98
...
INFO 01-30 03:18:12 gpu_model_runner.py:848] Loading model weights took 67.7229 GB
INFO 01-30 03:18:25 backends.py:577] Using cache directory: /home/mgoin/.cache/vllm/torch_compile_cache/2b70b500a6/rank_0 for vLLM's torch.compile
INFO 01-30 03:18:25 backends.py:585] Dynamo bytecode transform time: 12.88 s
INFO 01-30 03:18:28 backends.py:309] Cache the graph of shape None for later use
INFO 01-30 03:19:07 backends.py:321] Compiling a graph for general shape takes 41.86 s
INFO 01-30 03:19:13 monitor.py:31] torch.compile takes 54.73 s in total
INFO 01-30 03:19:14 kv_cache_utils.py:395] # GPU blocks: 1381
INFO 01-30 03:19:14 kv_cache_utils.py:398] Maximum concurrency for 2048 tokens per request: 10.79x
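For context, the reported multiplier is just the total KV-cache capacity in tokens divided by the per-request token budget. Below is a minimal sketch of that arithmetic, assuming vLLM's default block size of 16 tokens; the helper function is illustrative, not the actual kv_cache_utils.py code.

```python
# Illustrative sketch, not vLLM's actual implementation.
# Assumes the default KV-cache block size of 16 tokens.

def max_concurrency(num_gpu_blocks: int, block_size: int, max_model_len: int) -> float:
    """Upper bound on how many requests fit in the KV cache at once,
    if every request consumes its full max_model_len of tokens."""
    total_cache_tokens = num_gpu_blocks * block_size
    return total_cache_tokens / max_model_len

# Values from the log above: 1381 blocks * 16 tokens/block = 22096 tokens,
# and 22096 / 2048 ~= 10.79, matching the reported 10.79x.
print(f"Maximum concurrency for 2048 tokens per request: "
      f"{max_concurrency(1381, 16, 2048):.2f}x")
```

Note that this figure assumes every request uses the full context length; requests with fewer tokens occupy fewer blocks, so the concurrency actually achievable can be higher.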


👋 Hi! Thank you for contributing to the vLLM project.
Just a reminder: PRs do not trigger a full CI run by default. Instead, they only run fastcheck CI, which runs a small, essential subset of CI tests to catch errors quickly. You can run other CI tests on top of those by going to your fastcheck build in the Buildkite UI (linked in the PR checks section) and unblocking them. If you do not have permission to unblock, ping simon-mo or khluu to add you to our Buildkite org.

Once the PR is approved and ready to go, your PR reviewer(s) can run CI to test the changes comprehensively before merging.

To run full CI, PR reviewers can do one of the following:

  • Add the ready label to the PR
  • Enable auto-merge

🚀

@robertgshaw2-redhat enabled auto-merge (squash) January 30, 2025 03:22
@github-actions bot added the ready label (ONLY add when PR is ready to merge/full CI is needed) Jan 30, 2025
@robertgshaw2-redhat merged commit 4078052 into vllm-project:main Jan 30, 2025
57 of 60 checks passed
Isotr0py pushed a commit to Isotr0py/vllm that referenced this pull request Feb 2, 2025
youngkent pushed a commit to youngkent/vllm that referenced this pull request Feb 3, 2025
srikanthsrnvs pushed a commit to srikanthsrnvs/vllm that referenced this pull request Feb 3, 2025