Showing 58 changed files with 8 additions and 3,277 deletions.
@@ -1,29 +1,8 @@
# LLM-Inference-Bench

LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators

## Matrix of Evaluated Frameworks and Hardware
| Framework / Hardware | NVIDIA A100 | NVIDIA H100 | NVIDIA GH200 | AMD MI250 | Intel PVC | Habana Gaudi2 | Sambanova SN40L |
|:--------------------:|:-----------:|:-----------:|:------------:|:---------:|:---------:|:-------------:|:---------------:|
| [vLLM](./vLLM/README.md) | [Link]() | [Link]() | Yes | [Link]() | [Link]() | No | N/A |
| [llama.cpp](./llama.cpp/README.md) | [Link]() | [Link]() | Yes | [Link]() | [Link]() | N/A | N/A |
| [TensorRT-LLM](./TensorRT-LLM/README.md) | [Link]() | [Link]() | [Link]() | N/A | N/A | N/A | N/A |
| [DeepSpeed-MII](./Deepspeed-MII/README.md) | No | No | No | No | No | [Link]() | N/A |

## Key Insights
Cite this work:

```
@INPROCEEDINGS{####,
  author={Krishna Teja Chitty-Venkata and Siddhisanket Raskar and Bharat Kale and Farah Ferdaus and Aditya Tanikanti and Ken Raffenetti and Valerie Taylor and Murali Emani and Venkatram Vishwanath},
  booktitle={2024 IEEE/ACM International Workshop on Performance Modeling, Benchmarking and Simulation of High Performance Computer Systems (PMBS)},
  title={LLM-Inference-Bench: Inference Benchmarking of Large Language Models on AI Accelerators},
  year={2024},
  volume={},
  number={},
  pages={},
  keywords={Large Language Models, AI Accelerators, Performance Evaluation, Benchmarking},
  doi={}}
```
# InferenceGraphPlotter

## How to run?
1. Clone the repo and `cd` into it.
2. Spin up a simple web server to serve the files. One way is with Python (a standalone sketch follows this list):
   - Python 2: `python -m SimpleHTTPServer`
   - Python 3: `python -m http.server`
3. Open a web browser and go to http://localhost:8000
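
Step 2 can also be done from a short script instead of the command line. This is a minimal sketch assuming Python 3; the file name `serve.py` is only illustrative, and port 8000 matches the default that `python -m http.server` uses:

```python
# serve.py -- minimal sketch of step 2, assuming Python 3.
# Serves the current directory (the cloned repo) over HTTP,
# equivalent to running `python -m http.server 8000` from the repo root.
import http.server
import socketserver

PORT = 8000  # the plots are then reachable at http://localhost:8000

with socketserver.TCPServer(("", PORT), http.server.SimpleHTTPRequestHandler) as httpd:
    print(f"Serving on http://localhost:{PORT}")
    httpd.serve_forever()
```

Run it from the repo root with `python serve.py`, then open http://localhost:8000 as in step 3.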