Merge pull request #36 from kvcache-ai/develop-0.1.2
Release v0.1.2
UnicornChan authored Aug 15, 2024
2 parents 44f5727 + 395cd3e commit 77a34c2
Showing 69 changed files with 4,799 additions and 1,961 deletions.
252 changes: 252 additions & 0 deletions .github/workflows/package_wheel_release.yml

Large diffs are not rendered by default.

132 changes: 132 additions & 0 deletions .github/workflows/package_wheel_test.yml
@@ -0,0 +1,132 @@
name: Build Wheels
on:
  workflow_dispatch:
    inputs:
      release:
        description: 'Release? 1 = yes, 0 = no'
        default: '0'
        required: true
        type: string
jobs:
  build_wheels:
    name: ${{ matrix.os }} Python=${{ matrix.pyver }} CUDA=${{ matrix.cuda }} CPU_INSTRUCT=${{ matrix.instruct }} Torch=${{ matrix.torch }}
    runs-on: ${{ matrix.os }}
    strategy:
      fail-fast: false
      matrix:
        include:
          # Ubuntu
          - { os: ubuntu-20.04, pyver: '3.12', cuda: '12.2.2', torch: '2.3.0', cudaarch: '8.9;9.0+PTX', instruct: 'FANCY', torch_cu: '121'}
          - { os: windows-2022, pyver: '3.11', cuda: '12.5.1', torch: '2.4.0', cudaarch: '8.9;9.0+PTX', instruct: 'AVX2', torch_cu: '124'}

    defaults:
      run:
        shell: pwsh

    steps:
      - uses: actions/checkout@v3

      - name: Free Disk Space
        uses: jlumbroso/[email protected]
        if: runner.os == 'Linux'
        with:
          tool-cache: true
          android: true
          dotnet: true
          haskell: true
          large-packages: false
          swap-storage: true

      - uses: actions/setup-python@v4
        with:
          python-version: ${{ matrix.pyver }}

      - name: check_space
        run: |
          if($IsLinux) {df -h}
          if($IsWindows) {Get-PSDrive -PSProvider 'FileSystem'}

      - uses: actions/setup-node@v4
        with:
          node-version: 20

      - name: Setup Mamba
        if: matrix.cuda != ''
        uses: conda-incubator/[email protected]
        with:
          activate-environment: "ktransformers"
          python-version: ${{ matrix.pyver }}
          miniforge-variant: Mambaforge
          miniforge-version: latest
          use-mamba: true
          add-pip-as-python-dependency: true
          auto-activate-base: false

      - name: build web
        run: |
          cd ktransformers/website/
          npm install
          npm run build
          cd ../../

      - name: build for cuda
        if: matrix.cuda != ''
        run: |
          git submodule init
          git submodule update
          if($IsWindows){
            $originalPath = Get-Location
            Import-Module 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise\Common7\Tools\Microsoft.VisualStudio.DevShell.dll'
            Enter-VsDevShell -VsInstallPath 'C:\Program Files\Microsoft Visual Studio\2022\Enterprise' -DevCmdArguments '-arch=x64 -host_arch=x64'
            $env:DISTUTILS_USE_SDK=1
            Set-Location $originalPath
          }
          $cudaVersion = '${{ matrix.cuda }}'
          $env:MAMBA_NO_LOW_SPEED_LIMIT = 1
          mamba install -y -c nvidia/label/cuda-$cudaVersion cuda-toolkit cuda-runtime
          $env:CUDA_PATH = $env:CONDA_PREFIX
          $env:CUDA_HOME = $env:CONDA_PREFIX
          if ($IsLinux) {
            $env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib:' + $env:LD_LIBRARY_PATH
            $env:LD_LIBRARY_PATH = $env:CONDA_PREFIX + '/lib/python${{ matrix.pyver }}/site-packages/nvidia/nvjitlink/lib:' + $env:LD_LIBRARY_PATH
            if (!(Test-Path $env:CUDA_HOME/lib64)) {
              New-Item -ItemType SymbolicLink -Path $env:CUDA_HOME/lib64 -Target $env:CUDA_HOME/lib
            }
          }
          if ($IsWindows) {
            $env:CUDA_PATH = "$env:CUDA_PATH/Library"
            $env:CUDA_HOME = $env:CUDA_PATH
            $env:PATH = "$env:CUDA_PATH/bin;" + $env:PATH
            cp $env:CUDA_PATH/lib/*.lib $env:CUDA_PATH/lib/x64/
            $env:INCLUDE = $env:CUDA_PATH + "/include/targets/x64;" + $env:INCLUDE
          }
          python -m pip install torch==${{ matrix.torch }} torchvision torchaudio --index-url https://download.pytorch.org/whl/cu${{ matrix.torch_cu }}
          python -m pip install cpufeature build wheel ninja packaging setuptools
          $env:KTRANSFORMERS_FORCE_BUILD = "TRUE"
          $env:CPU_INSTRUCT = '${{ matrix.instruct }}'
          $env:TORCH_CUDA_ARCH_LIST = '${{ matrix.cudaarch }}'
          python -m build --no-isolation --verbose

      - name: create Release dir
        run: |
          if ($IsWindows) {
            $env:date = $(Get-Date -Format "yyyy-MM-dd")
            New-Item -ItemType Directory -Force -Path "$Env:USERPROFILE\.ssh"
            $Env:SSH_PATH = "$Env:USERPROFILE\.ssh\id_rsa"
            Set-Content -Path $Env:SSH_PATH -Value "${{ secrets.SSH_PRIVATE_KEY }}"
            (Get-Content -Path $Env:SSH_PATH).Replace("`r`n","`n") | Set-Content -Path $Env:SSH_PATH
            chmod 600 $Env:SSH_PATH
          }
          if ($IsLinux) {
            $env:date = $(date +%Y-%m-%d)
            mkdir -p ~/.ssh/
            echo "${{ secrets.SSH_PRIVATE_KEY }}" > ~/.ssh/id_rsa
            chmod 600 ~/.ssh/id_rsa
          }
          ssh -p ${{ secrets.SSH_PORT }} -o StrictHostKeyChecking=no root@${{ secrets.SSH_SERVER }} "mkdir -p /mnt/data/release-$env:date"
          scp -P ${{ secrets.SSH_PORT }} -o StrictHostKeyChecking=no dist/*.whl root@${{ secrets.SSH_SERVER }}:/mnt/data/release-$env:date/
5 changes: 4 additions & 1 deletion .gitignore
@@ -14,4 +14,7 @@ node_modules
.DS_Store
compile_commands.json
*.egg-info*
*dist/
*dist/
ktransformers/server/local_store/
ktransformers/server_test1.db
*.patch
7 changes: 5 additions & 2 deletions README.md
@@ -268,7 +268,10 @@ In this example, the AutoModel is first initialized on the meta device to avoid
After injection, the original `generate` interface is available, but we also provide a compatible `prefill_and_generate` method, which enables further optimizations like CUDAGraph to improve generation speed.
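For reference, a minimal sketch of how these pieces could fit together is shown below. The import paths and exact signatures of `optimize_and_load_gguf` and `prefill_and_generate` are assumptions, as are the placeholder paths; treat `ktransformers/local_chat.py` in this repository as the authoritative usage.

```python
# Minimal sketch, not verbatim from the repo: import paths, signatures, and
# paths below are assumptions made for illustration.
import torch
from transformers import AutoConfig, AutoModelForCausalLM, AutoTokenizer
from ktransformers.optimize.optimize import optimize_and_load_gguf  # assumed path
from ktransformers.util.utils import prefill_and_generate           # assumed path

model_path = "deepseek-ai/DeepSeek-V2-Lite-Chat"                    # placeholder
gguf_path = "/path/to/DeepSeek-V2-Lite-Chat-GGUF"                   # placeholder
rule_path = "ktransformers/optimize/optimize_rules/DeepSeek-V2-Lite-Chat.yaml"  # placeholder

config = AutoConfig.from_pretrained(model_path, trust_remote_code=True)
with torch.device("meta"):  # build the module skeleton without allocating weights
    model = AutoModelForCausalLM.from_config(config, trust_remote_code=True)

# Inject optimized modules per the YAML rules and load GGUF weights onto the
# devices named in those rules (assumed signature).
optimize_and_load_gguf(model, rule_path, gguf_path, config)

tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)
input_ids = tokenizer("Hello, world!", return_tensors="pt").input_ids.to("cuda")

out = model.generate(input_ids, max_new_tokens=64)           # original HF interface still works
out = prefill_and_generate(model, tokenizer, input_ids, 64)  # compatible wrapper (assumed signature), enables CUDAGraph
```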
<h3>YAML Template</h3>
<h3>How to customize your model</h3>
A detailed tutorial on injection and multi-GPU usage, using DeepSeek-V2 as an example, is given [here](doc/en/injection_tutorial.md).
Below is an example of a YAML template for replacing all original Linear modules with Marlin, an advanced 4-bit quantization kernel.
```yaml
@@ -287,7 +290,7 @@ Each rule in the YAML file has two parts: `match` and `replace`. The `match` par
You can find example rule templates for optimizing DeepSeek-V2 and Qwen2-57B-A14B, two SOTA MoE models, in the [ktransformers/optimize/optimize_rules](ktransformers/optimize/optimize_rules) directory. These templates are used to power the `local_chat.py` demo.
A detailed description of the injection using DeepSeek-V2 as an example is given [here](doc/en/deepseek-v2-injection.md).
If you are interested in our design principles and the implementation of the injection framework, please refer to the [design document](doc/en/deepseek-v2-injection.md).
<h2 id="ack">Acknowledgment and Contributors</h2>
Binary file added doc/assets/deepseekv2_structure.png
Binary file added doc/assets/model_structure_guild.png
Binary file added doc/assets/multi_gpu.png
20 changes: 10 additions & 10 deletions doc/en/deepseek-v2-injection.md
@@ -90,17 +90,17 @@ The YAML rule is listed below.
- match:
    name: "^model\\.layers\\..*\\.self_attn$" # regular expression
  replace:
    class: ktransformers.operators.attention.DeepseekV2AttentionInjected # optimized MLA implementation
    class: ktransformers.operators.attention.KDeepseekV2Attention # optimized MLA implementation
```
As we can see, each rule in the YAML file has two parts: `match` and `replace`.
The match part specifies which module should be replaced, and the replace part specifies the module to be injected into the model along with the initialization keywords.
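To make the rule semantics concrete, here is an illustrative sketch (not the actual ktransformers implementation) of how such a match/replace rule could be applied to a model; the names `load_class` and `apply_rules`, and the `orig_module` constructor convention, are hypothetical.

```python
# Illustrative sketch only: walk named sub-modules, test each rule's match
# conditions, and swap in the class named by the replace section.
import importlib
import re

from torch import nn


def load_class(path: str):
    """Resolve a dotted path such as 'torch.nn.Linear' into a class object."""
    module_path, cls_name = path.rsplit(".", 1)
    return getattr(importlib.import_module(module_path), cls_name)


def apply_rules(model: nn.Module, rules: list[dict]) -> nn.Module:
    for name, module in list(model.named_modules()):
        for rule in rules:
            match, repl = rule["match"], rule["replace"]
            if "name" in match and not re.match(match["name"], name):
                continue
            if "class" in match and not isinstance(module, load_class(match["class"])):
                continue
            # Hypothetical constructor convention: wrap the original module and
            # forward the rule's kwargs as initialization keywords.
            injected_cls = load_class(repl["class"])
            new_module = injected_cls(orig_module=module, **repl.get("kwargs", {}))
            # Re-attach the injected module on its parent.
            parent_name, _, child_name = name.rpartition(".")
            parent = model.get_submodule(parent_name) if parent_name else model
            setattr(parent, child_name, new_module)
            break
    return model
```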

<h3 id="experts">Routed Experts </h3>

For routed experts, the module we inject is a wrapper of CPUInfer, KTransformersMLPExpert. There are several implementations within a wrapper, and we need to specify keywords to tell the wrapper which implementation we want to use and how we intend to use it.
For routed experts, the module we inject is a wrapper of CPUInfer, KTransformersExperts. There are several implementations within a wrapper, and we need to specify keywords to tell the wrapper which implementation we want to use and how we intend to use it.

In KTransformers, some models exhibit different behaviors during prefilling and generation for better performance. KTransformersMLPExpert is one of them. All these special modules have a `device` keyword describing which device the module should be initialized on. Other keywords specify the behaviors during prefilling and generation and may differ when using different injection modules. Here, we specify which implementation on which device we want to use during prefilling and generation, and which device the output should be on.
In KTransformers, some models exhibit different behaviors during prefilling and generation for better performance. KTransformersExperts is one of them. All these special modules have a `device` keyword describing which device the module should be initialized on. Other keywords specify the behaviors during prefilling and generation and may differ when using different injection modules. Here, we specify which implementation on which device we want to use during prefilling and generation, and which device the output should be on.
Note that we only use these parameters when layer-wise prefilling is enabled; otherwise, prefilling is conducted with the same configuration as generation.

In the original implementation of Transformers, MoE is implemented using `nn.ModuleList`. We don't want KTransformers to iterate through all the sub-modules in the list, so we set `recursive: False` in this rule to prevent recursive injection into submodules of the current module. Here is the YAML rule:
@@ -109,13 +109,13 @@ In the original implementation of Transformers, MoE is implemented using `nn.Mod
- match:
    name: "^model\\.layers\\..*\\.mlp\\.experts$"
  replace:
    class: ktransformers.operators.experts.KTransformersMLPExpert # custom MoE Kernel with expert parallelism
    class: ktransformers.operators.experts.KTransformersExperts # custom MoE Kernel with expert parallelism
    device: "cpu" # device to load this module on initialization
    kwargs:
      prefill_device: "cuda"
      prefill_mlp_type: "MLPExpertsTorch"
      prefill_op: "KExpertsTorch"
      generate_device: "cpu"
      generate_mlp_type: "MLPCPUExperts"
      generate_op: "KExpertsCPU"
      out_device: "cuda"
  recursive: False # don't recursively inject submodules of this module
```
@@ -126,7 +126,7 @@ If we inject the expert list as a custom module, we can't use the interface in `
- match:
    class: ktransformers.models.modeling_deepseek.DeepseekV2MoE
  replace:
    class: ktransformers.operators.experts.DeepseekV2MoEInjected # MLP module with custom forward function
    class: ktransformers.operators.experts.KDeepseekV2MoE # MLP module with custom forward function
```

<h3 id="linear">Other Linear Modules</h3>
@@ -140,12 +140,12 @@ We also need to transfer some keywords similar to the injection of experts. Here
name: "^model\\.layers\\.(?!.*self_attn).*$" # regular expression
class: torch.nn.Linear # only match modules matching name and class simultaneously
replace:
class: ktransformers.operators.linear.KTransformerLinear # optimized Kernel on quantized data types
class: ktransformers.operators.linear.KTransformersLinear # optimized Kernel on quantized data types
kwargs:
generate_device: "cuda"
prefill_device: "cuda"
generate_op: "QuantizedLinearMarlin"
prefill_op: "QuantizedLinearTorch"
generate_op: "KLinearMarlin"
prefill_op: "KLinearTorch"
```

<h3 id="Pre-compute Buffers">Pre-compute Buffers </h3>
