
[ci] Add benchmark to postsubmit workflow #3220

Merged: 5 commits on Oct 25, 2021
Changes from 3 commits
27 changes: 27 additions & 0 deletions .github/workflows/postsubmit.yml
@@ -231,3 +231,30 @@ jobs:
TI_LIB_DIR=`python3 -c "import taichi;print(taichi.__path__[0])" | tail -1`
TI_LIB_DIR="$TI_LIB_DIR/lib" ./build/taichi_cpp_tests
ti test -vr2 -t4 -x

performance_monitoring:
name: Performance monitoring (NVGPU)
needs: check_code_format
Contributor

There's no check_code_format in the postsubmit workflow, right?

Contributor Author

My bad, I used the presubmit workflow in test_actions to test it before. 😆
[ci] Benchmark presubmit test

timeout-minutes: 60
runs-on: [self-hosted, x64, cuda, linux, benchmark]
steps:
- uses: actions/checkout@v2
with:
submodules: 'recursive'

- name: Build & Install
run: |
export PATH=$PATH:/usr/local/cuda/bin
.github/workflows/scripts/unix_build.sh
env:
LLVM_LIB_ROOT_DIR: /opt/taichi-llvm-10.0.0
LLVM_PATH: /opt/taichi-llvm-10.0.0/bin
LLVM_DIR: /opt/taichi-llvm-10.0.0/lib/cmake/llvm
CUDA_TOOLKIT_ROOT_DIR: /usr/local/cuda/
CI_SETUP_CMAKE_ARGS: -DTI_WITH_CUDA_TOOLKIT:BOOL=ON
BUILD_NUM_THREADS: 8
CXX: clang++-10

- name: Run benchmark
run: |
python3 benchmarks/misc/run.py /home/benchmarkbot/benchmark/
2 changes: 1 addition & 1 deletion benchmarks/misc/membound.py
@@ -8,7 +8,7 @@
test_cases = [fill, saxpy, reduction]
test_archs = [ti.cuda]
test_dtype = [ti.i32, ti.i64, ti.f32, ti.f64]
test_dsize = [(4**i) * kibibyte for i in range(1, 11)] #[4KB,16KB...1GB]
test_dsize = [(4**i) * kibibyte for i in range(1, 10)] #[4KB,16KB...256MB]
test_repeat = 10
results_evaluation = [geometric_mean]

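For reference, a quick sketch of the data sizes this change affects (assuming `kibibyte = 1024`, matching the benchmark's unit): the largest tested size drops from 1 GiB to 256 MiB.

```python
kibibyte = 1024  # assumed definition, matching the benchmark's unit

# old: range(1, 11) -> 4 KiB, 16 KiB, ..., 1 GiB
old_sizes = [(4**i) * kibibyte for i in range(1, 11)]
# new: range(1, 10) -> 4 KiB, 16 KiB, ..., 256 MiB
new_sizes = [(4**i) * kibibyte for i in range(1, 10)]

print(old_sizes[-1] // 1024**3)  # 1   (GiB)
print(new_sizes[-1] // 1024**2)  # 256 (MiB)
```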
31 changes: 25 additions & 6 deletions benchmarks/misc/run.py
@@ -1,4 +1,9 @@
import datetime
import os
import sys

from membound import Membound
from taichi.core import ti_core as _ti_core

import taichi as ti

@@ -18,16 +23,30 @@ def run(self):
for s in self.suites:
s.run()

def write_md(self):
filename = f'performance_result.md'
with open(filename, 'w') as f:
def store_to_path(self, path_with_file_name='./performance_result.md'):
with open(path_with_file_name, 'w') as f:
for arch in test_archs:
for s in self.suites:
lines = s.mdlines(arch)
for line in lines:
print(line, file=f)

def store_with_date_and_commit_id(self, file_dir='./'):
current_time = datetime.datetime.now().strftime("%Y%m%dd%Hh%Mm%Ss")
commit_hash = _ti_core.get_commit_hash()[:8]
file_name = f'perfresult_{current_time}_{commit_hash}.md'
path = os.path.join(file_dir, file_name)
print('Storing benchmark result to: ' + path)
self.store_to_path(path)


def main():
file_dir = sys.argv[1] if len(sys.argv) > 1 else './'
p = PerformanceMonitoring()
p.run()
p.store_to_path() # for /benchmark
p.store_with_date_and_commit_id(file_dir) #for postsubmit


p = PerformanceMonitoring()
p.run()
p.write_md()
if __name__ == '__main__':
main()
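For illustration, a minimal sketch of the result path that `store_with_date_and_commit_id` produces, with the commit hash stubbed out since `_ti_core.get_commit_hash()` is only available inside a Taichi build:

```python
import datetime
import os


def example_result_path(file_dir='./', commit_hash='0123abcd'):
    # commit_hash is a stand-in for _ti_core.get_commit_hash()[:8]
    current_time = datetime.datetime.now().strftime("%Y%m%dd%Hh%Mm%Ss")
    file_name = f'perfresult_{current_time}_{commit_hash}.md'
    return os.path.join(file_dir, file_name)


# e.g. /home/benchmarkbot/benchmark/perfresult_20211025d12h00m00s_0123abcd.md
print(example_result_path('/home/benchmarkbot/benchmark/'))
```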