
robertgshaw2-neuralmagic triggered nightly on refs/heads/expand-lm-eval-testing #21


Manually triggered: June 23, 2024 18:43
Status: Startup failure

Workflow: nm-nightly.yml (on: workflow_dispatch)
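
"on: workflow_dispatch" indicates this run was started by a manual trigger rather than a schedule, which matches the "Manually triggered" timestamp above. A minimal sketch of the trigger block, assuming nm-nightly.yml declares it roughly like this (a scheduled nightly trigger may also exist but is not shown on this page):

    # hypothetical trigger block in nm-nightly.yml
    on:
      workflow_dispatch: {}      # permits manual runs like this one
      # schedule:
      #   - cron: "0 0 * * *"    # a nightly cron trigger may also exist; not confirmed here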
Jobs (each defined for PYTHON-3-8, PYTHON-3-9, PYTHON-3-10, and PYTHON-3-11):

PYTHON-3-* / BUILD / BUILD
PYTHON-3-* / BENCHMARK / BENCHMARK
PYTHON-3-* / TEST-SOLO / TEST
PYTHON-3-* / LM-EVAL-MULTI / LM-EVAL
PYTHON-3-* / LM-EVAL-SOLO / LM-EVAL
PYTHON-3-* / BENCHMARK / BENCHMARK_REPORT
PYTHON-3-* / UPLOAD / PUBLISH

Annotations

1 error
Invalid workflow file: .github/workflows/nm-nightly.yml#L38
The workflow is not valid. .github/workflows/nm-nightly.yml (Line: 38, Col: 34): Invalid input, lm_eval_label_mulit is not defined in the referenced workflow.
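
The annotation points at the likely cause: nm-nightly.yml passes an input named lm_eval_label_mulit to a reusable workflow that does not declare it, and the name reads as a typo of lm_eval_label_multi (consistent with the LM-EVAL-MULTI jobs above). A minimal sketch of what the failing call could look like and how it would be fixed; the reusable workflow path, job name, and value below are assumptions for illustration, not taken from the actual file:

    # .github/workflows/nm-nightly.yml (hypothetical excerpt near line 38)
    jobs:
      LM-EVAL-MULTI:
        uses: ./.github/workflows/nm-lm-eval.yml     # assumed reusable workflow
        with:
          lm_eval_label_mulit: <runner-label>        # misspelled input -> "Invalid input, lm_eval_label_mulit is not defined"
          # fix: use the input name the called workflow actually declares, e.g.
          # lm_eval_label_multi: <runner-label>

GitHub Actions validates every key under "with:" against the inputs declared in the called workflow's workflow_call section, so a misspelled input name fails the whole run at parse time, before any job starts, which is why the status above is "Startup failure".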