
Apply the attention mask in all decoding steps (LM inference) #1295

Triggered via pull request: December 13, 2023, 09:58
Status: Success
Total duration: 4m 43s

Workflow file: push.yml
on: pull_request
Matrix: lint-and-tests

Annotations

2 warnings
build-docs
The following actions use node12, which is deprecated and will be forced to run on node16: actions/checkout@v2, actions/setup-python@v2. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/

lint-and-tests (3.8)
The following actions use node12, which is deprecated and will be forced to run on node16: actions/checkout@v2, actions/setup-python@v2. For more info: https://github.blog/changelog/2023-06-13-github-actions-all-actions-will-run-on-node16-instead-of-node12-by-default/
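Both warnings come from pinning actions/checkout and actions/setup-python at @v2, which run on the deprecated node12 runtime; bumping them to current major versions silences the warnings. A minimal sketch of the fix in push.yml, assuming a typical checkout-plus-setup-python job layout (the surrounding job structure here is illustrative, not taken from the actual workflow):

```yaml
# push.yml — hypothetical excerpt; only the two version bumps are the point.
# actions/checkout@v4 and actions/setup-python@v5 run on a current Node
# runtime, which resolves the node12/node16 deprecation warnings.
jobs:
  lint-and-tests:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        python-version: ["3.8"]
    steps:
      - uses: actions/checkout@v4        # was: actions/checkout@v2
      - uses: actions/setup-python@v5    # was: actions/setup-python@v2
        with:
          python-version: ${{ matrix.python-version }}
```

The same bump would apply to the build-docs job, since it emits the identical warning.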