run higher priority
epwalsh committed Oct 31, 2024
1 parent a236295 commit dd2ca7f
Showing 2 changed files with 6 additions and 8 deletions.
10 changes: 6 additions & 4 deletions .github/workflows/main.yml
@@ -174,16 +174,17 @@ jobs:
       image:
         beaker: ${{ env.BEAKER_IMAGE }}
       context:
-        priority: normal
+        # priority: normal
+        priority: high
         preemptible: true
       resources:
         gpuCount: ${{ matrix.task.gpus }}
       constraints:
         cluster:
-          - ai2/jupiter-cirrascale-2
-          - ai2/pluto-cirrascale
           # - ai2/allennlp-cirrascale
           # - ai2/allennlp-elanding-a100-40g
+          - ai2/pluto-cirrascale
+          - ai2/jupiter-cirrascale-2
           # - ai2/saturn-cirrascale
       envVars:
         - name: CUBLAS_WORKSPACE_CONFIG
@@ -201,7 +202,8 @@ jobs:
       result:
         path: /unused
       token: ${{ env.BEAKER_TOKEN }}
-      workspace: ${{ env.BEAKER_WORKSPACE }}
+      # workspace: ${{ env.BEAKER_WORKSPACE }}
+      workspace: ai2/OLMo-pretraining-stability

   release:
     name: Release
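Assembled from the two hunks above, the relevant parts of the Beaker job spec after this commit would look roughly like the following. This is a sketch from the diff context only; the surrounding structure and indentation of the full `main.yml` are assumed, and elided sections are marked with `# ...`.

```yaml
# Post-commit state of the changed keys in .github/workflows/main.yml
# (reconstructed from the diff hunks, not the full file).
context:
  # priority: normal
  priority: high              # CI jobs now request high scheduling priority
  preemptible: true
constraints:
  cluster:
    # - ai2/allennlp-cirrascale
    # - ai2/allennlp-elanding-a100-40g
    - ai2/pluto-cirrascale    # cluster order swapped: pluto now listed first
    - ai2/jupiter-cirrascale-2
    # - ai2/saturn-cirrascale
# ...
token: ${{ env.BEAKER_TOKEN }}
# workspace: ${{ env.BEAKER_WORKSPACE }}
workspace: ai2/OLMo-pretraining-stability  # hard-coded workspace replaces the env var
```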
4 changes: 0 additions & 4 deletions CHANGELOG.md
@@ -11,10 +11,6 @@ and this project adheres to [Semantic Versioning](https://semver.org/spec/v2.0.0

 - Added `DownstreamEvaluatorCallbackConfig` class for running in-loop downstream eval via [OLMo-in-loop-evals](https://github.com/allenai/OLMo-in-loop-evals).

-### Removed
-
-- Removed `flash-attn` from the Beaker images since `flash-attn` currently can't be built for torch 2.5.1. We are waiting on updates from the `flash-attn` maintainers. See https://github.com/Dao-AILab/flash-attention/issues/1302.
-
 ### Fixed

 - Made GCS client more robust by automatically retrying timeout errors for most operations.
