set PATH . for artifacts.zip
PLATFORM for build scripts
To sum up: this skeleton can be used as a basis. In the future, we can and should expand the functionality: add releases, nightly builds, publishing Docker images and more.
Let's rename test.yml to check.yml, looks good otherwise!
In general this looks very nice, good work @General-Beck!
I have a few open questions, considering the time it currently takes to execute the CI:
- The build phase is the most time-consuming one. Instead of approaching this with sccache, couldn't we also do it with cached Docker layers?
- I'm not a Rust developer, so my understanding of the Rust build process is a bit limited right now. Either way, I think we currently lack some data to assess potential problems/improvements. Could we add timing inside the scripts to differentiate between: a) the time it takes to download the cache, b) the time it takes to download dependencies, c) the time it takes to compile the code?
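The per-phase timing asked for above could be a small wrapper in the build scripts. A minimal sketch, assuming nothing about the existing scripts (the `phase` helper and the phase names here are hypothetical):

```shell
#!/bin/sh
# Hypothetical helper: run a command and report how long it took, so that
# cache download, dependency fetch and compilation can be compared.
phase() {
  name=$1; shift
  start=$(date +%s)
  "$@"
  end=$(date +%s)
  echo "phase '$name' took $((end - start))s"
}

# Illustrative usage; the real scripts would wrap their own steps instead:
phase compile true
```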
Note that we only care about the speed of compiling tests for now, as building release artifacts will only happen when releasing a new version, which doesn't occur very often. sccache is the de facto standard of caching for large Rust projects, and compile times with cache are already reasonable (<13 mins on macOS https://github.com/OpenEthereum/open-ethereum/runs/476907565). But if you have ideas how it could be improved with Docker layers, PRs are welcome :)
We already have a) and c).
c) is
b) is part of c) and negligible compared to compile times.
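For reference, persisting the sccache cache between runs is typically done with the cache action. This is only a sketch; the cache path, key and step names are assumptions, not the exact workflow in this PR:

```yaml
# Sketch: persist sccache's local cache across workflow runs.
- uses: actions/cache@v1
  with:
    path: ~/.cache/sccache              # assumed sccache cache directory
    key: sccache-${{ runner.os }}-${{ hashFiles('**/Cargo.lock') }}
- name: Compile tests through sccache
  run: cargo test --no-run
  env:
    RUSTC_WRAPPER: sccache              # route rustc invocations through sccache
```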
@ordian my bad, you are right that caching, downloads, etc. are already timed; I was looking for it inside the test build stage. I was asking because it would not be the first time I've seen it take more time to download a cache than to fetch all dependencies from scratch; it happened to me on Travis due to the machine hardware they use. About Docker layer caching, I don't know if it will improve times, it might be the same or worse, but I would like to test it. Not now, as I don't have capacity at the moment, but in a next iteration of this. So far it looks very good, good job both @General-Beck @ordian
continue-on-error in test
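The `continue-on-error` change above could look roughly like this in the test job (job and step names are assumed for illustration):

```yaml
jobs:
  test:
    runs-on: ubuntu-latest
    continue-on-error: true   # a failing test job won't fail the whole workflow
    steps:
      - uses: actions/checkout@v2
      - run: cargo test --all
```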
Excellent work from my point of view.
* master:
  - Code cleanup in the sync module (#11552)
  - initial cleanup (#11542)
  - Warn if genesis constructor revert (#11550)
  - ethcore: cleanup after #11531 (#11546)
  - license update (#11543)
  - Less cloning when importing blocks (#11531)
  - Github Actions (#11528)
  - Fix Alpine Dockerfile (#11538)
  - Remove AuxiliaryData/AuxiliaryRequest (#11533)
  - [journaldb]: cleanup (#11534)
  - Remove references to parity-ethereum (#11525)
  - Drop IPFS support (#11532)
  - chain-supplier: fix warning reporting for GetNodeData request (#11530)
  - Faster kill_garbage (#11514)
  - [EngineSigner]: don't sign message with only zeroes (#11524)
Check, test and build code in Actions CI
Scheme:
VM and environment variables
Add ubuntu-latest to the matrix and test and build in parallel for the two versions. Processes start after a push to the repository.
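A matrix along these lines would run the jobs in parallel per platform; the second OS value is an assumption for illustration:

```yaml
strategy:
  matrix:
    platform: [ubuntu-latest, macos-latest]   # assumed OS list
runs-on: ${{ matrix.platform }}
env:
  PLATFORM: ${{ matrix.platform }}   # exposed to the build scripts
```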
TODO
- cargo audit: github actions for cargo audit #11527
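The cargo audit TODO might end up as its own workflow, roughly like the sketch below. The schedule and step details are assumptions; see #11527 for the actual work:

```yaml
name: Security audit
on:
  schedule:
    - cron: '0 0 * * *'   # assumed: run the audit nightly
jobs:
  audit:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - run: cargo install cargo-audit
      - run: cargo audit   # scan Cargo.lock against the RustSec advisory DB
```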