Pre-release branch for N.2.9.1 #695

Merged · 142 commits · Jan 27, 2025

Conversation

JamesPiechota (Collaborator)

No description provided.

JamesPiechota and others added 30 commits December 12, 2024 19:50
Add a new librandomx build and nifs.
Implement "randomx squared" packing.

fix: NIF now uses a librandomx function to query the scratchpad size

configuration.h can't be relied upon in NIFs since they aren't
built with the same -D flags as librandomx.a.

fix: replace randomx::ScratchpadSize with randomx_get_scratchpad_size()

Unfortunately, we can't rely on the randomx headers for values that
can be changed with build-time flags.
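A minimal sketch of the Erlang side of such a NIF, assuming hypothetical module and path names; the commit's point is that the C implementation should query randomx_get_scratchpad_size() at runtime rather than trust constants from configuration.h:

```erlang
%% Illustrative sketch only: the module name and .so path are made up.
%% The C side should call randomx_get_scratchpad_size() at runtime rather
%% than use RANDOMX_* constants from configuration.h, whose values depend
%% on the -D flags librandomx.a was built with.
-module(rx_squared_nif).
-export([scratchpad_size/0]).
-on_load(init/0).

init() ->
    %% Loads the NIF shared object; the path is illustrative.
    erlang:load_nif("./priv/rx_squared_nif", 0).

scratchpad_size() ->
    %% Replaced by the C implementation once the NIF is loaded.
    erlang:nif_error(nif_not_loaded).
```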

Add replica_2_9 support to ar_chunk_storage:

* use unpacked_padded as the packing to request for replica_2_9 storage
  modules;
* prepare entropy in advance in ar_chunk_storage;
* read the entropy and encipher upon receiving a chunk.
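A rough sketch of the "read entropy, then encipher" step described above. The function name, file layout, and offsets are hypothetical, and a byte-wise XOR via crypto:exor/2 stands in for the actual encipher primitive, which this PR doesn't spell out:

```erlang
%% Hypothetical sketch of enciphering a chunk on arrival: read the entropy
%% prepared in advance, then combine it with the unpacked_padded chunk.
%% crypto:exor/2 is only a stand-in for the real encipher primitive.
encipher_on_receive(UnpackedPaddedChunk, EntropyFile, Offset) ->
    Size = byte_size(UnpackedPaddedChunk),
    {ok, Fd} = file:open(EntropyFile, [read, raw, binary]),
    {ok, Entropy} = file:pread(Fd, Offset, Size),
    ok = file:close(Fd),
    crypto:exor(UnpackedPaddedChunk, Entropy).
```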

!!! Set 2.9 HF height to block 1602350 (Feb. 3, 2025) !!!

Deprecate composite packing 60 days after 2.9 activation.
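For a sense of scale, a back-of-the-envelope conversion of the 60-day window into a block height, assuming Arweave's roughly 2-minute block target (an assumption; the PR does not state the exact cutoff):

```erlang
%% Back-of-the-envelope only: assumes ~2-minute blocks (~720 blocks/day).
%% The exact deprecation cutoff is whatever the code defines.
deprecation_height() ->
    ActivationHeight = 1602350,
    BlocksPerDay = 720,
    ActivationHeight + 60 * BlocksPerDay. %% => 1645550
```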

Improve ar_test_node:start usability.

feat: bin/benchmark-2.9 to benchmark the new packing format

Usage: benchmark-2.9 [threads 1] [mib 1024] [dir a dir b dir c ...]

threads: number of threads to run.
mib: total amount of data to pack, in MiB.
     The work is divided evenly between threads, so the final number may be
     lower than specified to keep the threads balanced.
dir: directories to pack data to. If omitted, the benchmark will only
     simulate entropy generation without writing to disk.
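A small sketch of the even-split behavior described above (function name hypothetical): the per-thread share is rounded down, so the total actually packed can come out slightly below the requested amount:

```erlang
%% Split TotalMiB evenly across Threads; integer division rounds down,
%% so the actual total can be slightly less than requested.
split_mib(TotalMiB, Threads) ->
    PerThread = TotalMiB div Threads,
    {PerThread, PerThread * Threads}.

%% Example: split_mib(1000, 3) -> {333, 999}
%% (1 MiB is dropped to keep the threads balanced).
```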

Co-authored-by: vird <[email protected]>
Co-authored-by: Lev Berman <[email protected]>
The GitHub Workflow has been improved and optimized to run in a parallel
environment using dynamic runners spawned on demand. Artifacts are now
uploaded and copied across builds. A build will always start from
a fresh installation (meaning compilation will take longer).

This commit also fixes a few bugs in the test suite that caused random
crashes during execution. They are mainly related to the load on the
system where the tests run.

 - A race condition was found when starting tests on a test peer:
   sometimes a peer is not yet ready when the tests start, and they crash.

 - Many race conditions caused by timeouts, mainly in
   `ar_vdf_server_tests`, `ar_http_iface_tests`, `ar_poa_tests`
   and `ar_tx_blacklist_tests` (see the polling sketch after this list).
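The polling sketch referenced above: a generic wait-until-ready loop of the kind used to replace fixed startup delays (names are hypothetical; the real fixes live in the listed test suites):

```erlang
%% Poll until CheckFun() returns true instead of assuming a fixed delay.
%% Retries is the number of attempts left; Interval is the pause in ms.
wait_until_ready(_CheckFun, Retries, _Interval) when Retries =< 0 ->
    {error, timeout};
wait_until_ready(CheckFun, Retries, Interval) ->
    case CheckFun() of
        true ->
            ok;
        false ->
            timer:sleep(Interval),
            wait_until_ready(CheckFun, Retries - 1, Interval)
    end.
```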

The number of workers has been set to `8` instead of `12`: the
current server has trouble dealing with more than `12` workers
running in parallel.

A cache has been added for dependencies: fetching dependencies on
every build is inefficient. To avoid that, deps are only updated
when the rebar.lock checksum differs from that of the previous
build. If the deps cache is not found in the cache store, it is
created and uploaded.

see: ArweaveTeam/infra#114
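The cache-key idea, sketched in Erlang purely for illustration (the actual mechanism lives in the GitHub Workflow, not in Erlang code): hash rebar.lock and reuse the cached deps when the digest matches the previous build's:

```erlang
%% Hypothetical sketch: derive the deps cache key from rebar.lock so the
%% cache is invalidated exactly when the locked dependencies change.
deps_cache_key() ->
    {ok, Lock} = file:read_file("rebar.lock"),
    binary:encode_hex(crypto:hash(sha256, Lock)).
```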
This commit is the first step in improving download-partition.sh by
adding content-length and checksum verification/validation. In
the near future, this script should use a tool other than bash
and wget; rsync or another protocol would be better.

see: ArweaveTeam/arweave-dev#740
The new script is not ready yet, so the old one should
be used instead.
This reverts commit bf954b4.
Also add some tests and rename sub_chunk_index to slice_index.
ar_chunk_storage:get_chunk_bucket_start/1 is mostly about how we
manage chunk storage and so isn't part of the protocol.

get_entropy_bucket_start, however, is part of the protocol, so it's
best to keep them separate.
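An illustrative sketch of why the separation matters: even if the two computations happen to look alike today, one is a storage-layout detail that is free to change while the other is consensus-critical. The bucket size and function bodies below are made up for the sketch:

```erlang
%% Assumed bucket size for illustration only (256 KiB).
-define(BUCKET_SIZE, 262144).

%% Storage-management detail: free to change without a hard fork.
get_chunk_bucket_start(Offset) ->
    (Offset div ?BUCKET_SIZE) * ?BUCKET_SIZE.

%% Protocol-level: must stay consensus-compatible, so it is kept as a
%% separate function even while the two bodies coincide.
get_entropy_bucket_start(Offset) ->
    (Offset div ?BUCKET_SIZE) * ?BUCKET_SIZE.
```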
This commit is the first step in improving download-partition.sh by
adding content-length and checksum verification/validation. In
the near future, this script should use a tool other than bash
and wget; rsync or another protocol would be better.

see: ArweaveTeam/arweave-dev#740
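A sketch of the intended verification, written in Erlang for illustration only (download-partition.sh itself is a shell script); the function name and the lowercase-hex convention are assumptions:

```erlang
%% Verify a downloaded file against an expected byte count and SHA-256
%% digest (ExpectedSha256Hex is a lowercase hex binary).
verify_download(Path, ExpectedBytes, ExpectedSha256Hex) ->
    {ok, Data} = file:read_file(Path),
    Digest = string:lowercase(binary:encode_hex(crypto:hash(sha256, Data))),
    case {byte_size(Data), Digest} of
        {ExpectedBytes, ExpectedSha256Hex} -> ok;
        _ -> {error, failed_verification}
    end.
```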
JamesPiechota and others added 29 commits January 24, 2025 11:27
1. When writing entropy to the overlap zone between storage modules,
   entropy could sometimes be written over chunk data. The fix is to not
   write to neighboring storage modules and instead waste a bit of
   entropy (see the sketch after this list). Wasted entropy is about
   0.07% of all entropy generated per partition.
2. When repacking in place, make sure we wait for the entropy to be
   written whenever moving to a new slice index. This is primarily an
   issue during tests, where there are only 3 chunks per sector, but
   could hypothetically be an issue in production.
3. Remove all code related to sub-chunk iteration since we don't need
   it and it may impact performance.
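The sketch referenced in item 1: clamp an entropy write to the current storage module's range instead of spilling into a neighbor, discarding (wasting) the portion that falls outside. The names and range representation are hypothetical:

```erlang
%% Clamp the entropy write interval [WriteStart, WriteEnd) to the current
%% module's [ModuleStart, ModuleEnd); anything outside is dropped rather
%% than written into a neighboring storage module.
clamp_to_module({ModuleStart, ModuleEnd}, WriteStart, WriteEnd) ->
    Start = max(WriteStart, ModuleStart),
    End = min(WriteEnd, ModuleEnd),
    case Start < End of
        true -> {ok, {Start, End}};
        false -> skip
    end.
```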
This reduces the disk thrash that can occur when doing a cross-module
repack.
Move entropy generation out to its own process so that, when it's active, it doesn't block ar_chunk_storage.
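A minimal sketch of that separation, assuming a hypothetical module name and message shape (the real code may well be a gen_server); crypto:strong_rand_bytes/1 stands in for the actual RandomX entropy generation:

```erlang
%% Illustrative only: a dedicated process generates entropy so callers
%% (e.g. ar_chunk_storage) are never blocked on generation.
-module(entropy_gen).
-export([start_link/0, generate_async/2]).

start_link() ->
    Pid = spawn_link(fun loop/0),
    register(?MODULE, Pid),
    {ok, Pid}.

generate_async(Range, ReplyTo) ->
    %% Returns immediately; the entropy is delivered as a message later.
    ?MODULE ! {generate, Range, ReplyTo},
    ok.

loop() ->
    receive
        {generate, Range, ReplyTo} ->
            %% Stand-in workload for the real entropy computation.
            Entropy = crypto:strong_rand_bytes(1024),
            ReplyTo ! {entropy, Range, Entropy},
            loop()
    end.
```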
JamesPiechota merged commit a4d5996 into master on Jan 27, 2025.
2 checks passed.