STAGING -> MASTER #4430
Merged
Conversation
The default maxPeers is 50, as is the current default targetPeers. The logic is currently such that you continuously open outbound connections until you reach targetPeers. If the targetPeers threshold is met but maxPeers is not, you will continue accepting inbound connections. By setting these values separately, we create a buffer that allows for more inbound connections. This should have the downstream effect of allowing other peers to reach peer saturation faster.
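A minimal sketch of that split, assuming hypothetical names like `connectedPeerCount`, `targetPeers`, and `maxPeers` rather than the exact fields in the codebase:

```ts
// Illustrative sketch of the targetPeers/maxPeers split described above.
// The parameter names are assumptions, not the actual implementation.

function shouldOpenOutboundConnection(connectedPeerCount: number, targetPeers: number): boolean {
  // Keep dialing out until the target is reached
  return connectedPeerCount < targetPeers
}

function shouldAcceptInboundConnection(connectedPeerCount: number, maxPeers: number): boolean {
  // Inbound connections are accepted up to the (higher) maxPeers limit,
  // leaving a buffer between targetPeers and maxPeers for inbound peers
  return connectedPeerCount < maxPeers
}
```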
* removing filter connected identities from address manager
* removing new line
Subtracts mint amounts from the amount of each asset needed for funding transactions. If the amount of an asset in the transaction's outputs is less than the amount minted in the transaction, then the minted tokens are used directly in the outputs instead of funding the outputs with existing notes.
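A rough sketch of that calculation, assuming simplified `outputs`/`mints` shapes for illustration only:

```ts
// Illustrative funding calculation: outputs covered by mints in the same
// transaction do not need to be funded from existing notes.
type AssetAmount = { assetId: string; amount: bigint }

function amountsNeededFromExistingNotes(
  outputs: AssetAmount[],
  mints: AssetAmount[],
): Map<string, bigint> {
  const needed = new Map<string, bigint>()

  for (const output of outputs) {
    const current = needed.get(output.assetId) ?? 0n
    needed.set(output.assetId, current + output.amount)
  }

  // Tokens minted in the same transaction can cover outputs directly,
  // so subtract mint amounts; anything at or below zero needs no funding
  for (const mint of mints) {
    const current = needed.get(mint.assetId) ?? 0n
    const remaining = current - mint.amount
    if (remaining > 0n) {
      needed.set(mint.assetId, remaining)
    } else {
      needed.delete(mint.assetId)
    }
  }

  return needed
}
```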
* changes Note.memo() to return a Buffer. The 'memo' method on the Note class returned the memo as a human-readable string by using 'BufferUtils.toHuman' on a utf8-decoded string of the memo. This human-readable representation may lose some information from the memo. Returning the buffer instead allows SDK clients to choose how to interpret the data encoded in the note memo. For instance, a memo might contain a base64-encoded Ethereum public address; if this string is treated as a human-readable utf8 string, then the encoded address cannot be recovered accurately.
* fixes getAccountNotesStream test
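A small usage example of the Buffer return value; the `note` variable is a stand-in for an SDK Note instance:

```ts
declare const note: { memo(): Buffer } // stand-in for an SDK Note instance

const memo: Buffer = note.memo()

// Interpret it as human-readable text...
const asText = memo.toString('utf8')

// ...or keep the raw bytes in a lossless encoding, e.g. for a memo that
// carries an encoded address rather than readable text
const asHex = memo.toString('hex')
```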
Add better peer selection logic to the syncer. The Syncer's `findPeer` function will now choose 8 random peers. For each of these peers, it measures the round-trip time of a blockHeadersRequest for the genesis block, then chooses the fastest of these peers to sync from. Additionally, it will check again on a growing interval (2, 4, 8, ... up to 60 minutes). The purpose is to check frequently on initial node startup, while the number of connected peers is still growing, so that the node has a good chance of selecting a good peer to sync from early. The current syncing peer is included in the checks on the interval, so that if you find a good-quality peer, you are likely to continue syncing from that peer unless you find a better one.
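A simplified sketch of the idea, assuming an illustrative `Peer` type and a `requestGenesisBlockHeaders` stand-in rather than the real syncer internals:

```ts
// Sample a few peers, time a small request to each, and sync from the fastest.

type Peer = { identity: string }

declare function requestGenesisBlockHeaders(peer: Peer): Promise<void>

async function choosePeerToSyncFrom(peers: Peer[], sampleSize = 8): Promise<Peer | null> {
  const sample = [...peers].sort(() => Math.random() - 0.5).slice(0, sampleSize)

  let fastest: Peer | null = null
  let fastestMs = Infinity

  for (const peer of sample) {
    const start = Date.now()
    await requestGenesisBlockHeaders(peer)
    const elapsedMs = Date.now() - start

    if (elapsedMs < fastestMs) {
      fastestMs = elapsedMs
      fastest = peer
    }
  }

  return fastest
}

// The re-check interval grows each time, capped at 60 minutes
function nextCheckIntervalMinutes(previousMinutes: number): number {
  return Math.min(previousMinutes * 2, 60)
}
```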
* checking for outbound websocket connections
* updating comment
* updating test with signalling peer
* Removing console
* more thorough inbound and outbound test
* combining logic into the same function
* adding webrtc websocket info
* prevent list from truncating
* addPeer when websocket connection is successful
* updating priorPeers array
* only storing websocket connections
* changing variable name
* adding peers from disk to candidates
* storing dispose attempts and successful connections
* moving previous peer connection logic to peer network startup
* resetting connection manager
* log to debug
* resetting manager test
* removing peer on ban
* Removing dispose peer from tests
* adding inbound check
* removing peer on listener
* moving connecting to address manager peers after bootstrap nodes
* Adding max limit to number of peers
* updating limit to 200
* adding missing fields for test
* updating tests
* simplifying initial setup logic
* cleaning up logic
* removing get random disconnected addresses
* adding upsert test
* fixing upsert test
* limit peers test
* adding back save on stop
* adding test for removing oldest peer
* reverting test utilities
* removing null types from peer address
* updating tests with non-nullable peer address fields
* adding comment
* updating test name
* rearranging if condition for readability
* filtering first
* adding save on timestamp update
* Update ironfish/src/network/peers/addressManager.test.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* Update ironfish/src/network/peers/addressManager.test.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* Update ironfish/src/network/peers/addressManager.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* adding private prefix to peerIdentityMap
* removing no ops from test
* Update ironfish/src/network/peers/addressManager.test.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* converting limit to max peer addresses
* replacing subtraction with constant
* adding comment to replace with populating with peer candidates
* using max peer address in test
* Update ironfish/src/network/peers/addressManager.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* Update ironfish/src/network/peers/addressManager.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
* changing to not equals for outbound
* Removing simple comment
* adding peer 2 test for removePeer
* using nullish-coalescing for last added timestamp
* using set system time instead of mocking date.now
* Adding void case test
* removing save from constructor
* removing list void this.save

---------

Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
…4388)
* address manager doesn't require the peer manager anymore
* removing new line
* address manager doesn't require the peer manager anymore
* removing new line
* changing host store to peer store and address manager to peer store manager
* adding comment
* changing back file name
* update comment
* Update ironfish/src/fileStores/peerStore.ts Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>

---------

Co-authored-by: mat-if <97762857+mat-if@users.noreply.github.com>
We maintain this pattern pretty consistently throughout the codebase, and it makes it hard to find the start/stop functions because they're buried deep in the file compared to other classes.
Since the NodeJS LTS version was just bumped to v20, we need to support it properly.
- Bumps any minimum versions of 16 to 18
- Allows the CLI and the main entrypoint to support 18 _or higher_
- Updates install instructions to simply reference the latest LTS version so we can avoid manually bumping it all the time
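A hypothetical sketch of an "18 or higher" entrypoint check; the actual CLI may enforce this differently (for example via the package.json "engines" field):

```ts
// Hypothetical runtime version gate: accept Node 18 or higher
const [major] = process.versions.node.split('.').map(Number)

if (major < 18) {
  console.error(`NodeJS ${process.versions.node} is not supported; please use version 18 or higher`)
  process.exit(1)
}
```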
* adding save and create file mutex
* Adding comment explaining why
* adding test
* removing long comment
* removing extra test case
* using flushTimeout to flush the promises
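A loose sketch of the general idea of serializing file writes with a promise-based mutex; the class and method names are illustrative, not the real file store:

```ts
// Serialize save/create calls so they never write the same file concurrently
class SerializedWriter {
  private lock: Promise<void> = Promise.resolve()

  run(write: () => Promise<void>): Promise<void> {
    // Chain each write onto the previous one so writes happen one at a time
    const next = this.lock.then(write, write)
    // Swallow errors on the chain itself so one failure doesn't wedge it
    this.lock = next.then(
      () => undefined,
      () => undefined,
    )
    return next
  }
}
```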
* adding peer direction to rpc peers command
* changing order
* making it part of the extended flag instead
… and eventLoop (#4407)
* simplifying logic
* updating type
* removing duplicate can connect check
* removing protected
* splitting up the or
* changing back to using keys as identity
* adding new line
* updates getTransactionStream for relay integration
  - optionally decrypts notes for the spender
  - adds optional outgoingViewKey to the request
  - attempts to decrypt using outgoingViewKey if decryption fails using incomingViewKey
  - adds sender address to note responses
  - returns memo as a hex string instead of a human-readable utf8 string
  This supports relay applications (e.g., bridges) that require information on outgoing transactions and may encode non-human-readable data in memos.
* uses 'memoAsHex' flag to determine memo encoding; defaults to false to preserve backwards compatibility
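A sketch of the decryption fallback described above; `decryptNoteForOwner`, `decryptNoteForSpender`, and the `DecryptedNote` shape are illustrative stand-ins, not the actual SDK API:

```ts
// Try the incoming view key first, then the outgoing view key if provided
type DecryptedNote = { value: bigint; memo: Buffer; sender: string }

declare function decryptNoteForOwner(note: Buffer, incomingViewKey: string): DecryptedNote | null
declare function decryptNoteForSpender(note: Buffer, outgoingViewKey: string): DecryptedNote | null

function tryDecryptNote(
  encryptedNote: Buffer,
  incomingViewKey: string,
  outgoingViewKey?: string,
): DecryptedNote | null {
  const forOwner = decryptNoteForOwner(encryptedNote, incomingViewKey)
  if (forOwner) {
    return forOwner
  }

  if (outgoingViewKey) {
    // Decrypting with the outgoing view key lets a spender (e.g. a bridge
    // relay) see notes in its own outgoing transactions
    return decryptNoteForSpender(encryptedNote, outgoingViewKey)
  }

  return null
}
```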
When writing a file, we now do a couple of extra things:
- Write to a temp file first, and rename it if the write was successful. Usually, renames are considered atomic operations, so if the write fails for some reason, we keep the original file intact.
- After writing the contents of the file, we call sync to try to force a buffer flush and ensure the file is written before we close it. NodeJS implements this functionality by adding a `flush` option to the `writeFile` utility function starting in NodeJS 21, but it will be a long time before we can use that.
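A minimal sketch of that temp-file-then-rename pattern using the NodeJS `fs` API; the function name and temp-file naming are illustrative, not the actual implementation:

```ts
import { promises as fs } from 'fs'
import * as path from 'path'

async function writeFileAtomic(filePath: string, contents: string): Promise<void> {
  const tempPath = path.join(path.dirname(filePath), `.${path.basename(filePath)}.tmp`)

  const handle = await fs.open(tempPath, 'w')
  try {
    await handle.writeFile(contents, 'utf8')
    // Force a buffer flush so the data is on disk before the file is closed
    await handle.sync()
  } finally {
    await handle.close()
  }

  // Renames are usually atomic on the same filesystem, so a failure here
  // leaves the original file untouched
  await fs.rename(tempPath, filePath)
}
```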
The peer connection manager will now disconnect from peers if there are more than maxPeers connected peers. Additionally, if keepOpenPeerSlot is set, it will disconnect if there are more than maxPeers - 1 connected peers. This can be used by nodes that want to ensure they are still allowing incoming connections even when they are near or at capacity.
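A sketch of that rule, with illustrative names:

```ts
// Reserve one slot for incoming connections when keepOpenPeerSlot is set
function connectedPeerLimit(maxPeers: number, keepOpenPeerSlot: boolean): number {
  return keepOpenPeerSlot ? maxPeers - 1 : maxPeers
}

function shouldDisconnectAPeer(
  connectedPeerCount: number,
  maxPeers: number,
  keepOpenPeerSlot: boolean,
): boolean {
  return connectedPeerCount > connectedPeerLimit(maxPeers, keepOpenPeerSlot)
}
```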
This uses connected nodes instead of non-disconnected nodes to determine whether a node should continue making outbound connections and accepting inbound connections. The main change here is that we are making the assumption that most node connections will fail, so this allows a node to have more active connection attempts. The end goal should be faster peer saturation. This does introduce the possibility of a node having more than maxPeers connected peers. The current assumption is that natural attrition will take care of this; however, it would take a low amount of effort to add additional logic to prune peers above maxPeers if it ends up being an issue.
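A sketch of the counting change, using an illustrative simplification of peer state:

```ts
// Only fully connected peers count toward the limits, so in-flight
// connection attempts no longer block new outbound attempts
type PeerState = 'CONNECTED' | 'CONNECTING' | 'DISCONNECTED'

function connectedPeerCount(peers: { state: PeerState }[]): number {
  // Previously every non-disconnected peer (CONNECTED and CONNECTING)
  // counted; now only CONNECTED peers do
  return peers.filter((peer) => peer.state === 'CONNECTED').length
}
```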
The new and improved fishtank (framework for simulating Iron Fish networks) now lives at https://github.com/iron-fish/ironfish-tank
This should fix dependabot vulnerability #73: https://github.com/iron-fish/ironfish/security/dependabot/73
This should fix two dependabot alerts: - rustix: https://github.com/iron-fish/ironfish/security/dependabot/74 - openssl: https://github.com/iron-fish/ironfish/security/dependabot/64
…4422) Previously, this loop would get executed every event loop iteration regardless of whether it could even create new connections. Now, it will only run if we can actually create new connections, saving a bit of overhead when it is not useful. Also includes a light refactor to break this chunk of code into a function for easier reading and testing.
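A sketch of the guard and extraction described above; the class and method names are illustrative, not the actual connection manager:

```ts
class PeerConnectionManagerSketch {
  private eventLoop(): void {
    // Only run the connection-creation loop when new connections are
    // actually possible, instead of on every event loop iteration
    if (this.canCreateNewConnections()) {
      this.connectToEligiblePeers()
    }
  }

  private canCreateNewConnections(): boolean {
    // Stand-in for checks against connected peer counts and limits
    return true
  }

  private connectToEligiblePeers(): void {
    // Stand-in for the extracted connection-creation logic
  }
}
```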
… for web rtc connections (#4401)
* returning list of brokering peers instead of single
* moving array
* cleaner for loop logic
* fixing test
* shuffling in the getBrokeringPeers function
* has brokering peer function
This is not the right way, because os.tmpdir() does not give you a new random directory per call. You need to create a new directory with a random name yourself, using tmpdir() as the root. These tests are all effectively using the tmpdir() root to store ironfish files in.
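One way to do that with the NodeJS `fs` API; the directory prefix is illustrative:

```ts
import { promises as fs } from 'fs'
import * as os from 'os'
import * as path from 'path'

// Give each test its own uniquely named directory under the temp root
// instead of sharing the os.tmpdir() root itself
async function makeTestDataDir(): Promise<string> {
  return fs.mkdtemp(path.join(os.tmpdir(), 'ironfish-test-'))
}
```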
andiflabs
approved these changes
Nov 6, 2023
These were satisfied and the tests now pass
Summary
Release for 1.12.0
Testing Plan
Documentation
Does this change require any updates to the Iron Fish Docs (ex. the RPC API Reference)? If yes, link a related documentation pull request for the website.
Breaking Change
Is this a breaking change? If yes, add notes below on why this is breaking and what additional work is required, if any.