
[Solana][NONEVM-1246] Implement allowlist on onramp #520

Merged: 6 commits into main on Jan 31, 2025

Conversation

PabloMansanet (Contributor):
No description provided.

@PabloMansanet requested a review from a team as a code owner on January 29, 2025 18:17
Comment on lines +128 to +125
#[max_len(0)]
pub allowed_senders: Vec<Pubkey>,
pub allow_list_enabled: bool,
Collaborator:
Can we double-check the product requirements for allowed_senders?

If it's unbounded, then we should allowlist based on a user-specific PDA, since there is a maximum account size that would bound the list.

PabloMansanet (Contributor, Author):

Checked with @agusaldasoro; we're OK to go with the bound enforced by the maximum account size.

Contributor:
We estimated a max size of 300k, which is okay.

}

pub fn dynamic_space(&self) -> usize {
self.allowed_senders.len() * std::mem::size_of::<Pubkey>()
Contributor:

Don't Vecs have a base size of 4? I remember using 4 + len * size.

PabloMansanet (Contributor, Author):

This is captured by INIT_SPACE. When specifying a max_len of 0, INIT_SPACE only accounts for the Vec's 4-byte length prefix, so we only need to provide the actual dynamic space in this method.

I find it a bit annoying that 'max_len' is so strangely named, because it's way more useful than I first thought: it should be called elements_to_preallocate_at_init or something 😆.
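
For context, a minimal sketch of how these pieces fit together (a standalone reconstruction from the snippets in this PR; the real structs contain more fields, and in the actual code dynamic_space lives on the dest chain config):

```rust
use anchor_lang::prelude::*;

#[account]
#[derive(InitSpace)]
pub struct DestChain {
    // max_len(0) makes InitSpace reserve room for zero elements, so
    // INIT_SPACE only counts the Vec's 4-byte length prefix.
    #[max_len(0)]
    pub allowed_senders: Vec<Pubkey>,
    pub allow_list_enabled: bool,
}

impl DestChain {
    // Space needed on top of INIT_SPACE: 32 bytes per allowed sender.
    // The 4-byte length prefix is already covered by INIT_SPACE.
    pub fn dynamic_space(&self) -> usize {
        self.allowed_senders.len() * std::mem::size_of::<Pubkey>()
    }
}
```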

@@ -2138,6 +2141,71 @@ func TestCCIPRouter(t *testing.T) {
require.NotNil(t, result)
})

t.Run("When sending with an empty but enabled allowlist, it fails", func(t *testing.T) {
Contributor:

Should we add a test for sending with disabled allowlist as well?

PabloMansanet (Contributor, Author):

I don't think it's necessary: that case is already covered by all the other tests, since the allowlist is disabled by default.
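
For reference, the gate under test boils down to something like this (a hypothetical standalone sketch, not the PR's actual function or signature):

```rust
use anchor_lang::prelude::Pubkey;

// Hypothetical sketch of the allowlist gate: with the list disabled,
// every sender passes (which is why the existing tests already cover
// that path); enabled but empty, contains() is always false, so every
// send fails -- the case the new test asserts.
fn is_sender_allowed(enabled: bool, allowed_senders: &[Pubkey], sender: &Pubkey) -> bool {
    !enabled || allowed_senders.contains(sender)
}
```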

constraint = valid_version(dest_chain_state.version, MAX_CHAINSTATE_V) @ CcipRouterError::InvalidInputs,
realloc = ANCHOR_DISCRIMINATOR + DestChain::INIT_SPACE + dest_chain_config.dynamic_space(),
realloc::payer = authority,
realloc::zero = false
Contributor:

If using false, do we need to ensure the bytes are zeroed when decreasing the size of the allowed senders list?

PabloMansanet (Contributor, Author):

Good question. Do we? I'm not sure.

The logic is pretty simple and self-contained, so I'm not sure what damage leaving those bytes "dirty" could cause. If we downsize and later upsize, all code paths overwrite those bytes again, so I can't see how we'd end up with garbage in an allocated structure.

I might be missing some Solana risks I'm not aware of here, though, so if it's safer to make this true, I'm happy to.

Contributor:

All the docs that I've read recommend setting realloc::zero to true, though this one recommends using an ALT (Address Lookup Table) instead:

https://www.helius.dev/blog/a-hitchhikers-guide-to-solana-program-security?utm_source=chatgpt.com#account-data-reallocation

Contributor:

This module talks about issues at init, but they may apply to resize as well? https://solana.com/developers/courses/program-security/reinitialization-attacks

PabloMansanet (Contributor, Author):

@agusaldasoro the guide you linked and the official documentation both specify that realloc::zero = true is only necessary when a single instruction can downsize and then upsize in one call. That's impossible in our case, as we only reallocate once, on entry. This means the new bytes are automatically zeroed, and using realloc::zero = true would waste cycles.
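
To make that concrete, here is the constraint from the snippet above annotated with that reasoning (account and constant names taken from this PR; sibling constraints elided):

```rust
#[account(
    mut,
    // Safe to skip zeroing: this instruction reallocs exactly once, on
    // entry. Per the linked guides, realloc::zero = true is only needed
    // when a single call can downsize and then upsize again, letting
    // stale bytes resurface; a lone grow is zeroed by the runtime.
    realloc = ANCHOR_DISCRIMINATOR + DestChain::INIT_SPACE + dest_chain_config.dynamic_space(),
    realloc::payer = authority,
    realloc::zero = false
)]
pub dest_chain_state: Account<'info, DestChain>,
```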

Contributor:

Ahhh, okay! That's clear now. Can you please add a comment in the code stating that, so we don't forget about it in the future? Thanks!

PabloMansanet (Contributor, Author):

Sounds good! :)

Metric    onramp_allowlist  main
Coverage  75.6%             75.6%

@PabloMansanet merged commit e828388 into main on Jan 31, 2025
17 checks passed
@PabloMansanet deleted the onramp_allowlist branch on January 31, 2025 16:46