
WIP diff viewer: utilize NEW_TOKEN frames #8

Draft: wants to merge 12 commits into base: main
Conversation

gretchenfrage (Owner) commented Jun 16, 2024

Key points:

  • Token generalized to have both a "retry token" variant and a "new token frame token" variant
    • An additional byte acts as the discriminant. It is not encrypted; rather, it is treated as the "additional data" of the token's AEAD encryption. Other than that, the "retry token" variant remains the same.
    • The NewToken variant's aead_from_hkdf key derivation is based on an empty byte slice &[] rather than the retry_src_cid.
    • The NewToken variant's encrypted data consists of: a randomly generated 128-bit value, the IP address (not including the port), and the issued timestamp.
  • The server sends the client 2 NEW_TOKEN frames whenever the client's path is validated (configurable via ServerConfig.new_tokens_sent_upon_validation)
  • The ClientConfig.new_token_store: Option<Arc<dyn NewTokenStore>> object stores NEW_TOKEN tokens received by the client, and dispenses them for one-time use when connecting to the same server_name again
    • The default implementation, InMemNewTokenStore, stores the 2 newest unused tokens for each of up to 256 servers, with an LRU eviction policy over server names, so as to pair well with rustls::client::ClientSessionMemoryCache
  • The ServerConfig.token_reuse_preventer: Option<Arc<Mutex<Box<dyn TokenReusePreventer>>>> object is responsible for mitigating reuse of NEW_TOKEN tokens
    • Default implementation BloomTokenReusePreventer:

      Divides all time into periods of length new_token_lifetime, starting at the unix epoch. Two "filters" are always maintained, each tracking used tokens that expire in one of the two current periods. Turning over the filters as time passes prevents unbounded accumulation of tracked tokens.

      Filters start out as FxHashSets. This achieves the desirable property of linear-ish memory usage: if few NEW_TOKEN tokens are actually used, the server's bloom token reuse preventer uses negligible memory.

      Once a hash set filter would exceed a configurable maximum memory consumption, it is converted to a bloom filter. This places an upper bound on the number of bytes allocated by the reuse preventer; instead, as more tokens are added to the bloom filter, its false positive rate (tokens not actually reused, but considered reused and thus ignored anyway) increases.

  • ServerConfig.new_token_lifetime is different from ServerConfig.retry_token_lifetime and defaults to 2 weeks.
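The two-period filter design described above can be sketched as follows. This is a minimal stand-in, not the PR's code: all names and constants here are hypothetical, std's HashSet and DefaultHasher stand in for FxHashSet and the real hashing, and the memory cap is approximated by an entry count.

```rust
use std::collections::hash_map::DefaultHasher;
use std::collections::HashSet;
use std::hash::{Hash, Hasher};

// Hypothetical stand-ins for the configurable values described in the PR.
const NEW_TOKEN_LIFETIME_SECS: u64 = 14 * 24 * 60 * 60; // default: 2 weeks
const MAX_SET_ENTRIES: usize = 64; // stand-in for the max-memory threshold
const BLOOM_BITS: usize = 1 << 10;
const BLOOM_HASHES: u32 = 4;

enum Filter {
    // Starts as a hash set of each used token's random 128-bit value, so
    // memory usage scales with the number of tokens actually used.
    Set(HashSet<u128>),
    // Converted to a bloom filter (bit array) once the set grows too large,
    // bounding memory at the cost of false positives.
    Bloom(Vec<u64>),
}

fn bit_indices(rand: u128) -> impl Iterator<Item = usize> {
    (0..BLOOM_HASHES).map(move |i| {
        let mut h = DefaultHasher::new();
        (rand, i).hash(&mut h);
        (h.finish() as usize) % BLOOM_BITS
    })
}

fn set_bits(bits: &mut [u64], rand: u128) {
    for idx in bit_indices(rand) {
        bits[idx / 64] |= 1 << (idx % 64);
    }
}

fn test_bits(bits: &[u64], rand: u128) -> bool {
    bit_indices(rand).all(|idx| bits[idx / 64] & (1 << (idx % 64)) != 0)
}

impl Filter {
    fn new() -> Self {
        Filter::Set(HashSet::new())
    }

    // Records the token; returns true if it was (possibly) seen before.
    fn check_and_insert(&mut self, rand: u128) -> bool {
        match self {
            Filter::Set(set) => {
                let seen = !set.insert(rand);
                if set.len() > MAX_SET_ENTRIES {
                    // Convert to a bloom filter to cap memory usage.
                    let mut bits = vec![0u64; BLOOM_BITS / 64];
                    for &r in set.iter() {
                        set_bits(&mut bits, r);
                    }
                    *self = Filter::Bloom(bits);
                }
                seen
            }
            Filter::Bloom(bits) => {
                let seen = test_bits(bits, rand);
                set_bits(bits, rand);
                seen
            }
        }
    }
}

pub struct BloomTokenReusePreventer {
    // Index (since the unix epoch) of the earlier tracked expiration period.
    base_period: u64,
    // filters[i] tracks used tokens expiring in period base_period + i.
    filters: [Filter; 2],
}

impl BloomTokenReusePreventer {
    pub fn new(now_secs: u64) -> Self {
        BloomTokenReusePreventer {
            base_period: now_secs / NEW_TOKEN_LIFETIME_SECS,
            filters: [Filter::new(), Filter::new()],
        }
    }

    // Returns true if the token should be rejected (reused or expired).
    pub fn check_and_insert(&mut self, rand: u128, issued_secs: u64, now_secs: u64) -> bool {
        // Turn over filters as time passes so tracked tokens can't accumulate forever.
        while self.base_period < now_secs / NEW_TOKEN_LIFETIME_SECS {
            self.filters.swap(0, 1);
            self.filters[1] = Filter::new();
            self.base_period += 1;
        }
        let expires_period = (issued_secs + NEW_TOKEN_LIFETIME_SECS) / NEW_TOKEN_LIFETIME_SECS;
        match expires_period.checked_sub(self.base_period) {
            Some(i) if i <= 1 => self.filters[i as usize].check_and_insert(rand),
            _ => true, // outside the tracked window: treat as unusable
        }
    }
}

fn main() {
    let mut p = BloomTokenReusePreventer::new(0);
    assert!(!p.check_and_insert(42, 0, 0)); // first use: accepted
    assert!(p.check_and_insert(42, 0, 0)); // reuse: rejected
}
```

Note the trade-off this sketch preserves: a token is only ever tracked by the filter covering its expiration period, so dropping a filter when its period ends loses no information (those tokens are rejected as expired anyway).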

TODO:

  • Send when validated rather than upon first connecting
  • Send upon path change
  • Update stats
  • Tests
  • Reuse prevention
    • Simplify it (it's not even used concurrently)
  • Make sure encryption is good
  • Don't break if a Retry is received in response to a request that used a NEW_TOKEN token
  • NEW_TOKEN tokens should not encode the port (?)
  • We don't need a top-level Token.encode

Moves all the fields of Token to a new RetryTokenPayload struct, and
makes Token have a single `payload: RetryTokenPayload` field. This may
seem strange at first, but it sets up for the next commit, which adds
an additional field to Token.
Previously, retry tokens were encrypted using the retry src cid as the
key derivation input. This has been described as "cheeky" by a reputable
individual (who, coincidentally, wrote that code in the first place).
More importantly, this presents obstacles to using NEW_TOKEN frames.

With this commit, tokens carry a random 128-bit value, which is used to
derive the key for encrypting the rest of the token.
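The wire layout this implies can be sketched as below. Everything here is a hypothetical illustration: derive_key and xor_stream are deliberately toy placeholders for the real HKDF derivation and AEAD sealing (do not use them for actual cryptography), and the payload is left opaque rather than spelling out the IP/timestamp fields.

```rust
use std::collections::hash_map::DefaultHasher;
use std::hash::{Hash, Hasher};

// Toy placeholder for the real HKDF-based derivation: the token's random
// 128-bit value, carried in the clear, is the key-derivation input.
fn derive_key(master_key: &[u8], rand: u128) -> Vec<u8> {
    let mut key = Vec::new();
    for i in 0u64..4 {
        let mut h = DefaultHasher::new();
        (master_key, rand, i).hash(&mut h);
        key.extend_from_slice(&h.finish().to_be_bytes());
    }
    key // 32 bytes
}

// Toy placeholder for real AEAD sealing; XOR provides no integrity or
// confidentiality guarantees and is here only to show the data flow.
fn xor_stream(data: &[u8], key: &[u8]) -> Vec<u8> {
    data.iter().zip(key.iter().cycle()).map(|(b, k)| b ^ k).collect()
}

// Token layout: [random 128 bits, in the clear][sealed payload].
fn encode_token(rand: u128, payload: &[u8], master_key: &[u8]) -> Vec<u8> {
    let mut out = rand.to_be_bytes().to_vec();
    out.extend(xor_stream(payload, &derive_key(master_key, rand)));
    out
}

fn decode_token(token: &[u8], master_key: &[u8]) -> Option<Vec<u8>> {
    if token.len() < 16 {
        return None;
    }
    // Read the cleartext random value, re-derive the key, unseal the rest.
    let rand = u128::from_be_bytes(token[..16].try_into().ok()?);
    Some(xor_stream(&token[16..], &derive_key(master_key, rand)))
}

fn main() {
    let token = encode_token(7, b"payload", b"master key");
    assert_eq!(decode_token(&token, b"master key").unwrap(), b"payload".to_vec());
}
```

The design point is that the random value replaces the retry src cid as the derivation input, so the same scheme works for tokens delivered via NEW_TOKEN frames, where no retry src cid exists.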
The ability for the server to process tokens from NEW_TOKEN frames will
create the possibility of an Incoming that is validated but may still be
retried. This commit creates an API for that: rather than
Incoming.remote_address_validated being tied to retry_src_cid, it is
tied to a new `validated: bool` field of `IncomingToken`.

Currently, this field is initialized to true iff retry_src_cid is Some.
However, subsequent commits will introduce the possibility for
divergence.

As of this commit, it only has a single variant, which is Retry.
However, the next commit will add an additional variant. In addition
to pure refactors, a discriminant byte is used when encoding.

When a path becomes validated, the server may send the client NEW_TOKEN
frames. These may cause an Incoming to be validated.

- Adds TokenPayload::Validation variant
- Adds relevant configuration to ServerConfig
- Adds `TokenLog` object to server to mitigate token reuse

As of this commit, the only provided implementation of TokenLog is
NoneTokenLog, which is equivalent to the lack of a token log, and is the
default.
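A minimal sketch of the TokenLog contract as described above (the names mirror the ones in this PR, but the signatures and the NoneTokenLog semantics are assumptions, not the crate's actual API): the server submits each accepted token's random value to the log, and rejects the token if the log reports it was already seen.

```rust
use std::collections::HashSet;
use std::sync::Mutex;

pub struct TokenReuseError;

// Assumed shape of the trait: record-and-check in one call, so two
// concurrent uses of the same token cannot both pass.
pub trait TokenLog: Send + Sync {
    fn check_and_insert(&self, rand: u128) -> Result<(), TokenReuseError>;
}

// Assumed semantics for the default: with no real log, reuse cannot be
// ruled out, so every validation token is rejected. This makes it
// equivalent to having no token log at all.
pub struct NoneTokenLog;

impl TokenLog for NoneTokenLog {
    fn check_and_insert(&self, _rand: u128) -> Result<(), TokenReuseError> {
        Err(TokenReuseError)
    }
}

// The "most boring possible" working implementation: a mutex-guarded set
// with unbounded memory usage. Fine for tests, unsuitable for production.
pub struct SimpleTokenLog(Mutex<HashSet<u128>>);

impl TokenLog for SimpleTokenLog {
    fn check_and_insert(&self, rand: u128) -> Result<(), TokenReuseError> {
        if self.0.lock().unwrap().insert(rand) {
            Ok(())
        } else {
            Err(TokenReuseError)
        }
    }
}

fn main() {
    let log = SimpleTokenLog(Mutex::new(HashSet::new()));
    assert!(log.check_and_insert(1).is_ok());
    assert!(log.check_and_insert(1).is_err());
    assert!(NoneTokenLog.check_and_insert(1).is_err());
}
```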
When a client receives a token from a NEW_TOKEN frame, it submits it to
a TokenStore object for storage. When an endpoint connects to a server,
it queries the TokenStore object for a token applicable to the server
name, and uses it if one is retrieved.

As of this commit, the only provided implementation of TokenStore is
NoneTokenStore, which is equivalent to the lack of a token store, and is
the default.
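The client-side counterpart can be sketched like this (again, the trait shape and the SimpleTokenStore type are hypothetical illustrations of the described contract, not the crate's API): tokens are filed under the server name on receipt, and taken back out at most once when connecting to that name again.

```rust
use std::collections::HashMap;
use std::sync::Mutex;

// Assumed shape of the trait: store on receipt, take for one-time use.
pub trait TokenStore: Send + Sync {
    fn insert(&self, server_name: &str, token: Vec<u8>);
    fn take(&self, server_name: &str) -> Option<Vec<u8>>;
}

// A boring unbounded implementation for illustration; a real one would
// cap tokens per server and evict server names (e.g. with an LRU policy).
pub struct SimpleTokenStore(Mutex<HashMap<String, Vec<Vec<u8>>>>);

impl TokenStore for SimpleTokenStore {
    fn insert(&self, server_name: &str, token: Vec<u8>) {
        self.0
            .lock()
            .unwrap()
            .entry(server_name.to_owned())
            .or_default()
            .push(token);
    }

    fn take(&self, server_name: &str) -> Option<Vec<u8>> {
        // Pop the newest token; each token is dispensed at most once.
        self.0.lock().unwrap().get_mut(server_name)?.pop()
    }
}

fn main() {
    let store = SimpleTokenStore(Mutex::new(HashMap::new()));
    store.insert("example.com", vec![1, 2, 3]);
    assert_eq!(store.take("example.com"), Some(vec![1, 2, 3]));
    assert_eq!(store.take("example.com"), None); // one-time use
}
```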
When we first added tests::util::IncomingConnectionBehavior, we opted to
use an enum instead of a callback because it seemed cleaner. However,
the number of variants has grown, and adding integration tests for
validation tokens from NEW_TOKEN frames threatens to make this logic
even more complicated. Moreover, there is another advantage to callbacks
we have not been exploiting: a stateful FnMut can assert that incoming
connection handling within a test follows a certain expected sequence
of Incoming properties.

As such, this commit replaces TestEndpoint.incoming_connection_behavior
with a handle_incoming callback, and modifies an existing test to
exploit this functionality to test more things than it was previously.
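The "stateful FnMut asserting an expected sequence" idea can be illustrated as below; the Incoming and Decision types here are simplified stand-ins for the test harness's real types, not the crate's API.

```rust
// Stand-in for the real Incoming: only the property the callback inspects.
struct Incoming {
    remote_address_validated: bool,
}

// Stand-in for what a handle_incoming callback decides per connection.
enum Decision {
    Accept,
    Retry,
}

fn main() {
    // Expected sequence of incoming connections, popped back-to-front:
    // first an unvalidated Incoming (retried), then a validated one (accepted).
    let mut expected = vec![true, false];
    let mut handle_incoming = move |incoming: &Incoming| -> Decision {
        // A stateful FnMut: each call consumes one expectation and asserts
        // the Incoming matches it, so out-of-order handling fails the test.
        let want_validated = expected.pop().expect("unexpected extra incoming");
        assert_eq!(incoming.remote_address_validated, want_validated);
        if want_validated {
            Decision::Accept
        } else {
            Decision::Retry
        }
    };

    // Simulated events a test harness would feed in:
    assert!(matches!(
        handle_incoming(&Incoming { remote_address_validated: false }),
        Decision::Retry
    ));
    assert!(matches!(
        handle_incoming(&Incoming { remote_address_validated: true }),
        Decision::Accept
    ));
}
```

An enum of behaviors cannot express this kind of per-test, order-sensitive assertion, which is the advantage the commit message points to.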
Configures the default clients and servers in proto tests to be able to
utilize NEW_TOKEN frames. This involves creating simple implementations
of token-related traits internal to the test module. These
implementations are essentially the most boring possible implementation
that is able to actually utilize tokens. They would not be suitable for
use in real applications because their memory usage is unbounded.
Moves the existing `stateless_retry` test into that module.
Also adds a `FakeTimeSource` utility to the test module.