fractional_sample_rate: Vary sample rate according to traffic estimation #4839

Merged: 6 commits into master from sampling-rate-changes on Nov 25, 2024

Conversation

@macobo (Contributor) commented Nov 19, 2024

Our old sampling mechanism used the `SAMPLE 20000000` syntax. This was wonderful since it effectively gave us dynamic sampling based on the data being queried. However, it ran into many issues related to JOINs and to the sample rate differing between tables.

Instead, we now vary sample rates dynamically, as a fraction.

At query time we check the time window being queried and estimate how many rows the query might need to scan. For large queries, we then decide the sample rate dynamically.
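To make the idea concrete, here is a minimal sketch of the decision itself, assuming a single row-count threshold. The module name, function name, and threshold value are illustrative only, not the code in this PR:

```elixir
# Hypothetical sketch of the fractional sampling decision, not the actual implementation.
defmodule SamplingSketch do
  # Assumed threshold: above this many estimated rows we start sampling.
  @threshold 20_000_000

  @doc """
  Returns a sample rate in (0.0, 1.0]: 1.0 means "no sampling",
  smaller fractions mean proportionally fewer rows are read.
  """
  def fraction(estimated_rows) when is_integer(estimated_rows) do
    if estimated_rows <= @threshold do
      1.0
    else
      # Sample just enough to read roughly @threshold rows.
      @threshold / estimated_rows
    end
  end
end

# SamplingSketch.fraction(5_000_000)   #=> 1.0
# SamplingSketch.fraction(80_000_000)  #=> 0.25
```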

For getting the traffic estimate for a site, we have a new SamplingCache module which queries `ingest_counters`.
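As a rough illustration of what such a cache could look like (the actual module lives in extra/lib/plausible/stats/sampling_cache.ex and is not reproduced here; the ETS-based shape, refresh interval, and fetch callback below are assumptions):

```elixir
# Illustrative sketch only; names and structure are assumptions, not the real SamplingCache.
defmodule SamplingCacheSketch do
  use GenServer

  @refresh_interval :timer.minutes(30)

  def start_link(fetch_fun),
    do: GenServer.start_link(__MODULE__, fetch_fun, name: __MODULE__)

  # Returns the cached 30-day traffic estimate for a site, or 0 if unknown.
  def traffic_estimate(site_id) do
    case :ets.lookup(__MODULE__, site_id) do
      [{^site_id, count}] -> count
      [] -> 0
    end
  end

  @impl true
  def init(fetch_fun) do
    :ets.new(__MODULE__, [:named_table, :set, :protected, read_concurrency: true])
    send(self(), :refresh)
    {:ok, fetch_fun}
  end

  @impl true
  def handle_info(:refresh, fetch_fun) do
    # fetch_fun.() is assumed to query ingest_counters and return
    # a list of {site_id, events_in_past_30_days} tuples.
    for {site_id, count} <- fetch_fun.() do
      :ets.insert(__MODULE__, {site_id, count})
    end

    Process.send_after(self(), :refresh, @refresh_interval)
    {:noreply, fetch_fun}
  end
end
```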

Future work:

  • The query being cached is slightly expensive and can be sped up with a ClickHouse projection. This will be added in a follow-up PR.
  • Reducing sample rate if query.filters is set.

@macobo force-pushed the sampling-rate-changes branch from 7b7982b to 7ee1254 on November 19, 2024 15:57
@macobo marked this pull request as ready for review on November 20, 2024 06:39
@macobo requested review from apata and RobertJoonas on November 20, 2024 06:39
@macobo force-pushed the sampling-rate-changes branch from 3694063 to f7e8e3a on November 20, 2024 06:42
macobo added a commit that referenced this pull request on Nov 20, 2024
Companion to #4839

A projection speeds up ClickHouse queries by storing a copy of the data in a different order in each part. In this case, it pre-aggregates results by site, speeding up the query and requiring fewer resources at query time.

Query stats before:
```
-- 10 rows in set. Elapsed: 1.134 sec. Processed 1.45 billion rows, 21.65 GB (1.28 billion rows/s., 19.09 GB/s.)
-- Peak memory usage: 53.41 MiB.
```

After:
```
-- 10 rows in set. Elapsed: 0.321 sec. Processed 39.70 million rows, 1.07 GB (123.51 million rows/s., 3.33 GB/s.)
-- Peak memory usage: 97.43 MiB.
```
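For context, adding and materializing a projection like this from an Ecto migration might look roughly as follows. The `ingest_counters` table name comes from this PR, while the repo module, column names, projection name, and aggregation are assumptions for illustration:

```elixir
defmodule MyApp.ClickhouseRepo.Migrations.AddIngestCountersProjection do
  # Illustrative sketch only. The column names (site_id, event_timebucket, value)
  # and the projection definition are assumptions, not the actual migration.
  use Ecto.Migration

  def up do
    execute """
    ALTER TABLE ingest_counters ADD PROJECTION IF NOT EXISTS events_by_site
    (SELECT site_id, toDate(event_timebucket) AS date, sum(value) GROUP BY site_id, date)
    """

    # A new projection only applies to newly inserted parts until it is materialized.
    execute "ALTER TABLE ingest_counters MATERIALIZE PROJECTION events_by_site"
  end

  def down do
    execute "ALTER TABLE ingest_counters DROP PROJECTION IF EXISTS events_by_site"
  end
end
```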
@aerosol (Member) left a comment

There's a small problem with the cache tests, but otherwise looks good. Happy to see counters and the cache stitched together for the greater good.

Review threads:
extra/lib/plausible/stats/sampling_cache.ex (resolved)
extra/lib/plausible/stats/sampling_cache.ex (outdated, resolved)
test/plausible/stats/sampling_cache_test.exs (outdated, resolved)
@macobo force-pushed the sampling-rate-changes branch from f68928d to db34e53 on November 21, 2024 21:05
@macobo added this pull request to the merge queue on Nov 25, 2024
Merged via the queue into master with commit 969f094 on Nov 25, 2024 (19 checks passed)
@macobo deleted the sampling-rate-changes branch on November 25, 2024 07:30
github-merge-queue bot pushed a commit that referenced this pull request on Jan 7, 2025
* Speed up counting past 30 day traffic via ingest_counters

Companion to #4839

* deduplicate_merge_projection_mode

* materialize projection