tidb: add doc for batch-policy config #18446

Merged · 6 commits · Aug 20, 2024
10 changes: 10 additions & 0 deletions tidb-configuration-file.md
@@ -726,6 +726,16 @@ Configuration items related to opentracing.reporter.
- Default value: `41s`
- It is required to set this value larger than twice the Raft election timeout.

### `batch-policy` <span class="version-mark">New in v8.3.0</span>

- Controls the batching strategy for requests sent from TiDB to TiKV. When sending requests to TiKV, TiDB always encapsulates the requests in the current waiting queue into one `BatchCommandsRequest` and sends it to TiKV as a single package. This is the basic batching strategy. When the load throughput is high, TiDB uses the `batch-policy` configuration item to decide whether to wait for an extra period after the basic batching, so that more requests can be encapsulated into a single `BatchCommandsRequest` (see the configuration sketch after this list).
- Default value: `"standard"`
- Value options:
- `"basic"`: the behavior is consistent with versions before v8.3.0, where TiDB performs additional batching only if [`tikv-client.max-batch-wait-time`](#max-batch-wait-time) is greater than 0 and the load of TiKV exceeds the value of [`tikv-client.overload-threshold`](#overload-threshold).
- `"standard"`: TiDB dynamically batches requests based the arrival time intervals of recent requests, suitable for high-throughput scenarios.
- `"positive"`: TiDB always performs additional batching, suitable for high-throughput testing scenarios to achieve optimal performance. However, in low-load scenarios, this strategy might introduce unnecessary batching wait time, potentially reducing performance.
- `"custom{...}"`: allows customization of batching strategy parameters. This is used for the internal testing of TiDB and is **NOT recommended** for use.

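For orientation, here is a minimal sketch of how `batch-policy` might be set in the TiDB configuration file. It assumes the item sits under the `[tikv-client]` section, alongside the neighboring items referenced above (`max-batch-wait-time`, `overload-threshold`); the placement and values are illustrative only:

```toml
[tikv-client]
# Adaptive batching based on the arrival intervals of recent requests
# (the default strategy).
batch-policy = "standard"

# Alternative strategies, shown as comments:
# batch-policy = "basic"     # pre-v8.3.0 behavior
# batch-policy = "positive"  # always perform additional batching
```
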
### `max-batch-size`

- The maximum number of RPC packets sent in batch. If the value is not `0`, the `BatchCommands` API is used to send requests to TiKV, and the RPC latency can be reduced in the case of high concurrency. It is recommended that you do not modify this value.
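  For illustration only, a hedged sketch of setting this item in the same `[tikv-client]` section (the value shown is an example, not a recommendation):

  ```toml
  [tikv-client]
  # A non-zero value enables the BatchCommands API and caps how many RPC packets
  # TiDB sends to TiKV in one batch; 0 disables batched sending.
  max-batch-size = 128  # example value; keeping the default is recommended
  ```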