ddl: Fixed partitioning a non-partitioned table with placement rules #57560
base: master
Conversation
	end string
}
keys := make([]keyRange, 0, rules)
for k := range m.bundles {
I think we only need to check the rules inside one bundle just now. If GroupBundle.Override is set to true, it is allowed to have multiple leader rules. And in one RuleGroup, when a rule has Override==true, it should also be skipped.
So the below is OK?

if rule.Role == pd.Leader && !m.bundles[k].Override {
	keys = append(keys, keyRange{start: k + ":" + rule.StartKeyHex, end: k + ":" + rule.EndKeyHex})
}
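A self-contained sketch of that proposed check, with simplified stand-in types (Bundle, Rule, and leaderRanges are illustrative names, not the PR's actual definitions):

```go
package main

import "fmt"

// Rule and Bundle are hypothetical, simplified mirrors of the pd
// placement types discussed in this thread.
type Rule struct {
	Role        string
	StartKeyHex string
	EndKeyHex   string
}

type Bundle struct {
	Override bool
	Rules    []*Rule
}

const Leader = "leader"

// leaderRanges collects the key ranges of leader rules, skipping
// bundles whose Override flag permits multiple leader rules.
func leaderRanges(bundles map[string]*Bundle) []string {
	var keys []string
	for k, b := range bundles {
		if b.Override {
			continue // an overriding bundle may legitimately carry extra leader rules
		}
		for _, r := range b.Rules {
			if r.Role == Leader {
				keys = append(keys, k+":"+r.StartKeyHex+"-"+r.EndKeyHex)
			}
		}
	}
	return keys
}

func main() {
	bundles := map[string]*Bundle{
		"g1": {Override: false, Rules: []*Rule{{Leader, "00", "0F"}}},
		"g2": {Override: true, Rules: []*Rule{{Leader, "00", "0F"}}}, // skipped
	}
	fmt.Println(leaderRanges(bundles)) // [g1:00-0F]
}
```

The sketch only demonstrates the skip-on-Override behavior discussed above; the real check would also deduplicate and sort the collected ranges.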
We still need to check the rules in one bundle even if bundle[k].Override is true, but the current implementation skips all rules in such a bundle?
> We still need to check the rules in one bundle even if bundle[k].Override is true

True.
@lcwangchao @xhebox please review the new duplicate leader range check.
So, why not just validate the input bundles one by one? I think the current version does not account for the overlaps across different rules.
Codecov Report
Attention: Patch coverage is
Additional details and impacted files
@@ Coverage Diff @@
## master #57560 +/- ##
================================================
+ Coverage 72.8329% 74.8011% +1.9681%
================================================
Files 1676 1721 +45
Lines 463539 478136 +14597
================================================
+ Hits 337609 357651 +20042
+ Misses 105125 98140 -6985
- Partials 20805 22345 +1540
Flags with carried forward coverage won't be shown.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull request has been approved by: Defined2014.
// - not removing any placement rules for removed partitions
// - not leaving anything in case of failure/rollback!
// So we will:
// 1) First write the new bundles including both new and old partitions,
Why should the new bundles include old partitions? The old bundles have already defined these rules and still take effect here.
Oh, so if the bundle is overwritten, will the older bundle rules still be in effect?
For example, if the first bundle had rules for table id 111 and partition ids 112 and 113, and the new bundle had table id 111 and partition ids 113 and 114, would partition 112 still be placed according to the old bundle rules, or would it be placed without any rules?
> or would it be placed without any rules?

This, I believe.
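The scenario from the question above can be illustrated with a small set-difference sketch (uncovered is a hypothetical helper, not code from the PR; the ids are the ones from the example):

```go
package main

import "fmt"

// uncovered returns the partition ids from the old bundle that the new
// bundle no longer covers; per the answer above, these would be placed
// without any rules once the old bundle is overwritten.
func uncovered(oldIDs, newIDs []int64) []int64 {
	covered := make(map[int64]bool)
	for _, id := range newIDs {
		covered[id] = true
	}
	var out []int64
	for _, id := range oldIDs {
		if !covered[id] {
			out = append(out, id)
		}
	}
	return out
}

func main() {
	// old bundle: table 111, partitions 112 and 113
	// new bundle: table 111, partitions 113 and 114
	fmt.Println(uncovered([]int64{111, 112, 113}, []int64{111, 113, 114})) // [112]
}
```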
// it will filter the rules depending on the interval index override in the same group or the
// group-index override between different groups
// For example, given rules:
// ruleA: group_id: 4, id: 2, override: true
// ruleB: group_id: 4, id: 1, override: true
// ruleC: group_id: 3
// ruleD: group_id: 2
// RuleGroupA: id:4, override: false
// RuleGroupB: id:3, override: true
// RuleGroupC: id:2, override: false
// Finally only ruleA and ruleC will be selected.
A better example:
CREATE TABLE t (a int) partition by range (a) (partition p0 values less than (1000000), partition p1M values less than (2000000)) placement policy pp1;
ALTER TABLE t REORGANIZE PARTITION p1M INTO (PARTITION p1M values less than (2000000), partition p2M values less than (3000000));
During the DDL we would still want the original p1M to follow the table level rule, so we keep it in the bundle.
When the final bundles are created, it will not be included.
Since the table keeps its table id (for REORGANIZE PARTITION), there are no other bundles that cover the inherited table's placement rules for the original partition p1M, which means we need to cover it for the duration of the DDL, since it can still be accessed (and double written).
Another alternative would be to create partition level bundles for the old replaced partitions, and let them be removed by GC later, but I think that would qualify for a separate PR.
@xhebox PTAL
/retest
	return keys[i].start < keys[j].start
})
for i := 1; i < len(keys); i++ {
	if keys[i].start < keys[i-1].end && !keys[i].override {
If we have two groups, should the below case also report an error if ordered by group id first?
// when i==1 `keys[i].start < keys[i-1].end`
[0] {group_id: 1, override: false, rules: [(start: 0xF0, end: 0xF2)]}
[1] {group_id: 2, override: false, rules: [(start: 0x01, end: 0x02)]}
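A self-contained sketch of the sorted overlap check, run against the reviewer's cross-group example (keyRange and hasOverlap are simplified stand-ins, not the PR's code). Sorting by start key first makes the adjacent-pair comparison independent of group iteration order, so the case above does not report an error:

```go
package main

import (
	"fmt"
	"sort"
)

// keyRange is a hypothetical simplified version of the struct in the PR.
type keyRange struct {
	start, end string
}

// hasOverlap sorts ranges by start key and reports whether any two
// adjacent ranges overlap.
func hasOverlap(keys []keyRange) bool {
	sort.Slice(keys, func(i, j int) bool { return keys[i].start < keys[j].start })
	for i := 1; i < len(keys); i++ {
		if keys[i].start < keys[i-1].end {
			return true
		}
	}
	return false
}

func main() {
	// The cross-group example: after sorting, 0x01-0x02 precedes
	// 0xF0-0xF2 and no overlap is reported.
	fmt.Println(hasOverlap([]keyRange{{"F0", "F2"}, {"01", "02"}})) // false
	// A range nested inside another is reported.
	fmt.Println(hasOverlap([]keyRange{{"01", "0F"}, {"04", "05"}})) // true
}
```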
And for the below case, I think it should still report an error, but it seems it does not:
{
group_id: 1,
rules: [
{override: false, start: 0x01, end: 0x0F},
{override: true, start: 0x02, end: 0x03},
{override: false, start: 0x04, end: 0x05}
]
}
What problem does this PR solve?
Issue Number: close #55705
Problem Summary:
When updating the placement rule bundle, the table ID was included both as the table id and as a partition id.
What changed and how does it work?
Do not add the table's own ID as a partition id for a non-partitioned table (which uses the original table as a single partition, hence the same ID).
Also simplified the bundle handling during schema update / ApplyDiff, by removing logic that:
For REORGANIZE PARTITION, the bundles will be updated twice: first with the intermediate set of partitions (both old and new), so that data copying and index creation on the new partitions follow the correct policies directly, and then with the final partition set, avoiding having to move the data again when the DDL is done.
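The non-partitioned-table fix can be sketched as follows (bundleIDs is a hypothetical helper with illustrative names, not the PR's actual code):

```go
package main

import "fmt"

// bundleIDs collects the ids a placement bundle should cover.
// Before the fix, a non-partitioned table's own id was appended again
// as a "partition" id, producing a duplicate rule for the same range.
func bundleIDs(tableID int64, partitionIDs []int64) []int64 {
	ids := []int64{tableID}
	for _, pid := range partitionIDs {
		if pid != tableID { // skip the table itself for non-partitioned tables
			ids = append(ids, pid)
		}
	}
	return ids
}

func main() {
	// Non-partitioned table: the "partition" id equals the table id.
	fmt.Println(bundleIDs(111, []int64{111})) // [111]
	// Partitioned table: real partition ids are kept.
	fmt.Println(bundleIDs(111, []int64{112, 113})) // [111 112 113]
}
```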
Check List
Tests
Side effects
Documentation
Release note
Please refer to Release Notes Language Style Guide to write a quality release note.