Ensure re-sync is triggered #773
Conversation
Force-pushed from 9b4c3bc to 60f4b27
Signed-off-by: David Cassany <[email protected]>
  patchBase := client.MergeFrom(ch.DeepCopy())
- ch.Spec.SyncInterval = "10m"
+ ch.Spec.SyncInterval = "10s"
Sets the sync interval to 10 seconds for this test.
  patchBase := client.MergeFrom(ch.DeepCopy())
- ch.Spec.SyncInterval = "10m"
+ ch.Spec.SyncInterval = "10s"
  Expect(cl.Patch(ctx, ch, patchBase)).To(Succeed())

  // Pod is created
A pod is created immediately after patching the channel, since a channel resource update triggers a new sync.
  // After channel update already existing versions were patched
  Expect(cl.Get(ctx, client.ObjectKey{
      Name:      "v0.1.0",
      Namespace: ch.Namespace,
  }, managedOSVersion)).To(Succeed())
  Expect(managedOSVersion.Spec.Version).To(Equal("v0.1.0-patched"))

  // Simulate another channel content change
  syncerProvider.SetJSON(deprecatingJSON)
This changes the channel content but not the channel resource, hence it does not trigger a new resync; we have to wait for the interval (10s).
  }

  if managedOSVersionChannel.Status.FailedSynchronizationAttempts > maxConscutiveFailures {
      logger.Error(fmt.Errorf("stop retrying"), "sychronization failed consecutively too many times", "failed attempts", managedOSVersionChannel.Status.FailedSynchronizationAttempts)
-     return ctrl.Result{}, nil
+     return ctrl.Result{RequeueAfter: time.Until(lastSync.Add(interval))}, nil
I think this was an actual bug or leftover
yep, nice fix!
@@ -187,12 +187,12 @@ func (r *ManagedOSVersionChannelReconciler) reconcile(ctx context.Context, manag

  if readyCondition.Status == metav1.ConditionTrue {
      logger.Info("synchronization already done", "lastSync", lastSync)
-     return ctrl.Result{}, nil
+     return ctrl.Result{RequeueAfter: time.Until(lastSync.Add(interval))}, nil
IMHO this shouldn't be needed, but it certainly does not hurt and helps make the logic more robust. The unit test verifying the automatic resync after the interval passes even without this.
Yeah, I was wondering if this could lead us to queue extra, unneeded reconcile loops, but it seems that in practice it never happens. Moreover, if we ever did reconcile once more, nothing bad could happen 👍🏼
@@ -525,5 +530,16 @@ func filterChannelEvents() predicate.Funcs {
      logger.V(log.DebugDepth).Info("Processing generic event", "Obj", e.Object.GetName())
      return true
  },
+ // Ignore pods creation
+ CreateFunc: func(e event.CreateEvent) bool {
This is to prevent reconciling again immediately after creating the pod resource. We should only re-reconcile on pod status changes.
well done, this was the extra reconcile loop we saw
Well done, seems the channel sync is in pretty good shape now!
(tested and checked on a test deployment)