added log to identify if it's a resource only backup #346
Conversation
OSS Scan Results: Total issues: 106
License Evaluation Results: Total License Issues: 19
@@ -171,6 +171,10 @@ func runMaintenance(maintenanceType string) error {
		repo.Name = genericBackupDir + "/"
		bucket = blob.PrefixedBucket(bucket, repo.Name)
		repoList, err = getRepoList(bucket)
		if len(repoList) == 0 {
			logrus.Warnf("Provider is non-nfs, No directory %v exists, verify if it is a resource only backup", repo.Name)
Here the repo name will be "genericBackupDir/". It would be better to add the backup location CR name in the debug statement.
The repo name and the backup location CR name are the same.
What this PR does / why we need it: It adds a log to identify whether it's a resource-only backup in the case of an S3 backup location.
Which issue(s) this PR fixes (optional)
Closes # PB-5823
Special notes for your reviewer:
RCA:
If we take a resource-only backup (without any PVCs), then we don't create a generic backup dir on the backup location.
TestMaintenanceJobForResourceOnlyBackup checks the maintenance pod log for this message to verify that it's a resource-only backup.
If we don't find any generic backup dir, then we throw the warning "No directory exists, verify if it is a resource only backup".
This log statement was covered in the NFS backup flow but not in the S3 backup location scenario.
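The detection logic described in the RCA can be sketched as follows. This is a minimal, self-contained illustration, not the actual maintenance code: `isResourceOnlyBackup` is a hypothetical helper name, and the sample listing stands in for the result of `getRepoList(bucket)`.

```go
package main

import "fmt"

// isResourceOnlyBackup is a hypothetical helper illustrating the check this
// PR adds: an empty repo list under the generic backup directory suggests the
// backup contained only Kubernetes resources and no volume (PVC) data.
func isResourceOnlyBackup(repoList []string) bool {
	return len(repoList) == 0
}

func main() {
	// Simulate an object-store listing that found no generic backup directory.
	var repoList []string

	if isResourceOnlyBackup(repoList) {
		// Mirrors the warning the maintenance job emits in this situation.
		fmt.Println("No directory exists, verify if it is a resource only backup")
	}
}
```

A test for this scenario would then grep the maintenance pod log for the warning text rather than inspecting the bucket directly.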
Screenshot of the quick maintenance pod log: