docs: fixing some spelling issues
closes: rook#12987

Signed-off-by: Redouane Kachach <[email protected]>
(cherry picked from commit b3dd74e)
Signed-off-by: parth-gr <[email protected]>
rkachach authored and parth-gr committed Oct 5, 2023
1 parent 923fd5c commit 7c762e1
Showing 8 changed files with 15 additions and 9 deletions.
10 changes: 8 additions & 2 deletions .github/workflows/canary-integration-test.yml
@@ -173,7 +173,7 @@ jobs:
kubectl -n rook-ceph exec $toolbox -- ceph auth ls
# update the existing non-restricted client auth with the new ones
kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --upgrade
-# print ugraded client auth
+# print upgraded client auth
kubectl -n rook-ceph exec $toolbox -- ceph auth ls
- name: test the upgrade flag for restricted auth user
@@ -182,9 +182,15 @@
# print existing client auth
kubectl -n rook-ceph exec $toolbox -- ceph auth get client.csi-rbd-node-rookstorage-replicapool
# restricted auth user need to provide --rbd-data-pool-name,
-# --cluster-name and --run-as-user flag while upgrading
-kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
-# print ugraded client auth
+# --k8s-cluster-name and --run-as-user flag while upgrading
+kubectl -n rook-ceph exec $toolbox -- python3 /etc/ceph/create-external-cluster-resources.py --upgrade --rbd-data-pool-name replicapool --k8s-cluster-name rookstorage --run-as-user client.csi-rbd-node-rookstorage-replicapool
+# print upgraded client auth
kubectl -n rook-ceph exec $toolbox -- ceph auth get client.csi-rbd-node-rookstorage-replicapool

- name: validate-rgw-endpoint
@@ -1293,7 +1299,7 @@ jobs:
# snaps=$(kubectl -n rook-ceph exec deploy/rook-ceph-fs-mirror -- ceph --admin-daemon /var/run/ceph/$mirror_daemon fs mirror peer status myfs@1 $clusterfsid|jq -r '."/volumes/_nogroup/testsubvolume"."snaps_synced"')
# echo "snapshots: $snaps"
# if [ $num_snaps_target = $snaps ]
-# then echo "Snaphots have synced."
+# then echo "Snapshots have synced."
# else echo "Snaps have not synced. NEEDS INVESTIGATION"
# fi
@@ -189,7 +189,7 @@ kubectl create -f object-multisite-pull-realm.yaml
## Scaling a Multisite

Scaling the number of gateways that run the synchronization thread to 2 or more can increase the latency of the
-replication of each S3 object. The recommended way to scale a mutisite configuration is to dissociate the gateway dedicated
+replication of each S3 object. The recommended way to scale a multisite configuration is to dissociate the gateway dedicated
to the synchronization from gateways that serve clients.

The two types of gateways can be deployed by creating two CephObjectStores associated with the same CephObjectZone. The
2 changes: 1 addition & 1 deletion Documentation/Storage-Configuration/ceph-teardown.md
@@ -116,7 +116,7 @@ partprobe $DISK
```

Ceph can leave LVM and device mapper data that can lock the disks, preventing the disks from being
-used again. These steps can help to free up old Ceph disks for re-use. Note that this only needs to
+used again. These steps can help to free up old Ceph disks for reuse. Note that this only needs to
be run once on each node. If you have **only one** Rook cluster and **all** Ceph disks are
being wiped, run the following command.

2 changes: 1 addition & 1 deletion cmd/rook/rook/rook.go
@@ -228,7 +228,7 @@ func GetInternalOrExternalClient() kubernetes.Interface {
for _, kConf := range strings.Split(kubeconfig, ":") {
restConfig, err = clientcmd.BuildConfigFromFlags("", kConf)
if err == nil {
logger.Debugf("attmepting to create kube clientset from kube config file %q", kConf)
logger.Debugf("attempting to create kube clientset from kube config file %q", kConf)
clientset, err = kubernetes.NewForConfig(restConfig)
if err == nil {
logger.Infof("created kube client interface from kube config file %q present in KUBECONFIG environment variable", kConf)
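For context on the hunk above: GetInternalOrExternalClient walks the colon-separated paths in the KUBECONFIG environment variable and keeps the first one that yields a working clientset. Below is a minimal standalone sketch of that pattern, assuming client-go is available; it is an illustration, not Rook's actual function, and the error handling is simplified.

```go
package main

import (
	"fmt"
	"os"
	"strings"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// KUBECONFIG may list several config files separated by ':'; try each in turn.
	for _, kConf := range strings.Split(os.Getenv("KUBECONFIG"), ":") {
		restConfig, err := clientcmd.BuildConfigFromFlags("", kConf)
		if err != nil {
			continue // unreadable or malformed kubeconfig; try the next entry
		}
		if clientset, err := kubernetes.NewForConfig(restConfig); err == nil {
			fmt.Printf("created kube client from config file %q\n", kConf)
			_ = clientset // hand the clientset to the caller here
			return
		}
	}
	fmt.Fprintln(os.Stderr, "no usable kubeconfig entry found")
	os.Exit(1)
}
```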
2 changes: 1 addition & 1 deletion deploy/examples/create-external-cluster-resources.py
@@ -557,7 +557,7 @@ def _check_conflicting_options(self):
)

def _invalid_endpoint(self, endpoint_str):
-# seprating port, by getting last split of `:` delimiter
+# separating port, by getting last split of `:` delimiter
try:
endpoint_str_ip, port = endpoint_str.rsplit(":", 1)
except ValueError:
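The Python helper above isolates the port by splitting on the last `:`. In Go, the standard library's net.SplitHostPort performs the same last-colon split and also understands bracketed IPv6 literals, which a plain rsplit does not. A small illustrative sketch (the example endpoints are made up):

```go
package main

import (
	"fmt"
	"net"
)

func main() {
	// SplitHostPort separates host and port at the last ':' delimiter.
	for _, ep := range []string{"192.168.1.10:8080", "[::1]:6789", "missing-port"} {
		host, port, err := net.SplitHostPort(ep)
		if err != nil {
			fmt.Printf("%s -> invalid endpoint: %v\n", ep, err)
			continue
		}
		fmt.Printf("%s -> host=%s port=%s\n", ep, host, port)
	}
}
```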
2 changes: 1 addition & 1 deletion pkg/daemon/multus/validation.go
@@ -113,7 +113,7 @@ func (s *getExpectedNumberOfImagePullPodsState) Run(
}

/*
-* Re-usable state to verify that expected number of pods are "Running" but not necessarily "Ready"
+* Reusable state to verify that expected number of pods are "Running" but not necessarily "Ready"
* > Verify all image pull pods are running
* -- next state --> Delete image pull daemonset
* > Verify all client pods are running
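The comment above describes a state that only requires pods to be "Running", not "Ready". A hedged sketch of such a check with client-go follows; the helper name and signature are illustrative, not Rook's actual state implementation.

```go
package validation

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// allPodsRunning reports whether at least `want` pods matching the label
// selector have reached phase Running; readiness is deliberately not checked.
func allPodsRunning(ctx context.Context, cs kubernetes.Interface, ns, selector string, want int) (bool, error) {
	pods, err := cs.CoreV1().Pods(ns).List(ctx, metav1.ListOptions{LabelSelector: selector})
	if err != nil {
		return false, err
	}
	running := 0
	for _, p := range pods.Items {
		if p.Status.Phase == corev1.PodRunning {
			running++
		}
	}
	return running >= want, nil
}
```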
2 changes: 1 addition & 1 deletion pkg/operator/ceph/object/admin.go
@@ -302,7 +302,7 @@ func CommitConfigChanges(c *Context) error {
return errorOrIsNotFound(err, "failed to get the current RGW configuration period to see if it needs changed")
}

-// this stages the current config changees and returns what the new period config will look like
+// this stages the current config changes and returns what the new period config will look like
// without committing the changes
stagedPeriod, err := runAdminCommand(c, true, "period", "update")
if err != nil {
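The two-step flow the comment describes maps onto the radosgw-admin CLI: `period update` without `--commit` only stages the change and prints the would-be period, and committing is a separate, explicit step. A rough standalone sketch of that flow, assuming radosgw-admin is on PATH and configured; this is not Rook's internal runAdminCommand wrapper.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Stage the pending config changes; this prints the new period
	// configuration without committing it.
	staged, err := exec.Command("radosgw-admin", "period", "update").Output()
	if err != nil {
		log.Fatalf("failed to stage period update: %v", err)
	}
	fmt.Printf("staged period:\n%s\n", staged)

	// Only after comparing the staged period against the current one
	// would the change be committed, e.g.:
	// exec.Command("radosgw-admin", "period", "update", "--commit").Run()
}
```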
2 changes: 1 addition & 1 deletion tests/framework/clients/block.go
@@ -49,7 +49,7 @@ func CreateBlockOperation(k8shelp *utils.K8sHelper, manifests installer.CephMani
// BlockCreate Function to create a Block using Rook
// Input parameters -
// manifest - pod definition that creates a pvc in k8s - yaml should describe name and size of pvc being created
-// size - not user for k8s implementation since its descried on the pvc yaml definition
+// size - not user for k8s implementation since its described on the pvc yaml definition
// Output - k8s create pvc operation output and/or error
func (b *BlockOperation) Create(manifest string, size int) (string, error) {
args := []string{"apply", "-f", "-"}
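The args slice above feeds the PVC manifest to `kubectl apply -f -` over stdin. A minimal hedged sketch of that pattern outside the test framework, assuming kubectl is on PATH; the manifest name and size are illustrative only.

```go
package main

import (
	"fmt"
	"log"
	"os/exec"
	"strings"
)

func main() {
	manifest := `apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 1Gi
`
	// Pipe the manifest to kubectl over stdin, mirroring "apply -f -".
	cmd := exec.Command("kubectl", "apply", "-f", "-")
	cmd.Stdin = strings.NewReader(manifest)
	out, err := cmd.CombinedOutput()
	if err != nil {
		log.Fatalf("kubectl apply failed: %v\n%s", err, out)
	}
	fmt.Printf("%s", out)
}
```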
