Fix docker cp #20

Open · wants to merge 19 commits into base: master

Changes from 1 commit
110 changes: 110 additions & 0 deletions activemount.go
@@ -0,0 +1,110 @@
package main

import (
	"encoding/json"
	"errors"
	"fmt"
	"io"
	"os"

	"github.com/docker/go-plugins-helpers/volume"
)

type ActiveMount struct {
Owner:

You probably meant to call it activeMount (see point 2 of the next note, on exporting entities in go).
(note that unlike the structure, its field should be exported: otherwise json.Marshal/.Unmarshal won't be able to access it, as they are not part of this package)

Author (@andergnet, Jul 28, 2024):

It's my first real program in Go, I had no idea about that, I simply copied how VolumeInfo was defined. I have fixed it.
e55a292

	UsageCount int
}

func (d *DockerOnTop) Activate(request *volume.MountRequest, activemountsdir lockedFile) (bool, error) {
Owner:

A comment on both DockerOnTop.Activate and DockerOnTop.Deactivate:

  1. Their names probably don't quite reflect what they do. If I saw the following piece of code:

    var d DockerOnTop = ...
    d.Activate(...)

    I would most probably assume that d.Activate activates the d object (whatever that could mean) because that's what the name suggests. But what it actually does is activate a volume. If we had a proper volume object, it would make sense to move this method there and have volume.Activate, but as it's a method of DockerOnTop, its name should probably be ActivateVolume

  2. In Go, the case of the first letter (i.e., whether it's lowercase or uppercase) has a semantic effect: if an entity name is capitalized, it's exported (i.e., public - available from other packages), otherwise it's unexported (i.e., private, only available in this package). As I understand, .Activate and .Deactivate are both intended for internal use only (a caller external to DockerOnTop shall not call them directly). If that's the case, their names should be changed.

  3. If I'm wrong and you intended these functions to be exported, it's a Go convention to have a documentation string for every exported entity (see "Documenting Go Code" for details). You may notice that other exported methods of DockerOnTop don't have documentation, which is a flaw, but all of them correspond to Docker API calls, whereas the two new functions you are adding are something new.

  4. Finally, as you may notice in further comments, at some points it wasn't clear to me what these functions should do and, especially, what they should return. That's why, even if you didn't mean to export them and you'll rename them to .activateVolume and .deactivateVolume respectively, they would still benefit from a short docstring that explains what they do and what they return.
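Points 2 and 4 combined might look like the sketch below. This is a self-contained analog with a dummy type (dockerOnTop, activateVolume, and the map-based bookkeeping are all hypothetical stand-ins, not the PR's actual code), showing an unexported method with a doc comment that states what it does and what it returns.

```go
package main

import "fmt"

// dockerOnTop is a toy stand-in for the plugin's driver type.
type dockerOnTop struct{ active map[string]int }

// activateVolume is unexported (lowercase first letter), so it is only
// callable from within this package. By convention, the doc comment starts
// with the entity's name and explains what it does and what it returns.
//
// activateVolume registers one more user of the named volume and returns
// true when the caller must actually mount the filesystem (first user).
func (d *dockerOnTop) activateVolume(name string) bool {
	d.active[name]++
	return d.active[name] == 1
}

func main() {
	d := &dockerOnTop{active: map[string]int{}}
	fmt.Println(d.activateVolume("vol")) // true: first user, mount needed
	fmt.Println(d.activateVolume("vol")) // false: already mounted
}
```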

Author:

I hope this looks better now with the changes I have done:
Commit: b18043e

Author:

Also commit 3a67652 since I didn't properly rename deactivateVolume

	var activeMount ActiveMount
Owner:

It could be declared closer to its usage, making it easier to read the code

Author:

Fixed 7043a09

	var result bool
Owner:

Could we make this name more informative?

Author:

Since this was an issue also in driver.go (see comment), I have named it the same way everywhere: 1985b52


	_, readDirErr := activemountsdir.ReadDir(1) // Check if there are any files inside activemounts dir
	if readDirErr == nil {
		// There is something; no need to mount the filesystem again
		result = false
	} else if errors.Is(readDirErr, io.EOF) {
		// The directory is empty; mount the filesystem
		result = true
	} else {
		log.Errorf("Failed to list the activemounts directory: %v", readDirErr)
		return false, fmt.Errorf("failed to list activemounts/ %v", readDirErr)
Owner:

Suggested change:
-return false, fmt.Errorf("failed to list activemounts/ %v", readDirErr)
+return false, fmt.Errorf("failed to list activemounts/ : %w", readDirErr)

Author:

Done.

	}

	activemountFilePath := d.activemountsdir(request.Name) + request.ID
	file, err := os.Open(activemountFilePath)

	if err == nil {
		// The file can exist from a previous mount when doing a docker cp on an already mounted container, no need to mount the filesystem again
		payload, _ := io.ReadAll(file)
		json.Unmarshal(payload, &activeMount)
		file.Close()
Owner:

All three of these functions may return an error. In Go, you usually want to check the error value when one is returned and handle it somehow. In some cases it makes sense to behave in one way or another when there's an error; in other cases the best you can do is just log the error and/or return it from your function (so the caller has to deal with it). Finally, in some (pretty rare) cases you indeed want to ignore an error value returned by a function. In such cases it's better to do this explicitly, e.g., _ = file.Close() rather than file.Close(), and, perhaps, leave a comment explaining why it's ok to ignore an error here.

In your case:

  1. The call to json.Unmarshal is supposed to set the value of activeMount, used further in the code. If an error occurs, what is the value of activeMount? It's probably either unaffected (that is, remains the default value, as of the declaration in the beginning of the function), or the value is unspecified (can be invalidated) - I'm not sure, we should check out the docs. So, if there is an error, it should definitely be handled, and, likely, lead to the failure to mount this volume (because its internal filesystem state became invalid).
  2. The call to file.Close is much less important, and there isn't much we can do about it if an error occurs. But, firstly, problems with closing files may result in a file descriptor leak (although it's unlikely, as kernels try to release a descriptor early in the closing process), second, it may indicate an I/O problem, which a user would probably want to know about. So here I would prefer logging the error if it occurs, but continuing trying to activate the volume, unlike the previous case.
  3. Finally, io.ReadAll in this particular context may be one of those cases when you can legitimately ignore an error! On the one hand, logs, presumably, would be more informative if you checked it, on the other hand, if there was an error in your io.ReadAll, then an error is bound to occur during json.Unmarshal as well because if io.ReadAll either couldn't read anything or only read the file partially, the result won't be a valid JSON dictionary (and your object corresponds to a dictionary), so unmarshalling will fail. Yet, this is not too obvious, so, if you choose to ignore the error here, you should leave a comment to explain why.

Btw, you can see some examples of documented functions in volumeTreeManagement.go. Ironically, all of them are unexported

Author:

Fixed: f6634d9

	} else if os.IsNotExist(err) {
		// Default case, we need to create a new active mount, the filesystem needs to be mounted
		activeMount = ActiveMount{UsageCount: 0}
	} else {
		log.Errorf("Active mount file %s exists but cannot be read.", activemountFilePath)
		return false, fmt.Errorf("active mount file %s exists but cannot be read", activemountFilePath)
	}
Owner:

  1. That's too little information: we have an error err and could've told what actually happened. See the doc for fmt.Errorf, in particular, its %w verb.
  2. I can see you both log an error and return it. The messages are very similar but slightly different (in case and punctuation). Is that intended? As there are several places like this, would it make sense to only return an error here, and then log it from the caller?

Author:

I have applied the suggestions as they make good sense. As for whether the difference in case and punctuation was intended: initially both messages were the same, in human-readable format (capitalization and sentence-ending periods), but the linter complained about that format in the error message. That's how it ended up like this.

c57da77
f9aadb4


	activeMount.UsageCount++

	payload, _ := json.Marshal(activeMount)
Owner:

The error is ignored explicitly but it's not clear to me why. json.Marshal's behavior should be much more predictable than that of json.Unmarshal, so it's probably safe to ignore the error here, but, please, write a comment explaining why, relying on the documentation of json.Marshal

Author:

"it's not clear to me why. json.Marshal's behavior should be much more predictable than that of json.Unmarshal"

We have control over the Marshal-ed structure (activeMount), so we will not get surprises there. Unmarshal, on the other hand, deals with a file in the filesystem, and we could have a surprise if someone changes it.

I have added a comment to indicate why it is safe to ignore the error, which basically boils down to the fact that there is no room for error when converting a structure containing a single integer to JSON.
03f2054

	err = os.WriteFile(activemountFilePath, payload, 0o666)
Owner:

0o666? World can read and write? I don't think you intended that, 0o644 should be sufficient

Author:

Indeed, 0764601

	if err != nil {
		log.Errorf("Active mount file %s cannot be written.", activemountFilePath)
		return false, fmt.Errorf("active mount file %s cannot be written", activemountFilePath)
	}
Owner:

Same two points as for a similar error handling earlier


	return result, nil
}

func (d *DockerOnTop) Deactivate(request *volume.UnmountRequest, activemountsdir lockedFile) (bool, error) {

	dirEntries, readDirErr := activemountsdir.ReadDir(2) // Check if there is any _other_ container using the volume
	if errors.Is(readDirErr, io.EOF) {
		// If directory is empty, unmount overlay and clean up
		log.Errorf("There are no active mount files and one was expected. Unmounting.")
		return true, fmt.Errorf("there are no active mount files and one was expected. Unmounting")
Owner:

Same two points as for a similar error handling earlier


	} else if readDirErr != nil {
		log.Errorf("Failed to list the activemounts directory: %v", readDirErr)
		return false, fmt.Errorf("failed to list activemounts/ %v", readDirErr)
Owner:

Suggested change:
-return false, fmt.Errorf("failed to list activemounts/ %v", readDirErr)
+return false, fmt.Errorf("failed to list activemounts/ : %w", readDirErr)

Author:

Done

	}

	var activeMount ActiveMount
	activemountFilePath := d.activemountsdir(request.Name) + request.ID

	otherVolumesPresent := len(dirEntries) > 1 || dirEntries[0].Name() != request.ID

	file, err := os.Open(activemountFilePath)
Owner:

Unless there is a reason for these lines in this order, I'd reorder them as follows:

Suggested change:
 var activeMount ActiveMount
-activemountFilePath := d.activemountsdir(request.Name) + request.ID
 otherVolumesPresent := len(dirEntries) > 1 || dirEntries[0].Name() != request.ID
+activemountFilePath := d.activemountsdir(request.Name) + request.ID
 file, err := os.Open(activemountFilePath)

Author:

Fixed: c0d28c0


	if err == nil {
		payload, _ := io.ReadAll(file)
		json.Unmarshal(payload, &activeMount)
		file.Close()
Owner:

Same error handling note as in Activate

Author:

Fixed: f6634d9

	} else if os.IsNotExist(err) {
		log.Errorf("The active mount file %s was expected but is not there", activemountFilePath)
		return !otherVolumesPresent, fmt.Errorf("the active mount file %s was expected but is not there", activemountFilePath)
	} else {
		log.Errorf("The active mount file %s could not be opened", activemountFilePath)
		return false, fmt.Errorf("the active mount file %s could not be opened", activemountFilePath)
	}
Owner:

Same two points as for a similar error handling earlier



	activeMount.UsageCount--

	if activeMount.UsageCount == 0 {
		err := os.Remove(activemountFilePath)
		if err != nil {
			log.Errorf("The active mount file %s could not be deleted", activemountFilePath)
			return false, fmt.Errorf("the active mount file %s could not be deleted", activemountFilePath)
Owner:

Same two points as for a similar error handling earlier.

Plus! The behavior here is significantly different from that of the previous docker-on-top. As I can see, this branch handles the case when we are supposed to actually unmount the volume and remove the mount file but fail to remove the file.

One thing you've changed is the order: you first try to remove the file and then, on success, attempt to unmount the overlay (unlike the previous code, which attempted it the other way around). And that's for the better! In the old version, if an error occurred during the file deletion, this would bring the volume into a very bad state (the overlay is not actually mounted but it looks like it is, so other containers will try to use it straight away - expect horrible bugs); with this change, if an error occurs during the file deletion, the volume is in a merely bad state (the volume looks like it's used, although it's not, but it's at least properly mounted).

The other thing you've changed is the logging level: it used to be log.Criticalf, you changed it to log.Errorf. If you think of it, that probably makes sense too, given the above.

So, actually, no problem that I first thought of when starting the "Plus!" part. Good job! But I would still add something to this error message, something like "this volume will never get unmounted until a reboot"

Author:

I'm not really convinced by the new text either; if you have a suggestion, please do not hesitate to propose it.
Now that the log messages are handled in driver.go, I would keep all of them as Errorf and not Criticalf, for the sake of simplicity and because I do not consider the errors more severe.
e441320

		}
		return !otherVolumesPresent, nil
	} else {
		payload, _ := json.Marshal(activeMount)
Owner:

Same note as for the previous json.Marshal


		err = os.WriteFile(activemountFilePath, payload, 0o666)
Owner:

Permission: presumably, you intended 0o644?

Author:

You give me too much credit 😉, I probably copy-pasted that from somewhere else and didn't give it a second thought. Thankfully revisions exist.
0764601

		if err != nil {
			log.Errorf("The active mount file %s could not be updated", activemountFilePath)
			return false, fmt.Errorf("the active mount file %s could not be updated", activemountFilePath)
Owner:

Same two points as for a similar error handling earlier


		}
		return false, nil
	}
}
84 changes: 12 additions & 72 deletions driver.go
@@ -3,7 +3,6 @@ package main
import (
	"errors"
	"fmt"
	"io"
	"os"
	"regexp"
	"strings"
@@ -184,10 +183,10 @@ func (d *DockerOnTop) Mount(request *volume.MountRequest) (*volume.MountResponse
}
defer activemountsdir.Close() // There is nothing I could do about the error (logging is performed inside `Close()` anyway)

_, readDirErr := activemountsdir.ReadDir(1) // Check if there are any files inside activemounts dir
if errors.Is(readDirErr, io.EOF) {
// No files => no other containers are using the volume. Need to mount the overlay

mount, err := d.Activate(request, activemountsdir)
Owner:

It's not clear to me what kind of value the boolean variable mount holds here. Is there a better name for this?

Is it an instruction for us returned by d.Activate, i.e., pleaseMount/doMount?

Author:

Changed to doMountFs, also to differentiate between the two meanings of "Mount".
1985b52

if err != nil {
return nil, internalError("failed to activate the active mount:", err)
} else if mount {
lowerdir := thisVol.BaseDirPath
upperdir := d.upperdir(request.Name)
workdir := d.workdir(request.Name)
Owner (@kolayne, Jul 27, 2024):

Read this comment first.


Here you, unfortunately, did kind of the opposite (similarly, you reversed the original order, but it has the inverse effect in this case).

Combining the logic inside d.Activate and the code below this comment up to and including the syscall.Mount (that's the code you did not modify), the overall (new) logic is as follows:

  • Check if the volume is already mounted based on activemount files. If it's not:
  • Create a new activemount file to indicate that I am the user of the volume [may fail]
  • Perform the actual mount of the overlay [may fail]

Now, imagine the file creation step succeeds but the next one fails. For that container, of course, we'll report the error and it will refuse to start, but the volume ends up in a very bad state. If another container tries to use that same volume, it will perform the same steps above, but in the first one it will find an activemount file from the previous container and will assume that the volume is already mounted, while it in fact is not!!

The reverse order is not very good either but slightly better: if the first container manages to mount the volume but doesn't manage to mark it as mounted, it, likewise, won't let the current container start (so that overlay mount remains unused), then, when the next container attempts a mount, it will run the same steps and, in the trickiest case, will just mount the same overlay on top of the old one and start using it (which is weird because there are two overlays mounted simultaneously using the same internal directories, but should be ok, as the first overlay is never used, being shadowed by the new one).


So,

  1. First possible improvement for your code: your implementation should at least emit a scary screaming error!
    (see examples from the old code - errors used in very similar situations: 1, 2)
  2. Second, it would be much better if you could fix things such that the order of the system calls (mount/unmount call and activemount file creation/deletion call) is old (as in my code) for the Mount operation but is new (as in your code) for the Unmount operation.

In fact, I now see how we could improve the Mount part, so that we never even attempt to mount overlay on top of anything old. Of course, this is out of scope of this PR, so I'll create a separate issue (but it still requires that you restore the old order here in Mount)

Author:

I have improved the way the Mount function works: I keep the new order, but if the filesystem mount fails, it undoes the activemount creation, so the function returns in a consistent state: filesystem not mounted and no activemount present.

01de48d

@@ -210,48 +209,9 @@ func (d *DockerOnTop) Mount(request *volume.MountRequest) (*volume.MountResponse
log.Errorf("Failed to mount overlay for volume %s: %v", request.Name, err)
return nil, internalError("failed to mount overlay", err)
}

log.Debugf("Mounted volume %s at %s", request.Name, mountpoint)
} else if err == nil {
log.Debugf("Volume %s is already mounted for some other container. Indicating success without remounting",
request.Name)
} else {
log.Errorf("Failed to list the activemounts directory: %v", err)
return nil, internalError("failed to list activemounts/", err)
}

activemountFilePath := d.activemountsdir(request.Name) + request.ID
f, err := os.Create(activemountFilePath)
if err == nil {
// We don't care about the file's contents
_ = f.Close()
} else {
if os.IsExist(err) {
// Super weird. I can't imagine why this would happen.
log.Warningf("Active mount %s already exists (but it shouldn't...)", activemountFilePath)
} else {
// A really bad situation!
// We successfully mounted (`syscall.Mount`) the volume but failed to put information about the container
// using the volume. In the worst case (if we just created the volume) the following happens:
// Using the plugin, it is now impossible to unmount the volume (this container is not created, so there's
// no one to trigger `.Unmount()`) and impossible to remove (the directory mountpoint/ is a mountpoint, so
// attempting to remove it will fail with `syscall.EBUSY`).
// It is possible to mount the volume again: a new overlay will be mounted, shadowing the previous one.
// The new overlay will be possible to unmount but, as the old overlay remains, the Unmount method won't
// succeed because the attempt to remove mountpoint/ will result in `syscall.EBUSY`.
//
// Thus, a human interaction is required.
//
// (if it's not us who actually mounted the overlay, then the situation isn't too bad: no new container is
// started, the error is reported to the end user).
log.Criticalf("Failed to create active mount file: %v. If no other container was currently "+
"using the volume, this volume's state is now invalid. A human interaction or a reboot is required",
err)
return nil, fmt.Errorf("docker-on-top internal error: failed to create an active mount file: %w. "+
"The volume is now locked. Make sure that no other container is using the volume, then run "+
"`unmount %s` to unlock it. Human interaction is required. Please, report this bug",
err, mountpoint)
}
log.Debugf("Volume %s already mounted at %s", request.Name, mountpoint)
}

return &response, nil
@@ -273,40 +233,20 @@ func (d *DockerOnTop) Unmount(request *volume.UnmountRequest) error {
}
defer activemountsdir.Close() // There's nothing I could do about the error if it occurs

dirEntries, readDirErr := activemountsdir.ReadDir(2) // Check if there is any _other_ container using the volume
if len(dirEntries) == 1 || errors.Is(readDirErr, io.EOF) {
// If just one entry or directory is empty, unmount overlay and clean up

unmount, err := d.Deactivate(request, activemountsdir)
if err != nil {
return internalError("failed to deactivate the active mount:", err)
} else if unmount {
err = syscall.Unmount(d.mountpointdir(request.Name), 0)
if err != nil {
log.Errorf("Failed to unmount %s: %v", d.mountpointdir(request.Name), err)
return err
}

err = d.volumeTreePostUnmount(request.Name)
// Don't return yet. The above error will be returned later
} else if readDirErr == nil {
log.Debugf("Volume %s is still mounted in some other container. Indicating success without unmounting",
request.Name)
} else {
log.Errorf("Failed to list the activemounts directory: %v", err)
return internalError("failed to list activemounts/", err)
}

activemountFilePath := d.activemountsdir(request.Name) + request.ID
err2 := os.Remove(activemountFilePath)
if os.IsNotExist(err2) {
log.Warningf("Failed to remove %s because it does not exist (but it should...)", activemountFilePath)
} else if err2 != nil {
// Another pretty bad situation. Even though we are no longer using the volume, it is seemingly in use by us
// because we failed to remove the file corresponding to this container.
log.Criticalf("Failed to remove the active mount file: %v. The volume is now considered used by a container "+
"that no longer exists", err)
// The user most likely won't see this error message due to daemon not showing unmount errors to the
// `docker run` clients :((
return fmt.Errorf("docker-on-top internal error: failed to remove the active mount file: %w. The volume is "+
"now considered used by a container that no longer exists. Human interaction is required: remove the file "+
"manually to fix the problem", err)
log.Debugf("Unmounted volume %s", request.Name)
} else {
log.Debugf("Volume %s is still mounted. Indicating success without unmounting", request.Name)
Owner:

I don't agree with the error message change here. Here, you're seemingly saying "still mounted so I'm not unmounting it", which doesn't make sense. That's because of the ambiguity between "mounted" as in "a container is running where this volume is used" and "mounted" as in "the underlying overlay filesystem is present". IMO, the previous error message was clearer:

log.Debugf("Volume %s is still mounted in some other container. Indicating success without unmounting", request.Name)

Maybe could be phrased even better, e.g., Volume %s is still used in another container. Indicating success without unmounting

Author:

I like the one you proposed; it's clearer about what's happening:
197c6fe

}

// Report an error during cleanup, if any