Stopping containers during backup that are part of a stack replicates containers #17
Also curious about this, as I'm investigating whether this tool will also help solve my backup needs with Docker local named volumes in Swarm clusters.
@prologic I was able to change the restart_policy setting to "on-failure", which resolved this problem: https://docs.docker.com/compose/compose-file/#restart. I did encounter other problems with my service names not being re-registered with my Traefik proxy after the restart, but that's another issue.
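For reference, a minimal compose fragment matching the workaround above might look like this. Note that for containers deployed as part of a stack (Swarm mode), the restart policy lives under `deploy.restart_policy` rather than the top-level `restart:` key; the service name and image here are hypothetical:

```yaml
version: "3.8"

services:
  pgadmin:                      # hypothetical service name
    image: dpage/pgadmin4
    deploy:
      restart_policy:
        condition: on-failure   # restart only on failure, not when stopped for backup
```

With `condition: on-failure`, Swarm should not treat a deliberate stop (exit code 0) as a failure, so it won't spin up a replacement replica while the backup runs.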
That would be a bit of a blocker for me, as I also use Traefik as my ingress. Hmmm 🤔
Have to say I haven't thought about the interactions with orchestrators at all. So if you figure out elegant solutions, feel free to post them here, and I'll try to update the README accordingly.
Old thread, but still a relevant problem. All of my containers are deployed with […]; when I have my […], I tried setting […]. One work-around I found is to change […]. Has anyone figured out how to have the backup container successfully manage the startup of the stopped container instance when using […]? (Inline code spans were lost in this comment; gaps are marked with […].)
Yeah, using a fixed delay isn't great, but at least it seems to work. Can't say I have better ideas, sorry.
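If the fixed-delay approach mentioned above is expressed through Swarm's restart policy rather than an external sleep, a sketch might look like the following (the delay and attempt values are illustrative, not recommendations):

```yaml
deploy:
  restart_policy:
    condition: on-failure
    delay: 30s          # illustrative fixed delay between restart attempts
    max_attempts: 3     # illustrative cap on restart attempts
    window: 120s        # illustrative window for deciding a restart succeeded
```

This at least keeps the delay declared in the stack file instead of in ad-hoc scripts, though it still doesn't let the backup container itself control the restart timing.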
What's the recommended way to prevent containers that are part of a service, and that are stopped during backup, from restarting and effectively scaling up those services?
For example: the PGADMIN service is stopped during backup, the backup takes place, and after the backup completes I have two instances of the PGADMIN service running when I only require one.