
force_restart: true ignores any changed parameters in container task #816

Open · va1entin opened this issue Aug 14, 2024 · 7 comments · May be fixed by #820

Comments


va1entin commented Aug 14, 2024

Is this a BUG REPORT or FEATURE REQUEST? (leave only one on its own line)

/kind bug

Description

Hi team, I'm filing this as a bug but it could be just a docs bug if the behavior I'm describing is actually desired. When force_restart: true is set on a podman_container task and the container already exists, podman_container will ignore any parameters changed in the task.

For example, you could deploy an Ubuntu 22.04 container with force_restart set to true. The container then exists on the target host. If you then change the image to Ubuntu 24.04 and run the playbook again, the container will be restarted but the new image will be ignored. This applies at least to image and command, but looking at make_started I'd assume it applies to any parameter.
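A minimal sketch of that scenario (abridged from the full playbook at the end of this report):

# Run 1 deployed the container with image: docker.io/library/ubuntu:22.04.
# Run 2: only the image is bumped, and the playbook is re-run.
- name: Run Ubuntu container
  containers.podman.podman_container:
    name: "ubuntu-test"
    image: docker.io/library/ubuntu:24.04  # changed from 22.04; ignored on the second run
    force_restart: true
    state: started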

The return statement here means the function will always return before any parameter changes are applied to the container when restart/force_restart is true.

I also couldn't find anything in the docs describing this behavior, and if I recall correctly the docker container module doesn't behave this way either.

Personally, I don't think force_restart should mean "do only a restart and literally NOTHING else". If I need force_restart for some reason, that means I have to set it to false every time I want to change anything about my container, or create a second "restart only" task that contains just the container name, image and force_restart, plus a separate task with my actual container parameters. This seems very counterintuitive and might have unforeseen consequences that I don't see right now.
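To make that two-task workaround concrete, here is a sketch (task names are illustrative; container details abridged from my playbook below):

# "Restart only" task: just the name, image and force_restart
- name: Restart Ubuntu container
  containers.podman.podman_container:
    name: "ubuntu-test"
    image: docker.io/library/ubuntu:22.04
    force_restart: true
    state: started

# Separate task carrying the actual parameters, without force_restart,
# so that changed parameters are applied
- name: Configure Ubuntu container
  containers.podman.podman_container:
    name: "ubuntu-test"
    image: docker.io/library/ubuntu:22.04
    hostname: "ubuntu"
    volumes:
      - "/etc/localtime:/etc/localtime:ro"
    command: "sleep infinity"
    state: started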

My proposal is to remove the return statement linked above. That way, after the restart is done, the rest of make_started will still be executed and changed parameters will be reflected on the target host. There might be downsides to this particular approach that I don't see. For example, update_container_result() could be called twice in one run of make_started(); I think that could be a problem, or at the very least the restart information might be lost.

If you don't agree with my understanding of force_restart and do think that the default when it is true should be to ignore any changed parameters, one could also leverage the recreate parameter here: check whether it's true, and only then apply changed parameters when force_restart: true. Currently, even with recreate: true my container will not be recreated as long as force_restart is true, as far as I can tell.
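For reference, this is the combination I mean (abridged); with the current behavior the container is only restarted, not recreated:

- containers.podman.podman_container:
    name: "ubuntu-test"
    image: docker.io/library/ubuntu:24.04
    recreate: true        # one might expect this to force recreation
    force_restart: true   # but this triggers the early return, so only a restart happens
    state: started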

Happy to create a PR for this by the way - just wanted to discuss whether this behavior is desired and which approach would be best before I do. 😃

Steps to reproduce the issue:

  1. Deploy a container with force_restart: true

  2. Change a container parameter like image or command in the playbook and run it again

  3. The container will be restarted, but your changed parameter will not be reflected; the image/command (or whatever you changed) will not be updated to what you set in step 2 until you set force_restart to false and re-run the playbook

Describe the results you received:
Containers are restarted but updated parameters are not reflected when force_restart: true

Describe the results you expected:
Containers are restarted and updated parameters are reflected when force_restart: true

Additional information you deem important (e.g. issue happens only occasionally):

Version of the containers.podman collection:
Either git commit if installed from git: git show --summary
Or version from ansible-galaxy if installed from galaxy: ansible-galaxy collection list | grep containers.podman

1.15.4

Output of ansible --version:

ansible [core 2.17.2]
  config file = None
  configured module search path = ['/home/val/.ansible/plugins/modules', '/usr/share/ansible/plugins/modules']
  ansible python module location = /usr/lib/python3.12/site-packages/ansible
  ansible collection location = /home/val/.ansible/collections:/usr/share/ansible/collections
  executable location = /usr/bin/ansible
  python version = 3.12.4 (main, Jun  7 2024, 06:33:07) [GCC 14.1.1 20240522] (/usr/bin/python)
  jinja version = 3.1.4
  libyaml = True

Output of podman version:

Client:       Podman Engine
Version:      4.9.3
API Version:  4.9.3
Go Version:   go1.22.2
Built:        Thu Jan  1 01:00:00 1970
OS/Arch:      linux/amd64

Output of podman info --debug:

host:                                                                                                                                                                    
  arch: amd64                              
  buildahVersion: 1.33.7  
  cgroupControllers:      
  - cpu                                                                                                                                                                                     
  - memory                
  - pids                
  cgroupManager: systemd                                                                      
  cgroupVersion: v2             
  conmon:                  
    package: conmon_2.1.10+ds1-1build2_amd64
    path: /usr/bin/conmon                
    version: 'conmon version 2.1.10, commit: unknown'
  cpuUtilization:                
    idlePercent: 98.88             
    systemPercent: 0.84                                                                       
    userPercent: 0.28      
  cpus: 1                        
  databaseBackend: sqlite
  distribution:                 
    codename: noble                 
    distribution: ubuntu                                                                      
    version: "24.04"                                                                          
  eventLogger: journald                                                                       
  freeLocks: 2041                
  hostname: REDACTED
  idMappings:                                                                                 
    gidmap:    
    - container_id: 0
      host_id: 1000           
      size: 1                                                                                 
    - container_id: 1
      host_id: 100000      
      size: 65536                                                                                                                                                                           
    uidmap:                                                                                   
    - container_id: 0                                                                         
      host_id: 1000                                                                           
      size: 1                                                                                 
    - container_id: 1 
      host_id: 100000                                                                                                                                                     
      size: 65536                          
  kernel: 6.8.0-40-generic
  linkmode: dynamic       
  logDriver: journald                                                                                                                                                                       
  memFree: 7835648        
  memTotal: 430129152   
  networkBackend: netavark                                                                    
  networkBackendInfo:           
    backend: netavark      
    dns:      
      package: aardvark-dns_1.4.0-5_amd64
      path: /usr/lib/podman/aardvark-dns    
      version: aardvark-dns 1.4.0
    package: netavark_1.4.0-4_amd64
    path: /usr/lib/podman/netavark                                                            
    version: netavark 1.4.0
  ociRuntime:                    
    name: crun         
    package: crun_1.14.1-1_amd64
    path: /usr/bin/crun             
    version: |-                                                                               
      crun version 1.14.1                                                                     
      commit: de537a7965bfbe9992e2cfae0baeb56a08128171                                        
      rundir: /run/user/1000/crun
      spec: 1.0.0
      +SYSTEMD +SELINUX +APPARMOR +CAP +SECCOMP +EBPF +WASM:wasmedge +YAJL                    
  os: linux    
  pasta:             
    executable: /usr/bin/pasta
    package: passt_0.0~git20240220.1e6f92b-1_amd64                                            
    version: |  
      pasta unknown version
      Copyright Red Hat                                                                                                                                                                     
      GNU General Public License, version 2 or later                                          
        <https://www.gnu.org/licenses/old-licenses/gpl-2.0.html>                              
      This is free software: you are free to change and redistribute it.                      
      There is NO WARRANTY, to the extent permitted by law.                                   
  remoteSocket:
    exists: false                                                                                                                                       
    path: /run/user/1000/podman/podman.sock
  security:
    apparmorEnabled: false
    capabilities: CAP_CHOWN,CAP_DAC_OVERRIDE,CAP_FOWNER,CAP_FSETID,CAP_KILL,CAP_NET_BIND_SERVICE,CAP_SETFCAP,CAP_SETGID,CAP_SETPCAP,CAP_SETUID,CAP_SYS_CHROOT
    rootless: true        
    seccompEnabled: true
    seccompProfilePath: /usr/share/containers/seccomp.json
    selinuxEnabled: false       
  serviceIsRemote: false   
  slirp4netns:
    executable: /usr/bin/slirp4netns
    package: slirp4netns_1.2.1-1build2_amd64
    version: |-            
      slirp4netns version 1.2.1
      commit: 09e31e92fa3d2a1d3ca261adaeb012c8d75a8194
      libslirp: 4.7.0      
      SLIRP_CONFIG_VERSION_MAX: 4
      libseccomp: 2.5.5
  swapFree: 0
  swapTotal: 0                      
  uptime: 38h 52m 28.00s (Approximately 1.58 days)
  variant: ""                                                                                 
plugins:
  authorization: null
  log:    
  - k8s-file                         
  - none       
  - passthrough      
  - journald
  network:           
  - bridge      
  - macvlan
  - ipvlan                                                                                                                                                                                  
  volume:                  
  - local
registries: {}
store:
  configFile: /home/val/.config/containers/storage.conf
  containerStore:
    number: 3
    paused: 0
    running: 3
    stopped: 0
  graphDriverName: overlay
  graphOptions: {}
  graphRoot: /home/val/.local/share/containers/storage
  graphRootAllocated: 9283444736
  graphRootUsed: 4377153536
  graphStatus:
    Backing Filesystem: extfs
    Native Overlay Diff: "true"
    Supports d_type: "true"
    Supports shifting: "false"
    Supports volatile: "true"
    Using metacopy: "false"
  imageCopyTmpDir: /var/tmp
  imageStore:
    number: 5
  runRoot: /run/user/1000/containers
  transientStore: false
  volumePath: /home/val/.local/share/containers/storage/volumes
version:
  APIVersion: 4.9.3
  Built: 0
  BuiltTime: Thu Jan  1 01:00:00 1970
  GitCommit: ""
  GoVersion: go1.22.2
  Os: linux
  OsArch: linux/amd64
  Version: 4.9.3

Package info (e.g. output of rpm -q podman or apt list podman):

podman/noble-updates,noble-security,now 4.9.3+ds1-1ubuntu0.1 amd64 [installed]
podman/noble 4.9.3+ds1-1build2 amd64

Playbook you run with ansible (e.g. content of playbook.yaml):

---
- name: Ubuntu test
  hosts: "cloud"
  tasks:
    - name: Run Ubuntu container
      containers.podman.podman_container:
        name: "ubuntu-test"
        image: docker.io/library/ubuntu:22.04
        detach: true
        hostname: "ubuntu"
        pull: always
        recreate: true
        force_restart: true
        restart_policy: always
        volumes:
          - "/etc/localtime:/etc/localtime:ro"
        command: "sleep infinity"
        state: started

Command line and output of ansible run with high verbosity

Please NOTE: if you submit a bug about idempotency, run the playbook with --diff option, like:

ansible-playbook -i inventory --diff -vv playbook.yml

Omitted because I think the problem/cause is already identified in the make_started function linked above.

Additional environment details (AWS, VirtualBox, physical, etc.):

va1entin changed the title from "force_restart: true ignores any changed parameters in playbook" to "force_restart: true ignores any changed parameters in container task" on Aug 14, 2024
sshnaidm (Member) commented:

Hi, @va1entin
I think when you change something in the container definition it recreates the container from scratch, so force_restart doesn't make a lot of sense here. The container will be deleted and created with the new parameters. force_restart is for restarting existing, unchanged containers.
For example, you have a running nginx container and just want to restart it without re-specifying all of its original parameters:

- containers.podman.podman_container:
    name: nginx
    state: started
    force_restart: true

When you change the container definition, I don't see why you'd use the force_restart key; the container will just be recreated.

va1entin (Author) commented:

Hi @sshnaidm, thanks for your swift reply and your view on the matter!

When you change the container definition I don't see why to use force_restart key, it will be just recreated.

I think there is a use case for adding force_restart to the container definition itself rather than to a separate task as in your example, because in some cases that allows for a more concise playbook. You can have the container definition and the restart in one task rather than two separate tasks.

If the container definition doesn't change, you can rest assured that the container will be restarted without needing a separate task; and if the definition does change, the changes will be applied.

In fact, that is essentially my use case: I have two container definitions in my playbook, and usually only one of them changes at a time, e.g. because of version updates. With force_restart I can ensure that the other, unchanged container will still be restarted without needing a separate task.

It's a bit of a special use case but feels valid to me in the spirit of keeping playbooks DRY.
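As an illustration of that pattern (container names and images here are hypothetical), both definitions carry force_restart, so whichever one is unchanged on a given run still gets restarted:

- name: Run app container (changes regularly via version bumps)
  containers.podman.podman_container:
    name: app
    image: docker.io/example/app:2.0
    force_restart: true
    state: started

- name: Run helper container (rarely changes, but should restart every run)
  containers.podman.podman_container:
    name: helper
    image: docker.io/example/helper:1.0
    force_restart: true
    state: started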

sshnaidm (Member) commented:

@va1entin let me better understand your case. Do you restart the second container only if the first changed? In that case it's better to use a condition:

- name: First container
  podman_container:
    name: first
    ...
  register: first

- name: Second container
  podman_container:
    name: second
    ...

- name: Restart second if first has changed
  podman_container:
    name: second
    force_restart: true
  when: first is changed

I can't think of a use case where you need to restart the container every time you run the playbook. Do you have one?

va1entin (Author) commented:

@sshnaidm Currently, I don't restart the second container only if the first changed.

Restarting the container every time is not mandatory in my setup, but I like to do it to ensure that a change in container 1 doesn't disrupt the functionality of container 2, even when the latter is restarted. While it's hopefully rare, I have seen cases where only a restart of an app exposes a problem that wouldn't immediately appear during normal operation after something changed in a container the app relies on. Such problems might otherwise only manifest much later, when one wouldn't necessarily connect them to a container update done days or weeks earlier, depending on when the container is next restarted.

Apart from changes in the containers themselves, the restart could also expose new problems between host and containers that might only show up much later, which makes it useful regardless of whether the container definitions changed.

In essence, containers should survive restarts at any point, and services should not be disrupted (or only minimally) when they happen; otherwise that's an indication of a problem in my setup. Having every playbook run restart the container gives me an opportunity to bake that "restart test case" into my playbook very leanly.

sshnaidm (Member) commented:

I still don't fully understand your use case, but anyway: if we start requiring all settings to be in the container definition with force_restart, we'll lose the option to restart a container just by name. It might be solved by adding state: restarted, maybe; I'm not sure, as it would require some redesign of the current logic. I think we can have it as an RFE, though not for 1.x.x because it would be a breaking change.
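For illustration, this is what such a task could look like; note that state: restarted is only a hypothetical value here, not something the module currently supports:

# Hypothetical: restart an existing container by name only,
# without repeating its definition
- containers.podman.podman_container:
    name: nginx
    state: restarted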

va1entin (Author) commented Aug 19, 2024

Hi @sshnaidm, I have created a PR (#820) with an idea for how to fix this without requiring a new state or losing the ability to restart a container just by name.
Looking forward to your thoughts!


Crapshit commented Oct 9, 2024

We are also affected by this issue 😢
