
A way of mounting a directory where files from the container overwrite host files with docker compose watch #12510

oprypkhantc opened this issue Jan 29, 2025 · 2 comments

Comments

@oprypkhantc

oprypkhantc commented Jan 29, 2025

Description

Hey.

First of all, there already was a similar issue (#11658), but it lacked the context to understand why it's important to have this implemented in some way, which is why I'm creating another one. Sorry!

Our app

Our app is a PHP app that uses the Composer package manager, but all of this is equally relevant for NodeJS apps. All packages are installed into the /vendor directory, so the entire project structure looks something like this:

.
├── backend/
│   ├── src/
│   │   └── SourceFile.php
│   ├── vendor/
│   │   └── google/
│   │       └── api-client/
│   │           └── GoogleFile.php
│   ├── Dockerfile
│   ├── composer.json
│   └── composer.lock
└── docker-compose.yml

For development, each team member uses an IDE. The IDE uses the files in backend/vendor/ to provide type information and auto-completion, and to show the sources of vendor packages whenever necessary. Moreover, since PHP is an interpreted language, we sometimes modify the files in backend/vendor/ directly to assist with debugging. Of course, any changes in backend/vendor/ are only ever made locally, during development, with full understanding that they will be gone when Composer re-installs dependencies.

docker-compose.yml

docker-compose.yml is only used for local development. Hence, it currently uses a bind mount to share the entire backend directory into the container:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
      args:
        - CONTAINER_ENV=local
    volumes:
      - ./backend:/app
    deploy:
      replicas: 2

This works, but it requires each developer to keep track of changes to Composer's lock file and run composer install (which installs dependencies into vendor) every time the lock file changes, by running something like docker compose run --rm -it app composer install or docker compose exec app composer install. It works this way (concrete command shown after the list):

  1. old dependencies are bind-mounted from ./backend/vendor on the host to /app/vendor in the container
  2. Composer downloads and modifies dependencies in /app/vendor
  3. the bind mount syncs the changes from /app/vendor back to ./backend/vendor on the host
  4. the other running containers (replicas) see the changes on the host and pick them up too
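
For clarity, the manual step above as a single command:

# run after backend/composer.lock changes; step 3 above then propagates
# the resulting /app/vendor changes back to ./backend/vendor on the host
docker compose exec app composer install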

Dockerfile

Locally, no project files are copied into the container; the Dockerfile is just a base PHP image with some configuration.

In production, the app runs on AWS Fargate (which means there is no way to mount anything), so we pre-build our application into a Docker image with all Composer dependencies and project files.

This is how it looks:

ARG CONTAINER_ENV

FROM php:8.2.27-fpm-alpine3.21 AS base

COPY --from=composer:2.7.4 /usr/bin/composer /usr/local/bin/composer

WORKDIR /app


FROM base AS base-local

# Nothing here


FROM base AS base-production

COPY backend/composer.json /app/composer.json
COPY backend/composer.lock /app/composer.lock
RUN composer install

COPY backend/src /app/src


FROM base-${CONTAINER_ENV}

EXPOSE 22 80
CMD tail -f /dev/null

docker compose watch

Now, there are several services like this in our project. Each requires developers to keep track of lock files and re-run package managers whenever they change. This is inconvenient and creates a lot of situations that could have been avoided. It also means our production build works in an entirely different way from our local builds.

This is where docker compose watch helps - not only would it allow us to use the same (production) Dockerfile for all environments, but it would also eliminate the unnecessary manual steps developers currently have to take. So let's say we modify the above docker-compose.yml to include the watch configuration and remove the volume:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    deploy:
      replicas: 2
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor/
        - action: rebuild
          path: backend/composer.lock

This works, but now developers no longer have access to backend/vendor on the host, meaning the IDE has no idea what dependencies are installed, and neither do developers. This is a problem.

Let's say we remove the ignore: [backend/vendor/] part. Still, backend/vendor/ is not synced back to the host if it didn't exist there in the first place.

Okay, let's try adding the volume back, just for the vendor directory, and ignore it for watch:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: rebuild
          path: backend/composer.lock

Still broken. Now both the host and container have an empty vendor folder.

Summary

We need a way of syncing the backend/vendor folder between the host and the container, but with the files built into the image always overwriting the host contents.
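
For illustration only, the ask could look something like this; the action name is completely made up and no such syntax exists in Compose today:

services:
  app:
    develop:
      watch:
        # hypothetical "sync_back" action (made-up name, for illustration):
        # after each (re)build, copy /app/vendor from the image back to the
        # host, overwriting ./backend/vendor
        - action: sync_back
          path: /app/vendor
          target: ./backend/vendor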

@ndeloof
Contributor

ndeloof commented Jan 30, 2025

AFAICT your last attempt is close to a solution; you could rely on the sync+exec watch action:

services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: sync+exec
          path: backend/composer.lock
          target: /app/composer.lock
          exec:
            command: composer install

Anyway, I'm a bit confused by the initial statement "This requires each developer to keep track of changes in Composer's lock files and run composer install (which installs dependencies into vendor) every time the lock file changes" - doesn't your IDE detect updates to the lock file and suggest running this command? This actually sounds like a local workflow automation issue (source code being synced from the upstream repo) rather than a Compose issue.

@oprypkhantc
Author

oprypkhantc commented Jan 30, 2025

I would love to try your solution, but it seems that sync+exec is only available from docker compose v2.32 onwards, and the latest docker compose shipped with Docker for Mac is currently Docker Compose version v2.31.0-desktop.2.

It does seem like it would work, but it would also mean that composer install runs from scratch, without the Dockerfile build-time cache, every time a container comes up. I was looking for more of a native solution: with action: rebuild we could use a Dockerfile like this:

RUN --mount=type=cache,target=/root/.composer/cache composer install

That cache could probably be extracted into a named volume and mounted in docker-compose.yml, but it's still a bit more complicated than I would prefer :) Hope you get where I'm coming from.
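
For completeness, a sketch of that named-volume variant (the volume name is illustrative; this persists Composer's download cache between installs, though it's still not the BuildKit build-time cache):

services:
  app:
    volumes:
      - ./backend/vendor:/app/vendor
      - composer-cache:/root/.composer/cache  # persists Composer's download cache across runs

volumes:
  composer-cache: # illustrative name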

IDE

The IDE does detect updates, and it does suggest running the install command. However:

  • not all developers see or pay attention to those notifications, especially AQAs
  • you still have to click them manually and wait, instead of just having docker compose watch running somewhere and doing 99% of the work
  • there are currently three "sub-projects" (i.e. services that are all part of a single project, stored in a monorepo), and all three are updated quite frequently, which makes it 3x more likely that someone will miss a notification or forget about it. Developers currently have a script they run when switching branches or pulling changes from the remote, but it's still not ideal

And, most importantly, it still does not solve the issue of unifying the different Dockerfile "strategies" we use for local and production deployments. Unifying those would be great in and of itself, but it would also allow running additional commands during the build on local environments, which we currently cannot do because no project files are present during a local Docker build. They are only copied into the container via a volume, so we also have to run those commands separately, after the containers start.

I get where you're coming from. But docker compose watch seems like a perfect solution that would eliminate both the separate Docker build process for local environments and any "manual" part of the developer experience. It would be seamless and wouldn't require any additional scripts or SDLC steps :)
