A way of mounting a directory where files from the container overwrite host files with docker compose watch
#12510
Comments
AFAICT your last solution is close to a solution, you could rely on:

```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      - ./backend/vendor:/app/vendor
    develop:
      watch:
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor
            - app/vendor
            - vendor
        - action: sync+exec
          path: backend/composer.lock
          target: backend/composer.lock
          exec:
            command: composer install
```

Anyway, I'm a bit confused by the initial statement "This requires each developer to keep track of changes in Composer's lock files and run composer install (which installs dependencies into vendor) every time the lock file changes" - doesn't your IDE detect updates to the lock file and suggest running this command? This actually sounds like a local workflow automation issue (as source code is synced from the upstream repo), rather than a Compose issue.
Would love to try your solution. It does seem like it will work, but it would also mean that the cache from `RUN --mount=type=cache,target=/root/.composer/cache composer install` would no longer apply; it does seem like this could be extracted into a named volume and mounted instead.

The IDE does detect updates, and it does suggest running the install command. However:
And, most importantly, it still does not solve the issue of unifying the different Dockerfile "strategies" we use for local and production deployment. Unifying those would be great in and of itself, but it would also allow running additional commands during build on local environments as part of the build process, which we currently cannot do because there are no project files during a Docker build on local environments. They are only copied into the container with a volume, so we have to run these commands separately after the containers start.

I get where you're coming from. But
Description
Hey.
First of all, it seems there was already a similar issue, but it lacked the context to understand why it's important to have this implemented in some way, which is why I'm creating another issue, sorry: #11658
Our app
Our app is a PHP app and uses the Composer package manager, but all of this is also relevant for NodeJS apps. All packages are installed into the `vendor` directory, so the entire project structure looks something like this:

For development, each team member uses an IDE. The IDE uses the files in `backend/vendor/` to provide type information and auto-completion, and to show the sources of vendor packages whenever necessary. Moreover, since PHP is an interpreted language, we sometimes modify the files in `backend/vendor/` directly to assist with debugging. Of course, any changes in `backend/vendor/` are only ever made locally, during development, and with the full understanding that the changes are going to be gone when Composer re-installs dependencies.
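The project layout implied by the paths in this issue is presumably along these lines (a sketch; file and directory names other than `docker-compose.yml`, `backend/Dockerfile`, the Composer files, and `backend/vendor/` are assumptions):

```text
.
├── docker-compose.yml
└── backend/
    ├── Dockerfile
    ├── composer.json
    ├── composer.lock
    ├── src/          # application code (assumed name)
    └── vendor/       # dependencies installed by Composer
```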
docker-compose.yml
is only used for local development. Hence, it currently uses bind mounts to share the entire `backend` directory into the container:

This works, but requires each developer to keep track of changes in Composer's lock files and run `composer install` (which installs dependencies into `vendor`) every time the lock file changes, by running something like `docker compose run --rm -it app composer install` or `docker compose exec app composer install`. It works this way:

- the bind mount maps the host's `backend/vendor` into the container at `/app/vendor`;
- `composer install` inside the container writes the dependencies to `/app/vendor`;
- the bind mount propagates `/app/vendor` back to the host, to `backend/vendor`.
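The bind-mount setup described in this section might look roughly like this (a sketch; the service name and `build` settings are assumptions based on snippets elsewhere in the thread):

```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    volumes:
      # share the entire backend directory, including vendor/
      - ./backend:/app
```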
Dockerfile
Locally, no project files are copied into the container; the `Dockerfile` is just a base PHP image with some configuration.

On production, the app runs on AWS Fargate (which means there is no way to mount anything), so we pre-build our application into a Docker image, with all Composer dependencies and project files.
This is how it looks:
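A sketch of what such a production Dockerfile typically looks like (the base image, paths, and flags are assumptions; the cache-mount line matches the one quoted in the comments above):

```dockerfile
FROM php:8.3-fpm
# Composer binary copied from the official image (assumed approach)
COPY --from=composer:2 /usr/bin/composer /usr/bin/composer
WORKDIR /app
COPY backend/composer.json backend/composer.lock ./
# cache Composer downloads across builds
RUN --mount=type=cache,target=/root/.composer/cache composer install --no-dev
# copy the rest of the project files into the image
COPY backend/ ./
```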
docker compose watch
Now, there are several services like these in our project. Each requires developers to keep track of lock files and re-run package managers whenever they change. This is inconvenient and creates a lot of situations that could have been avoided. It also means our production build works in an entirely different way from our local builds.
This is where `docker compose watch` helps: not only would it allow us to use the same (production) Dockerfile for all environments, it would also eliminate all the unnecessary steps developers currently have to take. So let's say we modify the above `docker-compose.yml` to include the `watch` configuration and remove the volume:

This works, but now developers no longer have access to `backend/vendor` on the host, meaning the IDE has no idea what dependencies are installed, and neither do the developers. This is a problem.

Let's say we remove the `ignore: [backend/vendor/]` part. Still, `backend/vendor/` is not synced back to the host if it didn't exist there in the first place.

Okay, let's try adding the volume back, just for the `vendor` directory, and ignore it for watch:

Still broken. Now both the host and the container have an empty `vendor` folder.

Summary
We need a way of syncing the `backend/vendor` folder between the host and the container, but with the files built into the image always overwriting the host contents.
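For reference, the watch configuration attempted above (volume removed, `vendor` ignored for sync) would look roughly like this; the `develop.watch` syntax is real Compose configuration, while the service name and paths are reconstructed from the issue:

```yaml
services:
  app:
    build:
      context: ./
      dockerfile: ./backend/Dockerfile
    develop:
      watch:
        # sync source changes from the host into the running container,
        # but leave vendor/ entirely to the image build
        - action: sync
          path: ./backend
          target: /app
          ignore:
            - backend/vendor/
```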