Add caching to gitlab-ci example #4
Our experience with trying to cache images in GitHub hasn't been very successful, largely because just saving them or extracting them takes more time than downloading them in the first place.
@rfay @ochorocho I was thinking of a possible solution. Is DDEV somehow capable of building and pushing a docker image to a registry? If that were possible, we could check whether the DDEV configuration has changed (maybe via a salt.txt file or similar). If the DDEV configuration did not change, we could just pull the image from the registry and run the ddev tests, roughly as sketched below. Or are there other ideas for avoiding the constant download of all docker layers within the container?
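A hypothetical sketch of that idea, not part of this repository: derive a tag from the `.ddev/` configuration and only rebuild and push when no image with that tag exists yet. The job name, the `ddev-base` image name, and `Dockerfile.ci` are placeholders, and DIND/TLS wiring variables are omitted for brevity.

```yaml
build-base-image:
  image: docker:24
  services:
    - docker:24-dind
  script:
    # The "salt.txt"-style check: hash every file under .ddev/ into one short tag.
    - CONFIG_HASH=$(find .ddev -type f -exec sha256sum {} + | sort | sha256sum | cut -c1-12)
    - IMAGE="$CI_REGISTRY_IMAGE/ddev-base:$CONFIG_HASH"
    - docker login -u "$CI_REGISTRY_USER" -p "$CI_REGISTRY_PASSWORD" "$CI_REGISTRY"
    # Only build and push when the configuration (and therefore the tag) changed.
    - |
      if docker pull "$IMAGE" > /dev/null 2>&1; then
        echo "DDEV configuration unchanged, reusing $IMAGE"
      else
        docker build -t "$IMAGE" -f Dockerfile.ci .
        docker push "$IMAGE"
      fi
```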
DDEV's docker images are all in the hub.docker.com registry. That's the problem here: we don't have a way to efficiently store the images locally. Downloading them doesn't take long, but unpacking/extracting does take a long time. So caching locally (which isn't hard, roughly the approach sketched below) still takes a long time, because the extraction into the local (fresh) docker instance takes time even if the download is fast.
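For reference, a rough illustration of that "cache locally" approach (docker save into the GitLab cache, docker load on the next run); the cache path and image selection are placeholders, DIND/TLS variables are omitted, and as noted above the load/extract step tends to eat whatever the cache saves.

```yaml
test:
  image: docker:24
  services:
    - docker:24-dind
  cache:
    key: ddev-images
    paths:
      - .image-cache/
  script:
    - mkdir -p .image-cache
    # Restore previously saved image tarballs, if any.
    - |
      for f in .image-cache/*.tar; do
        [ -e "$f" ] && docker load -i "$f" || true
      done
    - docker pull ddev/ddev-webserver
    # Save for the next pipeline run; extraction on load is the slow part.
    - docker save ddev/ddev-webserver -o .image-cache/ddev-webserver.tar
```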
But isn't DDEV building a new docker image at start? Locally I profit from the fact that the layers are cached. I understand that downloading is as fast as getting them from the cache, but I thought the resulting image from docker compose, which uses the DDEV docker images as a base, could be pushed to a registry and reused. Or is there another mechanism that could possibly speed up the process?

Instead of running Docker-in-Docker, we build the final DDEV docker image, push it to the GitLab registry, and in the next step run the tests inside this prebuilt DDEV image. Maybe this is also the wrong approach. Do you have another idea? We are using kaniko to build and push docker images, and I was imagining that this could be a way to save some resources in the pipeline (see the rough sketch below).
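A minimal sketch of that kaniko idea, assuming a `Dockerfile.ci` in the project and a hypothetical `ddev-ci` image name; both job names and the placeholder test command are illustrative only.

```yaml
build-ddev-image:
  stage: build
  image:
    name: gcr.io/kaniko-project/executor:debug
    entrypoint: [""]
  script:
    # Authenticate kaniko against the GitLab registry using the CI-provided credentials.
    - mkdir -p /kaniko/.docker
    - echo "{\"auths\":{\"$CI_REGISTRY\":{\"auth\":\"$(printf '%s:%s' "$CI_REGISTRY_USER" "$CI_REGISTRY_PASSWORD" | base64 | tr -d '\n')\"}}}" > /kaniko/.docker/config.json
    # Build the final image (DDEV images as base) and push it without Docker-in-Docker.
    - /kaniko/executor
      --context "$CI_PROJECT_DIR"
      --dockerfile "$CI_PROJECT_DIR/Dockerfile.ci"
      --destination "$CI_REGISTRY_IMAGE/ddev-ci:$CI_COMMIT_SHORT_SHA"

test-in-prebuilt-image:
  stage: test
  image: "$CI_REGISTRY_IMAGE/ddev-ci:$CI_COMMIT_SHORT_SHA"
  script:
    - echo "run the project's tests inside the prebuilt image here"
```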
DDEV adds a new layer to the image at start, but the key problem with testing is the (download and) extraction of the base image, especially ddev/ddev-webserver. The addition of the extra layers (for username, etc.) takes nearly no time; you can watch it yourself on a normal start.

IMO the general problem is figuring out how to have the actual docker server persist state, so that images are ready there when they're needed (a rough runner-side sketch follows).
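One hedged sketch of what "persisting docker server state" could look like, and only on a self-hosted runner: keep the DIND service's `/var/lib/docker` on a named volume so already-extracted images survive between jobs. The volume name `dind-layers` is a placeholder, and this is not possible on gitlab.com shared runners.

```toml
# config.toml fragment for a self-hosted GitLab runner (illustrative only)
[[runners]]
  executor = "docker"
  [runners.docker]
    privileged = true
    # Mount a persistent named volume into job and service containers,
    # so the docker:dind service reuses previously extracted image layers.
    volumes = ["/cache", "dind-layers:/var/lib/docker"]
```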
OK, I understand, you are absolutely right. Unfortunately I lack the knowledge of how to do that, and I'm not even sure it is technically possible. But I agree that if the base image were already available, the process would be very fast.
If the images are stored in ddev itself and not in the DinD service image, we could certainly extend the image to contain them. I'd like to keep this image small if possible.
You point to for some reason... but that was fixed ages ago so |
👍 my intention was to use |
Will be great to see how that comes out! |
Great idea! That makes things faster for sure!
Caching the downloaded docker images on a per-project basis would make sense to speed up builds.
Currently, all images are downloaded on each and every job run.