`vagrant reload` on OpenShift ADB is too slow #467

Comments
@containscafeine We have a similar issue, projectatomic/adb-utils#51, where we discussed why we should not do a local check.
@praveenkumar ah, got the point. Oh wait, also, if the image is updated with the same tag, the image ID changes (IIRC, and if not, at least a layer ID will), so how about adding a check for that by querying the Docker Hub API? That should be faster than the `docker pull` command.
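For illustration, a rough sketch of what such a remote-digest check could look like against Docker Hub's public registry (v2) API. The anonymous-token flow and the image reference are assumptions based on how Docker Hub works generally, not an existing ADB script:

```sh
#!/bin/bash
# Sketch: compare the remote manifest digest on Docker Hub with the digest of
# the locally pulled image, so `docker pull` only runs when the tag has moved.
IMAGE="openshift/origin"
TAG="v1.2.0"

# Get an anonymous pull token for the repository.
TOKEN=$(curl -s "https://auth.docker.io/token?service=registry.docker.io&scope=repository:${IMAGE}:pull" \
  | sed -e 's/.*"token":"\([^"]*\)".*/\1/')

# Ask the registry for the manifest digest only (HEAD request, no layers).
REMOTE_DIGEST=$(curl -sI \
  -H "Authorization: Bearer ${TOKEN}" \
  -H "Accept: application/vnd.docker.distribution.manifest.v2+json" \
  "https://registry-1.docker.io/v2/${IMAGE}/manifests/${TAG}" \
  | awk 'tolower($1) == "docker-content-digest:" {print $2}' | tr -d '\r')

# Digest of the locally stored image, if any. Note: depending on the Docker
# build, the local name may or may not carry the docker.io/ prefix.
LOCAL_DIGEST=$(docker images --digests --format '{{.Digest}}' "${IMAGE}:${TAG}")

if [ "${REMOTE_DIGEST}" != "${LOCAL_DIGEST}" ]; then
  docker pull "docker.io/${IMAGE}:${TAG}"
fi
```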
@containscafeine Just to understand your workflow: is there any specific reason you do `vagrant reload` so often?
@praveenkumar Or, maybe remove the `docker pull` step from the provisioning script?
This is something we need on the provision side. If you want to avoid provisioning, then better to reload with `vagrant reload --no-provision`.
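For anyone landing here, that workaround looks like this:

```sh
# Restart the VM and re-read the Vagrantfile, but skip all provisioners
# (and therefore the docker pull round trips):
vagrant reload --no-provision
```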
We thought about using skopeo, which uses the Docker API, but that also takes around the same time.
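For context, the skopeo approach mentioned here would be along these lines (the image reference is just the one from this issue):

```sh
# Query the remote registry for the image's metadata, including its digest,
# without downloading any layers:
skopeo inspect docker://docker.io/openshift/origin:v1.2.0
```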
This option was added later because we had a bug in the past.
@LalatenduMohanty Multiple reasons. Maybe I make changes to the Vagrantfile, like changing the synced folders for the code I am mounting inside the VM, or changing the memory allocated to the VM depending on the application I am deploying, and more. All of these things happen frequently when I'm developing, hence lots of `vagrant reload`s.
@praveenkumar Yep, that is one option, but it is different from my usual workflow with Vagrant, since I generally tend to just run `vagrant reload`. Should this be documented under something like 'best practices' or 'recommendations' for working with ADB?
Obviously ...
@praveenkumar could you point me to the bug, or maybe give a pointer on why it is required to provision on every reload? TIA!
@praveenkumar Yes, the provisioning is required the first time, but for taking care of the same tag being overwritten by another image, could VSM not be of help? Something like ...
@containscafeine #347
@containscafeine Do you have any more queries around this, or should I close this issue?
@praveenkumar maybe you missed #467 (comment)
@containscafeine I read it, but I think let's not overload VSM with all those hacks; if we really get some more requests, then we might rethink this issue. Hope the workaround (`vagrant reload --no-provision`) is working for you.
@praveenkumar yep, thanks, closing! :)
Hi,
Every time I do a `vagrant reload`, the whole provisioning step runs again:

```
...
...
default: Running: inline script
==> default: Downloading OpenShift docker images
==> default: docker pull docker.io/openshift/origin:v1.2.0
==> default: docker pull docker.io/openshift/origin-haproxy-router:v1.2.0
==> default: docker pull docker.io/openshift/origin-deployer:v1.2.0
==> default: docker pull docker.io/openshift/origin-docker-registry:v1.2.0
==> default: docker pull docker.io/openshift/origin-sti-builder:v1.2.0
==> default: Running provisioner: shell...
default: Running: inline script
==> default: You can now access OpenShift console on: https://10.1.2.2:8443/console
==> default: Configured basic user: openshift-dev, Password: devel
==> default: Configured cluster admin user: admin, Password: admin
==> default:
==> default: To use OpenShift CLI, run:
==> default: $ vagrant ssh
==> default: $ oc login
==> default:
==> default: To browse the OpenShift API documentation, follow this link:
==> default: http://openshift3swagger-claytondev.rhcloud.com
==> default: Then enter this URL:
==> default: https://10.1.2.2:8443/swaggerapi/oapi/v1
==> default: .
```
The images are not pulled again because I already have them in my box, but the whole `docker pull` round trip (going over the network, receiving the response that the image already exists, and moving on to the next image) is repeated about 5 times and adds a lot of delay to what would otherwise be a very quick reload.

Is it possible to modify the provisioning scripts so that they check whether the container images already exist locally (via `docker images -q`, maybe?) and make a decision based on that, instead of doing a `docker pull` each time?
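A minimal sketch of such a check, assuming the image list from the output above (this illustrates the idea; it is not the actual ADB provisioning script):

```sh
#!/bin/bash
# Only pull images that are not already present in the local Docker store.
IMAGES="
docker.io/openshift/origin:v1.2.0
docker.io/openshift/origin-haproxy-router:v1.2.0
docker.io/openshift/origin-deployer:v1.2.0
docker.io/openshift/origin-docker-registry:v1.2.0
docker.io/openshift/origin-sti-builder:v1.2.0
"

for image in ${IMAGES}; do
  # `docker images -q <repo:tag>` prints the image ID, or nothing if the
  # image is absent. Depending on the Docker build, the locally stored name
  # may omit the docker.io/ registry prefix.
  if [ -z "$(docker images -q "${image}")" ]; then
    echo "Pulling ${image}"
    docker pull "${image}"
  else
    echo "${image} already present, skipping pull"
  fi
done
```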