0.3.0: Simpler cloud auth, custom base images, DLVMs
This release was focused on making it easier for others to contribute. The highlights are:
- custom base images via `.calibanconfig.json`, AND support, with special base image names, for all of Google's "Deep Learning VMs" instead of Caliban's default base images
- you no longer need a service account key to submit jobs to AI Platform
- we won't push anymore if a Docker image already exists in Cloud, saving you some time and stdout spray
Thanks to @ramasesh, @eschnett, @ajslone and @sagravat for their contributions on this release!
For more detail, here's the CHANGELOG:
- @ramasesh fixed a bug that prevented `pip` git dependencies from working in `caliban shell` mode (#55). This includes a small update to the base image, so be sure to run `docker pull gcr.io/blueshift-playground/blueshift:cpu` or `docker pull gcr.io/blueshift-playground/blueshift:gpu` to get access to this fix.
- Thanks to @eschnett, `--docker_run_args` can now deal with arbitrary whitespace in the list of arguments, instead of single spaces only (#46).
- Caliban now authenticates AI Platform job submissions using the authentication provided by `gcloud auth login`, rather than requiring a service account key. This significantly simplifies the setup required for a first-time user.
- `caliban cloud` now checks if the image exists remotely before issuing a `docker push` command on the newly built image (#36).
- Big internal refactor to make it easier to work on the code, increase test coverage, and add new backends (#32).
- Added schema validation for `.calibanconfig.json`. This makes it much easier to add configuration knobs (#37).
- Custom base image support (#39), thanks to #20 from @sagravat. `.calibanconfig.json` now supports a `"base_image"` key. For the value, you can supply:
  - a Docker base image of your own
  - a dict of the form `{"cpu": "base_image", "gpu": "base_image"}`, with both entries optional, of course.
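As a sketch, a `.calibanconfig.json` using the dict form might look like this (the project and image names here are hypothetical placeholders, not part of the release):

```json
{
    "base_image": {
        "cpu": "gcr.io/my-project/my-cpu-base:latest",
        "gpu": "gcr.io/my-project/my-gpu-base:latest"
    }
}
```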
Two more cool features. First, if you use a format string like `"my_image-{}:latest"`, the format block `{}` will be filled in with either `cpu` or `gpu`, depending on the mode Caliban is using. Second, we now have native support for Google's Deep Learning VMs as base images. The actual VM containers live here.
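The substitution itself is ordinary Python string formatting; a minimal sketch of the idea (the function name is hypothetical, not Caliban's API):

```python
def expand_base_image(template: str, mode: str) -> str:
    """Fill the {} block in a base image template with 'cpu' or 'gpu'."""
    assert mode in ("cpu", "gpu")
    return template.format(mode)

print(expand_base_image("my_image-{}:latest", "gpu"))  # my_image-gpu:latest
```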
If you provide any of the following strings, Caliban will expand them out to
the actual base image location:
dlvm:pytorch-cpu
dlvm:pytorch-cpu-1.0
dlvm:pytorch-cpu-1.1
dlvm:pytorch-cpu-1.2
dlvm:pytorch-cpu-1.3
dlvm:pytorch-cpu-1.4
dlvm:pytorch-gpu
dlvm:pytorch-gpu-1.0
dlvm:pytorch-gpu-1.1
dlvm:pytorch-gpu-1.2
dlvm:pytorch-gpu-1.3
dlvm:pytorch-gpu-1.4
dlvm:tf-cpu
dlvm:tf-cpu-1.0
dlvm:tf-cpu-1.13
dlvm:tf-cpu-1.14
dlvm:tf-cpu-1.15
dlvm:tf-gpu
dlvm:tf-gpu-1.0
dlvm:tf-gpu-1.13
dlvm:tf-gpu-1.14
dlvm:tf-gpu-1.15
dlvm:tf2-cpu
dlvm:tf2-cpu-2.0
dlvm:tf2-cpu-2.1
dlvm:tf2-cpu-2.2
dlvm:tf2-gpu
dlvm:tf2-gpu-2.0
dlvm:tf2-gpu-2.1
dlvm:tf2-gpu-2.2
Format strings work here as well! So `"dlvm:pytorch-{}-1.4"` is a totally valid base image.
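Putting the two features together, a `.calibanconfig.json` that pins the PyTorch 1.4 DLVM images for both modes could look like this (a sketch under the naming scheme above, not an example from the release itself):

```json
{
    "base_image": "dlvm:pytorch-{}-1.4"
}
```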