The example below creates a Kubernetes cluster with 4 worker node Virtual Machines and a master Virtual Machine (i.e. 5 VMs in your cluster). This cluster is set up and controlled from your workstation (or wherever you find convenient).
- You need a Google Cloud Platform account with billing enabled. Visit http://cloud.google.com/console for more details.
- Make sure you can start up a GCE VM. At a minimum, make sure you can complete the "Create an instance" part of the GCE Quickstart.
- Make sure you can ssh into the VM without interactive prompts.
- Your GCE SSH key must either have no passcode or you need to be using `ssh-agent` (see the sketch after this list).
- Ensure the GCE firewall isn't blocking port 22 to your VMs. By default this should work, but if you have edited firewall rules or created a new non-default network, you'll need to expose it:

  ```shell
  gcloud compute firewall-rules create default-ssh --network=<network-name> \
      --description "SSH allowed from anywhere" --allow tcp:22
  ```
- You need to have the Google Cloud Storage API and the Google Cloud Storage JSON API enabled. This can be done in the Google Cloud Console.
- You need to be running Linux or Mac OS X.
- You must have the Google Cloud SDK installed. This will get you `gcloud` and `gsutil`.
- Ensure that your `gcloud` components are up-to-date by running `gcloud components update`.
- If you want to build your own release, you need to have Docker installed. On Mac OS X you can use boot2docker.
- Get or build a binary release.
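If your GCE key does have a passphrase, a minimal `ssh-agent` setup could look like the sketch below. The key path `~/.ssh/google_compute_engine` is the default location `gcloud` generates; adjust it if yours lives elsewhere.

```shell
# Start an agent for this shell session and load the GCE key into it,
# so the cluster scripts can ssh without interactive prompts.
eval "$(ssh-agent -s)"
ssh-add ~/.ssh/google_compute_engine
```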
```shell
cluster/kube-up.sh
```
The script above relies on Google Cloud Storage to stage the Kubernetes release. It will then start (by default) a single master VM along with 4 worker VMs. You can tweak some of these parameters by editing `cluster/gce/config-default.sh`.
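For example, to bring up a smaller cluster you can override the worker count when invoking the script. This assumes the `config-default.sh` in your release reads `NUM_MINIONS` from the environment; check the file for the exact variable names it honors.

```shell
# Start a cluster with 2 worker VMs instead of the default 4.
NUM_MINIONS=2 cluster/kube-up.sh
```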
The instances must be able to connect to each other using their private IP. The script uses the "default" network, which should have a firewall rule called "default-allow-internal" that allows traffic on any port on the private IPs. If this rule is missing from the default network, or if you change the network being used in `cluster/config-default.sh`, create a new rule with the following field values:

- Source Ranges: `10.0.0.0/8`
- Allowed Protocols and Port: `tcp:1-65535;udp:1-65535;icmp`
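With the `gcloud` CLI, creating such a rule might look like the following; the rule name `internal-any` is illustrative, and `<network-name>` is whatever network your cluster uses:

```shell
# Allow all TCP/UDP ports and ICMP between instances on their private IPs.
gcloud compute firewall-rules create internal-any --network=<network-name> \
    --source-ranges=10.0.0.0/8 --allow=tcp:1-65535,udp:1-65535,icmp
```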
Once you have your instances up and running, the `build-go.sh` script sets up your Go workspace and builds the Go components.
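From the repository root, invoking it looks like this (the `hack/` path matches how the script is referenced later in this guide):

```shell
cd kubernetes
hack/build-go.sh
```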
The `kubecfg.sh` line below spins up two containers running Nginx, with port 80 mapped to 8080:

```shell
cluster/kubecfg.sh -p 8080:80 run dockerfile/nginx 2 myNginx
```
To stop the containers:

```shell
cluster/kubecfg.sh stop myNginx
```

To delete the containers:

```shell
cluster/kubecfg.sh rm myNginx
```
Assuming you've run `hack/dev-build-and-up.sh` and `hack/build-go.sh`, you can create a pod like this:

```shell
cd kubernetes
cluster/kubecfg.sh -c api/examples/pod.json create /pods
```
Where `pod.json` contains something like:

```json
{
  "id": "php",
  "kind": "Pod",
  "apiVersion": "v1beta1",
  "desiredState": {
    "manifest": {
      "version": "v1beta1",
      "id": "php",
      "containers": [{
        "name": "nginx",
        "image": "dockerfile/nginx",
        "ports": [{
          "containerPort": 80,
          "hostPort": 8080
        }],
        "livenessProbe": {
          "enabled": true,
          "type": "http",
          "initialDelaySeconds": 30,
          "httpGet": {
            "path": "/index.html",
            "port": "8080"
          }
        }
      }]
    }
  },
  "labels": {
    "name": "foo"
  }
}
```
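The liveness probe above polls `/index.html` on host port 8080; once the pod is running you can make the same request by hand to verify Nginx is serving. Here `<minion-external-ip>` is a placeholder for the external IP of the VM hosting the pod, and the GCE firewall must permit port 8080 for this to work from outside the network:

```shell
curl http://<minion-external-ip>:8080/index.html
```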
You can see your cluster's pods:

```shell
cluster/kubecfg.sh list pods
```

and delete the pod you just created:

```shell
cluster/kubecfg.sh delete pods/php
```
Look in `examples/` for more examples.
When you're done, tear down your cluster:

```shell
cd kubernetes
cluster/kube-down.sh
```