A local development environment for BOSH using Warden containers in a Vagrant box.
This README walks through deploying Cloud Foundry with BOSH Lite. BOSH and BOSH Lite can be used to deploy just about anything once you've got the hang of it.
For all use cases, first prepare this project with Bundler:
- Install Vagrant.

  Known working version:

      $ vagrant -v
      Vagrant 1.4.3

  See this blog post for special instructions for Windows users of BOSH Lite.
- Install Ruby + RubyGems + Bundler.

- Clone this repository and run Bundler from the base directory:

      bundle
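If the project's Gemfile provides the BOSH CLI (an assumption; check the Gemfile), you can sanity-check the setup through Bundler:

    bundle exec bosh --version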
Below are installation instructions for the different Vagrant providers:

- VMware Fusion
- VirtualBox
- AWS
VMware Fusion is faster and more reliable, and we test against it more frequently. Both Fusion and the Vagrant Fusion provider require a license.
Known to work with Fusion version 6.0.2 and Vagrant plugin vagrant-vmware-fusion version 2.2.0.
- Install and launch VMware Fusion. You need to accept the VMware license agreement if you haven't done so already.

- Install the Vagrant VMware Fusion plugin and its license.

  This requires a license file for the plugin. If you don't have one, visit http://www.vagrantup.com to purchase a license.

      vagrant plugin install vagrant-vmware-fusion
      vagrant plugin license vagrant-vmware-fusion license.lic
- Start Vagrant from the base directory of this repository. This uses the Vagrantfile.

      vagrant up --provider vmware_fusion
- Target the BOSH Director and log in with admin/admin.

      $ bosh target 192.168.50.4 lite
      Target set to `Bosh Lite Director'
      $ bosh login
      Your username: admin
      Enter password: *****
      Logged in as `admin'
- Add a set of route entries to your local route table to enable direct Warden container access. Your sudo password may be required.

      scripts/add-route
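For reference, the route script boils down to commands like the following (an illustrative sketch only; the exact subnets and gateway are defined in scripts/add-route itself):

    # OS X / BSD syntax (illustrative; check scripts/add-route for the real values)
    sudo route add -net 10.244.0.0/19 192.168.50.4

    # Linux equivalent
    sudo ip route add 10.244.0.0/19 via 192.168.50.4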
For the VirtualBox provider (Vagrant's default):

- Start Vagrant from the base directory of this repository. This uses the Vagrantfile.

      vagrant up
- Target the BOSH Director and log in with admin/admin.

      $ bosh target 192.168.50.4 lite
      Target set to `Bosh Lite Director'
      $ bosh login
      Your username: admin
      Enter password: *****
      Logged in as `admin'
- Add a set of route entries to your local route table to enable direct Warden container access. You will need to do this every time your networking gets reset (e.g. after a reboot or after connecting to a different network). Your sudo password may be required.

      scripts/add-route
For the AWS provider:

- Install the Vagrant AWS provider plugin:

      vagrant plugin install vagrant-aws

  Known to work with version: vagrant-aws 0.4.1
- Add the dummy AWS box:

      vagrant box add dummy https://github.com/mitchellh/vagrant-aws/raw/master/dummy.box
- Set environment variables called `BOSH_AWS_ACCESS_KEY_ID` and `BOSH_AWS_SECRET_ACCESS_KEY` with the appropriate values. If you've followed along with other documentation, such as these steps to deploy Cloud Foundry on AWS, you may simply need to source your `bosh_environment` file.
- Make sure the EC2 security group you are using in the Vagrantfile exists and allows inbound TCP traffic on ports 25555 (for the BOSH Director), 22 (for SSH), 80/443 (for the Cloud Controller), and 4443 (for Loggregator).
- Run Vagrant from the aws folder:

      cd aws
      vagrant up --provider=aws
      cd ..
- Find out the public IP of the box you just launched. You can see this info at the end of the `vagrant up` output. Another way is to run `vagrant ssh-config` in the aws folder.
- Target the BOSH Director and log in with admin/admin.

      $ bosh target <public_ip_of_the_box>
      Target set to `Bosh Lite Director'
      $ bosh login
      Your username: admin
      Enter password: *****
      Logged in as `admin'
- Edit manifests/cf-stub-spiff.yml to include a `domain` key under `properties` that corresponds to a domain you've set up for this Cloud Foundry instance, or, if you want to use xip.io, it can be {your.public.ip}.xip.io. (See the example stub excerpt at the end of this section.)
- Direct traffic received on the instance to another IP (the HAProxy):

      INTERNAL_IP=<internal IP of instance>
      sudo iptables -t nat -A PREROUTING -p tcp -d $INTERNAL_IP --dport 80 -j DNAT --to 10.244.0.34:80
      sudo iptables -t nat -A PREROUTING -p tcp -d $INTERNAL_IP --dport 443 -j DNAT --to 10.244.0.34:443
      sudo iptables -t nat -A PREROUTING -p tcp -d $INTERNAL_IP --dport 4443 -j DNAT --to 10.244.0.34:4443
These rules are cleared on restart. They can be saved and configured to be reloaded on startup if so desired, provided the internal IP address remains the same.
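One way to persist them, assuming an Ubuntu instance where the iptables-persistent package is available (an assumption; adapt for your distribution), is:

    # Install the helper that restores saved rules at boot (assumes Ubuntu/Debian)
    sudo apt-get install -y iptables-persistent

    # Save the current rules (all tables, including nat) so they are reloaded on startup
    sudo sh -c 'iptables-save > /etc/iptables/rules.v4'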
Occasionally you need to restart the BOSH Lite Director to avoid this bug, so you may want to always run the following after booting up BOSH Lite:

    vagrant ssh -c "sudo sv restart director"
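For illustration, the `domain` edit to manifests/cf-stub-spiff.yml mentioned above might look like the following excerpt (a sketch only; the xip.io value is an example placeholder, and any other keys already in your stub stay as they are):

    # manifests/cf-stub-spiff.yml (excerpt)
    properties:
      domain: 203.0.113.10.xip.io   # replace with {your.public.ip}.xip.io or a real domain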
A stemcell is a VM template with an embedded BOSH Agent. BOSH Lite uses the Warden CPI, so we need to use the Warden stemcell, which provides the root file system for all Linux containers created by the Warden CPI.
- Download the latest Warden stemcell:

      wget http://bosh-jenkins-gems-warden.s3.amazonaws.com/stemcells/latest-bosh-stemcell-warden.tgz

- Upload the stemcell:

      bosh upload stemcell latest-bosh-stemcell-warden.tgz
NOTE: It is possible to do this in one command instead of two, but doing it in two steps avoids having to download the stemcell again when you bring up a new BOSH Lite box.

You can also use `bosh public stemcells` to list the available public stemcells and `bosh download public stemcell` to fetch the latest Warden stemcell.
Example (the versions you see will be different from these):

    $ bosh public stemcells
    +---------------------------------------------+
    | Name                                        |
    +---------------------------------------------+
    | bosh-stemcell-1722-aws-xen-ubuntu.tgz       |
    | bosh-stemcell-1722-aws-xen-centos.tgz       |
    | light-bosh-stemcell-1722-aws-xen-ubuntu.tgz |
    | light-bosh-stemcell-1722-aws-xen-centos.tgz |
    | bosh-stemcell-1722-openstack-kvm-ubuntu.tgz |
    | bosh-stemcell-1722-vsphere-esxi-ubuntu.tgz  |
    | bosh-stemcell-1722-vsphere-esxi-centos.tgz  |
    | bosh-stemcell-24-warden-boshlite-ubuntu.tgz |
    +---------------------------------------------+

    $ bosh download public stemcell bosh-stemcell-24-warden-boshlite-ubuntu.tgz
As an alternative to the steps below, you can also run this script to deploy the latest version of Cloud Foundry:

    $ ./scripts/provision_cf
- Install Spiff. Use the latest binary of Spiff, extract it, and make sure that `spiff` is in your `$PATH`.
- Clone a copy of cf-release:

      cd ~/workspace
      git clone https://github.com/cloudfoundry/cf-release
- Decide which final release of Cloud Foundry you wish to deploy by looking in the releases directory of cf-release. At the time of this writing, cf-169 is the most recent. We will use that as the example, but you are free to substitute any future release.
- Check out the desired revision of cf-release (e.g., 169):

      cd ~/workspace/cf-release
      ./update
      git checkout v169
- Upload the final release.

  Use the version that matches the tag you checked out. For v169 you would use releases/cf-169.yml:

      bosh upload release releases/cf-<version>.yml

  If the bosh binary is not found and you use RVM, BOSH was most likely installed into the bosh-lite gemset. Switch to the gemset before uploading:

      rvm gemset use bosh-lite
      bundle
      bosh upload release releases/cf-<version>.yml
- Use the `make_manifest_spiff` script to create a CF manifest. This step assumes you have cf-release checked out in ~/workspace. If you have cf-release checked out somewhere else, you have to update the `BOSH_RELEASES_DIR` environment variable. The script also requires that cf-release is checked out at the tag matching the final release you wish to deploy, so that the templates used by `make_manifest_spiff` match the code you are deploying. `make_manifest_spiff` will target your BOSH Lite Director, find the UUID, create a manifest stub, and run spiff to generate a manifest at manifests/cf-manifest.yml. If this fails, try updating Spiff.

      cd ~/workspace/bosh-lite
      ./scripts/make_manifest_spiff

  If you want to change job properties for this bosh-lite deployment (e.g. the number of NATS servers), you can change them in the template located at cf-release/templates/cf-infrastructure-warden.yml.
- Deploy CF to bosh-lite. (A quick verification check is shown after these steps.)

      bosh deployment manifests/cf-manifest.yml # This will be done for you by make_manifest_spiff
      bosh deploy # enter yes to confirm

      # if you run into errors, try updating Spiff to the latest version (1.0 at time of writing).
      # brew install spiff --- for Mac users
      # then rerun
      # ./scripts/make_manifest_spiff
      # then rerun
      # bosh deploy
- Run the cf-acceptance-tests against your new deployment to make sure it's working correctly.

  a. Install Go version 1.2.1 (64-bit) and set up the Go environment:

      mkdir -p ~/go
      export GOPATH=~/go
      export PATH=$PATH:/usr/local/go/bin

  b. Download the cf-acceptance-tests repository:

      go get github.com/cloudfoundry/cf-acceptance-tests
      ...
      cd $GOPATH/src/github.com/cloudfoundry/cf-acceptance-tests

  c. Follow the cf-acceptance-tests instructions on Running the tests.
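After `bosh deploy` completes, a quick sanity check (referenced in the deploy step above) is to list the deployment's VMs and confirm each job reports a running state:

    # Lists every job instance in the current deployment along with its state;
    # all of them should eventually show "running".
    bosh vms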
Install the Cloud Foundry CLI and run the following:

    cf target http://api.10.244.0.34.xip.io
    cf login admin --password admin
    cf create-org me
    cf target -o me
    cf create-space development
    cf target -s development

Now you are ready to run commands such as `cf push`.
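For example, a minimal push might look like this (a sketch; my-app and its directory are placeholders for whatever application you want to deploy):

    cd ~/workspace/my-app   # any directory containing your application code
    cf push my-app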
To SSH into the running jobs of a deployment with `bosh ssh`, first make sure the routes are in place by running:

    scripts/add-route

Now you can SSH into any VM with `bosh ssh`:

    $ bosh ssh
    1. nats/0
    2. syslog_aggregator/0
    3. postgres/0
    4. uaa/0
    5. login/0
    6. cloud_controller/0
    7. loggregator/0
    8. loggregator-router/0
    9. health_manager/0
    10. dea_next/0
    11. router/0
    Choose an instance:
The Warden containers will be lost after a VM reboot, but you can restore your deployment with `bosh cck`, BOSH's command for recovering from unexpected errors.

    $ bosh cck

Choose `2` to recreate each missing VM:

    Problem 1 of 13: VM with cloud ID `vm-74d58924-7710-4094-86f2-2f38ff47bb9a' missing.
      1. Ignore problem
      2. Recreate VM using last known apply spec
      3. Delete VM reference (DANGEROUS!)
    Please choose a resolution [1 - 3]: 2
    ...

Type `yes` to confirm at the end:

    Apply resolutions? (type 'yes' to continue): yes
    Applying problem resolutions
      missing_vm 212: Recreate VM using last known apply spec (00:00:13)
      missing_vm 215: Recreate VM using last known apply spec (00:00:08)
      ...
    Done 13/13 00:03:48
- Starting over again is often the quickest path to success; you can use `vagrant destroy` from the base directory of this project to remove the VM.
- To start with a new VM, just execute the appropriate `vagrant up` command, optionally passing in the provider as shown in the earlier sections.
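A typical reset cycle from the base directory might look like this (a sketch; pass the provider flag that matches your setup, or omit it for VirtualBox):

    vagrant destroy --force               # remove the existing BOSH Lite VM
    vagrant up --provider vmware_fusion   # or plain `vagrant up` for VirtualBox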
We publish pre-built Vagrant boxes on Amazon S3. It is recommended to use the latest boxes. Just get the latest copy of the Vagrantfile from this repo and run `vagrant up`.
See the offline documentation.