---
title: Deploying and Accessing Basic Workloads
owner: PKS
---
<strong><%= modified_date %></strong>
This topic describes how to deploy and access basic workloads in Pivotal Container Service (PKS).
If you use Google Cloud Platform (GCP), Amazon Web Services (AWS), or vSphere with NSX-T integration, your cloud provider can configure an external load balancer for your workload.
See [Access Workloads Using an External Load Balancer](#external-lb).
If you use AWS, you can also access your workload using an internal load balancer.
See the [AWS Prerequisites](#aws) section, and then [Access Workloads Using an Internal AWS Load Balancer](#internal-lb).
If you use vSphere without NSX-T, you can choose to configure your own external load balancer or expose static ports to access your workload without a load balancer.
See [Access Workloads without a Load Balancer](#without-lb).
## <a id='aws'></a>AWS Prerequisites
If you use AWS, perform the following steps before you create a load balancer:
1. In the [AWS Management Console](https://aws.amazon.com/console/), create or locate a public subnet for each availability zone (AZ) you are deploying to. A public subnet has a route table that directs Internet-bound traffic to the Internet gateway.
1. On the command line, run `pks cluster CLUSTER-NAME`, replacing `CLUSTER-NAME` with the name of your cluster.
1. Record the unique identifier for the cluster.
1. In the [AWS Management Console](https://aws.amazon.com/console/), tag each public subnet based on the table below, replacing `CLUSTER-UUID` with the unique identifier of the cluster. Leave the **Value** field blank. For a command-line sketch of steps 2 through 4, see the example after this list.
<table>
<tr>
<th>Key</th>
<th>Value</th>
</tr>
<tr>
<td><code>kubernetes.io/cluster/service-instance_CLUSTER-UUID</code></td>
<td>empty</td>
</tr>
</table>
<p class='note'><strong>Note</strong>: AWS limits the number of tags on a subnet to 100.</p>
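For reference, steps 2 through 4 might look like the following on the command line. This is a minimal sketch that assumes the AWS CLI is installed and configured; `my-cluster` and `subnet-0abc123` are placeholder values.
```
# Retrieve the cluster details and record the UUID from the output.
pks cluster my-cluster

# Tag a public subnet with the cluster UUID, leaving the tag value empty.
# Replace subnet-0abc123 and CLUSTER-UUID with your own values.
aws ec2 create-tags \
  --resources subnet-0abc123 \
  --tags Key=kubernetes.io/cluster/service-instance_CLUSTER-UUID,Value=
```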
## <a id='external-lb'></a>Access Workloads Using an External Load Balancer
If you use GCP, AWS, or vSphere with NSX-T, follow the steps below to deploy and access basic workloads using a load balancer configured by your cloud provider.
<p class='note'><strong>Note</strong>: This approach creates a dedicated load balancer for each workload. This may be an inefficient use of resources in clusters with many apps.</p>
1. Expose the workload using a Service with `type: LoadBalancer`. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) for more information about the `LoadBalancer` Service type.
1. Download the spec for a basic NGINX app from the [cloudfoundry-incubator/kubo-ci](https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx-lb.yml) GitHub repository.
1. Run `kubectl create -f nginx-lb.yml` to deploy the basic NGINX app. This command creates three pods (replicas) that span three worker nodes.
1. Wait until your cloud provider creates a dedicated load balancer and connects it to the worker nodes on a specific port.
1. Run `kubectl get svc nginx` and retrieve the load balancer IP address and port number. For sample output, see the example after this list.
1. On the command line of a server with network connectivity and visibility to the IP address of the load balancer, run `curl http://EXTERNAL-IP:PORT` to access the app. Replace `EXTERNAL-IP` with the IP address of the load balancer and `PORT` with the port number.
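The output of `kubectl get svc nginx` might look similar to the following; the IP addresses and port values shown here are placeholders:
```
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
nginx   LoadBalancer   10.100.200.80   203.0.113.100   80:30143/TCP   2m
```
In this example, you would run `curl http://203.0.113.100:80` to access the app.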
## <a id='internal-lb'></a>Access Workloads Using an Internal AWS Load Balancer
If you use AWS, follow the steps below to deploy and access basic workloads using an internal load balancer configured by your cloud provider.
<p class='note'><strong>Note</strong>: This approach creates a dedicated load balancer for each workload. This may be an inefficient use of resources in clusters with many apps.</p>
1. Expose the workload using a Service with `type: LoadBalancer`. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-loadbalancer) for more information about the `LoadBalancer` Service type.
1. Download the spec for a basic NGINX app from the [cloudfoundry-incubator/kubo-ci](https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx-lb.yml) GitHub repository.
1. In the `metadata` section of the Service manifest, add an `annotations` block with the `service.beta.kubernetes.io/aws-load-balancer-internal` annotation. This annotation instructs the AWS cloud provider to create an internal load balancer instead of an Internet-facing one.<br><br>
For example:
```
apiVersion: v1
kind: Service
metadata:
  labels:
    name: nginx
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-internal: 0.0.0.0/0
  name: nginx
spec:
  ports:
  - port: 80
  selector:
    app: nginx
  type: LoadBalancer
```
1. Run `kubectl create -f nginx-lb.yml` to deploy the basic NGINX app. This command creates three pods (replicas) that span three worker nodes.
1. Wait until your cloud provider creates a dedicated load balancer and connects it to the worker nodes on a specific port.
1. Run `kubectl get svc nginx` and retrieve the load balancer address and port number. For sample output, see the example after this list.
1. On the command line of a server inside the VPC with network connectivity to the load balancer, run `curl http://EXTERNAL-IP:PORT` to access the app. Replace `EXTERNAL-IP` with the address of the load balancer and `PORT` with the port number.
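For an internal AWS load balancer, the `EXTERNAL-IP` column typically contains an internal DNS name rather than a public IP address. The output might look similar to the following; the values shown here are placeholders:
```
NAME    TYPE           CLUSTER-IP      EXTERNAL-IP                                                   PORT(S)        AGE
nginx   LoadBalancer   10.100.200.80   internal-a1b2c3d4e5-1234567890.us-west-2.elb.amazonaws.com   80:31782/TCP   3m
```
This hostname resolves only from within the VPC, so run the `curl` command from an instance inside the VPC.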
## <a id='without-lb'></a>Access Workloads without a Load Balancer
If you use vSphere without NSX-T integration, you do not have a load balancer configured by your cloud provider.
You can choose to [configure your own external load balancer](#external-lb) or follow the procedures in this section to access your workloads without a load balancer.
If you do not use an external load balancer, you can configure the NGINX service to expose a static port on each worker node.
From outside the cluster, you can reach the service at `http://NODE-IP:NODE-PORT`.
To expose a static port on your workload, perform the following steps:
1. Download the spec for a basic NGINX app from the [cloudfoundry-incubator/kubo-ci](https://github.com/cloudfoundry-incubator/kubo-ci/blob/master/specs/nginx.yml) GitHub repository.
1. Run `kubectl create -f nginx.yml` to deploy the basic NGINX app. This command creates three pods (replicas) that span three worker nodes.
1. Expose the workload using a Service with `type: NodePort`. See the [Kubernetes documentation](https://kubernetes.io/docs/concepts/services-networking/service/#type-nodeport) for more information about the `NodePort` Service type. For a minimal example of a `NodePort` Service spec, see the sketch after this list.
1. Retrieve the IP address for a worker node with a running NGINX pod.
<p class='note'><strong>Note</strong>: If your cluster has more than three worker
nodes, some worker nodes may not contain a running NGINX pod. Select a worker
node that contains a running NGINX pod.</p>
You can retrieve the IP address for a worker node with a running NGINX pod in
one of the following ways:
* On the command line, run `kubectl get nodes -L spec.ip`.
* On the Ops Manager command line, run `bosh vms` to find the IP address.
1. On the command line, run `kubectl get svc nginx`. Find the node port number, which falls in the default `30000`-`32767` range.
1. On the command line of a server with network connectivity and visibility to the IP address of the worker node, run `curl http://NODE-IP:NODE-PORT` to access the app. Replace `NODE-IP` with the IP address of the worker node, and `NODE-PORT` with the node port number.
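For reference, a minimal `NodePort` Service for the NGINX app might look like the following. This is a sketch rather than the exact spec from the repository; the `nodePort` value is a placeholder within the default range:
```
apiVersion: v1
kind: Service
metadata:
  name: nginx
spec:
  type: NodePort
  selector:
    app: nginx
  ports:
  - port: 80          # port the Service exposes inside the cluster
    nodePort: 30080   # static port opened on every worker node
```
With this spec applied, `kubectl get svc nginx` reports port `80:30080/TCP`, and `curl http://NODE-IP:30080` reaches the app from outside the cluster.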