---
title: Installing Enterprise PKS on vSphere with NSX-T
owner: PKS
iaas: vSphere-NSX-T
---
<strong><%= modified_date %></strong>
This topic describes how to install and configure <%= vars.product_full %> on vSphere with NSX-T integration.
##<a id='prerequisites'></a> Prerequisites
Before you begin this procedure, ensure that you have successfully completed all preceding steps for installing <%= vars.product_short %> on vSphere with NSX-T, including:
<ul>
<li>
<a href="./vsphere-nsxt-index-prepare.html">Preparing to Install <%= vars.product_short %> on vSphere with NSX-T Data Center</a>
</li>
<li>
<a href="./vsphere-nsxt-rpd-mpd.html">Installing and Configuring NSX-T for <%= vars.product_short %></a>
</li>
<li>
<a href="./nsxt-prepare-mgmt-plane.html">Creating the <%= vars.product_short %> Management Plane</a>
</li>
<li>
<a href="./nsxt-prepare-compute-plane.html">Creating the <%= vars.product_short %> Compute Plane</a>
</li>
<li>
<a href="./vsphere-nsxt-om-deploy.html">Deploying Ops Manager with NSX-T for <%= vars.product_short %></a>
</li>
<li>
<a href="./nsxt-generate-ca-cert.html">Generating and Registering the NSX Manager Certificate for <%= vars.product_short %></a>
</li>
<li>
<a href="./vsphere-nsxt-om-config.html">Configuring BOSH Director with NSX-T for <%= vars.product_short %></a>
</li>
<li>
<a href="./nsxt-generate-pi-cert.html">Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key for <%= vars.product_short %></a>
</li>
</ul>
##<a id='install'></a> Step 1: Install <%= vars.product_short %>
<%= partial 'install-pks' %>
##<a id='configure'></a> Step 2: Configure <%= vars.product_short %>
Click the orange **<%= vars.product_tile %>** tile to start the configuration process.
<p class="note"><strong>Note</strong>: Configuration of NSX-T or Flannel <strong>cannot</strong> be changed after initial installation and configuration of <%= vars.product_short %>.</p>
![PKS tile on the Ops Manager installation dashboard](images/pks-tile-orange.png)
<p class="note warning"><strong>WARNING</strong>: When you configure the <%= vars.product_tile %> tile,
do not use spaces in any field entries. This includes spaces between characters as well as
leading and trailing spaces. If you use a space in any field entry, the deployment of <%= vars.product_short %> fails.</p>
###<a id='azs-networks'></a> Assign AZs and Networks
Perform the following steps:
1. Click **Assign AZs and Networks**.
1. Select the availability zone (AZ) where you want to deploy the PKS API VM as a singleton job.
<p class="note"><strong>Note</strong>: You must select an additional AZ for balancing other jobs before clicking <strong>Save</strong>, but this selection has no effect in the current version of <%= vars.product_short %>.</p>
![Assign AZs and Networks pane in Ops Manager](images/azs-networks.png)
1. Under **Network**, select the PKS Management Network linked to the `ls-pks-mgmt` NSX-T logical switch you created in the [Create Networks Page](vsphere-nsxt-om-config.html#create-networks) step of _Configuring BOSH Director with NSX-T for <%= vars.product_short %>_. This will provide network placement for the PKS API VM.
1. Under **Service Network**, your selection depends on whether you are installing a new <%= vars.product_short %> deployment or upgrading from a previous version of <%= vars.product_short %>.
* If you are deploying <%= vars.product_short %> with NSX-T for the first time, select the PKS Management Network you specified in the **Network** field.
You do not need to create or define a service network because <%= vars.product_short %> creates the service network for you during the installation process.
* If you are upgrading from a previous version of <%= vars.product_short %>, then select the **Service Network** linked to the `ls-pks-service` NSX-T logical switch that <%= vars.product_short %> created for you during installation. The service network provides network placement for existing on-demand Kubernetes cluster service instances that were created by the <%= vars.product_short %> broker.
1. Click **Save**.
###<a id='pks-api'></a> PKS API
<%= partial 'pks-api' %>
###<a id='plans'></a> Plans
<%= partial 'plans' %>
###<a id='cloud-provider'></a> Kubernetes Cloud Provider
<%= partial 'cloud-provider' %>
###<a id='syslog'></a> (Optional) Logging
<%= partial 'logging' %>
###<a id='networking'></a> Networking
To configure networking, do the following:
1. Click **Networking**.
1. Under **Container Networking Interface**, select **NSX-T**.
![NSX-T Networking configuration pane in PKS tile](images/networking-nsx-t.png)
1. For **NSX Manager hostname**, enter the hostname or IP address of your NSX Manager.
1. For **NSX Manager Super User Principal Identity Certificate**, copy and paste the contents and private key of the Principal Identity certificate you created in [Generating and Registering the NSX Manager Superuser Principal Identity Certificate and Key](nsxt-generate-pi-cert.html).
1. For **NSX Manager CA Cert**, copy and paste the contents of the NSX Manager CA certificate you created in [Generating and Registering the NSX Manager Certificate](nsxt-generate-ca-cert.html). Use this certificate and key to connect to the NSX Manager.
1. The **Disable SSL certificate verification** checkbox is **not** selected by default. To disable TLS verification, select the checkbox. You might want to disable TLS verification if you did not enter a CA certificate, or if your CA certificate is self-signed.
<p class="note"><strong>Note</strong>: The <strong>NSX Manager CA Cert</strong> field and the <strong>Disable SSL certificate verification</strong> option are intended to be mutually exclusive. If you disable SSL certificate verification, leave the CA certificate field blank. If you enter a certificate in the <strong>NSX Manager CA Cert</strong> field, do not disable SSL certificate verification. If you populate the certificate field and disable certificate validation, insecure mode takes precedence.</p>
1. If you are using a NAT deployment topology, leave the **NAT mode** checkbox selected. If you are using a No-NAT topology, clear this checkbox. For more information, see [Deployment Topologies](nsxt-topologies.html).
1. Enter the following IP Block settings:
![NSX-T Networking configuration pane in Ops Manager](images/networking-nsx-t-3.png)
[View a larger version of this image.](images/networking-nsx-t-3.png)
* **Pods IP Block ID**: Enter the UUID of the IP block to be used for Kubernetes pods. <%= vars.product_short %> allocates IP addresses for the pods when they are created in Kubernetes. Each time a namespace is created in Kubernetes, a /24 subnet is allocated from this IP block, which means a maximum of 256 pods can be created per namespace.
* **Nodes IP Block ID**: Enter the UUID of the IP block to be used for Kubernetes nodes. <%= vars.product_short %> allocates IP addresses for the nodes when they are created in Kubernetes. Node networks are created on a separate IP address space from pod networks. Each cluster is allocated a /24 subnet from this IP block, which means a maximum of 256 nodes can be created per cluster.
For more information, including sizes and the IP blocks to avoid using, see [Plan IP Blocks](nsxt-prepare-env.html#plan-ip-blocks) in _Preparing NSX-T Before Deploying <%= vars.product_short %>_.
1. For **T0 Router ID**, enter the `t0-pks` T0 router UUID. Locate this value in the NSX-T UI router overview.
1. For **Floating IP Pool ID**, enter the `ip-pool-vips` ID that you created for load balancer VIPs. For more information, see [Plan Network CIDRs](nsxt-prepare-env.html#plan-cidrs). <%= vars.product_short %> uses the floating IP pool to allocate IP addresses to the load balancers created for each of the clusters. The load balancer routes the API requests to the master nodes and the data plane. For one way to look up this UUID and the others in this pane through the NSX API, see the first sketch after this list.
1. For **Nodes DNS**, enter one or more Domain Name Servers used by the Kubernetes nodes.
1. For **vSphere Cluster Names**, enter a comma-separated list of the vSphere clusters where you will deploy Kubernetes clusters.
The NSX-T pre-check errand uses this field to verify that the hosts from the specified clusters are available in NSX-T. You can specify clusters in this format: `cluster1,cluster2,cluster3`.
1. For **Kubernetes Service Network CIDR Range**, specify an IP address and subnet size based on the number of Kubernetes services that you plan to deploy within a single Kubernetes cluster, for example: `10.100.200.0/24`. The IP address used here is internal to the cluster and can be anything, such as `10.100.200.0`. A `/24` subnet provides 256 IP addresses. If you have a cluster that requires more than 256 IP addresses, define a larger subnet, such as `/20`. A quick check of these address counts appears in the second sketch after this list.
1. (Optional) Configure a global proxy for all outgoing HTTP and HTTPS traffic from your Kubernetes clusters and the PKS API server. See [Using Proxies with <%= vars.product_short %> on NSX-T](proxies.html) for instructions on how to enable a proxy.
1. Under **Allow outbound internet access from Kubernetes cluster vms (IaaS-dependent)**, ignore the **Enable outbound internet access** checkbox.
1. Click **Save**.
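The Networking pane asks for several NSX-T object UUIDs. Below is a minimal sketch of one way to retrieve them programmatically, assuming the NSX-T 2.x Manager REST API endpoints `/api/v1/pools/ip-blocks`, `/api/v1/pools/ip-pools`, and `/api/v1/logical-routers`; the hostname, credentials, CA certificate path, and the IP block display names `ip-block-pods` and `ip-block-nodes` are placeholders for your environment, not values prescribed by this installation. Passing the NSX Manager CA certificate as `verify` corresponds to leaving **Disable SSL certificate verification** unchecked.

```python
import requests

NSX_HOST = "nsx-manager.example.com"  # placeholder NSX Manager FQDN
AUTH = ("admin", "PASSWORD")          # placeholder admin credentials
CA_CERT = "nsx-manager-ca.pem"        # CA cert registered with NSX Manager

def get_results(path):
    # Verifying against the CA cert mirrors leaving "Disable SSL
    # certificate verification" unchecked in the tile.
    resp = requests.get(f"https://{NSX_HOST}/api/v1{path}",
                        auth=AUTH, verify=CA_CERT)
    resp.raise_for_status()
    return resp.json()["results"]

def find_id(results, display_name):
    # NSX API list responses carry "id" and "display_name" per object.
    return next(r["id"] for r in results if r["display_name"] == display_name)

ip_blocks = get_results("/pools/ip-blocks")
print("Pods IP Block ID:   ", find_id(ip_blocks, "ip-block-pods"))   # assumed name
print("Nodes IP Block ID:  ", find_id(ip_blocks, "ip-block-nodes"))  # assumed name
print("T0 Router ID:       ", find_id(get_results("/logical-routers"), "t0-pks"))
print("Floating IP Pool ID:", find_id(get_results("/pools/ip-pools"), "ip-pool-vips"))
```

You can also read the same UUIDs from the NSX Manager UI, as described in the steps above.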
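For the **Kubernetes Service Network CIDR Range** sizing, a quick check with Python's standard `ipaddress` module confirms the address counts quoted above, using the example CIDRs from this page:

```python
import ipaddress

# Example CIDR from this page: a /24 provides 256 addresses.
print(ipaddress.ip_network("10.100.200.0/24").num_addresses)  # 256

# A larger /20, for clusters needing more than 256 service IP addresses.
print(ipaddress.ip_network("10.100.192.0/20").num_addresses)  # 4096
```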
###<a id='uaa'></a> UAA
<%= partial 'uaa' %>
###<a id='monitoring'></a> (Optional) Monitoring
<%= partial 'monitoring' %>
###<a id='telemetry'></a> CEIP and Telemetry
<%= partial 'usage-data' %>
###<a id='errands'></a> Errands
Errands are scripts that run at designated points during an installation.
To configure when post-deploy and pre-delete errands for <%= vars.product_short %> are run, make a selection in the dropdown next to the errand.
<p class="note warning"><strong>WARNING</strong>: You must enable the NSX-T Validation errand to verify and tag required NSX-T objects.</p>
![Errand configuration pane](images/nsxt/nsxt-validation-errand-ON.png)
For more information about errands and their configuration state, see [Managing Errands in Ops Manager](https://docs.pivotal.io/pivotalcf/customizing/managing_errands.html).
<p class="note warning"><strong>WARNING</strong>: Because <%= vars.product_short %> uses floating stemcells, updating the <%= vars.product_tile %> tile with a new stemcell triggers the rolling of every VM in each cluster. Also, updating other product tiles in your deployment with a new stemcell causes the <%= vars.product_tile %> tile to roll VMs. This rolling is enabled by the <strong>Upgrade all clusters errand</strong>. We recommend that you keep this errand turned on because automatic rolling of VMs ensures that all deployed cluster VMs are patched. However, automatic rolling can cause downtime in your deployment.</p>
###<a id='resource-config'></a> (Optional) Resource Config
Edit other resources used by the **Pivotal Container Service** job.
The **Pivotal Container Service** job requires a VM with the following minimum
resources:
<table>
<tr>
<th>CPU</th>
<th>Memory</th>
<th>Disk Space</th>
</tr>
<tr>
<td>2</td>
<td>8 GB</td>
<td>29 GB</td>
</tr>
</table>
![Resource pane configuration](images/resources-vsphere.png)
<p class="note"><strong>Note</strong>: The automatic <b>VM Type</b> value matches the minimum recommended size for the <b>Pivotal Container Service</b> job. If you experience timeouts or slowness when interacting with the PKS API, select a <strong>VM Type</strong> with greater CPU and memory resources.</p>
## <a id='apply-changes'></a> Step 3: Apply Changes
After configuring the <%= vars.product_tile %> tile, follow the steps below to deploy the tile:
<%= partial 'apply-changes' %>
## <a id='clis'></a> Step 4: Install the PKS and Kubernetes CLIs
<%= partial 'install-cli' %>
## <a id='retrieve-endpoint'></a> Step 5: Verify NAT Rules
If you are using NAT mode, verify that you have created the required NAT rules for the <%= vars.product_short %> Management Plane. See [Creating the <%= vars.product_short %> Management Plane](nsxt-prepare-mgmt-plane.html) for details.
In addition, for NAT and no-NAT modes, verify that you created the required NAT rule for Kubernetes master nodes to access NSX Manager. For details, see [Creating the PKS Compute Plane](nsxt-prepare-compute-plane.html).
If you want your developers to be able to access the PKS CLI from their external workstations, create a DNAT rule that maps a routable IP address to the PKS API VM. You must do this after <%= vars.product_short %> is successfully deployed and the PKS API VM has an IP address. See [Create DNAT Rule on T0 Router for External Access to the PKS CLI](nsxt-prepare-mgmt-plane.html#create-dnat-pks) for details.
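If you prefer to script this DNAT rule rather than create it in the NSX Manager UI, the following is a minimal sketch using the NSX-T 2.x Manager API endpoint `POST /api/v1/logical-routers/<t0-router-id>/nat/rules`; the hostname, credentials, router UUID, and both IP addresses are placeholders, not values prescribed by this installation.

```python
import requests

NSX_HOST = "nsx-manager.example.com"  # placeholder NSX Manager FQDN
AUTH = ("admin", "PASSWORD")          # placeholder admin credentials
CA_CERT = "nsx-manager-ca.pem"        # CA cert registered with NSX Manager
T0_ROUTER_ID = "aaaaaaaa-bbbb-cccc-dddd-eeeeeeeeeeee"  # placeholder t0-pks UUID

dnat_rule = {
    "action": "DNAT",
    # Routable IP that external workstations use to reach the PKS API.
    "match_destination_network": "10.172.1.4/32",  # placeholder
    # Internal IP assigned to the PKS API VM after deployment.
    "translated_network": "172.31.0.4",            # placeholder
}

resp = requests.post(
    f"https://{NSX_HOST}/api/v1/logical-routers/{T0_ROUTER_ID}/nat/rules",
    json=dnat_rule, auth=AUTH, verify=CA_CERT,
)
resp.raise_for_status()
print("Created DNAT rule:", resp.json()["id"])
```

Run this only after the PKS API VM has its internal IP address, as noted above.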
## <a id='auth'></a> Step 6: Configure Authentication for <%= vars.product_short %>
Follow the procedures in [Setting Up <%= vars.product_short %> Admin Users on vSphere](vsphere-configure-pks-users.html) in *Installing Enterprise PKS > vSphere*.
##<a id='next-steps'></a> Next Steps
After installing <%= vars.product_short %> on vSphere with NSX-T integration, complete the following tasks:
* <%= partial 'harbor' %>
* [Installing the PKS CLI](./installing-pks-cli.html).
* [Installing the Kubectl CLI](./installing-kubectl-cli.html).
* [Setting Up Enterprise PKS Admin Users on vSphere](./vsphere-configure-pks-users.html).
* [Creating an <%= vars.product_short %> Cluster](create-cluster.html).