---
title: Planning, Preparing, and Configuring NSX-T for PKS
owner: PKS
---
<strong><%= modified_date %></strong>
Before you install PKS on vSphere with NSX-T integration, you must prepare your NSX-T environment. Complete all of the steps listed in the order presented to manually create the NSX-T environment for PKS.
##<a id='plan'></a> Step 1: Plan Network Topology, Subnets, and IP Blocks
###<a id='plan-topology'></a>Plan NSX-T Deployment Topology
Review [vSphere with NSX-T Version Requirements](vsphere-nsxt-requirements.html)
and [Hardware Requirements for PKS on vSphere with NSX-T](vsphere-nsxt-rpd-mpd.html).
Review the [Deployment Topologies](nsxt-topologies.html) for PKS on vSphere with NSX-T, and the [NSX-T Data Center documentation](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-10B1A61D-4DF2-481E-A93E-C694726393F9.html) to ensure that your chosen network topology enables the following communications:
* vCenter, NSX-T components, and ESXi hosts must be able to communicate with each other.
* The BOSH Director VM must be able to communicate with vCenter and the NSX Manager.
* The BOSH Director VM must be able to communicate with all nodes in all Kubernetes clusters.
* Each PKS-provisioned Kubernetes cluster deploys the NSX-T Node Agent and the Kube Proxy that run as BOSH-managed processes on each worker node.
In addition, the NSX-T Container Plugin (NCP) runs as a BOSH-managed process on the Kubernetes master node. In a multi-master PKS deployment, the NCP process runs on all master nodes. However, the process is active only on one master node. If the NCP process on an active master is unresponsive, BOSH activates another NCP process. Refer to the [NCP documentation](https://docs.vmware.com/en/VMware-NSX-T/index.html) for more information.
###<a id='plan-cidrs'></a>Plan Network CIDRs
Before you install PKS on vSphere with NSX-T, you should plan for the CIDRs and IP blocks that you are using in your deployment.
Plan for the following network CIDRs in the IPv4 address space according to the instructions in the VMware [NSX-T documentation](https://docs.vmware.com/en/VMware-NSX-T/index.html).
* **VTEP CIDRs**: One or more of these networks host your GENEVE Tunnel Endpoints on your NSX Transport Nodes. Size the networks to support all of your expected Host and Edge Transport Nodes. For example, a CIDR of `192.168.1.0/24` provides 254 usable IPs.
* **PKS MANAGEMENT CIDR**: This small network is used to access PKS management components such as Ops Manager, BOSH Director, the PKS Service VM, and the Harbor Registry VM (if deployed). For example, a CIDR of `10.172.1.0/28` provides 14 usable IPs. For the [No-NAT deployment topologies](nsxt-topologies.html#topology-no-nat-virtual-switch), this is a routable /28 subnet on your corporate network. For the [NAT deployment topology](nsxt-topologies.html#topology-nat), this is a non-routable /28 subnet, and DNAT must be configured in NSX-T to access the PKS management components.
* **PKS LB CIDR**: This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services. For example, `10.172.2.0/24` provides 254 usable IPs. This network is used when creating the `ip-pool-vips` described in [Creating NSX-T Objects for PKS](nsxt-create-objects.html), or when the services are deployed. You enter this network in the **Floating IP Pool ID** field in the **Networking** pane of the PKS tile.
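As a quick sanity check on these sizes, the following minimal Python sketch (standard library only; the CIDR values repeat the examples above) computes the usable host count for each planned network:

```python
import ipaddress

# Example CIDRs from the planning guidance above.
for name, cidr in [
    ("VTEP CIDR", "192.168.1.0/24"),
    ("PKS MANAGEMENT CIDR", "10.172.1.0/28"),
    ("PKS LB CIDR", "10.172.2.0/24"),
]:
    net = ipaddress.ip_network(cidr)
    # Usable IPs exclude the network and broadcast addresses.
    print(f"{name}: {cidr} -> {net.num_addresses - 2} usable IPs")
```

Running this prints 254, 14, and 254 usable IPs, matching the sizing guidance above.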
###<a id='plan-ip-blocks'></a>Plan IP Blocks
When you install PKS on NSX-T, you are required to specify the **Pods IP Block ID** and **Nodes IP Block ID** in the **Networking** pane of the PKS tile. These IDs map to the two IP blocks you must configure in NSX-T: the Pods IP Block for Kubernetes pods and the Nodes IP Block for Kubernetes nodes (VMs). For more information, see the [Networking](installing-nsx-t.html#networking) section of _Installing PKS on vSphere with NSX-T Integration_.
<img src="images/nsxt/nsxt-ip-blocks.png" alt="Required IP Blocks for NSX-T">
####<a id='pods-ip-block'></a>Pods IP Block
Each time a Kubernetes namespace is created, a /24 subnet is allocated from the **Pods IP Block**, which means a maximum of 256 pods can be created per namespace. When PKS deploys a Kubernetes cluster, it creates three namespaces by default, and operators often create additional namespaces to facilitate cluster use. As a result, when creating the **Pods IP Block**, you must use a CIDR range larger than /24 to ensure that NSX-T has enough IP addresses to allocate for all pods. The recommended size is /16. For more information, see [Creating NSX-T Objects for PKS](nsxt-create-objects.html).
<img src="images/nsxt/pods-ip-block.png" alt="Pods IP Block">
<p class="note"><strong>Note</strong>: By default, <strong>Pods IP Block</strong> is a block of non-routable, private IP addresses.
After you deploy PKS, you can define a network profile that specifies a routable IP block for your pods.
The routable IP block overrides the default non-routable <strong>Pods IP Block</strong> when a Kubernetes cluster is deployed using that network profile. For more information, see <a href="network-profiles.html#routable-pods">Routable Pods</a> in <em>Using Network Profiles (NSX-T Only)</em>.</p>
####<a id='nodes-ip-block'></a>Nodes IP Block
Each Kubernetes cluster deployed by PKS owns a /24 subnet. To deploy multiple Kubernetes clusters, the **Nodes IP Block** you reference in the **Networking** pane of the PKS tile must be larger than /24. The recommended size is /16. For more information, see [Creating NSX-T Objects for PKS](nsxt-create-objects.html).
<img src="images/nsxt/nodes-ip-block.png" alt="Nodes IP Block">
<p class="note"><strong>Note</strong>: You can use a smaller nodes block size for no-NAT environments with a limited number of routable subnets.
For example, /20 allows up to 16 Kubernetes clusters to be created.</p>
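To confirm the subnet arithmetic behind these recommendations, here is a minimal Python sketch (standard library only; the example block CIDRs are illustrative) that counts how many /24 subnets an IP block of a given size can supply:

```python
import ipaddress

def count_24_subnets(block_cidr: str) -> int:
    """Number of /24 subnets that can be carved from an IP block."""
    block = ipaddress.ip_network(block_cidr)
    return len(list(block.subnets(new_prefix=24)))

print(count_24_subnets("172.16.0.0/16"))  # 256 -> namespaces or clusters
print(count_24_subnets("172.16.0.0/20"))  # 16  -> clusters in a no-NAT setup
```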
###<a id='reserved-ip-blocks'></a>Reserved IP Blocks
Do not use any of the IP blocks listed in this section for pods or nodes.
If you create Kubernetes clusters with any of the blocks listed below, the Kubernetes worker nodes cannot reach Harbor or internal Kubernetes services.
The Docker daemon on each Kubernetes worker node uses a subnet in the following CIDR ranges.
Do not use IP addresses in these ranges:
* 172.17.0.0/16
* 172.18.0.0/16
* 172.19.0.0/16
* 172.20.0.0/16
* 172.21.0.0/16
* 172.22.0.0/16
If PKS is deployed with Harbor, Harbor uses the following CIDR ranges for its internal Docker bridges.
Do not use IP addresses in these ranges:
* 172.18.0.0/16
* 172.19.0.0/16
* 172.20.0.0/16
* 172.21.0.0/16
* 172.22.0.0/16
Each Kubernetes cluster uses the following subnet for Kubernetes services.
Do not use the following IP block for the Nodes IP Block:
* 10.100.200.0/24
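A minimal sketch (standard library only; the reserved list mirrors the ranges above, and the candidate CIDRs are illustrative) to verify that a planned Pods or Nodes IP Block does not overlap any reserved range:

```python
import ipaddress

RESERVED = [
    "172.17.0.0/16", "172.18.0.0/16", "172.19.0.0/16",
    "172.20.0.0/16", "172.21.0.0/16", "172.22.0.0/16",
    "10.100.200.0/24",  # Kubernetes services subnet used by every cluster
]

def check_block(candidate_cidr: str) -> None:
    """Raise if the candidate block overlaps a reserved range."""
    candidate = ipaddress.ip_network(candidate_cidr)
    for cidr in RESERVED:
        if candidate.overlaps(ipaddress.ip_network(cidr)):
            raise ValueError(f"{candidate_cidr} overlaps reserved {cidr}")
    print(f"{candidate_cidr} is clear of reserved ranges")

check_block("172.16.0.0/16")  # OK
check_block("172.20.0.0/16")  # raises ValueError
```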
##<a id='nsx-manager'></a> Step 2: Deploy NSX Manager
Deploy the [NSX Manager Unified Appliance](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-6788EC72-9562-45AD-AD94-7D883B90DE69.html). For instructions, see [Deploy the NSX Manager](nsxt-deploy.html#deploy-nsx-manager).
##<a id='nsx-controllers'></a> Step 3: Deploy NSX Controllers
Deploy one or more [NSX Controllers](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-91955EED-05C4-4BED-88C3-FB5ED7420BAE.html). You must deploy at least one NSX Controller for PKS; three NSX Controllers are recommended. For instructions, see [Deploy NSX Controllers](nsxt-deploy.html#deploy-nsx-controllers).
##<a id='nsx-clusters'></a> Step 4: Create NSX Clusters
Create NSX Clusters for the [Management Plane](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-08A4954C-67E7-4DB9-BF4C-B90969923E73.html) and [Control Plane](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-FAC5A685-D876-4FAA-A209-CBF797F34DC7.html). For instructions, see [Create NSX Clusters](nsxt-deploy.html#create-nsx-cluster).
##<a id='nsx-edges'></a> Step 5: Deploy NSX Edge Nodes
Deploy two or more [NSX Edge Nodes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-6E035C20-35B4-415F-A0B9-20C4A5674B46.html). Edge Nodes for PKS run load balancers for PKS API traffic, load balancer services for Kubernetes pods, and ingress controllers for Kubernetes pods. For instructions, see [Deploy NSX Edge Nodes](nsxt-deploy.html#deploy-nsx-edges).
PKS supports active/standby Edge Node failover and requires at least two Edge Nodes. In addition, PKS requires the Edge Node Large VM (8 vCPU, 16 GB of RAM, and 120 GB of storage). The default size of the LB provisioned for PKS is small. You can customize this after deploying PKS using <a href="network-profiles.html">Network Profiles</a>.
The table below lists the maximum number of load balancers per Edge Node form factor.
Edge Node Type | LB Small Max | LB Medium Max | LB Large Max
----------------|---------------|---------------|---------------
Edge VM Small | 0 | 0 | 0
Edge VM Medium | 1 | 0 | 0
Edge VM Large | 40 | 4 | 0
Edge Bare Metal | 750 | 100 | 7
Keep in mind the following requirements for NSX Edge Nodes with PKS:
* PKS requires the NSX-T Edge Node large VM (8 vCPU and 16 GB of RAM) or the bare metal Edge Node. For more information, see <a href="./vsphere-nsxt-rpd-mpd.html">Hardware requirements for PKS on vSphere with NSX-T</a>.
* The default load balancer deployed by NSX-T for a PKS-provisioned Kubernetes cluster is the small load balancer. The size of the load balancer can be customized using <a href="./network-profiles-define.html">Network Profiles</a>.
* Edge Node VMs can only be deployed on Intel-based ESXi hosts.
* The large load balancer requires a bare metal Edge Node.
* For high availability, Edge Nodes are deployed in pairs within an Edge Cluster. The minimum number of Edge Nodes per Edge Cluster is 2; the maximum is 10. PKS supports active/standby mode only; the standby LB is not available for use while the active LB is running. To determine the maximum number of load balancers per Edge Cluster, multiply the maximum number of LBs for the Edge Node type by the number of Edge Nodes and divide by 2. For example, with 10 Edge VM Large nodes in an Edge Cluster, you can have up to 200 small LB instances (40 x 10 / 2) or up to 20 medium LB instances (4 x 10 / 2), as the sketch after this list shows.
* PKS deploys a virtual server for each load balancer instance: one virtual server per service of type `LoadBalancer`, two global virtual servers for ingress resources (HTTP and HTTPS), and one global virtual server for the PKS API. For more information, see <a href="network-profiles-define.html">Defining Network Profiles</a>.
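To make the capacity arithmetic above concrete, here is a minimal Python sketch (the capacity values come from the table above; the form-factor keys and `edge_count` argument are illustrative names):

```python
# Max LB instances per Edge Node form factor (from the table above).
LB_CAPACITY = {
    "edge_vm_small":   {"small": 0,   "medium": 0,   "large": 0},
    "edge_vm_medium":  {"small": 1,   "medium": 0,   "large": 0},
    "edge_vm_large":   {"small": 40,  "medium": 4,   "large": 0},
    "edge_bare_metal": {"small": 750, "medium": 100, "large": 7},
}

def max_lbs_per_cluster(form_factor: str, lb_size: str, edge_count: int) -> int:
    """Active/standby pairing halves usable capacity across the Edge Cluster."""
    per_node = LB_CAPACITY[form_factor][lb_size]
    return per_node * edge_count // 2

print(max_lbs_per_cluster("edge_vm_large", "small", 10))   # 200
print(max_lbs_per_cluster("edge_vm_large", "medium", 10))  # 20
```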
##<a id='nsx-edge-reg'></a> Step 6: Register NSX Edge Nodes
[Register NSX Edge Nodes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-11BB4CF9-BC1D-4A76-A32A-AD4C98CBF25B.html) with the NSX Manager. For instructions, see [Register NSX Edge Nodes](nsxt-deploy.html#register-nsx-edges).
##<a id='nsx-vib'></a> Step 7: Enable VIB Repository Service
The VIB repository service provides access to native libraries for NSX Transport Nodes. The service must be enabled before you proceed with deploying NSX-T. For instructions, see [Enable VIB Repository Service on NSX Manager](nsxt-deploy.html#enable-repo).
##<a id='nsx-tep'></a> Step 8: Create TEP IP Pool
Create a Tunnel Endpoint IP Pool (TEP IP Pool) within the usable range of the **VTEP CIDR** defined in [Plan Network CIDRs](#plan-cidrs). The TEP IP Pool is used for [NSX Transport Nodes](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-370D06E1-1BB6-4144-A654-7AF2542C3136.html?hWord=N4IghgNiBcICoFEAKACAcgZQBoFo4gF8g). For instructions, see [Create TEP IP Pool](nsxt-deploy.html#create-tep).
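If you prefer to script this step, the NSX-T Manager REST API exposes IP pool creation. The following Python sketch is illustrative only: the manager host name, credentials, pool name, and address values are placeholders, and you should verify the `/api/v1/pools/ip-pools` endpoint and payload schema against your NSX-T version:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder host
AUTH = ("admin", "PASSWORD")                 # placeholder credentials

# TEP pool drawn from the usable range of the VTEP CIDR planned earlier.
payload = {
    "display_name": "TEP-IP-POOL",
    "subnets": [{
        "cidr": "192.168.1.0/24",
        "allocation_ranges": [{"start": "192.168.1.10", "end": "192.168.1.100"}],
        "gateway_ip": "192.168.1.1",
    }],
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/pools/ip-pools",
    json=payload,
    auth=AUTH,
    verify=False,  # lab only; use a trusted CA bundle in production
)
resp.raise_for_status()
print("Created pool:", resp.json()["id"])
```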
##<a id='nsx-overlay'></a> Step 9: Create Overlay Transport Zone
Create an [NSX Overlay Transport Zone](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-AC4A0159-DF37-47F6-A333-33C7945570D1.html) (TZ-Overlay) for PKS Control Plane services and Kubernetes Cluster deployment overlay networks. For instructions, see [Create Overlay TZ](nsxt-deploy.html#create-tz-overlay).
##<a id='nsx-vlan'></a> Step 10: Create VLAN Transport Zone
Create an [NSX VLAN Transport Zone](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-AC4A0159-DF37-47F6-A333-33C7945570D1.html) (TZ-VLAN) for NSX Edge uplinks (ingress/egress) for PKS-managed Kubernetes clusters. For instructions, see [Create VLAN TZ](nsxt-deploy.html#create-tz-vlan).
##<a id='nsx-uplink'></a> Step 11: Create Uplink Profile for Edge Nodes
Create an [NSX Uplink Profile](https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-50FDFDFB-F660-4269-9503-39AE2BBA95B4.html?hWord=N4IghgNiBcIK4AcIEsB2BrEBfIA) for NSX Edge Nodes to be used with PKS. For instructions, see [Create Uplink Profile for Edge Nodes](nsxt-deploy.html#create-uplink).
##<a id='nsx-tn'></a> Step 12: Create Transport Edge Nodes
Create [NSX Edge Transport Nodes](https://docs.vmware.com/en/VMware-NSX-T/2.1/com.vmware.nsxt.install.doc/GUID-53295329-F02F-44D7-A6E0-2E3A9FAE6CF9.html), which allow Edge Nodes to exchange traffic for virtual networks among other NSX nodes. For instructions, see [Create Transport Edge Nodes](nsxt-deploy.html#create-edge-tns).
##<a id='nsx-edge-cluster'></a> Step 13: Create Edge Cluster
Create an [NSX Edge Cluster](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-898099FC-4ED2-4553-809D-B81B494B67E7.html?hWord=N4IghgNiBcIKYBMDmcAEBjCBXAzgFzgCcQBfIA) and add each NSX Edge Transport Node to the Edge Cluster. For instructions, see [Create Edge Cluster](nsxt-deploy.html#create-edge-cluster).
##<a id='nsx-to'></a> Step 14: Create T0 Logical Router for PKS
[NSX Tier-0 Logical Routers](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-3F163DEE-1EE6-4D80-BEBF-8D109FDB577C.html) are used to route data between the NSX-T virtual network and the physical network. For instructions, see [Create T0 Router](nsxt-deploy.html#create-t0-router).
##<a id='nsx-edge-ha'></a> Step 15: Configure NSX Edge for High Availability (HA)
Configure NSX Edge for high availability (HA) using Active/Standby mode to support failover, as shown in the following figure. For instructions, see [Configure Edge HA](nsxt-deploy.html#configure-edge-ha).
<p class="note"><strong>Note</strong>: If the T0 Router is not <a href="nsxt-deploy.html#configure-edge-ha">properly configured for HA</a>, failover to the standby Edge Node will not occur.</p>
![NSX Edge High Availability](images/vsphere/nsxt-edge-ha.png)
##<a id='nsx-esxi'></a> Step 16: Prepare ESXi Hosts for PKS Compute Plane
An [NSX Transport Node](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.install.doc/GUID-FCC5390E-3489-47E8-ABE6-2F7FD43775BD.html) allows NSX Nodes to exchange traffic for virtual networks. ESXi hosts dedicated to the PKS Compute Cluster must be prepared as transport nodes. For instructions, see [Prepare Compute Cluster ESXi Hosts](nsxt-deploy.html#prepare-esxi).
<p class="note"><strong>Note</strong>: The Transport Nodes must be placed on free host NICs not already used by other vSwitches on the ESXi host. Use the <code>VTEPS</code> IP pool that allows ESXi hosts to route and communicate with each other, as well as other Edge Transport Nodes.</p>
##<a id='nsx-mgmt-plane'></a> Step 17: Create NSX-T Objects for PKS Management Plane
Prepare the vSphere and NSX-T infrastructure for the PKS Management Plane where the PKS, Ops Manager, BOSH Director, and Harbor Registry VMs are deployed. This includes a vSphere resource pool for PKS management components, an NSX [Tier-1 (T1) Logical Switch](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-23194F9A-416A-40EA-B9F7-346B391C3EF8.html), and an NSX [Tier-1 Logical Router and Port](https://docs.vmware.com/en/VMware-NSX-T-Data-Center/2.3/com.vmware.nsxt.admin.doc/GUID-DAEF8457-8363-4F33-84DA-68AA36A2DE3C.html). For instructions, see [Prepare PKS Management Plane](nsxt-prepare-mgmt-plane.html).
If you are using the <a href="nsxt-topologies.html#topology-nat">NAT Topology</a>, create the following NAT rules on the T0 Router (a scripted example follows the table). For instructions, see [Prepare Management Plane](nsxt-prepare-mgmt-plane.html).
<table>
<tr>
<th>Type</th>
<th>For</th>
</tr>
<tr>
<td>DNAT</td>
<td>External > Ops Manager</td>
</tr>
<tr>
<td>DNAT</td>
<td>External > Harbor (optional)</td>
</tr>
<tr>
<td>SNAT</td>
<td>PKS Management Plane > vCenter and NSX-T Manager</td>
</tr>
<tr>
<td>SNAT</td>
<td>PKS Management Plane > DNS</td>
</tr>
<tr>
<td>SNAT</td>
<td>PKS Management Plane > NTP</td>
</tr>
<tr>
<td>SNAT</td>
<td>PKS Management Plane > LDAP/AD (optional)</td>
</tr>
<tr>
<td>SNAT</td>
<td>PKS Management Plane > ESXi</td>
</tr>
</table>
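NAT rules like these can also be created through the NSX-T Manager REST API. The following Python sketch is illustrative: the manager host, credentials, T0 router ID, and both IP addresses are placeholders, and you should verify the `/api/v1/logical-routers/{router-id}/nat/rules` endpoint against your NSX-T version:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder host
T0_ROUTER_ID = "T0-ROUTER-UUID"              # placeholder router ID
AUTH = ("admin", "PASSWORD")                 # placeholder credentials

# DNAT: external address -> Ops Manager on the PKS management network.
dnat_rule = {
    "action": "DNAT",
    "match_destination_network": "10.40.14.1",  # routable IP (placeholder)
    "translated_network": "10.172.1.3",         # Ops Manager IP (placeholder)
    "enabled": True,
}

resp = requests.post(
    f"{NSX_MANAGER}/api/v1/logical-routers/{T0_ROUTER_ID}/nat/rules",
    json=dnat_rule,
    auth=AUTH,
    verify=False,  # lab only; use a trusted CA bundle in production
)
resp.raise_for_status()
print("Created NAT rule:", resp.json()["id"])
```

An SNAT rule follows the same pattern with `"action": "SNAT"` and `match_source_network` in place of the destination match; the compute-plane SNAT rules in the next step can be created the same way.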
##<a id='nsx-comp-plane'></a> Step 18: Create NSX-T Objects for PKS Compute Plane
Create Resource Pools for AZ-1 and AZ-2, which map to the Availability Zones you will create when you configure BOSH Director and reference when you install the PKS tile. In addition, create SNAT rules on the T0 router:
- One for Kubernetes master nodes (hosting NCP) to reach the NSX-T Manager
- One for Kubernetes master nodes to access LDAP/AD (optional)
For instructions, see [Prepare Compute Plane](nsxt-prepare-compute-plane.html).
##<a id='nsx-om'></a> Step 19: Deploy Ops Manager in the NSX-T Environment
Deploy Ops Manager 2.3.2+ on the NSX-T Management Plane network.
For instructions, see <a href="vsphere-nsxt-om-deploy.html">Deploy Ops Manager on vSphere with NSX-T</a>.
##<a id='nsx-mgr-cert'></a> Step 20: Generate NSX Manager Certificate
Generate the CA Cert for the NSX Manager and import the certificate to NSX Manager.
For instructions, see <a href="generate-nsx-ca-cert.html">Generate the NSX Manager CA Cert</a>.
##<a id='nsx-bosh'></a> Step 21: Configure BOSH Director for vSphere with NSX-T
Create BOSH availability zones (AZs) that map to the Management and Compute resource pools in vSphere, and the Management and Control plane networks in NSX-T.
For instructions, see <a href="vsphere-nsxt-om-config.html">Configure BOSH Director for vSphere with NSX-T</a>.
##<a id='nsx-pi-cert'></a> Step 22: Generate NSX Manager Principal Identity Certificate
Generate the NSX Manager Super User Principal Identity Certificate and register it with the NSX Manager using the NSX API.
For instructions, see <a href="generate-nsx-pi-cert.html">Generate the NSX Manager PI Cert</a>.
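The registration itself is a pair of calls to the NSX-T trust-management API. The following Python sketch is illustrative, not the documented procedure: the manager host, credentials, certificate file, superuser name, and node ID are placeholders, and field names may vary by NSX-T version, so verify against the NSX API reference:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder host
session = requests.Session()
session.auth = ("admin", "PASSWORD")         # placeholder credentials
session.verify = False  # lab only; use a trusted CA bundle in production

with open("pks-nsx-t-superuser.crt") as f:   # placeholder certificate file
    cert_pem = f.read()

# Step 1: import the certificate and capture its ID.
cert_resp = session.post(
    f"{NSX_MANAGER}/api/v1/trust-management/certificates?action=import",
    json={"pem_encoded": cert_pem, "display_name": "pks-superuser-cert"},
)
cert_resp.raise_for_status()
cert_id = cert_resp.json()["results"][0]["id"]

# Step 2: register the principal identity against that certificate.
pi_resp = session.post(
    f"{NSX_MANAGER}/api/v1/trust-management/principal-identities",
    json={
        "name": "pks-nsx-t-superuser",  # placeholder superuser name
        "node_id": "pks",               # free-form identifier
        "certificate_id": cert_id,
    },
)
pi_resp.raise_for_status()
print("Registered principal identity:", pi_resp.json()["id"])
```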
##<a id='nsx-pks-obj'></a> Step 23: Create NSX-T Objects for PKS
Create IP blocks for the [node networks](#nodes-ip-block) and the [pod networks](#pods-ip-block). The recommended size for each block is /16, from which /24 subnets are carved for nodes and pods. See [Plan IP Blocks](#plan-ip-blocks) and [Reserved IP Blocks](#reserved-ip-blocks) for details.
In addition, create a [Floating IP Pool](https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.3/com.vmware.nsx.admin.doc/GUID-C650F611-7116-4CE1-A552-2DAB09D0D030.html) from which to assign routable IP addresses to components. This network provides your load balancing address space for each Kubernetes cluster created by PKS. The network also provides IP addresses for Kubernetes API access and Kubernetes exposed services.
These [network objects](https://docs.vmware.com/en/VMware-NSX-Data-Center-for-vSphere/6.3/com.vmware.nsx.admin.doc/GUID-C0760D51-16F1-43B8-90D8-D39B47249157.html) are required to configure the PKS tile for NSX-T networking. For instructions, see <a href="./nsxt-create-objects.html">Create NSX-T Objects for PKS</a>.
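These objects can likewise be scripted against the NSX-T Manager API. A minimal sketch, assuming the `/api/v1/pools/ip-blocks` and `/api/v1/pools/ip-pools` endpoints of the NSX-T 2.x Manager API; the host, credentials, object names, and all CIDR and range values are placeholders to replace with your planned values:

```python
import requests

NSX_MANAGER = "https://nsx-mgr.example.com"  # placeholder host
session = requests.Session()
session.auth = ("admin", "PASSWORD")         # placeholder credentials
session.verify = False  # lab only; use a trusted CA bundle in production

# Pods and Nodes IP Blocks (/16 recommended; CIDRs are placeholders).
for name, cidr in [("ip-block-pods", "172.16.0.0/16"),
                   ("ip-block-nodes", "172.15.0.0/16")]:
    resp = session.post(
        f"{NSX_MANAGER}/api/v1/pools/ip-blocks",
        json={"display_name": name, "cidr": cidr},
    )
    resp.raise_for_status()
    print(name, "->", resp.json()["id"])

# Floating IP Pool for load balancer VIPs (values are placeholders).
resp = session.post(
    f"{NSX_MANAGER}/api/v1/pools/ip-pools",
    json={
        "display_name": "ip-pool-vips",
        "subnets": [{
            "cidr": "10.172.2.0/24",
            "allocation_ranges": [{"start": "10.172.2.10", "end": "10.172.2.200"}],
        }],
    },
)
resp.raise_for_status()
print("ip-pool-vips ->", resp.json()["id"])
```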
##<a id='nsx-pks-install'></a> Step 24: Install PKS on vSphere with NSX-T
At this point your NSX-T environment is prepared for PKS installation using the PKS tile in Ops Manager. For instructions, see <a href="./installing-nsx-t.html">Installing PKS on vSphere with NSX-T</a>.
##<a id='nsx-pks-harbor'></a> Step 25: Install Harbor Registry for PKS
The VMware Harbor Registry is recommended for PKS. Install Harbor in the NSX Management Plane with other PKS components (PKS API, Ops Manager, and BOSH). For instructions, see <a href="https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html">Installing Harbor Registry on vSphere with NSX-T</a> in the PKS Harbor documentation.
If you are using the [NAT deployment topology](nsxt-topologies.html#topology-nat) for PKS, create a DNAT rule that maps the private Harbor IP address to a routable IP address from the floating IP pool on the PKS management network. See <a href="https://docs.pivotal.io/partners/vmware-harbor/integrating-pks.html#create-dnat">Create DNAT Rule</a>.
##<a id='nsx-pks-adv'></a> Step 26: Perform Post-Installation NSX-T Configurations as Necessary
Once PKS is installed, you may want to perform additional NSX-T configurations to support customization of Kubernetes clusters at deployment time, such as:
- <a href="./proxies.html">Configuring an HTTP Proxy</a> to proxy outgoing HTTP/S traffic from NCP, PKS, BOSH, and Ops Manager to vSphere infrastructure components (vCenter, NSX Manager)
- <a href="./network-profiles-define.html">Defining Network Profiles</a> to customize NSX-T networking objects, such as load balancer size, custom Pods IP Block, routable Pods IP Block, configurable CIDR range for the Pods IP Block, custom Floating IP block, and more.
- <a href="./nsxt-multi-t0.html">Configuring Multiple Tier-0 Routers</a> to support customer/tenant isolation