_plans.html.md.erb
To activate a plan, perform the following steps:
1. Click the **Plan 1**, **Plan 2**, or **Plan 3** tab.
<p class="note"><strong>Note</strong>: A plan defines a set of resource types used for deploying clusters. You can configure up to three plans. You must configure <strong>Plan 1</strong>.</p>
1. Select **Active** to activate the plan and make it available to developers deploying clusters.
<% if current_page.data.iaas == "Azure" %>
![Plan pane configuration](images/azure/plan1.png)
<% else %>
![Plan pane configuration](images/plan1.png)
<% end %>
1. Under **Name**, provide a unique name for the plan.
1. Under **Description**, edit the description as needed.
The plan description appears in the Services Marketplace, which developers can access by using the PKS CLI.
1. Under **Master/ETCD Node Instances**, select the default number of Kubernetes master/etcd nodes to provision for each cluster.
You can enter either <code>1</code> or <code>3</code>.
<p class="note"><strong>Note</strong>: If you deploy a cluster with multiple master/etcd node VMs,
confirm that you have sufficient hardware to handle the increased load on disk write and network traffic. For more information, see <a href="https://github.com/etcd-io/etcd/blob/master/Documentation/op-guide/hardware.md#example-hardware-configurations">Hardware recommendations</a> in the etcd documentation.<br><br>
In addition to meeting the hardware requirements for a multi-master cluster, we recommend monitoring etcd for disk latency, network latency, and other indicators of cluster health. For more information, see <a href="monitor-etcd.html">Monitoring Master/etcd Node VMs</a>.</p>
<p class="note warning"><strong>WARNING</strong>: To change the number of master/etcd nodes for a plan, you must ensure that no existing clusters use the plan. PKS does not support changing the number of master/etcd nodes for plans with existing clusters.
</p>
1. Under **Master/ETCD VM Type**, select the type of VM to use for Kubernetes master/etcd nodes. For more information, see the [Master Node VM Size](vm-sizing.html#master-sizing) section of _VM Sizing for PKS Clusters_.
1. Under **Master Persistent Disk Type**, select the size of the persistent disk for the Kubernetes master node VM.
<% if current_page.data.iaas == "Azure" %>
1. Under **Master/ETCD Availability Zones**, select **null**.
<p class="note"><strong>Note:</strong> Ops Manager on Azure does not support availability zones. By default, BOSH deploys VMs in <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-availability-sets">Azure Availability Sets</a>.</p>
<% else %>
1. Under **Master/ETCD Availability Zones**, select one or more AZs for the Kubernetes clusters deployed by PKS. If you select more than one AZ, PKS deploys the master VM in the first AZ and the worker VMs across the remaining AZs.
<% end %>
1. Under **Maximum number of workers on a cluster**, set the maximum number of
Kubernetes worker node VMs that PKS can deploy for each cluster. Enter a number
between `1` and `200`.
<% if current_page.data.iaas == "Azure" %>
<img src="images/azure/plan2.png" alt="Plan pane configuration, part two">
<% else %>
<img src="images/plan2.png" alt="Plan pane configuration, part two" width="375">
<% end %>
1. Under **Worker Node Instances**, select the default number of Kubernetes worker nodes to provision for each cluster.
<br><br>
If the user creating a cluster with the PKS Command Line Interface (CLI) does not specify a number of worker nodes, the cluster is deployed with the default number set in this field. This value cannot be greater than the maximum worker node value you set in the previous field. For more information about creating clusters, see [Creating Clusters](create-cluster.html).
<br>
For high availability, create clusters with a minimum of three worker nodes, or two per AZ if you intend to use PersistentVolumes (PVs). For example, if you deploy across three AZs, you should have six worker nodes. For more information about PVs, see [PersistentVolumes](maintain-uptime.html#persistent-volumes) in *Maintaining Workload Uptime*. Provisioning a minimum of three worker nodes, or two nodes per AZ, is also recommended for stateless workloads.
<br><br>
If you later reconfigure the plan to adjust the default number of worker nodes, existing clusters created from that plan are not automatically updated to the new default.
1. Under **Worker VM Type**, select the type of VM to use for Kubernetes worker node VMs. For more information, see the [Worker Node VM Number and Size](vm-sizing.html#worker-sizing) section of _VM Sizing for PKS Clusters_.
<p class="note"><strong>Note</strong>: If you install PKS in an NSX-T environment, we recommend that you select a <strong>Worker VM Type</strong> with a minimum disk size of 16 GB. The disk space provided by the default <code>medium</code> Worker VM Type is insufficient for PKS with NSX-T.</p>
1. Under **Worker Persistent Disk Type**, select the size of the persistent disk for the Kubernetes worker node VMs.
<% if current_page.data.iaas == "Azure" %>
1. Under **Worker Availability Zones**, select **null**.
<p class="note"><strong>Note:</strong> Ops Manager on Azure does not support availability zones. By default, BOSH deploys VMs in <a href="https://docs.microsoft.com/en-us/azure/virtual-machines/windows/tutorial-availability-sets">Azure Availability Sets</a>.</p>
<% else %>
1. Under **Worker Availability Zones**, select one or more AZs for the Kubernetes worker nodes. PKS deploys worker nodes equally across the AZs you select.
<% end %>
1. Under **Errand VM Type**, select the size of the VM that contains the errand. The smallest instance possible is sufficient, as the only errand running on this VM is the one that applies the **Default Cluster App** YAML configuration.
1. (Optional) Under **(Optional) Add-ons - Use with caution**, enter additional YAML configuration to add custom workloads to each cluster in this plan. You can specify multiple files using `---` as a separator. For more information, see [Adding Custom Workloads](custom-workloads.html).
![Plan pane configuration](images/plan3.png)
1. (Optional) To allow users to create pods with privileged containers, select the **Enable Privileged Containers - Use with caution** option. For more information, see [Pods](https://kubernetes.io/docs/concepts/workloads/pods/pod/#privileged-mode-for-pod-containers) in the Kubernetes documentation.
1. (Optional) To disable the DenyEscalatingExec admission controller, select the **Disable DenyEscalatingExec** checkbox. If you select this option, clusters in this plan can create security vulnerabilities that may impact other tiles. Use this feature with caution.
1. Click **Save**.
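
As a sketch of the **Add-ons** field described above, the entry below defines two custom workloads separated by the `---` delimiter. The namespace, workload, and image names are illustrative placeholders, not part of PKS:

```yaml
# Illustrative Add-ons entry: a namespace and a logging agent,
# separated by the `---` YAML document delimiter.
---
apiVersion: v1
kind: Namespace
metadata:
  name: cluster-addons              # hypothetical namespace name
---
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: log-agent                   # hypothetical workload name
  namespace: cluster-addons
spec:
  selector:
    matchLabels:
      app: log-agent
  template:
    metadata:
      labels:
        app: log-agent
    spec:
      containers:
      - name: log-agent
        image: example.com/log-agent:1.0   # placeholder image reference
```

Because these manifests are applied to every cluster created from the plan, keep them minimal and cluster-agnostic.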
To deactivate a plan, perform the following steps:
1. Click the **Plan 1**, **Plan 2**, or **Plan 3** tab.
1. Select **Plan Inactive**.
1. Click **Save**.