Scaling Cloud Controller
The purpose of this document is to provide operators with guidance on how and when to scale the jobs within capi-release. It is broken down by BOSH job (e.g. cloud_controller_ng) and highlights some of the key metrics, heuristics, and logs we have found helpful. These lists are not exhaustive by any means, but they should serve as a good place to start.
As always, suggestions and contributions to this document are welcome!
The cloud_controller_ng Ruby process can be considered the main job in capi-release. It, along with nginx_cc, powers the Cloud Controller API that all users of Cloud Foundry interact with. In addition to serving external clients (e.g. the cf CLI, web UIs, log/metrics aggregators), cloud_controller_ng also provides APIs for internal components within Cloud Foundry, such as the Loggregator and Networking subsystems.
- cc.requests.outstanding is at or consistently near 20
- system.cpu.user is above 0.85 utilization of a single core on the api vm (see note on BOSH cpu values)
- cc.vitals.cpu_load_avg is 1 or higher
- cc.vitals.uptime is consistently low, indicating frequent restarts (possibly due to memory pressure)
- Average response latency is elevated
- Degraded web UI responsiveness or timeouts
/var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log
/var/vcap/sys/log/cloud_controller_ng/nginx-access.log
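As a rough way to spot elevated error rates and timeouts, you can summarize HTTP status codes from the nginx access log. This is only a sketch: it assumes a combined-style log format with the status code as the ninth whitespace-separated field, so adjust the field position to match the log format in your deployment.

```bash
# Count responses by HTTP status code in the CC access log.
# The field position ($9) assumes a combined-style format and may need adjusting.
awk '{print $9}' /var/vcap/sys/log/cloud_controller_ng/nginx-access.log | sort | uniq -c | sort -rn
```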
Before and after scaling Cloud Controller API VMs, it's important to verify that the CC's database is not overloaded. All Cloud Controller processes are backed by the same database (ccdb), so heavy load on the database will impact API performance regardless of the number of Cloud Controllers deployed.
In CF deployments with internal MySQL clusters, a MySQL database VM with CPU usage over ~80% can be considered overloaded (see footnote). When this happens, the MySQL VMs must be scaled up, or the added load of additional Cloud Controllers will only exacerbate the problem.
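A quick way to check whether the database VMs are CPU-bound is the BOSH CLI's vitals output. In this sketch, the deployment name (cf) and the database instance group name (database) are assumptions that may differ in your environment:

```bash
# Show CPU, memory, and disk vitals for all VMs, then narrow to the database instance group.
bosh -d cf vms --vitals | grep database
```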
Cloud Controller API VMs should primarily be scaled horizontally. Scaling up the number of cores on a single VM is not helpful because of Ruby's Global Interpreter Lock (GIL), which limits the cloud_controller_ng process so that it can only effectively use a single CPU core on a multi-core machine.
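Horizontal scaling is typically done by raising the instance count of the instance group that runs the Cloud Controller API. The following is a minimal sketch using a BOSH ops file; the deployment name (cf), manifest filename, and instance group name (api) are assumptions based on a cf-deployment-style setup:

```bash
# Hypothetical ops file that sets the api instance group to 4 instances.
cat > scale-api-instances.yml <<'EOF'
- type: replace
  path: /instance_groups/name=api/instances
  value: 4
EOF

# Apply it on top of the existing deployment manifest.
bosh -d cf deploy cf-manifest.yml -o scale-api-instances.yml
```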
Colloquially known as "local workers," this job is primarily responsible for handling files uploaded to the API VMs during cf push (e.g. packages, droplets, resource matching).
- cc.job_queue_length.cc-<VM_NAME>-<VM_INDEX> (e.g. cc.job_queue_length.cc-api-0) is continuously growing
- cc.job_queue_length.total is continuously growing
- cf push is intermittently failing
- cf push average time is elevated
/var/vcap/sys/log/cloud_controller_ng/cloud_controller_ng.log
Because local workers are colocated with the Cloud Controller API job, they are scaled horizontally along with the API.
Colloquially known as "generic workers" or just "workers," this job (and VM) is responsible for handling asynchronous work, batch deletes, and other periodic tasks scheduled by the cloud_controller_clock.
- cc.job_queue_length.cc-<VM_TYPE>-<VM_INDEX> (e.g. cc.job_queue_length.cc-cc-worker-0) is continuously growing
- cc.job_queue_length.total is continuously growing
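These queue-length metrics are backed by Cloud Controller's background job queue, which lives in ccdb. If the metrics are not readily available, a rough cross-check is to count pending jobs directly in the database. This sketch assumes a MySQL-backed ccdb and Cloud Controller's standard delayed_jobs table, with placeholder connection details:

```bash
# Count pending background jobs per queue (connection parameters are placeholders).
mysql -h CCDB_HOST -u CCDB_USER -p ccdb \
  -e 'SELECT queue, COUNT(*) AS pending FROM delayed_jobs GROUP BY queue;'
```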
- cf delete-org ORG_NAME appears to leave its contained resources around for a long time
- Users report slow deletes for other resources
- cf-acceptance-tests succeed generally, but fail during cleanup
/var/vcap/sys/log/cloud_controller_worker/cloud_controller_worker.log
The cc-worker VM can safely be scaled horizontally in all deployments, but if your worker VMs have CPU/memory headroom you can also use the cc.jobs.generic.number_of_workers BOSH property to increase the number of worker processes on each VM.
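As a sketch, that property can be set with a BOSH ops file. The instance group name (cc-worker), job name (cloud_controller_worker), deployment name, and manifest filename below are assumptions based on a cf-deployment-style setup:

```bash
# Hypothetical ops file that runs 4 generic worker processes on each cc-worker VM.
cat > increase-worker-processes.yml <<'EOF'
- type: replace
  path: /instance_groups/name=cc-worker/jobs/name=cloud_controller_worker/properties/cc/jobs/generic/number_of_workers?
  value: 4
EOF

bosh -d cf deploy cf-manifest.yml -o increase-worker-processes.yml
```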
The cloud_controller_clock runs the Diego sync process and schedules periodic background jobs. The cc_deployment_updater is responsible for handling v3 rolling app deployments.
- cc.Diego_sync.duration is continuously increasing over time
- system.cpu.user is above 80% on the scheduler VM
- Diego domains are frequently unfresh (a quick freshness check is sketched after this list)
- The Diego Desired LRP count is larger than the total process instance count reported via the Cloud Controller APIs
- Deployments are slow to increase and decrease instance count
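Domain freshness can be checked directly with Diego's cfdot CLI from a VM where it is colocated (for example, a Diego cell). This assumes cfdot is installed and configured to reach the BBS; the cf-apps domain should normally appear in the output while the sync process is healthy:

```bash
# List the Diego domains that are currently fresh.
cfdot domains
```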
/var/vcap/sys/log/cloud_controller_clock/cloud_controller_clock.log
/var/vcap/sys/log/cc_deployment_updater/cc_deployment_updater.log
Both of these jobs are singletons (only a single instance is active), so extra instances are for failover HA rather than scalability. Performance issues are likely due to database overloading or greedy neighbors on the scheduler VM.
The internal WebDAV blobstore comes included with CF by default. It is used by the platform to store packages (app bits), staged droplets, buildpacks, and cached app resources. Files are typically uploaded to the internal blobstore via the Cloud Controller local workers and downloaded by Diego when app instances are started.
- system.cpu.user is consistently high on the singleton-blobstore VM
- system.disk.persistent.percent is high, indicating the blobstore is running out of room for additional files
- cf push is intermittently failing
- cf push average time is elevated
- App droplet downloads are timing out/failing on Diego
/var/vcap/sys/log/blobstore/internal_access.log
The internal WebDAV blobstore cannot be scaled horizontally, not even for availability purposes, because it relies on the singleton-blobstore VM's persistent disk for file storage. For this reason, it is not recommended for environments that require high availability; for those environments we recommend using an external blobstore.
It can, however, be scaled vertically: scaling up the number of CPUs or adding faster disk storage can improve the performance of the internal WebDAV blobstore when it is under high load.
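A sketch of vertical scaling with a BOSH ops file follows. The instance group name (singleton-blobstore) is taken from this document, while the deployment name, manifest filename, and vm_type value (large) are assumptions that must match your environment and cloud config:

```bash
# Hypothetical ops file that moves the blobstore to a larger VM type defined in the cloud config.
cat > scale-blobstore-vertically.yml <<'EOF'
- type: replace
  path: /instance_groups/name=singleton-blobstore/vm_type
  value: large
EOF

bosh -d cf deploy cf-manifest.yml -o scale-blobstore-vertically.yml
```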
High numbers of concurrent app container starts on Diego can cause stress on the blobstore. This typically happens during upgrades in environments with a large number of apps and Diego cells. If vertically scaling the blobstore or improving its disk performance is not an option, limiting the maximum number of concurrent app container starts can be used as a mitigation.
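One way to cap concurrent starts is Diego's auctioneer setting for the maximum number of in-flight container starts. The sketch below assumes a cf-deployment-style manifest where the auctioneer job runs on the scheduler instance group and that your Diego release exposes the diego.auctioneer.starting_container_count_maximum property; verify both against your deployment before applying:

```bash
# Hypothetical ops file capping Diego at 100 in-flight container starts at a time.
cat > limit-starting-containers.yml <<'EOF'
- type: replace
  path: /instance_groups/name=scheduler/jobs/name=auctioneer/properties/diego/auctioneer/starting_container_count_maximum?
  value: 100
EOF

bosh -d cf deploy cf-manifest.yml -o limit-starting-containers.yml
```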