diff --git a/pages/doc/csp_supported_integrations.md b/pages/doc/csp_supported_integrations.md
index 1e51b1f02..21e192d48 100644
--- a/pages/doc/csp_supported_integrations.md
+++ b/pages/doc/csp_supported_integrations.md
@@ -487,5 +487,5 @@ The following integrations do not depend on the subscription type and work as ex
* [Webhooks](webhooks.html)
* [Graphite](graphite.html)
-* [Operations for Applications Usage Integration](wavefront_monitoring.html)
+* [Tanzu Observability Usage Integration](wavefront_monitoring.html)
diff --git a/pages/doc/direct_ingestion.md b/pages/doc/direct_ingestion.md
index f0b37ce72..4eb5ca69d 100644
--- a/pages/doc/direct_ingestion.md
+++ b/pages/doc/direct_ingestion.md
@@ -20,7 +20,7 @@ You can send data to VMware Tanzu Observability (formerly known as VMware Aria O
* **Prevent data loss, optimize network bandwidth** – The proxy buffers and manages data traffic. Even if there’s a connectivity problem, you don’t lose data points.
* **Simple firewall configuration** – The proxy receives metrics from many agents on different hosts and forwards those metrics to the Tanzu Observability service. You don’t need to open internet access for each of the agents.
* **Enrich or filter data** – You can set up the proxy preprocessor to filter data before it’s sent to the Tanzu Observability service.
- * **Examine bottlenecks** – Each proxy generates its own metrics. You can [learn about incoming and outgoing data](monitoring_proxies.html) in the individual proxy dashboards and the **Operations for Applications Service and Proxy Data** dashboard.
+ * **Examine bottlenecks** – Each proxy generates its own metrics. You can [learn about incoming and outgoing data](monitoring_proxies.html) in the individual proxy dashboards and the **Tanzu Observability Service and Proxy Data** dashboard.
diff --git a/pages/doc/integrations_new_changed_2020.md b/pages/doc/integrations_new_changed_2020.md
index 0c893a29a..6d0f232e5 100644
--- a/pages/doc/integrations_new_changed_2020.md
+++ b/pages/doc/integrations_new_changed_2020.md
@@ -26,10 +26,10 @@ Made improvements to the following integrations in October 2020 - December 2020:
* Spring Cloud Data Flow -- New preconfigured dashboard to monitor Native Kafka client
* Kubernetes -- New setup UI
* Slack -- URL unfurler
-* Operations for Applications Usage new dashboards:
- - **Operations for Applications Ingestion Policy Explorer** In environments where [ingestion policies](ingestion_policies.html) have been configured, shows usage for each user and ingestion policy.
+* Tanzu Observability Usage new dashboards:
+ - **Tanzu Observability Ingestion Policy Explorer**: In environments where [ingestion policies](ingestion_policies.html) have been configured, shows usage for each user and ingestion policy.
- **Committed Rate and Monthly Usage (PPS P95)** dashboard shows Tanzu Observability monthly usage against committed rate.
- - **Operations for Applications Namespace Usage Explorer**: Tracks the number of metrics received for the first 3 levels of your metric namespace.
+ - **Tanzu Observability Namespace Usage Explorer**: Tracks the number of metrics received for the first 3 levels of your metric namespace.
* Google Cloud Platform (GCP) -- Fixed dashboard queries in Google Kubernetes Engine (GKE) dashboard
* Azure Storage -- Preconfigured dashboard now supports monitoring of the Classic storage type
@@ -69,7 +69,7 @@ Made improvements to the following integrations in October 2019 - October 2020:
Made significant improvements to the dashboards of the following integrations:
-* Operations for Applications Tutorial dashboards (upgrade for V2 UI)
+* Tanzu Observability Tutorial dashboards (upgrade for V2 UI)
* Pivotal Cloud Foundry dashboards
* Kubernetes dashboard
* vSphere dashboards
diff --git a/pages/doc/integrations_new_changed_2021.md b/pages/doc/integrations_new_changed_2021.md
index 5e0cd7c0c..ba8f6a6af 100644
--- a/pages/doc/integrations_new_changed_2021.md
+++ b/pages/doc/integrations_new_changed_2021.md
@@ -77,7 +77,7 @@ Made improvements to the following integrations and dashboards in September 2021
- Updated the query in the top 10 CPU charts of the **AWS: ECS (Fargate)** dashboard to show the correct values.
- Fixed bucket and region count mismatch issue in the **AWS: S3** dashboard.
-* Operations for Applications Tutorial -- Added the following list of new chart types and examples to the **Chart Types** dashboard:
+* Tanzu Observability Tutorial -- Added the following list of new chart types and examples to the **Chart Types** dashboard:
- Gauge
- Pie
- Node map
@@ -113,7 +113,7 @@ Made improvements to the following integrations and dashboards in July 2021:
* Project Pacific -- Renamed the integration from Project Pacific Integration to vSphere with Tanzu Integration.
* VMware Cloud PKS -- Removed the VMware Cloud PKS integration.
* OpenTelemetry -- Updated the steps for configuring the application to send trace data to Tanzu Observability using the trace exporter.
-* Operations for Applications Usage -- Added new charts to **Proxies Overview** section in the **Operations for Applications Service and Proxy Data** dashboard to show **Spans Sampled By Policies**.
+* Tanzu Observability Usage -- Added new charts to the **Proxies Overview** section in the **Tanzu Observability Service and Proxy Data** dashboard to show **Spans Sampled By Policies**.
* Azure AD -- Added steps to configure Azure AD using Self-Service SAML.
* Data Platforms -- Added a new dashboard **Data Platform Blueprint2 - Kafka-Spark-Elasticsearch**.
* Kubernetes:
@@ -162,7 +162,7 @@ Made improvements to the following integrations and dashboards in June 2021:
Made improvements to the following integrations and dashboards in May 2021:
* AWS -- Updated the **AWS Summary** dashboard to use Delta Counters.
* Linux -- Updated the Linux integration to list all collected metrics.
-* Operations for Applications Usage:
+* Tanzu Observability Usage:
* The out of the box dashboards are updated to use new delta counters.
* The integration out of the box alerts are updated to use delta counters.
* Kubernetes:
@@ -210,7 +210,7 @@ Made improvements to the following integrations in March 2021:
* OneLogin -- Updates to the integration setup instructions
* vSphere -- Fixes to the out of the box dashboards
* RabbitMQ -- Fixes to the out of the box dashboards
-* Operations for Applications Usage -- Added new alerts to the integration
+* Tanzu Observability Usage -- Added new alerts to the integration
## December 2020 - February 2021
@@ -224,7 +224,7 @@ Made improvements to the following integrations and dashboards in December 2020
* Amazon Web Services Gateway -- New API gateway types
* Spring Cloud Data Flow -- Spring Cloud Data Flow and Spring Cloud Skipper version upgrade
* Microsoft Azure Storage -- New chart showing used capacity
-* Operations for Applications Usage:
+* Tanzu Observability Usage:
* Name changes to the dashboards
* Now includes an **Alerts** tab with predefined alerts
* Java
@@ -232,7 +232,7 @@ Made improvements to the following integrations and dashboards in December 2020
* AppDynamics -- Updates to the setup UI
* Kubernetes -- New out of the box dashboards
* OKTA -- Updates to the setup UI
-* Operations for Applications Tutorial
+* Tanzu Observability Tutorial
* Slack
* Amazon Web Services: Fargate dashboard
* Tanzu Kubernetes Grid Integrated Edition -- Updated to support Tanzu Kubernetes Grid Integrated Edition 1.10
diff --git a/pages/doc/integrations_new_changed_2022.md b/pages/doc/integrations_new_changed_2022.md
index 98f02d239..4cae622e3 100644
--- a/pages/doc/integrations_new_changed_2022.md
+++ b/pages/doc/integrations_new_changed_2022.md
@@ -97,13 +97,13 @@ Logs (Beta) Related Changes:
With the Initial Availability of our Logs (Beta) feature, we have made improvements to the following integrations:
* Linux Host -- Now contains Linux Logs Setup (Beta) instructions. If Logs (Beta) is enabled for you, you can set up your Linux integration to [send logs](logging_send_logs.html) to our service. For details on the Logs (Beta) feature, see [Get Started with Logs (Beta)](logging_overview.html). For detailed steps on setting up the Linux Host integration, see [Linux Logs Setup (Beta)](linux.html).
-* Operations for Applications Usage -- We added a Logs Stats section. It contains charts that track the amount of logs that are successfully delivered and successfully queried by our service. Also, the section shows charts that track the amount of logs that are received, queued, and blocked by the Wavefront proxy. [Read more](wavefront_monitoring.html#logs-stats).
+* Tanzu Observability Usage -- We added a Logs Stats section. It contains charts that track the amount of logs that are successfully delivered and successfully queried by our service. Also, the section shows charts that track the amount of logs that are received, queued, and blocked by the Wavefront proxy. [Read more](wavefront_monitoring.html#logs-stats).
We made improvements and bug fixes to the following integrations in October 2022:
-* Operations for Applications Usage:
+* Tanzu Observability Usage:
- Made significant improvements to the **Committed Rate vs Monthly Usage (PPS P95) for Billable** and **Usage (PPS) vs Remaining Balance (PPS P95) for Burndown** dashboards. You can use the data displayed on the dashboard that suits your commit contract. For example, if you have a billable commit contract, only the **Committed Rate vs Monthly Usage (PPS P95) for Billable** dashboard will contain charts populated with data. The **Usage (PPS) vs Remaining Balance (PPS P95) for Burndown** dashboard will be empty.
- - Made a minor fix to the **Operations for Applications Service and Proxy Data** dashboard.
+ - Made a minor fix to the **Tanzu Observability Service and Proxy Data** dashboard.
* Terraform Provider:
- We added data source support for alerts, dashboards, events, derived metrics, maintenance windows, and external links.
- Added support for checking frequency of Terraform Tanzu Observability Alert.
@@ -116,7 +116,7 @@ We made improvements and bug fixes to the following integrations in October 2022
* Java -- Fixed the links to the Jolokia 2 Agent documentation.
* Kubernetes -- Added a new system alert to the integration. You can now get notified when the Kubernetes observability status becomes unhealthy.
* Google Cloud Platform -- The **Google Dataproc** dashboard is now improved with information that you must create a derived metric if you see a delay in the loading of variables.
-* Operations for Applications Tutorial -- Made some minor fixes to the **Introduction** dashboard.
+* Tanzu Observability Tutorial -- Made some minor fixes to the **Introduction** dashboard.
## September 2022
@@ -125,7 +125,7 @@ We made improvements to the following integrations in September 2022:
* vSphere -- Made fixes to the **Cluster** dashboard. Updated the cluster variable to all charts in the **Virtual Machine Operations for a Data Center - 1 Hour** section.
* Elasticsearch -- Made a fix to the query of the chart that displays the number of nodes, and updated the descriptions of charts.
* Microsoft SQL Server -- Added proxy preprocessor rules in the Microsoft SQL Server setup instructions to avoid database read/write metrics getting dropped because of an extra quote (“) in a few point-tag keys.
-* Operations for Applications Usage -- Made minor updates to the Overview tab of the integration. The link to the service internal metrics is corrected.
+* Tanzu Observability Usage -- Made minor updates to the Overview tab of the integration. The link to the service internal metrics is corrected.
* Slack -- Updated the setup instructions and added information on how to troubleshoot the Slack URL Unfurler.
* Cassandra -- We updated the integration and now you can monitor Cassandra on Kubernetes.
* Tanzu Application Service -- We added three new dashboards for monitoring TAS services:
@@ -171,7 +171,7 @@ We made improvements to the following integrations in July 2022:
* Google Cloud Platform -- Added a **Google Cloud Bigtable** out-of-the-box dashboard which allows you to monitor the Google Cloud Bigtable service.
* Microsoft Azure -- Made fixes to the **Azure Cosmos DB** dashboard to avoid showing the NO DATA message on single-stat charts.
* Fluentd -- Improved the **Fluentd** dashboard and added two new sections to the dashboard: **Buffer** and **Fluentd Statistics**.
-* Operations for Applications Usage -- Made some fixes and standardized the **Operations for Applications Service and Proxy Data** dashboard.
+* Tanzu Observability Usage -- Made some fixes and standardized the **Tanzu Observability Service and Proxy Data** dashboard.
* Kubernetes -- Improved the Kubernetes Metrics Collector Troubleshooting dashboard to show correctly whether the desired number of Collector instances are ready.
## June 2022
@@ -209,7 +209,7 @@ We made improvements to the following integrations in June 2022:
* We improved the setup instructions with information on how to enable a Prometheus endpoint.
-* Operations for Applications Usage -- Updated the dashboard descriptions and made fixes to alerts.
+* Tanzu Observability Usage -- Updated the dashboard descriptions and made fixes to alerts.
## May 2022
@@ -270,7 +270,7 @@ We made improvements to the following integrations in March 2022:
* You can enable the control plane metrics with helm, or using manual configuration. To see a full list of supported control plane metrics, visit our [github repo](https://github.com/wavefrontHQ/observability-for-kubernetes/blob/main/docs/collector/metrics.md#control-plane-metrics).
-* Operations for Applications Usage
+* Tanzu Observability Usage
* Added two new system dashboards to the integration: **Committed rate vs Monthly Usage (PPS P95) Billable** and **Usage vs Remaining Balance (PPS P95) Burndown**.
* Added three new system alerts: **Percentage of Usage Scanned**, **Percentage of Usage Ingested**, and **Remaining Balance**.
diff --git a/pages/doc/integrations_new_changed_2023.md b/pages/doc/integrations_new_changed_2023.md
index 0923f8b69..e3b08ca34 100644
--- a/pages/doc/integrations_new_changed_2023.md
+++ b/pages/doc/integrations_new_changed_2023.md
@@ -60,7 +60,7 @@ Also, we made improvements to the following integrations in October 2023:
* Removed thresholds from the **K8s pod CPU usage too high** system alert.
* Updated the **Kubernetes Workloads Troubleshooting** dashboard overview with information about the Operator compatibility.
* Tanzu Application Service - Updated the **Error Rate per Minute** chart in the **Workload Monitoring** dashboard to include the 4xx and 5xx HTTP request error counts.
-* Operations for Applications Usage -- Enabled the **Include Obsolete Metrics** option for all charts in the **Operations for Applications Service and Proxy Data** dashboard.
+* Tanzu Observability Usage -- Enabled the **Include Obsolete Metrics** option for all charts in the **Tanzu Observability Service and Proxy Data** dashboard.
* VMware GemFire -- Updated the queries of the GemFire system alerts with new prefixes.
* Go -- Removed references of deprecated SDKs.
* C Sharp -- Removed references of deprecated libraries.
@@ -102,7 +102,7 @@ We made improvements to the following integrations in August 2023:
For the latest list of integrations, see [Integrations Supported for Onboarded Subscriptions](integrations_onboarded_subscriptions.html).
-* Operations for Applications Usage -- Made bug fixes to the **Committed Rate vs Monthly Usage (PPS P95) for Billable** and **Usage (PPS) vs Remaining Balance (PPS P95) for Burndown** dashboards.
+* Tanzu Observability Usage -- Made bug fixes to the **Committed Rate vs Monthly Usage (PPS P95) for Billable** and **Usage (PPS) vs Remaining Balance (PPS P95) for Burndown** dashboards.
* Fluentd -- Updated the setup steps and instructions. You can now set up the integration and the Kubernetes Metrics Collector by using the Observability for Kubernetes Operator.
* Ceph -- Updated the setup steps and instructions. You can now set up the integration and the Kubernetes Metrics Collector by using the Observability for Kubernetes Operator.
@@ -126,7 +126,7 @@ We made improvements to the following integrations in July 2023:
* VMware GemFire -- Updated the setup steps and instructions. You can now set up the integration and the Kubernetes Metrics Collector by using the Observability for Kubernetes Operator. Also updated some of the dashboard queries to a new format.
* Uptime -- Updated the integration with the new Uptime logo.
* Windows Host -- The setup steps now use a URL parameter in the Wavefront proxy configuration.
-* Operations for Applications Usage -- Fixed issues in the predefined dashboards.
+* Tanzu Observability Usage -- Fixed issues in the predefined dashboards.
* Terraform Provider -- Fixed a discrepancy in the Terraform `resource_alert` provider resulting in erroneous Terraform change plan.
@@ -202,7 +202,7 @@ We made improvements to the following integrations in January 2023:
* Microsoft SQL Server -- Updated the charts in the **SQL Server Metrics** dashboard to use the instance variables.
-* Operations for Applications Usage -- Made fixes to the integration and now dashboards are populated with data depending on your type of contract (Billable vs. Burndown).
+* Tanzu Observability Usage -- Made fixes to the integration and now dashboards are populated with data depending on your type of contract (Billable vs. Burndown).
* Tanzu Application Service -- Made updates to the TAS system alerts and removed some of the alerts that are no longer needed, such as:
- TAS Active Locks Alerts
diff --git a/pages/doc/logging_faq.md b/pages/doc/logging_faq.md
index 65d09ca21..4b67f9385 100644
--- a/pages/doc/logging_faq.md
+++ b/pages/doc/logging_faq.md
@@ -80,9 +80,9 @@ Use the methods listed below to track the incoming log data and the number of lo
### See the Data on the Predefined Logs Stats Charts
1. Select **Dashboards** > **All Dashboards**.
-1. Search for **Operations for Applications Service and Proxy Data** in the search bar, and click the dashboard.
- ![A screenshot of the results you get when you search for the Operations for Applications Service and Proxy Data dashboard. ](images/logs_wavefront_service_and_proxy_data.png)
-1. Examine the charts in the **Logs Stats** section of the [Operations for Applications Service and Proxy Data dashboard](wavefront_monitoring.html#operations-for-applications-service-and-proxy-data-dashboard) to get details about the logs you are sending.
+1. Search for **Tanzu Observability Service and Proxy Data** in the search bar, and click the dashboard.
+ ![A screenshot of the results you get when you search for the Tanzu Observability Service and Proxy Data dashboard. ](images/logs_wavefront_service_and_proxy_data.png)
+1. Examine the charts in the **Logs Stats** section of the [Tanzu Observability Service and Proxy Data dashboard](wavefront_monitoring.html#operations-for-applications-service-and-proxy-data-dashboard) to get details about the logs you are sending.
![A screenshot of the proxy dashboard with the preconfigured charts.](images/logging_proxy_logs_dashboard.png)
diff --git a/pages/doc/missing_data_troubleshooting.md b/pages/doc/missing_data_troubleshooting.md
index 78e5c3b54..aacaa6de6 100644
--- a/pages/doc/missing_data_troubleshooting.md
+++ b/pages/doc/missing_data_troubleshooting.md
@@ -204,7 +204,7 @@ Cloud integrations do not use a Wavefront proxy, but for many integrations, data
-There are several possible reasons for queues at the proxy. The [Monitoring Wavefront Proxies](monitoring_proxies.html) and the **Queuing Reasons** chart in the **Operations for Applications Service and Proxy Data** dashboard are especially helpful for identifying the cause for queuing, discussed next:
+There are several possible reasons for queues at the proxy. The [Monitoring Wavefront Proxies](monitoring_proxies.html) page and the **Queuing Reasons** chart in the **Tanzu Observability Service and Proxy Data** dashboard are especially helpful for identifying the cause of queuing, discussed next:
* [Pushback from Backend](#proxy-queue-reasons-pushback-from-backend)
* [Proxy Rate Limit](#proxy-queue-reasons-proxy-rate-limit)
@@ -219,22 +219,22 @@ If the rate of data ingestion is higher than backend limit, the proxy queues dat
**Troubleshooting & Further Investigation**
-1. Look for pushback in the **Queuing Reasons** chart of the **Operations for Applications Service and Proxy Data** dashboard.
-2. Use the query in the **Data Ingestion Rate (Points)** chart of the **Operations for Applications Service and Proxy Data** dashboard to keep track of your ingestion rate. Ensure the ingestion rate is within contractual limits to avoid overages. While it's possible to ask Support to raise the backend limit such a change can result in overages.
+1. Look for pushback in the **Queuing Reasons** chart of the **Tanzu Observability Service and Proxy Data** dashboard.
+2. Use the query in the **Data Ingestion Rate (Points)** chart of the **Tanzu Observability Service and Proxy Data** dashboard to keep track of your ingestion rate. Ensure the ingestion rate is within contractual limits to avoid overages. While it's possible to ask Support to raise the backend limit, such a change can result in overages.
#### Proxy Queue Reasons: Proxy Rate Limit
-If the proxy is configured with a rate limit, and the rate of data sent to the proxy is above the limit, the proxy starts queuing data. The **Proxy Rate Limiter Active** chart in the **Operations for Applications Service and Proxy Data** dashboard provides insight into whether data is coming in faster than the proxy rate limit supports.
+If the proxy is configured with a rate limit, and the rate of data sent to the proxy is above the limit, the proxy starts queuing data. The **Proxy Rate Limiter Active** chart in the **Tanzu Observability Service and Proxy Data** dashboard provides insight into whether data is coming in faster than the proxy rate limit supports.
**Troubleshooting & Further Investigation**
1. Confirm whether data is coming in faster than the proxy's rate limit configuration (`pushRateLimit`). If so, look into ways to reduce your data rate.
- 1. On the **Operations for Applications Service and Proxy Data dashboard** find the **Proxy Troubleshooting** section and examine the **Proxy Rate Limiter Active** chart to see whether the rate limiter is active on the different proxies in your environment.
+ 1. On the **Tanzu Observability Service and Proxy Data** dashboard, find the **Proxy Troubleshooting** section and examine the **Proxy Rate Limiter Active** chart to see whether the rate limiter is active on the different proxies in your environment.
2. Confirm the `pushRateLimit` of each proxy by looking at the proxy configuration file or by querying `--proxyconfig.pushRateLimit`.
-2. Go to the **Received Points/Distributions/Spans Per Second** charts in the **Operations for Applications Service and Proxy Data** dashboard
+2. Go to the **Received Points/Distributions/Spans Per Second** charts in the **Tanzu Observability Service and Proxy Data** dashboard.
1. Examine the ingest rate for the proxy that seems to have rate limit problems.
   2. Use the Filter feature at the top of each dashboard or chart, or specify a specific source name in the underlying queries, to filter for the proxy you are interested in.
@@ -249,7 +249,7 @@ Because rate limits are set assuming a steady rate, that burst of 60,000 PPS for
**Troubleshooting & Further Investigation**
-1. Explore the **Received Points/Distributions/Spans Max Burst Rate (top 20)** charts in the **Operations for Applications Service and Proxy Data** dashboard provides to understand the burstiness of your data rate. The queuing ability of the proxy normally helps smooth out the data rate through momentary queuing.
+1. Explore the **Received Points/Distributions/Spans Max Burst Rate (top 20)** charts in the **Tanzu Observability Service and Proxy Data** dashboard to understand the burstiness of your data rate. The queuing ability of the proxy normally helps smooth out the data rate through momentary queuing.
2. If you find that the proxy queues sustain and continue to grow, then the overall data ingest rate is too high.
3. Either reduce the ingest rate or request that the backend limit be raised (this could result in overages).
@@ -263,7 +263,7 @@ As the proxy processes data in the memory buffers, space is freed up for new inc
**Troubleshooting & Further Investigation**
-1. Find the **Queuing Reasons** chart in the **Operations for Applications Service and Proxy Data** dashboard and look for `bufferSize`.
+1. Find the **Queuing Reasons** chart in the **Tanzu Observability Service and Proxy Data** dashboard and look for `bufferSize`.
2. If you see problems, consider lowering the ingestion rate or distributing the load among several proxies.
3. In some situations, it might make sense to adjust the `pushMemoryBufferLimit` proxy property.
* Raising this value results in higher memory usage.
@@ -275,7 +275,7 @@ If network issues prevent or slow down requests from the proxy to the Tanzu Obse
**Troubleshooting & Further Investigation:**
-1. Go to the **Network Latency** chart in the **Proxy Troubleshooting** section of the **Operations for Applications Service and Proxy Data** dashboard. This chart tracks the amount of time from when the proxy sends out a data point to when it receives an acknowledgment from the backend.
+1. Go to the **Network Latency** chart in the **Proxy Troubleshooting** section of the **Tanzu Observability Service and Proxy Data** dashboard. This chart tracks the amount of time from when the proxy sends out a data point to when it receives an acknowledgment from the backend.
2. Ensure that this amount of time is in the range of hundreds of milliseconds. If the time reaches the range of seconds, check for network latency issues.
@@ -286,7 +286,7 @@ The proxy configuration property `memGuardFlushThreshold` is meant to protect ag
**Troubleshooting & Further Investigation:**
-1. Find the **Queueing Reasons** chart in the **Operations for Applications Service and Proxy Data** dashboard and examine the `memoryPressure` metric.
+1. Find the **Queueing Reasons** chart in the **Tanzu Observability Service and Proxy Data** dashboard and examine the `memoryPressure` metric.
2. If there's a problem, consider increasing memory limits for the host server.
@@ -297,7 +297,7 @@ If your data travels through a pipeline before reaching the Wavefront proxy or b
**Troubleshooting & Further Investigation**
-Examine the **Data Received Lag** charts in the **Proxy Troubleshooting** section of the **Operations for Applications Service and Proxy Data** dashboard.
+Examine the **Data Received Lag** charts in the **Proxy Troubleshooting** section of the **Tanzu Observability Service and Proxy Data** dashboard.
These charts can help if the data points are timestamped at or near the source of the data. The underlying metric used in these charts tracks the difference between the system time of the proxy host and the timestamp of data points. This difference can provide insight into how long it takes for a data point to traverse the data pipeline and reach the proxy.
@@ -314,7 +314,7 @@ Each time the service detects a new name, it generates a new ID. ID generation a
**Troubleshooting & Further Investigation**
-The **Operations for Applications Usage** integration includes several alerts that you can customize to be alerted when there is a high rate of new IDs.
+The **Tanzu Observability Usage** integration includes several alerts that you can customize to be alerted when there is a high rate of new IDs.
* A high rate of new IDs can happen when you start sending new data to Tanzu Observability.
* A high rate of new IDs could also indicate a **cardinality issue** with the data shape of the data you're sending to Tanzu Observability. For instance, if a timestamp was included as a point tag, a high number of unique point tags results. This can be a problem when you send the data to Tanzu Observability, but also causes problems later when you query the data. See [Data Naming Best Practices](wavefront_data_naming.html) for best practices.
diff --git a/pages/doc/monitoring_overview.md b/pages/doc/monitoring_overview.md
index ac22c1b71..bde18a842 100644
--- a/pages/doc/monitoring_overview.md
+++ b/pages/doc/monitoring_overview.md
@@ -51,7 +51,7 @@ Administrators (and often other team members) are interested in usage data at al
1. Users who install Wavefront proxies can examine the [proxy information](monitoring_proxies.html) on the out-of-the-box dashboards. Larger environments or production environments rely on a team of load-balanced proxies, as discussed by Clement Pang in [this video about proxies](https://vmwaretv.vmware.com/media/t/1_5wfjti3m).
Having usage data for the proxy helps administrators during installation and also helps with proxy sizing later.
-2. View the points flowing into the system from the [Overall Data Rate section](wavefront_monitoring.html#overall-data-rate) of the **Operations for Applications Service and Proxy Data** dashboard.
+2. View the points flowing into the system from the [Overall Data Rate section](wavefront_monitoring.html#overall-data-rate) of the **Tanzu Observability Service and Proxy Data** dashboard.
3. Create custom charts with internal metrics. Our system dashboard information is a great start, but you might benefit from other [internal metrics](wavefront-internal-metrics.html) and it's easy to create a dashboard with custom charts.
   **Note**: We've included the internal metrics that are most useful in the documentation.
diff --git a/pages/doc/proxies_histograms.md b/pages/doc/proxies_histograms.md
index fae312707..de4c641df 100644
--- a/pages/doc/proxies_histograms.md
+++ b/pages/doc/proxies_histograms.md
@@ -265,4 +265,4 @@ The aggregation intervals do not overlap. If you are aggregating by the minute,
## Monitoring Histogram Points
-You can use `~histogram` metrics to monitor histogram ingestion. See the [Ingest Rate by Source](wavefront_monitoring.html#ingest-rate-by-source) section of the Operations for Applications Service and Proxy Data dashboard.
+You can use `~histogram` metrics to monitor histogram ingestion. See the [Ingest Rate by Source](wavefront_monitoring.html#ingest-rate-by-source) section of the Tanzu Observability Service and Proxy Data dashboard.
diff --git a/pages/doc/proxies_installing.md b/pages/doc/proxies_installing.md
index 6f6281304..75f5d46ad 100644
--- a/pages/doc/proxies_installing.md
+++ b/pages/doc/proxies_installing.md
@@ -29,7 +29,7 @@ In most cases, a Wavefront proxy must be running in your environment before metr
- Operating system and JRE - Wavefront proxy is a Java application and can run on operating systems supported by Java. Java 8, 9, 10 or 11 is required. See the requirements in the [Wavefront Proxy README file](https://github.com/wavefrontHQ/wavefront-proxy#requirements).
- Other - Maven
-{% include note.html content="The proxy uses disk space only for queue and buffering of metrics. The size of the buffer depends on the metrics size and the number of data points received and sent by the proxy. The individual proxy dashboards and the **Operations for Applications Service and Proxy Data** dashboard have several charts that allow you to examine proxy backlog size and related metrics. See [Monitoring Proxies](monitoring_proxies.html)." %}
+{% include note.html content="The proxy uses disk space only for queue and buffering of metrics. The size of the buffer depends on the metrics size and the number of data points received and sent by the proxy. The individual proxy dashboards and the **Tanzu Observability Service and Proxy Data** dashboard have several charts that allow you to examine proxy backlog size and related metrics. See [Monitoring Proxies](monitoring_proxies.html)." %}
diff --git a/pages/doc/proxies_preprocessor_rules.md b/pages/doc/proxies_preprocessor_rules.md
index a73d77f44..4eb157372 100644
--- a/pages/doc/proxies_preprocessor_rules.md
+++ b/pages/doc/proxies_preprocessor_rules.md
@@ -758,7 +758,7 @@ The following example illustrates using a `limitLength` for a point tag. The lim
```yaml
# Make sure that the limit that you are setting is not higher
- # than the default Operations for Applications limit.
+ # than the default Tanzu Observability limit.
################################################################
- rule : limit-point-tag-length
action : limitLength
@@ -1360,4 +1360,4 @@ To apply the Wavefront proxy preprocessor rules when certain conditions are met,
## Learn More!
-To monitor the time a proxy is spending with preprocessing rules, examine the [**Proxy Troubleshooting**](monitoring_proxies.html#proxy-troubleshooting) section on the **Operations for Applications Service and Proxy Data** dashboard.
+To monitor the time a proxy is spending with preprocessing rules, examine the [**Proxy Troubleshooting**](monitoring_proxies.html#proxy-troubleshooting) section on the **Tanzu Observability Service and Proxy Data** dashboard.
diff --git a/pages/doc/proxies_troubleshooting.md b/pages/doc/proxies_troubleshooting.md
index 2eeda6fa8..6a5697669 100644
--- a/pages/doc/proxies_troubleshooting.md
+++ b/pages/doc/proxies_troubleshooting.md
@@ -230,8 +230,8 @@ INFO [AbstractReportableEntityHandler:reject] [] blocked input: [WF-300 Ca
* Potential Resolution:
- 1. Log in to your service instance and navigate to the **Operations for Applications Usage** integration.
- 2. In the **Operations for Applications Service and Proxy Data** dashboard check if the proxy's queue and backlog are staying the same size or growing.
+ 1. Log in to your service instance and navigate to the **Tanzu Observability Usage** integration.
+ 2. In the **Tanzu Observability Service and Proxy Data** dashboard, check if the proxy's queue and backlog are staying the same size or growing.
* If they're growing, then the attempted rate of ingest is higher than allowed by the backend limit. Either lower the rate of data that is at the proxies, or contact our Technical Support team to request a higher backend limit. If your overall rate of data ingestion is higher than your contract rate, you may incur overage charges.
* If the proxy's queue size is spiky (going up and coming down, close to 0), then the proxy is effectively smoothing out bursts in your rate of data ingestion. This is normal behavior and is not a cause for concern.