From 7b5e82fbd8f403bad291181aed1b248d5b63f0ed Mon Sep 17 00:00:00 2001 From: April M Date: Wed, 16 Oct 2024 15:45:37 -0700 Subject: [PATCH] fix after preview --- modules/ROOT/pages/astream-faq.adoc | 7 ++-- .../astream-subscriptions-exclusive.adoc | 2 +- .../pages/astream-subscriptions-failover.adoc | 16 ++++---- .../astream-subscriptions-keyshared.adoc | 40 +++++++++---------- .../pages/astream-subscriptions-shared.adoc | 16 ++++---- modules/ROOT/pages/astream-subscriptions.adoc | 2 +- modules/developing/nav.adoc | 2 +- .../developing/pages/astream-functions.adoc | 17 ++++---- modules/developing/pages/astream-kafka.adoc | 14 ++++--- modules/developing/pages/clients/index.adoc | 14 +++---- .../pages/clients/spring-produce-consume.adoc | 19 ++++----- modules/operations/pages/astream-limits.adoc | 10 ++--- .../operations/pages/monitoring/index.adoc | 17 +++++--- .../pages/monitoring/integration.adoc | 11 +---- modules/operations/pages/onboarding-faq.adoc | 18 +++++---- 15 files changed, 105 insertions(+), 100 deletions(-) diff --git a/modules/ROOT/pages/astream-faq.adoc b/modules/ROOT/pages/astream-faq.adoc index 6ea842f..b0fc6f1 100644 --- a/modules/ROOT/pages/astream-faq.adoc +++ b/modules/ROOT/pages/astream-faq.adoc @@ -12,11 +12,12 @@ See xref:operations:astream-pricing.adoc[]. == Why is {product} based on Apache Pulsar? -See our https://www.datastax.com/blog/four-reasons-why-apache-pulsar-essential-modern-data-stack[blog post] that explains why we are excited about Apache Pulsar and why we decided it was the best technology to base {product} on. +For information about the decision to use Apache Pulsar, see https://www.datastax.com/blog/four-reasons-why-apache-pulsar-essential-modern-data-stack[Four Reasons Why Apache Pulsar is Essential to the Modern Data Stack]. -== What will happen to Kesque? +== What happened to Kesque? -{product} is based heavily on technology originally created as part of Kesque. With the launch of {product} we will begin the process of shutting down the Kesque service and migrating customers to the new {product} platform. +{product} is based heavily on technology originally created as part of Kesque. +With the launch of {product}, {company} began shutting down the Kesque service and migrated customers to {product}. == Who should use {product}? diff --git a/modules/ROOT/pages/astream-subscriptions-exclusive.adoc b/modules/ROOT/pages/astream-subscriptions-exclusive.adoc index 8313567..c50a45e 100644 --- a/modules/ROOT/pages/astream-subscriptions-exclusive.adoc +++ b/modules/ROOT/pages/astream-subscriptions-exclusive.adoc @@ -85,4 +85,4 @@ Caused by: org.apache.pulsar.client.api.PulsarClientException$ConsumerBusyExcept * xref:astream-subscriptions.adoc[Subscriptions in Pulsar] * xref:astream-subscriptions-shared.adoc[Shared subscriptions] * xref:astream-subscriptions-failover.adoc[Failover subscriptions] -* xref:astream-subscriptions-keyshared.adoc[Key_shared subscriptions] +* xref:astream-subscriptions-keyshared.adoc[Key shared subscriptions] diff --git a/modules/ROOT/pages/astream-subscriptions-failover.adoc b/modules/ROOT/pages/astream-subscriptions-failover.adoc index 41c969e..a4c57ef 100644 --- a/modules/ROOT/pages/astream-subscriptions-failover.adoc +++ b/modules/ROOT/pages/astream-subscriptions-failover.adoc @@ -11,6 +11,14 @@ If the primary consumer disconnects, the standby consumers begin consuming the s This page explains how to use Pulsar's failover subscription model to manage your topic consumption. 
+.Failover subscription video
+[%collapsible]
+====
+This video from the *Five Minutes About Pulsar* series demonstrates failover subscriptions:
+
+video::ckB87OLs5eM[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%]
+====
+
 include::ROOT:partial$subscription-prereq.adoc[]
 
 [#example]
@@ -84,15 +92,9 @@ In the second `SimplePulsarConsumer` terminal, the backup consumer begins consum
 You can configure as many backup consumers as you like.
 To test them, you can progressively end each `SimplePulsarConsumer` process, and then check that the next backup consumer has begun receiving messages.
 
-=== Failover subscription video
-
-Follow along with this video from our *Five Minutes About Pulsar* series to see failover subscriptions in action:
-
-video::ckB87OLs5eM[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%]
-
 == See also
 
 * xref:astream-subscriptions.adoc[Subscriptions in Pulsar]
 * xref:astream-subscriptions-exclusive.adoc[Exclusive subscriptions]
 * xref:astream-subscriptions-shared.adoc[Shared subscriptions]
-* xref:astream-subscriptions-keyshared.adoc[Key_shared subscriptions]
\ No newline at end of file
+* xref:astream-subscriptions-keyshared.adoc[Key shared subscriptions]
\ No newline at end of file
diff --git a/modules/ROOT/pages/astream-subscriptions-keyshared.adoc b/modules/ROOT/pages/astream-subscriptions-keyshared.adoc
index 50176eb..e39e3c8 100644
--- a/modules/ROOT/pages/astream-subscriptions-keyshared.adoc
+++ b/modules/ROOT/pages/astream-subscriptions-keyshared.adoc
@@ -1,5 +1,5 @@
 = Key shared subscriptions in Pulsar
-:navtitle: Key_Shared
+:navtitle: Key shared
 :page-tag: pulsar-subscriptions,quickstart,admin,dev,pulsar
 
 _Subscriptions_ in Pulsar describe which consumers are consuming data from a topic and how they want to consume that data.
@@ -15,13 +15,21 @@ Keys are generated with hashing that converts arbitrary values like `topic-name`
 
 This page explains how to use Pulsar's key shared subscription model to manage your topic consumption.
 
+.Key shared subscription video
+[%collapsible]
+====
+This video from the *Five Minutes About Pulsar* series demonstrates key shared subscriptions:
+
+video::_49wlA53L_8[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%]
+====
+
 include::ROOT:partial$subscription-prereq.adoc[]
 
 [#example]
 == Key shared subscription example
 
-To try out a Pulsar key shared subscription, add `.subscriptionType(SubscriptionType.Key_Shared)` to the `pulsarConsumer` in `SimplePulsarConsumer.java`:
-
+. To try out a Pulsar key shared subscription, add `.subscriptionType(SubscriptionType.Key_Shared)` to the `pulsarConsumer` in `SimplePulsarConsumer.java`:
++
 .SimplePulsarConsumer.java
 [source,java]
 ----
@@ -37,19 +45,15 @@ pulsarConsumer = pulsarClient.newConsumer(Schema.JSON(DemoBean.class))
     .keySharedPolicy(KeySharedPolicy.autoSplitHashRange())
     .subscribe();
 ----
-
-=== keySharedPolicy
-
+
 The `keySharedPolicy` defines how hashed values are assigned to subscribed consumers.
++
+The above example uses `autoSplitHashRange`, which is an auto-hashing policy.
+Running multiple consumers with auto-hashing balances the messaging load across all available consumers, like a xref:astream-subscriptions-shared.adoc[shared subscription].
++
+If you want to manually set a hash range, use `KeySharedPolicy.stickyHashRange()`, as demonstrated in the following steps.
 
-The above example used `autoSplitHashRange`, which is an auto-hashing policy.
-Running multiple consumers with auto-hashing balances the messaging load across all available consumers. - -If you want to manually set a hash range, use `KeySharedPolicy.stickyHashRange()`. - -==== Test sticky hashed key shared subscriptions - -. To test out sticky hashed key shared subscriptions, import the following additional classes: +. To use a sticky hashed key shared subscription, import the following classes to `SimplePulsarConsumer.java`: + .SimplePulsarConsumer.java [source,java] @@ -143,15 +147,9 @@ Caused by: org.apache.pulsar.client.api.PulsarClientException$ConsumerAssignExce at com.datastax.pulsar.SimplePulsarConsumer.main(SimplePulsarConsumer.java:47) ---- -. To run multiple consumers with sticky hashing, modify the `SimplePulsarConsumer.java` configuration to split the hash range between consumers. +. To run multiple consumers with sticky hashing, modify the `SimplePulsarConsumer.java` configuration to split the hash range between consumers or use auto-hashing. Then, you can launch multiple instances of `SimplePulsarConsumer.java` to consume messages from different hash ranges. -== Key shared subscription video - -Follow along with this video from our *Five Minutes About Pulsar* series to see key shared subscriptions in action: - -video::_49wlA53L_8[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%] - == See also * xref:astream-subscriptions.adoc[Subscriptions in Pulsar] diff --git a/modules/ROOT/pages/astream-subscriptions-shared.adoc b/modules/ROOT/pages/astream-subscriptions-shared.adoc index 2e32535..c44b183 100644 --- a/modules/ROOT/pages/astream-subscriptions-shared.adoc +++ b/modules/ROOT/pages/astream-subscriptions-shared.adoc @@ -10,6 +10,14 @@ However, there is a risk of losing message ordering guarantees and acknowledgeme This page explains how you can use Pulsar's shared subscription model to manage your topic consumption. +.Shared subscription video +[%collapsible] +==== +This video from the *Five Minutes About Pulsar* series demonstrates shared subscriptions: + +video::mmukXqGsauA[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%] +==== + include::ROOT:partial$subscription-prereq.adoc[] [#example] @@ -83,15 +91,9 @@ If you run this test with xref:astream-subscriptions-exclusive.adoc[exclusive su To continue testing the shared subscription configuration, you can continue running new instances of `SimplePulsarConsumer.java` in new temrinal windows. All the consumers subscribe to the topic and consume messages in a round-robin fashion. 
-== Shared subscription video - -Follow along with this video from our *Five Minutes About Pulsar* series to see shared subscriptions in action: - -video::mmukXqGsauA[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%] - == See also * xref:astream-subscriptions.adoc[Subscriptions in Pulsar] * xref:astream-subscriptions-exclusive.adoc[Exclusive subscriptions] * xref:astream-subscriptions-failover.adoc[Failover subscriptions] -* xref:astream-subscriptions-keyshared.adoc[Key_shared subscriptions] +* xref:astream-subscriptions-keyshared.adoc[Key shared subscriptions] diff --git a/modules/ROOT/pages/astream-subscriptions.adoc b/modules/ROOT/pages/astream-subscriptions.adoc index 8e2fbaa..607612e 100644 --- a/modules/ROOT/pages/astream-subscriptions.adoc +++ b/modules/ROOT/pages/astream-subscriptions.adoc @@ -31,7 +31,7 @@ pulsarConsumer = pulsarClient.newConsumer(Schema.BYTES) * xref:astream-subscriptions-exclusive.adoc[Exclusive subscriptions] * xref:astream-subscriptions-shared.adoc[Shared subscriptions] * xref:astream-subscriptions-failover.adoc[Failover subscriptions] -* xref:astream-subscriptions-keyshared.adoc[Key_shared subscriptions] +* xref:astream-subscriptions-keyshared.adoc[Key shared subscriptions] == See also diff --git a/modules/developing/nav.adoc b/modules/developing/nav.adoc index 8c42056..c4998bb 100644 --- a/modules/developing/nav.adoc +++ b/modules/developing/nav.adoc @@ -23,4 +23,4 @@ * xref:gpt-schema-translator.adoc[] * xref:astra-cli.adoc[] * xref:astream-cdc.adoc[] -* xref:streaming-learning:pulsar-io:connectors/index.adoc[IO Connectors] \ No newline at end of file +* xref:streaming-learning:pulsar-io:connectors/index.adoc[IO connectors] \ No newline at end of file diff --git a/modules/developing/pages/astream-functions.adoc b/modules/developing/pages/astream-functions.adoc index 5b45982..4848e17 100644 --- a/modules/developing/pages/astream-functions.adoc +++ b/modules/developing/pages/astream-functions.adoc @@ -198,6 +198,7 @@ Must have 4 steps to maintain numbering. //// ====== +[start=5] . If you haven't done so already, xref:configure-pulsar-env.adoc[set up your environment for the Pulsar binaries]. . Create a deployment configuration YAML file that defines the function metadata and associated topics: @@ -262,7 +263,7 @@ Organizations on the *Free* plan can use xref:streaming-learning:functions:index [TIP] ==== If your Python function contains only a single script and no dependencies, you can deploy the `.py` file directly, without packaging it into a `.zip` file or creating a configuration file: -+ + [source,bash,subs="+quotes"] ---- $ ./pulsar-admin functions create \ @@ -306,16 +307,16 @@ See <> for more information Upload your own code:: + -- -. Select *Upload my own code*. +.. Select *Upload my own code*. -. Select your function file: +.. Select your function file: + * `.py`: A single, independent Python script * `.zip`: A Python script with dependencies * `.jar`: A Java function * `.go`: A Go function -. Based on the uploaded file, select the specific class (function) to deploy. +.. Based on the uploaded file, select the specific class (function) to deploy. + {astra_db} generates a list of acceptable classes detected in the code. A file can contain multiple classes, but only one is used per deployment. @@ -332,7 +333,7 @@ image::astream-exclamation-function.png[Exclamation Function] Use {company} transform function:: + -- -. Select *Use {company} transform function*. +.. Select *Use {company} transform function*. 
This is the only function option available on the {product} *Free* plan. @@ -395,7 +396,7 @@ The trigger sends the message string to the function. Your function should output the result of processing the message. [#controlling-your-function] -=== Control functions +=== Stop and start functions In the {astra_ui}, on your tenant's *Functions* tab, you can use *Function Controls* to start, stop, and restart functions. @@ -408,7 +409,7 @@ image::astream-function-log.png[Function Log] If you specified a log topic when deploying your function, function logs also output to that topic. -== Edit functions +=== Edit functions . In the {astra_ui}, on your tenant's *Functions* tab, click *Update Function*. @@ -422,7 +423,7 @@ If you specified a log topic when deploying your function, function logs also ou If you need to change any other function settings, you must delete and redeploy the function with the desired settings. -== Delete functions +=== Delete functions [IMPORTANT] ==== diff --git a/modules/developing/pages/astream-kafka.adoc b/modules/developing/pages/astream-kafka.adoc index f7c48d6..e207e96 100644 --- a/modules/developing/pages/astream-kafka.adoc +++ b/modules/developing/pages/astream-kafka.adoc @@ -12,6 +12,14 @@ By integrating two popular event streaming ecosystems, {kafka_for_astra} unlocks This document will help you get started producing and consuming Kafka messages on a Pulsar cluster. +.Starlight for Kafka video +[%collapsible] +==== +This video from the *Five Minutes About Pulsar* series explains how to migrate from Kafka to Pulsar: + +video::Qy2ZlelLjXg[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%] +==== + == {kafka_for_astra} Quickstart :page-tag: starlight-kafka,quickstart,install,admin,dev,pulsar,kafka @@ -98,12 +106,6 @@ Your Kafka messages are being produced and consumed in a Pulsar cluster: + image::astream-kafka-monitor.png[Monitor Kafka Activity] -== Starlight for Kafka video - -Follow along with this video from our *Five Minutes About Pulsar* series to migrate from Kafka to Pulsar: - -video::Qy2ZlelLjXg[youtube, list=PL2g2h-wyI4SqeKH16czlcQ5x4Q_z-X7_m, height=445px,width=100%] - == See also * https://github.com/datastax/starlight-for-kafka[{company} Starlight for Kafka project] diff --git a/modules/developing/pages/clients/index.adoc b/modules/developing/pages/clients/index.adoc index fbd81f9..166d6e5 100644 --- a/modules/developing/pages/clients/index.adoc +++ b/modules/developing/pages/clients/index.adoc @@ -7,11 +7,11 @@ To connect to your service, use the open-source client APIs provided by the Apache Pulsar project. {product} is running Pulsar version {pulsar_version}. You should use this API version or higher. 
-Popular Pulsar clients include the following: +For more information and examples, see the following: -* xref:clients/csharp-produce-consume.adoc[image:csharp-icon.png[] C#] -* xref:clients/golang-produce-consume.adoc[image:golang-icon.png[] Golang] -* xref:clients/java-produce-consume.adoc[image:java-icon.png[] Java] -* xref:clients/nodejs-produce-consume.adoc[image:node-icon.png[] Node.js] -* xref:clients/python-produce-consume.adoc[image:python-icon.png[] Python] -* xref:clients/spring-produce-consume.adoc[image:spring-boot-icon.png[] Spring Boot] \ No newline at end of file +* xref:clients/csharp-produce-consume.adoc[C#] +* xref:clients/golang-produce-consume.adoc[Golang] +* xref:clients/java-produce-consume.adoc[Java] +* xref:clients/nodejs-produce-consume.adoc[Node.js] +* xref:clients/python-produce-consume.adoc[Python] +* xref:clients/spring-produce-consume.adoc[Spring Boot] \ No newline at end of file diff --git a/modules/developing/pages/clients/spring-produce-consume.adoc b/modules/developing/pages/clients/spring-produce-consume.adoc index c5735d8..c38bde6 100644 --- a/modules/developing/pages/clients/spring-produce-consume.adoc +++ b/modules/developing/pages/clients/spring-produce-consume.adoc @@ -5,23 +5,20 @@ You can produce and consume messages with the Java Pulsar client, {product}, and Spring. -== Prerequisites +Go to the https://github.com/datastax/astra-streaming-examples[examples repo] for the complete source of this example. -You will need the following prerequisites in place to complete this guide: +== Prerequisites -* JRE 8 installed (https://sdkman.io/[install now,title=Install java]) -* (https://maven.apache.org/install.html[Maven,title=Maven]) or (https://gradle.org/install/[Gradle,title=Gradle]) installed -* A working Pulsar topic (get started xref:getting-started:index.adoc[here] if you don't have a topic) -* A basic text editor or IDE +For this example, you need the following: -[TIP] -==== -Visit our https://github.com/datastax/astra-streaming-examples[examples repo] to see the complete source of this example. -==== +* JRE 8 +* https://maven.apache.org/install.html[Maven] +* A Pulsar topic in {product} +* A text editor or IDE == Create a Maven project in Spring Initializr -Spring Initializr is a great tool to quickly create a project with the dependencies you need. Let's use it to create a project with the Pulsar client dependency. +You can use Spring Initializr to quickly create a Java project with the required dependencies, including the Pulsar client dependency. . Go to https://start.spring.io/[Spring Initializr] to initialize a new project. diff --git a/modules/operations/pages/astream-limits.adoc b/modules/operations/pages/astream-limits.adoc index 707a85b..54cd127 100644 --- a/modules/operations/pages/astream-limits.adoc +++ b/modules/operations/pages/astream-limits.adoc @@ -12,7 +12,7 @@ Guardrails are provisioned in the default settings for {product}. You can't directly change these guardrails. For dedicated clusters, some guardrails can be changed by contacting {support_url}[{company} Support]. 
-[cols="1,1"] +[cols="1,1,1"] |=== |Guardrail |Limit |Comments @@ -329,10 +329,10 @@ The following configurations can't be changed: * Data persistency (`En`, `Qw`, `Qa`) * `Managedledger` policy/deletion * Namespace bundle configurations: - * Bundle split - * Bundle level clear backlog - * Bundle level unload - * Bundle level subscribe and unsubscribe +** Bundle split +** Bundle level clear backlog +** Bundle level unload +** Bundle level subscribe and unsubscribe * Replication * Delayed delivery * Offload policy diff --git a/modules/operations/pages/monitoring/index.adoc b/modules/operations/pages/monitoring/index.adoc index a6a9f56..5362413 100644 --- a/modules/operations/pages/monitoring/index.adoc +++ b/modules/operations/pages/monitoring/index.adoc @@ -1,9 +1,11 @@ = Monitor streaming tenants :navtitle: Monitoring overview -Because {product} is a software-as-a-service product, not all Apache Pulsar metrics (https://pulsar.apache.org/docs/reference-metrics/[Pulsar Metrics Reference]) are exposed for external integration purposes. At a high level, {product} only exposes metrics that are related to namespaces. Other metrics that are not directly namespace related are not exposed externally, such as the Bookkeeper ledger and journal metrics and Zookeeper metrics. +Because {product} is a managed SaaS offering, some https://pulsar.apache.org/docs/reference-metrics/[Apache Pulsar metrics] aren't exposed for external integration purposes. +At a high level, {product} only exposes metrics related to namespaces. +Metrics that are not directly related to namespaces aren't exposed externally, such as the Bookkeeper ledger and journal metrics and Zookeeper metrics. -In the following sections, we'll explore each of the {product} metrics categories that are available for external integration, and recommended metrics for external integration. +Additionally, of the exposed metrics, not all metrics are recommended for external integration. == Pulsar raw metrics @@ -113,7 +115,12 @@ For more information, see the https://prometheus.io/docs/prometheus/latest/query {company} recommends the following PromQL query patterns for aggregating raw {product} metrics. The following examples use the `pulsar_msg_backlog` raw metric to demonstrate the patterns. -Additionally, in accordance with the recommendations in <>, these patterns aggregate messages at the parent topic level or higher, and they use the following expression to exclude system topics: +In accordance with the recommendations in <>, the example patterns aggregate messages at the parent topic level or higher and they exclude system topics. + +.Filter system topics +[%collapsible] +==== +You can use the following expression to filter system topics: [source,pgsql] ---- @@ -130,6 +137,7 @@ persistent://some_tenant/__kafka/__consumer_offsets_partition_0 To use this expression, your applications' namespace and topic names don't contain double underscores. If they do, they will also be excluded by this filter. 
+==== ==== Get the total message backlog of a specific parent topic, excluding system topics @@ -202,5 +210,4 @@ If your receive too many false alarms, adjust the alert threshold to a higher va * xref:monitoring/metrics.adoc[] * xref:monitoring/integration.adoc[] -* xref:monitoring/new-relic.adoc[] - +* xref:monitoring/new-relic.adoc[] \ No newline at end of file diff --git a/modules/operations/pages/monitoring/integration.adoc b/modules/operations/pages/monitoring/integration.adoc index 708979a..5e26616 100644 --- a/modules/operations/pages/monitoring/integration.adoc +++ b/modules/operations/pages/monitoring/integration.adoc @@ -26,11 +26,6 @@ The values in your `config.yml` depend on your tenant configuration. .config.yml [source,yaml,subs="+quotes"] ---- -global: - scrape_interval: 60s - evaluation_interval: 60s - -scrape_configs: - job_name: "astra-pulsar-metrics-demo" scheme: 'https' @@ -100,10 +95,8 @@ kubectl rollout status daemonset \ . To confirm that {product} metrics are integrated with your external Prometheus server, go to your external Prometheus server UI. Make sure the additional scrape job is in `UP` status. -If not, there are issues in the previous configuration procedures. - -== Troubleshoot 401 Unauthorized errors - +If not, review the configuration instructions and YAML examples to ensure your configuration is correct. ++ If the additional scrape job returns a `401 Unauthorized` error, make sure your Pulsar JWT isn't expired. For more information, see xref:astream-token-gen.adoc[]. diff --git a/modules/operations/pages/onboarding-faq.adoc b/modules/operations/pages/onboarding-faq.adoc index c5dd9ad..bcbee45 100644 --- a/modules/operations/pages/onboarding-faq.adoc +++ b/modules/operations/pages/onboarding-faq.adoc @@ -1,10 +1,10 @@ = {product} enrollment FAQ :navtitle: Enrollment FAQ -:description: These are the most common questions we receive about getting started with {product}. +:description: Common questions about getting started with {product}. :page-tag: astra-streaming,onboarding,Orientation When considering {product} for your production workloads, you might have questions about what options are available for connecting and consuming your serverless clusters. -This page answers the most common questions we receive about enrollment. +This page answers some common questions about getting started with {product}. == Why does {company} call {product} "serverless"? @@ -75,16 +75,18 @@ xref:streaming-learning:use-cases-architectures:starlight/index.adoc[Learn more] === How do I separate messaging traffic? -It is common to have a hierarchy of development environments which app changes are promoted through before reaching production. +It is common to have a hierarchy of development environments through which you promote app changes before they reach production. The configurations of middleware and platforms supporting the app should be kept in parity to promote stability and fast iterations with low volatility. By Tenant:: -To support the hierarchy of development environments pattern, we recommend using Tenants to represent each development environment. -This gives you the greatest flexibility to balance a separation of roles with consistent service configuration. -All tokens created within a Tenant are limited to that Tenant. +To support the hierarchy of development environments, {company} recommends creating separate tenants for each development environment. +This gives you the greatest flexibility to balance separation of roles with consistent service configuration. 
+ -For example, start with a tenant named `Dev`` that development teams have access to (and create tokens from), then create other tenants named `Staging` and `Production`. -Each Tenant has progressively less permissions to create tokens, but maintains parity between the three running environments. +All tokens created within a tenant are limited to that tenant. ++ +For example, start with a tenant named `Dev` that your development teams can access and create tokens for, and then create other tenants named `Staging` and `Production`. +At each level of the hierarchy, there are fewer users with access to the environment's tenant, which means fewer opportunities to create tokens that can programmatically access that tenant. +Yet, you still maintain parity across the three environments. By Namespace:: Alternatively, you might choose to separate development environments by namespace within your {product} tenant.