Yifei gen docs #19

Merged
merged 2 commits on Aug 15, 2024
27 changes: 2 additions & 25 deletions docs/index.md
@@ -1,28 +1,5 @@
---
# generated by https://github.com/hashicorp/terraform-plugin-docs
page_title: "automq Provider"
subcategory: ""
page_title: "Provider: AutoMQ"
description: |-

---

# automq Provider



## Example Usage

```terraform
provider "scaffolding" {
# example configuration here
}
```

<!-- schema generated by tfplugindocs -->
## Schema

### Optional
The AutoMQ Terraform provider is used to manage AutoMQ Cloud BYOC and SaaS instances and Kafka resources within them.

- `automq_byoc_access_key_id` (String) Example provider attribute
- `automq_byoc_host` (String) Example provider attribute
- `automq_byoc_secret_key` (String) Example provider attribute
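For reference, a minimal provider configuration wired to the three attributes above might look like the following sketch; the host value and credential variables are placeholders, not values taken from this PR:

```terraform
variable "automq_access_key_id" {
  type      = string
  sensitive = true
}

variable "automq_secret_key" {
  type      = string
  sensitive = true
}

# Configure the AutoMQ provider for a BYOC environment.
provider "automq" {
  automq_byoc_host          = "http://localhost:8080"   # BYOC environment endpoint (placeholder)
  automq_byoc_access_key_id = var.automq_access_key_id  # Access Key ID from the BYOC console
  automq_byoc_secret_key    = var.automq_secret_key     # secret key paired with the Access Key ID
}
```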
30 changes: 15 additions & 15 deletions docs/resources/integration.md
@@ -3,12 +3,12 @@
page_title: "automq_integration Resource - automq"
subcategory: ""
description: |-
Integration resource
AutoMQ uses automq_integration to describe external third-party data transmission. By creating integrations and associating them with AutoMQ instances, you can forward instance Metrics and other data to external systems. Currently supported integration types are Prometheus and CloudWatch.
---

# automq_integration (Resource)

Integration resource
AutoMQ uses `automq_integration` to describe external third-party data transmission. By creating integrations and associating them with AutoMQ instances, you can forward instance Metrics and other data to external systems. Currently supported integration types are Prometheus and CloudWatch.

## Example Usage

@@ -52,40 +52,40 @@ resource "automq_integration" "example" {

### Required

- `environment_id` (String) Target environment ID
- `name` (String) Name of the integration
- `type` (String) Type of the integration
- `environment_id` (String) Target AutoMQ BYOC environment; this attribute is specified during the deployment and installation process.
- `name` (String) The integration name identifies different configurations and contains 3 to 64 characters, including letters a to z or A to Z, digits 0 to 9, underscores (_), and hyphens (-).
- `type` (String) Type of integration; currently supports `kafka`, `prometheus` and `cloudwatch`.

### Optional

- `cloudwatch_config` (Attributes) CloudWatch (see [below for nested schema](#nestedatt--cloudwatch_config))
- `endpoint` (String) Endpoint of the integration
- `kafka_config` (Attributes) Kafka configuration (see [below for nested schema](#nestedatt--kafka_config))
- `prometheus_config` (Attributes) Prometheus (see [below for nested schema](#nestedatt--prometheus_config))
- `cloudwatch_config` (Attributes) CloudWatch integration configuration. When `type` is `cloudwatch`, it must be set. (see [below for nested schema](#nestedatt--cloudwatch_config))
- `endpoint` (String) Endpoint of the integration. When selecting Prometheus or Kafka integration, you need to configure the corresponding endpoint. For detailed configuration instructions, please refer to the [documentation](https://docs.automq.com/automq-cloud/manage-environments/byoc-environment/manage-integrations).
- `kafka_config` (Attributes) Kafka integration configuration. When `type` is `kafka`, it must be set. (see [below for nested schema](#nestedatt--kafka_config))
- `prometheus_config` (Attributes) Prometheus integration configuration. When `type` is `prometheus`, it must be set. (see [below for nested schema](#nestedatt--prometheus_config))

### Read-Only

- `created_at` (String)
- `id` (String) Integration identifier
- `id` (String) Integration identifier, used for binding and association with the instance.
- `last_updated` (String)

<a id="nestedatt--cloudwatch_config"></a>
### Nested Schema for `cloudwatch_config`

Optional:

- `namespace` (String) Namespace
- `namespace` (String) Set the CloudWatch namespace; AutoMQ will write all metrics data under this namespace. The namespace must contain 1 to 255 valid ASCII characters and may include alphanumeric characters, periods, hyphens, underscores, forward slashes, pound signs, colons, and spaces, but may not consist entirely of spaces.


<a id="nestedatt--kafka_config"></a>
### Nested Schema for `kafka_config`

Required:

- `sasl_mechanism` (String) SASL mechanism for Kafka
- `sasl_password` (String) SASL password for Kafka
- `sasl_username` (String) SASL username for Kafka
- `security_protocol` (String) Security protocol for Kafka
- `sasl_mechanism` (String) SASL mechanism for the external Kafka cluster; currently supports `PLAIN`, `SCRAM-SHA-256` and `SCRAM-SHA-512`.
- `sasl_password` (String) SASL password for Kafka. The username and password are declared and returned when creating a `kafka_user` resource in AutoMQ.
- `sasl_username` (String) SASL username for Kafka. The username and password are declared and returned when creating a `kafka_user` resource in AutoMQ.
- `security_protocol` (String) Security protocol for the external Kafka cluster; currently supports `PLAINTEXT` and `SASL_PLAINTEXT`.


<a id="nestedatt--prometheus_config"></a>
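For illustration, a CloudWatch integration following the schema above might be declared as in the sketch below; the environment ID, name, and namespace are placeholder values:

```terraform
# Hypothetical CloudWatch integration; metrics from associated instances
# are written under the given namespace.
resource "automq_integration" "cloudwatch_example" {
  environment_id = "env-example"       # placeholder BYOC environment ID
  name           = "example-cloudwatch"
  type           = "cloudwatch"

  cloudwatch_config = {
    namespace = "automq-metrics"       # CloudWatch namespace for forwarded metrics
  }
}
```

The integration's `id` can then be passed to a Kafka instance's `integrations` list to bind the two, as described above.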
22 changes: 11 additions & 11 deletions docs/resources/kafka_acl.md
@@ -3,12 +3,12 @@
page_title: "automq_kafka_acl Resource - automq"
subcategory: ""
description: |-
Kafka ACL resource
automq_kafka_acl provides an Access Control List (ACL) policy in an AutoMQ cluster. AutoMQ supports ACL authorization for Cluster, Topic, Consumer Group, and Transaction ID resources, and simplifies the complex API actions of Apache Kafka through Operation Groups.
---

# automq_kafka_acl (Resource)

Kafka ACL resource
`automq_kafka_acl` provides an Access Control List (ACL) policy in an AutoMQ cluster. AutoMQ supports ACL authorization for Cluster, Topic, Consumer Group, and Transaction ID resources, and simplifies the complex API actions of Apache Kafka through Operation Groups.

## Example Usage

@@ -54,18 +54,18 @@ resource "automq_kafka_acl" "example" {

### Required

- `environment_id` (String) Target Kafka environment
- `kafka_instance_id` (String) Target Kafka instance ID
- `operation_group` (String) Operation group for ACL
- `pattern_type` (String) Pattern type for resource
- `principal` (String) Principal for ACL
- `resource_name` (String) Name of the resource for ACL
- `resource_type` (String) Resource type for ACL
- `environment_id` (String) Target AutoMQ BYOC environment; this attribute is specified during the deployment and installation process.
- `kafka_instance_id` (String) Target Kafka instance ID; each instance represents a Kafka cluster. The instance ID looks like `kf-xxxxxxx`.
- `operation_group` (String) Set the authorized operation group. For the Topic resource type, the supported operations are `ALL` (all permissions), `PRODUCE` (produce messages only) and `CONSUME` (consume messages only). For other resource types, only `ALL` (all permissions) is supported.
- `pattern_type` (String) Set the resource name matching pattern, supporting `LITERAL` and `PREFIXED`. `LITERAL` represents exact matching, while `PREFIXED` represents prefix matching.
- `principal` (String) Set the authorized target principal. Currently only Kafka user principals are supported, i.e., `User:xxxx`, where the Kafka username is specified. The principal must start with `User:` concatenated with the Kafka username.
- `resource_name` (String) The target resource name for Kafka ACL authorization; it can be a specific resource name or a resource name prefix (when using prefix matching, only the prefix needs to be provided, without a trailing "\*"). If only "\*" is specified, it represents all resources.
- `resource_type` (String) The resource type for Kafka ACL authorization; currently supports `CLUSTER`, `TOPIC`, `CONSUMERGROUP` and `TRANSACTION_ID`.

### Optional

- `permission` (String) Permission type for ACL
- `permission` (String) Set the permission type, which supports `ALLOW` and `DENY`. `ALLOW` grants permission to perform the operation, while `DENY` prohibits the operation. `DENY` takes precedence over `ALLOW`.

### Read-Only

- `id` (String) Kafka instance ID
- `id` (String) The Kafka ACL resource ID, returned upon successful creation of the ACL.
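As a sketch of how these attributes combine, the following hypothetical ACL grants consume permission on all topics with a given name prefix; the IDs, principal, and prefix are placeholders:

```terraform
# Hypothetical ACL: allow the principal to consume from all topics
# whose names start with "orders-".
resource "automq_kafka_acl" "consume_orders" {
  environment_id    = "env-example"        # placeholder BYOC environment ID
  kafka_instance_id = "kf-xxxxxxx"         # placeholder instance ID
  resource_type     = "TOPIC"
  resource_name     = "orders-"            # prefix only; no trailing "*"
  pattern_type      = "PREFIXED"
  principal         = "User:example-user"  # "User:" followed by the Kafka username
  operation_group   = "CONSUME"
  permission        = "ALLOW"
}
```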
48 changes: 24 additions & 24 deletions docs/resources/kafka_instance.md
@@ -3,12 +3,12 @@
page_title: "automq_kafka_instance Resource - automq"
subcategory: ""
description: |-
AutoMQ Kafka instance resource
Using the automq_kafka_instance resource type, you can create and manage Kafka instances, where each instance represents a physical cluster.
---

# automq_kafka_instance (Resource)

AutoMQ Kafka instance resource
Using the `automq_kafka_instance` resource type, you can create and manage Kafka instances, where each instance represents a physical cluster.

## Example Usage

@@ -57,48 +57,48 @@ resource "automq_kafka_instance" "example" {

### Required

- `cloud_provider` (String) The cloud provider of the Kafka instance
- `compute_specs` (Attributes) The compute specs of the Kafka instance (see [below for nested schema](#nestedatt--compute_specs))
- `environment_id` (String) Target Kafka environment
- `name` (String) The name of the Kafka instance
- `networks` (Attributes List) The networks of the Kafka instance (see [below for nested schema](#nestedatt--networks))
- `region` (String) The region of the Kafka instance
- `cloud_provider` (String) To set up a Kafka instance, you need to specify the target cloud provider environment for deployment. Currently, `aws` is supported.
- `compute_specs` (Attributes) The compute specs of the instance, containing `aku` and `version`. (see [below for nested schema](#nestedatt--compute_specs))
- `environment_id` (String) Target AutoMQ BYOC environment; this attribute is specified during the deployment and installation process.
- `name` (String) The name of the Kafka instance. It can contain letters (a-z or A-Z), numbers (0-9), underscores (_), and hyphens (-), with a length limit of 3 to 64 characters.
- `networks` (Attributes List) To configure the network settings for an instance, you need to specify the availability zone(s) and subnet information. Currently, you can set either one availability zone or three availability zones. (see [below for nested schema](#nestedatt--networks))
- `region` (String) To set up an instance, you need to specify the target region for deployment. Refer to the RegionId list provided by each cloud provider for available regions. Using AWS as an example, refer to this [documentation](https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/using-regions-availability-zones.html) to set the correct `RegionId`.

### Optional

- `acl` (Boolean) The ACL of the Kafka instance
- `configs` (Map of String) Additional configuration for the Kafka topic
- `description` (String) The description of the Kafka instance
- `integrations` (List of String) The integrations of the Kafka instance
- `acl` (Boolean) Configure ACL enablement. Default is false (disabled).
- `configs` (Map of String) Additional configuration for the Kafka Instance. The currently supported parameters can be set by referring to the [documentation](https://docs.automq.com/automq-cloud/release-notes).
- `description` (String) The instance description is used to differentiate the purpose of the instance. It supports letters (a-z or A-Z), numbers (0-9), underscores (_), spaces ( ) and hyphens (-), with a length limit of 3 to 128 characters.
- `integrations` (List of String) Configure integration settings. AutoMQ supports integration with external products like `prometheus` and `cloudwatch`, forwarding instance Metrics data to Prometheus and CloudWatch.
- `timeouts` (Block, Optional) (see [below for nested schema](#nestedblock--timeouts))

### Read-Only

- `created_at` (String)
- `endpoints` (Attributes List) The endpoints of the Kafka instance (see [below for nested schema](#nestedatt--endpoints))
- `id` (String) The ID of the Kafka instance
- `instance_status` (String) The status of the Kafka instance
- `endpoints` (Attributes List) The bootstrap endpoints of the instance. AutoMQ supports multiple access protocols; therefore, the endpoint is a list. (see [below for nested schema](#nestedatt--endpoints))
- `id` (String) The ID of the Kafka instance.
- `instance_status` (String) The status of the instance. Currently supported statuses are `Creating`, `Running`, `Deleting`, `Changing` and `Abnormal`. For definitions and limitations of each status, please refer to the [documentation](https://docs.automq.com/automq-cloud/using-automq-for-kafka/manage-instances#lifecycle).
- `last_updated` (String)

<a id="nestedatt--compute_specs"></a>
### Nested Schema for `compute_specs`

Required:

- `aku` (Number) The template of the compute specs
- `aku` (Number) AutoMQ defines AKU (AutoMQ Kafka Unit) to measure the scale of the cluster. Each AKU provides 20 MiB/s of read/write throughput. For more details on AKU, please refer to the [documentation](https://docs.automq.com/automq-cloud/subscriptions-and-billings/byoc-env-billings/billing-instructions-for-byoc). The currently supported AKU specifications are 6, 8, 10, 12, 14, 16, 18, 20, 22, and 24. If an invalid AKU value is set, the instance cannot be created.

Optional:

- `version` (String) The version of the compute specs
- `version` (String) The software version of AutoMQ. By default, there is no need to set version; the latest version will be used. If you need to specify a version, refer to the [documentation](https://docs.automq.com/automq-cloud/release-notes) to choose the appropriate version number.


<a id="nestedatt--networks"></a>
### Nested Schema for `networks`

Required:

- `subnets` (List of String) The subnets of the network
- `zone` (String) The zone of the network
- `subnets` (List of String) Specify the subnet under the corresponding availability zone for deploying the instance. Currently, only one subnet can be set for each availability zone.
- `zone` (String) The availability zone ID of the cloud provider.


<a id="nestedblock--timeouts"></a>
@@ -115,8 +115,8 @@ Optional:

Read-Only:

- `bootstrap_servers` (String) The bootstrap servers of the endpoint
- `display_name` (String) The display name of the endpoint
- `mechanisms` (String) The mechanisms of the endpoint
- `network_type` (String) The network type of the endpoint
- `protocol` (String) The protocol of the endpoint
- `bootstrap_servers` (String) The bootstrap servers of the endpoint.
- `display_name` (String) The display name of the endpoint.
- `mechanisms` (String) The supported mechanisms of the endpoint. Currently supports `PLAIN`, `SCRAM-SHA-256` and `SCRAM-SHA-512`.
- `network_type` (String) The network type of the endpoint. Currently supports `VPC` and `INTERNET`. The `VPC` type is generally used for internal network access, while the `INTERNET` type is used for accessing the AutoMQ cluster from the internet.
- `protocol` (String) The protocol of the endpoint. Currently supports `PLAINTEXT` and `SASL_PLAINTEXT`.
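Putting the required and optional attributes together, a single-zone instance might be declared as in the following sketch; the region, zone ID, subnet, and integration reference are placeholders rather than values taken from this PR:

```terraform
# Hypothetical single-availability-zone instance with ACLs enabled
# and a CloudWatch integration bound via its id.
resource "automq_kafka_instance" "example" {
  environment_id = "env-example"               # placeholder BYOC environment ID
  name           = "example-instance"
  description    = "Instance managed by Terraform"
  cloud_provider = "aws"
  region         = "us-east-1"                 # placeholder RegionId

  networks = [
    {
      zone    = "use1-az1"                     # placeholder availability zone ID
      subnets = ["subnet-0123456789abcdef0"]   # one subnet per availability zone
    }
  ]

  compute_specs = {
    aku = 6                                    # smallest documented AKU specification
  }

  acl          = true
  integrations = [automq_integration.cloudwatch_example.id]
}
```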
16 changes: 8 additions & 8 deletions docs/resources/kafka_topic.md
@@ -3,12 +3,12 @@
page_title: "automq_kafka_topic Resource - automq"
subcategory: ""
description: |-
Kafka Topic resource
automq_kafka_topic provides a Kafka Topic resource that enables creating and deleting Kafka Topics on a Kafka cluster in an AutoMQ BYOC environment.
---

# automq_kafka_topic (Resource)

Kafka Topic resource
`automq_kafka_topic` provides a Kafka Topic resource that enables creating and deleting Kafka Topics on a Kafka cluster in an AutoMQ BYOC environment.

## Example Usage

@@ -54,15 +54,15 @@ resource "automq_kafka_topic" "example" {

### Required

- `environment_id` (String) Target Kafka environment
- `kafka_instance_id` (String) Target Kafka instance ID
- `name` (String) Name of the Kafka topic
- `environment_id` (String) Target AutoMQ BYOC environment; this attribute is specified during the deployment and installation process.
- `kafka_instance_id` (String) Target Kafka instance ID; each instance represents a Kafka cluster. The instance ID looks like `kf-xxxxxxx`.
- `name` (String) Name is the unique identifier of a topic. It can only contain letters a to z or A to Z, digits 0 to 9, underscores (_), hyphens (-), and dots (.). The value contains 1 to 249 characters.

### Optional

- `configs` (Map of String) Additional configuration for the Kafka topic
- `partition` (Number) Number of partitions for the Kafka topic
- `configs` (Map of String) Additional configuration for the Kafka topic. Please refer to the [documentation](https://docs.automq.com/automq-cloud/using-automq-for-kafka/restrictions#topic-level-configuration) to set the current supported custom parameters.
- `partition` (Number) Number of partitions for the Kafka topic. The number of partitions should be no less than the number of consumers.

### Read-Only

- `topic_id` (String) Kafka topic identifier
- `topic_id` (String) Kafka topic identifier; this ID is generated by AutoMQ.
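A topic on the instance above could then be sketched as follows; the partition count and the `retention.ms` override are illustrative and should be checked against the linked restrictions documentation:

```terraform
# Hypothetical topic with 16 partitions and a one-day retention override.
resource "automq_kafka_topic" "example" {
  environment_id    = "env-example"                     # placeholder BYOC environment ID
  kafka_instance_id = automq_kafka_instance.example.id  # instance created earlier
  name              = "example-topic"
  partition         = 16

  configs = {
    "retention.ms" = "86400000"                         # assumed supported topic-level override
  }
}
```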
14 changes: 7 additions & 7 deletions docs/resources/kafka_user.md
@@ -3,12 +3,12 @@
page_title: "automq_kafka_user Resource - automq"
subcategory: ""
description: |-
Kafka User resource
automq_kafka_user provides ACL user identity information for more secure access to Kafka clusters.
---

# automq_kafka_user (Resource)

Kafka User resource
`automq_kafka_user` provides ACL user identity information for more secure access to Kafka clusters.

## Example Usage

@@ -49,11 +49,11 @@ resource "automq_kafka_user" "example" {

### Required

- `environment_id` (String) Target environment ID
- `kafka_instance_id` (String) Target Kafka instance ID
- `password` (String) Password for the Kafka user
- `username` (String) Username for the Kafka user
- `environment_id` (String) Target AutoMQ BYOC environment; this attribute is specified during the deployment and installation process.
- `kafka_instance_id` (String) Target Kafka instance ID; each instance represents a Kafka cluster. The instance ID looks like `kf-xxxxxxx`.
- `password` (String) Password for the Kafka user, limited to 8-24 characters.
- `username` (String) Username for the Kafka user, limited to 4-64 characters.

### Read-Only

- `id` (String) Kafka user identifier
- `id` (String) Kafka user identifier.
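Finally, a Kafka user for SASL authentication might be sketched as below; the username and password are placeholders that merely satisfy the documented length limits:

```terraform
# Hypothetical Kafka user; the credentials can later be referenced by
# ACLs (as "User:<username>") and by external Kafka integrations.
resource "automq_kafka_user" "example" {
  environment_id    = "env-example"                     # placeholder BYOC environment ID
  kafka_instance_id = automq_kafka_instance.example.id  # instance created earlier
  username          = "example-user"                    # 4-64 characters
  password          = "Example-Passw0rd"                # 8-24 characters
}
```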
6 changes: 3 additions & 3 deletions internal/provider/provider.go
@@ -43,15 +43,15 @@ func (p *AutoMQProvider) Schema(ctx context.Context, req provider.SchemaRequest,
resp.Schema = schema.Schema{
Attributes: map[string]schema.Attribute{
"automq_byoc_access_key_id": schema.StringAttribute{
MarkdownDescription: "Example provider attribute",
MarkdownDescription: "Set the Access Key Id of client. AutoMQ Cloud (BYOC) requires Access Keys to manage access and authentication to different parts of the service. An Access Key consists of an access key id and a secret key. You can create and manage Access Keys by using the AutoMQ Cloud BYOC Console. Learn more about AutoMQ Cloud BYOC Console access [here](https://docs.automq.com/automq-cloud/manage-identities-and-access).",
Optional: true,
},
"automq_byoc_secret_key": schema.StringAttribute{
MarkdownDescription: "Example provider attribute",
MarkdownDescription: "Set the Secret Access Key of client. AutoMQ Cloud (BYOC) requires Access Keys to manage access and authentication to different parts of the service. An Access Key consists of an access key id and a secret key. You can create and manage Access Keys by using the AutoMQ Cloud BYOC Console. Learn more about AutoMQ Cloud BYOC Console access [here](https://docs.automq.com/automq-cloud/manage-identities-and-access).",
Optional: true,
},
"automq_byoc_host": schema.StringAttribute{
MarkdownDescription: "Example provider attribute",
MarkdownDescription: "Set the AutoMQ BYOC environment endpoint. The endpoint like http://{hostname}:8080. You can get this endpoint when deploy environment complete.",
Optional: true,
},
},