DOC-4227: Migrate original ZDM documentation to new site (#165)
* changed address to data-migration

* troubleshooting table formatting with right nav ui bug

* troubleshooting table formatting, fixed except for CDM page

* fixed CDM table formatting so it doesn't overlap navigation menu

* removed mention of interactive diagrams, fixed formatting and made phase images static to work with new UI

* link updates

* fixed cassandra-data-migrator.adoc tables to be proportional and added backticks back in for code formatting after UI workaround was no longer necessary

* final readthrough, intro, components, faq, and feasibility checks pages are done

* final readthrough updates

* final readthrough edits. Deployment, create, understand, and phase 1 set up pages done

* final readthrough edits. Phase 1 deploy, configure, and connect pages done

* final readthrough edits. Phase 1 leverage, manage, Phase 2 CDM, and DSBulk migrator pages done

* final readthrough edits. Phase 3 and 4 pages done

* final readthrough edits. Phase 5, troubleshooting, troubleshooting tips, and troubleshooting scenarios pages are done.

* final readthrough edits. Glossary, contribution guidelines, and release notes are complete

* fixed local-preview-playbook.adoc

* moved faqs between troubleshooting and glossary
emeliawilkinson24 authored Mar 26, 2024
1 parent 7c28a63 commit 7a40f62
Showing 34 changed files with 1,335 additions and 758 deletions.
7 changes: 4 additions & 3 deletions antora.yml
@@ -1,5 +1,5 @@
name: zero-downtime-migration
title: Zero Downtime Migration
name: data-migration
title: Data Migration
version: ~
start_page: introduction.adoc

@@ -21,4 +21,5 @@ asciidoc:
db-classic: 'Classic'
astra-cli: 'Astra CLI'
url-astra: 'https://astra.datastax.com'
link-astra-portal: '{url-astra}[{astra_ui}^]'
link-astra-portal: '{url-astra}[{astra_ui}]'
astra-db-serverless: 'Astra DB Serverless'
2 changes: 1 addition & 1 deletion local-preview-playbook.yml
@@ -7,7 +7,7 @@ git:

site:
title: DataStax Docs
start_page: zero-downtime-migration::introduction.adoc
start_page: data-migration::index.adoc
robots: disallow

content:
Binary file modified modules/ROOT/images/zdm-ansible-container-ls3.png
18 changes: 10 additions & 8 deletions modules/ROOT/nav.adoc
@@ -1,32 +1,34 @@
.{product}
* xref:introduction.adoc[]
* xref:components.adoc[]
* xref:faqs.adoc[]
* Preliminary steps
** xref:preliminary-steps.adoc[]
* xref:preliminary-steps.adoc[]
** xref:feasibility-checklists.adoc[]
** xref:deployment-infrastructure.adoc[]
** xref:create-target.adoc[]
** xref:rollback.adoc[]
* Phase 1: Deploy {zdm-proxy} and connect client applications
** xref:phase1.adoc[]
//phase 1
* xref:phase1.adoc[]
** xref:setup-ansible-playbooks.adoc[]
** xref:deploy-proxy-monitoring.adoc[]
** xref:tls.adoc[]
** xref:connect-clients-to-proxy.adoc[]
** xref:metrics.adoc[]
** xref:manage-proxy-instances.adoc[]
* Phase 2: Migrate and validate data
** xref:migrate-and-validate-data.adoc[]
//phase 2
* xref:migrate-and-validate-data.adoc[]
** xref:cassandra-data-migrator.adoc[]
** xref:dsbulk-migrator.adoc[]
//phase 3
* xref:enable-async-dual-reads.adoc[]
//phase 4
* xref:change-read-routing.adoc[]
//phase 5
* xref:connect-clients-to-target.adoc[]
* Troubleshooting
** xref:troubleshooting.adoc[]
** xref:troubleshooting-tips.adoc[]
** xref:troubleshooting-scenarios.adoc[]
* xref:faqs.adoc[]
* xref:glossary.adoc[]
* xref:contributions.adoc[]
* xref:release-notes.adoc[]
* xref:release-notes.adoc[]
256 changes: 173 additions & 83 deletions modules/ROOT/pages/cassandra-data-migrator.adoc

Large diffs are not rendered by default.

31 changes: 22 additions & 9 deletions modules/ROOT/pages/change-read-routing.adoc
@@ -5,7 +5,7 @@ ifndef::env-github,env-browser,env-vscode[:imagesprefix: ]

This topic explains how you can configure the {zdm-proxy} to route all reads to Target instead of Origin.

include::partial$lightbox-tip.adoc[]
//include::partial$lightbox-tip.adoc[]

image::{imagesprefix}migration-phase4ra9.png["Phase 4 diagram shows read routing on ZDM Proxy was switched to Target."]

@@ -19,7 +19,9 @@ This operation is a configuration change that can be carried out as explained xr

[TIP]
====
If you performed the optional steps described in the prior topic, xref:enable-async-dual-reads.adoc[] -- to verify that your Target cluster was ready and tuned appropriately to handle the production read load -- be sure to disable async dual reads when you're done testing. If you haven't already, revert `read_mode` in `vars/zdm_proxy_core_config.yml` to `PRIMARY_ONLY` when switching sync reads to Target. Example:
If you performed the optional steps described in the prior topic, xref:enable-async-dual-reads.adoc[] -- to verify that your Target cluster was ready and tuned appropriately to handle the production read load -- be sure to disable async dual reads when you're done testing.
If you haven't already, revert `read_mode` in `vars/zdm_proxy_core_config.yml` to `PRIMARY_ONLY` when switching sync reads to Target.
Example:
[source,yml]
----
@@ -30,6 +32,7 @@ Otherwise, if you don't disable async dual reads, {zdm-proxy} instances would co
====

== Changing the read routing configuration

If you're not there already, `ssh` back into the jumphost:

[source,bash]
@@ -60,11 +63,14 @@ Run the playbook that changes the configuration of the existing {zdm-proxy} depl
ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory
----

Wait for the {zdm-proxy} instances to be restarted by Ansible, one by one. All instances will now send all reads to Target instead of Origin. In other words, Target is now the primary cluster, but the {zdm-proxy} is still keeping Origin up-to-date via dual writes.
Wait for the {zdm-proxy} instances to be restarted by Ansible, one by one.
All instances will now send all reads to Target instead of Origin.
In other words, Target is now the primary cluster, but the {zdm-proxy} is still keeping Origin up-to-date via dual writes.

== Verifying the read routing change

Once the read routing configuration change has been rolled out, you may want to verify that reads are correctly sent to Target as expected. This is not a required step, but you may wish to do it for peace of mind.
Once the read routing configuration change has been rolled out, you may want to verify that reads are correctly sent to Target as expected.
This is not a required step, but you may wish to do it for peace of mind.

[TIP]
====
@@ -80,11 +86,18 @@ Although `DESCRIBE` requests are not system requests, they are also generally re

Verifying that the correct routing is taking place is a slightly cumbersome operation, because the purpose of the ZDM process is to align the clusters and therefore, by definition, the data will be identical on both sides.

For this reason, the only way to do a manual verification test is to force a discrepancy of some test data between the clusters. To do this, you could consider using the xref:connect-clients-to-proxy.adoc#_themis_client[Themis sample client application]. This client application connects directly to Origin, Target and the {zdm-proxy}, inserts some test data in its own table and allows you to view the results of reads from each source. Please refer to its README for more information.
For this reason, the only way to do a manual verification test is to force a discrepancy of some test data between the clusters.
To do this, you could consider using the xref:connect-clients-to-proxy.adoc#_themis_client[Themis sample client application].
This client application connects directly to Origin, Target and the {zdm-proxy}, inserts some test data in its own table and allows you to view the results of reads from each source.
Please refer to its README for more information.

Alternatively, you could follow this manual procedure (consolidated in the sketch after the list):

* Create a small test table on both clusters, for example a simple key/value table (it could be in an existing keyspace, or in one that you create specifically for this test). For example `CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);`.
* Use `cqlsh` to connect *directly to Origin*. Insert a row with any key, and with a value specific to Origin, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');`.
* Now, use `cqlsh` to connect *directly to Target*. Insert a row with the same key as above, but with a value specific to Target, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');`.
* Now, use `cqlsh` to connect to the {zdm-proxy} (see xref:connect-clients-to-proxy.adoc#_connecting_cqlsh_to_the_zdm_proxy[here] for how to do this) and issue a read request for this test table: `SELECT * FROM test_keyspace.test_table WHERE k = '1';`. The result will clearly show you where the read actually comes from.
* Create a small test table on both clusters, for example a simple key/value table (it could be in an existing keyspace, or in one that you create specifically for this test).
For example `CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);`.
* Use `cqlsh` to connect *directly to Origin*.
Insert a row with any key, and with a value specific to Origin, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');`.
* Now, use `cqlsh` to connect *directly to Target*.
Insert a row with the same key as above, but with a value specific to Target, for example `INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');`.
* Now, use `cqlsh` to connect to the {zdm-proxy} (see xref:connect-clients-to-proxy.adoc#_connecting_cqlsh_to_the_zdm_proxy[here] for how to do this) and issue a read request for this test table: `SELECT * FROM test_keyspace.test_table WHERE k = '1';`.
The result will clearly show you where the read actually comes from.
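
Taken together, the manual check can be scripted along the following lines. This is a sketch only: the contact points, credentials, and proxy listen port are placeholders, the keyspace must already exist on both clusters, and if Target is {astra_db} you would connect `cqlsh` as described in the linked instructions instead.

[source,bash]
----
# Create the test table directly on each cluster (the keyspace must already exist).
cqlsh ORIGIN_CONTACT_POINT -e "CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);"
cqlsh TARGET_CONTACT_POINT -e "CREATE TABLE test_keyspace.test_table(k TEXT PRIMARY KEY, v TEXT);"

# Insert a cluster-specific value for the same key, directly on each cluster.
cqlsh ORIGIN_CONTACT_POINT -e "INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Origin!');"
cqlsh TARGET_CONTACT_POINT -e "INSERT INTO test_keyspace.test_table(k, v) VALUES ('1', 'Hello from Target!');"

# Read through the ZDM Proxy: the value returned shows which cluster served the read.
cqlsh ZDM_PROXY_IP ZDM_PROXY_PORT -e "SELECT * FROM test_keyspace.test_table WHERE k = '1';"
----
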
75 changes: 47 additions & 28 deletions modules/ROOT/pages/components.adoc
@@ -3,28 +3,36 @@
ifdef::env-github,env-browser,env-vscode[:imagesprefix: ../images/]
ifndef::env-github,env-browser,env-vscode[:imagesprefix: ]

The main component of the {company} {zdm-product} product suite is **{zdm-proxy}**, which by design is a simple and lightweight proxy that handles all the real-time requests generated by your client applications. {zdm-proxy} is open-source software (OSS) and available in its Public GitHub repo, https://github.com/datastax/zdm-proxy. You can view the source files and contribute code for potential inclusion via Pull Requests (PRs) initiated on a fork of the repo.
The main component of the {company} {zdm-product} product suite is **{zdm-proxy}**, which by design is a simple and lightweight proxy that handles all the real-time requests generated by your client applications.

{zdm-proxy} is open-source software (OSS) and available in its https://github.com/datastax/zdm-proxy[Public GitHub repo].
You can view the source files and contribute code for potential inclusion via Pull Requests (PRs) initiated on a fork of the repo.

The {zdm-proxy} itself has no capability to migrate data and no knowledge that a migration may be ongoing, and it is not coupled to the migration process in any way.

* {company} {zdm-product} also provides the **{zdm-utility}** and **{zdm-automation}** to set up and run the Ansible playbooks that deploy and manage the {zdm-proxy} and its monitoring stack.
* Two data migration tools are available -- **{cstar-data-migrator}** and **{dsbulk-migrator}** -- to migrate your data. See the xref:introduction.adoc#_data_migration_tools[summary of features] below.
* Multiple data migration tools such as **{cstar-data-migrator}** and **{dsbulk-migrator}** are available.

== Role of {zdm-proxy}

We created {zdm-proxy} to function between the application and both databases (Origin and Target). The databases can be any CQL-compatible data store (e.g. Apache Cassandra, DataStax Enterprise and {astra_db}). The proxy always sends every write operation (Insert, Update, Delete) synchronously to both clusters at the desired Consistency Level:
We created {zdm-proxy} to function between the application and both databases (Origin and Target).
The databases can be any CQL-compatible data store (e.g. Apache Cassandra, DataStax Enterprise and {astra_db}).
The proxy always sends every write operation (Insert, Update, Delete) synchronously to both clusters at the desired Consistency Level:

* If the write is successful in both clusters, it returns a successful acknowledgement to the client application
* If the write is successful in both clusters, it returns a successful acknowledgement to the client application.
* If the write fails on either cluster, the failure is passed back to the client application so that it can retry it as appropriate, based on its own retry policy.

This design ensures that new data is always written to both clusters, and that any failure on either cluster is always made visible to the client application. {zdm-proxy} also sends all reads to the primary cluster (initially Origin, and later Target) and returns the result to the client application.
This design ensures that new data is always written to both clusters, and that any failure on either cluster is always made visible to the client application.
{zdm-proxy} also sends all reads to the primary cluster (initially Origin, and later Target) and returns the result to the client application.

{zdm-proxy} is designed to be highly available. It can be scaled horizontally, so typical deployments are made up of a minimum of 3 servers. {zdm-proxy} can be restarted in a rolling fashion, for example, to change configuration for different phases of the migration.
{zdm-proxy} is designed to be highly available. It can be scaled horizontally, so typical deployments are made up of a minimum of 3 servers.
{zdm-proxy} can be restarted in a rolling fashion, for example, to change configuration for different phases of the migration.

[TIP]
====
{zdm-proxy} has been designed to run in a **clustered** fashion so that it is never a single point of failure. Unless it is for a demo or local testing environment, a {zdm-proxy} deployment should always comprise multiple {zdm-proxy} instances.
{zdm-proxy} has been designed to run in a **clustered** fashion so that it is never a single point of failure.
Unless it is for a demo or local testing environment, a {zdm-proxy} deployment should always comprise multiple {zdm-proxy} instances.
The term {zdm-proxy} indicates the whole deployment, and {zdm-proxy} instance refers to an individual proxy process in the deployment.
====
@@ -37,25 +45,34 @@ The term {zdm-proxy} indicates the whole deployment, and {zdm-proxy} instance re

* Bifurcates writes synchronously to both clusters during the migration process.

* Returns (for read operations) the response from the primary cluster, which is its designated source of truth. During a migration, Origin is typically the primary cluster. Near the end of the migration, you'll shift the primary cluster to be Target.
* Returns (for read operations) the response from the primary cluster, which is its designated source of truth.
During a migration, Origin is typically the primary cluster.
Near the end of the migration, you'll shift the primary cluster to be Target.

* Can be configured to also read asynchronously from Target. This capability is called **Asynchronous Dual Reads** (also known as **Read Mirroring**) and allows you to observe what read latencies and throughput Target can achieve under the actual production load.
* Can be configured to also read asynchronously from Target.
This capability is called **Asynchronous Dual Reads** (also known as **Read Mirroring**) and allows you to observe what read latencies and throughput Target can achieve under the actual production load (a configuration sketch follows the note below).
** Results from the asynchronous reads executed on Target are not sent back to the client application.
** This design implies that failure on asynchronous reads from Target does not cause an error on the client application.
** Asynchronous dual reads can be enabled and disabled dynamically with a rolling restart of the {zdm-proxy} instances.

[NOTE]
====
When using Asynchronous Dual Reads, any additional read load on Target may impact its ability to keep up with writes. This behavior is expected and desired. The idea is to mimic the full read and write load on Target so there are no surprises during the last migration phase; that is, after cutting over completely to Target.
When using Asynchronous Dual Reads, any additional read load on Target may impact its ability to keep up with writes.
This behavior is expected and desired.
The idea is to mimic the full read and write load on Target so there are no surprises during the last migration phase; that is, after cutting over completely to Target.
====
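
To make this concrete, enabling and disabling Asynchronous Dual Reads is a configuration change rolled out with the {zdm-automation}. The sketch below assumes the `read_mode` variable in `vars/zdm_proxy_core_config.yml`; the exact value for dual reads is an assumption here, so check the comments in that file for your release.

[source,bash]
----
# Enable Asynchronous Dual Reads, then roll the change out across the proxy instances.
sed -i 's/^read_mode:.*/read_mode: DUAL_ASYNC_ON_SECONDARY/' vars/zdm_proxy_core_config.yml
ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory

# Disable it again by reverting reads to the primary cluster only and re-running the playbook.
sed -i 's/^read_mode:.*/read_mode: PRIMARY_ONLY/' vars/zdm_proxy_core_config.yml
ansible-playbook rolling_update_zdm_proxy.yml -i zdm_ansible_inventory
----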

== {zdm-utility} and {zdm-automation}

https://www.ansible.com/[Ansible] is a suite of software tools that enables infrastructure as code. It is open source and its capabilities include software provisioning, configuration management, and application deployment functionality.
https://www.ansible.com/[Ansible] is a suite of software tools that enables infrastructure as code.
It is open source and its capabilities include software provisioning, configuration management, and application deployment functionality.

The Ansible automation for {zdm-shortproduct} is organized into playbooks, each implementing a specific operation. The machine from which the playbooks are run is known as the Ansible Control Host. In {zdm-shortproduct}, the Ansible Control Host will run as a Docker container.
The Ansible automation for {zdm-shortproduct} is organized into playbooks, each implementing a specific operation.
The machine from which the playbooks are run is known as the Ansible Control Host.
In {zdm-shortproduct}, the Ansible Control Host will run as a Docker container.

You will use the **{zdm-utility}** to set up Ansible in a Docker container, and **{zdm-automation}** to run the Ansible playbooks from the Docker container created by {zdm-utility}. In other words,the {zdm-utility} creates the Docker container acting as the **Ansible Control Host**, from which the {zdm-automation} allows you to deploy and manage the {zdm-proxy} instances and the associated monitoring stack - Prometheus metrics and Grafana visualization of the metric data.
You will use the **{zdm-utility}** to set up Ansible in a Docker container, and **{zdm-automation}** to run the Ansible playbooks from the Docker container created by {zdm-utility}.
In other words, the {zdm-utility} creates the Docker container acting as the **Ansible Control Host**, from which the {zdm-automation} allows you to deploy and manage the {zdm-proxy} instances and the associated monitoring stack: Prometheus metrics and Grafana visualization of the metric data.
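
In practice, the day-to-day interaction looks roughly like the sketch below. The container name and playbook name are assumptions for illustration; use whatever `docker ps` reports and the playbook appropriate to your current step.

[source,bash]
----
# Open a shell in the Ansible Control Host container created by the ZDM Utility.
docker exec -it zdm-ansible-container bash

# From inside the container, run a playbook against your inventory of ZDM Proxy hosts.
ansible-playbook deploy_zdm_proxy.yml -i zdm_ansible_inventory
----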

{zdm-utility} and {zdm-automation} expect that you have already provisioned the recommended infrastructure, as outlined in xref:deployment-infrastructure.adoc[].

@@ -68,29 +85,31 @@ For details, see:

== Data migration tools

As part of the overall migration process, you can use {cstar-data-migrator} and/or {dsbulk-migrator} to migrate your data. Or you can use other technologies, such as Apache Spark™, to write your own custom data migration process.
As part of the overall migration process, you can use {cstar-data-migrator} and/or {dsbulk-migrator} to migrate your data.
Other technologies such as Apache Spark™ can be used to write your own custom data migration process.

=== {cstar-data-migrator}

[TIP]
====
An important **prerequisite** to use {cstar-data-migrator} is that you already have the matching schema on Target.
====

Use {cstar-data-migrator} to:

* Migrate your data from any CQL-supported Origin to any CQL-supported Target. Examples of databases that support CQL are Apache Cassandra, DataStax Enterprise and {astra_db}.
* Validate migration accuracy and performance using examples that provide a smaller, randomized data set
* Preserve internal `writetime` timestamps and Time To Live (TTL) values
* Take advantage of advanced data types (Sets, Lists, Maps, UDTs)
* Filter records from the Origin data, using Cassandra's internal `writetime` timestamp
* Use SSL Support, including custom cipher algorithms
* Migrate your data from any CQL-supported Origin to any CQL-supported Target.
Examples of databases that support CQL are Apache Cassandra, DataStax Enterprise and {astra_db}.
* Validate migration accuracy and performance using examples that provide a smaller, randomized data set.
* Preserve internal `writetime` timestamps and Time To Live (TTL) values.
* Take advantage of advanced data types (Sets, Lists, Maps, UDTs).
* Filter records from the Origin data, using Cassandra's internal `writetime` timestamp.
* Use SSL Support, including custom cipher algorithms.

Cassandra Data Migrator is designed to:

* Connect to and compare your Target database with Origin
* Report differences in a detailed log file
* Optionally reconcile any missing records and fix any data inconsistencies in Target, if you enable `autocorrect` in a config file

[TIP]
====
An important **prerequisite** is that you already have the matching schema on Target.
====
* Connect to and compare your Target database with Origin.
* Report differences in a detailed log file.
* Optionally reconcile any missing records and fix any data inconsistencies in Target by enabling `autocorrect` in a config file (see the sketch after this list).
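
A typical validation run with `autocorrect` enabled is sketched below. {cstar-data-migrator} is submitted as a Spark job; the job class, property names, and jar version shown are assumptions for illustration, so check the {cstar-data-migrator} documentation for the exact values in your release.

[source,bash]
----
# Sketch: validate Target against Origin and reconcile any differences found.
spark-submit --properties-file cdm.properties \
  --conf spark.cdm.autocorrect.missing=true \
  --conf spark.cdm.autocorrect.mismatch=true \
  --master "local[*]" \
  --class com.datastax.cdm.job.DiffData cassandra-data-migrator-4.x.x.jar
----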

=== {dsbulk-migrator}
