DOC-4225: New index page (#167)
* new index page
beajohnson authored Jun 14, 2024
1 parent cb90bf6 commit 62e7b12
Showing 5 changed files with 33 additions and 115 deletions.
2 changes: 1 addition & 1 deletion antora.yml
@@ -1,7 +1,7 @@
name: data-migration
title: Data Migration
version: ~
start_page: introduction.adoc
start_page: index.adoc

nav:
- modules/ROOT/nav.adoc
Binary file added modules/ROOT/images/migration-phase2ra9a.png
34 changes: 0 additions & 34 deletions modules/ROOT/nav.adoc

This file was deleted.

24 changes: 24 additions & 0 deletions modules/ROOT/pages/index.adoc
@@ -0,0 +1,24 @@
= Introduction to data migration
:page-tag: migration,zdm,zero-downtime,zdm-proxy,introduction
ifdef::env-github,env-browser,env-vscode[:imagesprefix: ../images/]
ifndef::env-github,env-browser,env-vscode[:imagesprefix: ]

Enterprises today want to reliably migrate mission-critical client applications and data to cloud environments with zero or near-zero downtime during the migration.

{company} has developed a set of thoroughly tested self-service tools that walk you through well-defined migration options.
These tools help you migrate your data from any Cassandra origin (Apache Cassandra®, {company} Enterprise (DSE), or {company} {astra_db}) to any Cassandra target (Apache Cassandra®, DSE, or {company} {astra_db}).

== Migration process and tools

A migration is a workflow that covers the full lifecycle of moving your data from your existing database into the new database you have selected.
{company} can migrate all of your data, no matter how critical, with zero or acceptable downtime.
When the migration is complete, the data is present in the new database and all client applications connect exclusively to the new database. The old database becomes obsolete and can be removed.

The migration tools are:

* https://docs.datastax.com/en/data-migration/introduction.html[*Zero Downtime Migration*] (ZDM) Proxy: You can continue to run your current application while you migrate data from the Origin to the Target database without any downtime.
The proxy manages your application's read and write activity during the transition.
* xref:cassandra-data-migrator.adoc[*Cassandra Data Migrator*]: Use it in conjunction with the ZDM Proxy for a migration with zero downtime, or on its own for migrations that can tolerate some downtime.
* https://docs.datastax.com/en/dsbulk/overview/dsbulk-about.html[*DSBulk Loader*]: In addition to loading and unloading CSV and JSON data, DSBulk Loader can transfer data between databases.
It reads data from a table in your origin database and writes it to a table in your target database, which makes it an alternative to Cassandra Data Migrator (CDM).
A conceptual sketch of this table-to-table copy follows this list.
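
To make that table-to-table copy concrete, here is a minimal sketch of the idea using the open-source Python `cassandra-driver`. It is an illustration only, not what Cassandra Data Migrator or DSBulk Loader do internally (those tools add parallelism, retries, and validation); the contact points, keyspace, table, and columns are placeholder assumptions.

[source,python]
----
# Conceptual sketch only: copy rows from an origin table to the same
# table on a target cluster. CDM and DSBulk Loader handle batching,
# parallelism, retries, and validation for you.
from cassandra.cluster import Cluster

ORIGIN_CONTACT_POINTS = ["origin-node1"]   # placeholder
TARGET_CONTACT_POINTS = ["target-node1"]   # placeholder

origin = Cluster(ORIGIN_CONTACT_POINTS).connect("demo_ks")  # placeholder keyspace
target = Cluster(TARGET_CONTACT_POINTS).connect("demo_ks")

# Assumes a simple (id, payload) schema for illustration.
insert = target.prepare("INSERT INTO demo_table (id, payload) VALUES (?, ?)")
for row in origin.execute("SELECT id, payload FROM demo_table"):
    target.execute(insert, (row.id, row.payload))

origin.shutdown()
target.shutdown()
----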
88 changes: 8 additions & 80 deletions modules/ROOT/pages/introduction.adoc
@@ -4,17 +4,11 @@
ifdef::env-github,env-browser,env-vscode[:imagesprefix: ../images/]
ifndef::env-github,env-browser,env-vscode[:imagesprefix: ]

Enterprises today depend on the ability to reliably migrate mission-critical client applications and data to cloud environments with zero downtime during the migration.

At {company}, we've developed a set of thoroughly-tested self-service tools, automation scripts, examples, and documented procedures that walk you through well-defined migration phases.

We call this product suite {company} {zdm-product} ({zdm-shortproduct}).

{zdm-shortproduct} provides a simple and reliable way for you to migrate applications from any CQL-based cluster (https://cassandra.apache.org/_/index.html[Apache Cassandra®], https://www.datastax.com/products/datastax-enterprise[DataStax Enterprise (DSE)], https://www.datastax.com/products/datastax-astra[{astra_db}], or any type of CQL-based database) to any other CQL-based cluster, without any interruption of service to the client applications and data.
{zdm-product} provides a simple and reliable way for you to migrate applications from any CQL-based cluster (https://cassandra.apache.org/_/index.html[Apache Cassandra®], https://www.datastax.com/products/datastax-enterprise[DataStax Enterprise (DSE)], https://www.datastax.com/products/datastax-astra[{astra_db}], or any type of CQL-based database) to any other CQL-based cluster, without any interruption of service to the client applications and data.
* You can move your application to {astra_db}, DSE, or Cassandra with no downtime and with minimal configuration changes.
* Your clusters will be kept in sync at all times by a dual-write logic configuration.
* You can xref:rollback.adoc[rollback] at any point, for complete peace of mind.
* Your clusters are kept in sync at all times by a dual-write logic configuration.
* You can xref:rollback.adoc[roll back] at any point, for complete peace of mind.
include::partial$note-downtime.adoc[]
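
As a rough mental model of the dual-write logic mentioned in the list above, the following hedged Python sketch bifurcates every application write to both clusters while serving reads from the origin, which is what the ZDM Proxy does transparently on your application's behalf. This is not the proxy's implementation; the addresses, keyspace, and schema are assumptions.

[source,python]
----
# Conceptual dual-write sketch (the ZDM Proxy does this for you).
# Addresses, keyspace, and schema are placeholders.
from cassandra.cluster import Cluster

origin = Cluster(["origin-node1"]).connect("demo_ks")  # placeholder
target = Cluster(["target-node1"]).connect("demo_ks")  # placeholder

WRITE = "INSERT INTO demo_table (id, payload) VALUES (%s, %s)"
READ = "SELECT payload FROM demo_table WHERE id = %s"

def dual_write(row_id, payload):
    """Bifurcate writes: the same mutation goes to both clusters."""
    origin.execute(WRITE, (row_id, payload))
    target.execute(WRITE, (row_id, payload))

def read(row_id):
    """In the early migration phases, reads are executed on the origin only."""
    return origin.execute(READ, (row_id,)).one()
----

Because both clusters receive every write once the proxy is in place, new data stays in sync on both sides while existing data is migrated separately.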
@@ -50,8 +44,6 @@ First, a couple of key terms used throughout the ZDM documentation and software
* **Target:** This cluster is the new environment to which you want to migrate client applications and data.
For additional terms, see the xref:glossary.adoc[glossary].
=== Migration diagram
Discover the migration concepts, software components, and sequence of operations.
@@ -62,8 +54,8 @@ The highlighted components in each phase emphasize how your client applications
==== Pre-migration client application operations
Let's look at a pre-migration from a high-level view.
At this point, your client applications are performing read/write operations with an existing CQL-compatible database: Apache Cassandra, DSE, or Astra DB.
Here's a look at a pre-migration from a high-level view.
At this point, your client applications are performing read/write operations with an existing CQL-compatible database such as Apache Cassandra, DSE, or {astra_db}.
image:pre-migration0ra9.png["Pre-migration environment."]
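
As a baseline for the phases that follow, a pre-migration client application can be pictured as this minimal sketch: a single driver session performing reads and writes against the existing cluster. Addresses, keyspace, and schema are placeholder assumptions.

[source,python]
----
# Baseline sketch: before the migration, the application talks to one
# CQL-compatible cluster only. Names are placeholders.
from cassandra.cluster import Cluster

session = Cluster(["existing-node1"]).connect("demo_ks")  # placeholder

session.execute(
    "INSERT INTO demo_table (id, payload) VALUES (%s, %s)", (1, "hello")
)
print(session.execute("SELECT payload FROM demo_table WHERE id = %s", (1,)).one())
----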
@@ -81,10 +73,10 @@ image:migration-phase1ra9.png["Migration Phase 1."]
==== Phase 2: Migrate data
In this phase, migrate existing data using Cassandra Data Migrator and/or DSBulk Migrator.
In this phase, migrate existing data using Cassandra Data Migrator and/or DSBulk Loader.
Validate that the migrated data is correct, while continuing to perform dual writes.
image:migration-phase2ra9.png["Migration Phase 2."]
image:migration-phase2ra9a.png["Migration Phase 2."]
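
Cassandra Data Migrator ships with its own validation mode, which is the recommended way to check migrated data. Purely as an illustration of the idea, a hedged spot check might look like the sketch below; cluster addresses, keyspace, table, and columns are assumptions.

[source,python]
----
# Illustrative spot check of migrated data; CDM's validation mode is the
# real tool for this. All names below are placeholders.
from cassandra.cluster import Cluster

origin = Cluster(["origin-node1"]).connect("demo_ks")  # placeholder
target = Cluster(["target-node1"]).connect("demo_ks")  # placeholder

mismatches = 0
for row in origin.execute("SELECT id, payload FROM demo_table LIMIT 1000"):
    hit = target.execute(
        "SELECT payload FROM demo_table WHERE id = %s", (row.id,)
    ).one()
    if hit is None or hit.payload != row.payload:
        mismatches += 1

print(f"{mismatches} mismatched rows in the sampled set")
----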
'''
@@ -113,71 +105,7 @@ Once that happens, the migration is complete.
image:migration-phase5ra9.png["Migration Phase 5."]
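
If the target is {astra_db}, a client application that has been moved off the ZDM Proxy typically connects with the driver's secure connect bundle, as in this hedged sketch; the bundle path, token, and keyspace are placeholders.

[source,python]
----
# Sketch of a client application connecting directly to the target
# (Astra DB here); bundle path, token, and keyspace are placeholders.
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider

cluster = Cluster(
    cloud={"secure_connect_bundle": "/path/to/secure-connect-db.zip"},
    auth_provider=PlainTextAuthProvider("token", "<application-token>"),
)
session = cluster.connect("demo_ks")
print(session.execute("SELECT release_version FROM system.local").one())
----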
////
=== Migration interactive diagram
Click the *Start* button on the interactive diagram below to begin a walkthrough of the migration phases.
Click the components to open a larger view.
[.swiper]
====
[.slide]
--
.Walk through the illustrated migration phases
Discover the migration concepts, software components, and sequence of operations.
image:migration-introduction9.png["Introductory page prompts you to click the Start button to begin the graphical presentation."]
--
[.slide]
--
.Phase 0: Before your migration starts
Let's look at a pre-migration, high-level view. At this point, your client applications are performing read/write operations with an existing CQL-compatible database. That is, Apache Cassandra, DSE, or Astra DB.
image:pre-migration0ra9.png["Illustrates a pre-migration environment, as summarized in the text. Back and Next buttons are available for navigation within the graphic."]
--
[.slide]
--
.Phase 1: Deploy ZDM Proxy and connect client applications
In this first phase, deploy the ZDM Proxy instances and connect client applications to the proxies. This phase activates the dual-write logic. Writes are bifurcated (sent to both Origin and Target), while reads are executed on Origin only.
image:migration-phase1ra9.png["Illustrates migration Phase 1, as summarized in the text. Back and Next buttons are available for navigation within the graphic."]
--
[.slide]
--
.Phase 2: Migrate data
In this phase, migrate existing data using Cassandra Data Migrator and/or DSBulk Migrator. Validate that the migrated data is correct, while continuing to perform dual writes.
image:migration-phase2ra9.png["Illustrates migration Phase 2, as summarized in the text. Back and Next buttons are available for navigation within the graphic."]
--
[.slide]
--
.Phase 3: Enable asynchronous dual reads
In this phase, you can optionally enable asynchronous dual reads. The idea is to test performance and verify that Target can handle your application's live request load before cutting over from Origin to Target.
image:migration-phase3ra9.png["Illustrates migration Phase 3, as summarized in the text. Back and Next buttons are available for navigation within the graphic."]
--
[.slide]
--
.Phase 4: Route reads to Target
In this phase, read routing on the ZDM Proxy is switched to Target so that all reads are executed on it, while writes are still sent to both clusters. In other words, Target becomes the primary cluster.
image:migration-phase4ra9.png["Illustrates migration Phase 4, as summarized in the text. Back and Next buttons are available for navigation within the graphic."]
--
[.slide]
--
.Phase 5: Connect directly to Target
In this phase, move your client applications off the ZDM Proxy and connect the apps directly to Target. Once that happens, the migration is complete.
image:migration-phase5ra9.png["Illustrates migration Phase 5, as summarized in the text. Back and Restart buttons are available for navigation within the graphic."]
--
====
////
'''
== A fun way to learn: {zdm-product} Interactive Lab
