Zero Downtime Migration Lab ℹ️ For technical support, please contact us via email.
⬅️ Back Step 6 Next ➡️
Phase 1d: Connect the client application to the proxy

Phase 1d

🎯 Goal: switch the client application from a direct connection to Origin to a connection through the proxy (while still keeping Origin as the primary DB).

The sample client application used in this exercise is a simple FastAPI process: we will stop it (killing the process running the API) and start it again, specifying a different connection mode.

Before doing that, however, let's finish writing the required settings in the .env file. Check the full path of the secure-connect-bundle zipfile you downloaded:

if you went through the Astra CLI path, do so with:

```bash
### logs
# locate the bundle zipfile (if Astra CLI setup followed)
grep ASTRA_DB_SECURE_BUNDLE_PATH /workspace/zdm-scenario-katapod/.env
```

otherwise, you can get the zipfile path by running:

```bash
### logs
# locate the bundle zipfile (if Astra Web UI setup followed)
ls /workspace/zdm-scenario-katapod/secure*zip
```

Get the IP address of the proxy instance as well:

```bash
### logs
. /workspace/zdm-scenario-katapod/scenario_scripts/find_addresses.sh
```

(This time, the commands above will run and print their output on the still-unused "zdm-proxy-logs" console, for your convenience while you edit the dot-env file.)

and finally make sure you have the "Client ID" and the "Client Secret" found in your Astra DB Token. Now you can insert the values of ASTRA_DB_SECURE_BUNDLE_PATH, ASTRA_DB_CLIENT_ID, ASTRA_DB_CLIENT_SECRET and ZDM_HOST_IP:

```bash
### host
nano +7,30 /workspace/zdm-scenario-katapod/client_application/.env
```

Note: nano might occasionally fail to start. In that case, hit Ctrl-C in the console and re-launch the command.
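For reference, once filled in, the relevant portion of the file might look roughly like the sketch below; every value here is a placeholder (the bundle filename, credentials and IP address will be the ones you gathered above):

```bash
# Illustrative placeholder values only -- use your own
ASTRA_DB_SECURE_BUNDLE_PATH="/workspace/zdm-scenario-katapod/secure-connect-zdmtarget.zip"
ASTRA_DB_CLIENT_ID="<Client ID from your Astra DB Token>"
ASTRA_DB_CLIENT_SECRET="<Client Secret from your Astra DB Token>"
ZDM_HOST_IP="<IP printed by find_addresses.sh>"
```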

Once you save the changes (Ctrl-X, then Y, then Enter in the nano editor), restart the API by executing the following, which kills the process in the "api-console" and launches it again:

### {"terminalId": "api", "macrosBefore": ["ctrl_c"]}
# A Ctrl-C to stop the running process ... followed by:
CLIENT_CONNECTION_MODE=ZDM_PROXY uvicorn api:app

This time, the API connects to the proxy. You should see no disruptions in the requests that are running in the "api-client-console".
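(Optionally, before moving on, you can confirm from the host console that the restarted API is responding. The sketch below reuses the same read endpoint and jq filter that appear a little further down in this step; the exact rows shown will depend on what has been written so far:)

```bash
# optional sanity check from the host console: fetch the latest status row
curl -s -XGET "localhost:8000/status/eva?entries=1" | jq -r '.[] | "\(.when)\t\(.status)"'
```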

Now, back to the requests running in the "api-client-console": quick, run the following! It simply stops the loop that writes to the API and starts another one that differs only in its message strings ("ModeProxy" plus a timestamp), making it easier to track what's going on:

### {"terminalId": "client", "macrosBefore": ["ctrl_c"]}
while true; do
  NEW_STATUS="ModeProxy_`date +'%H-%M-%S'`";
  echo -n "Setting status to ${NEW_STATUS} ... ";
  curl -s -XPOST -o /dev/null "localhost:8000/status/eva/${NEW_STATUS}";
  echo "done. Sleeping a little ... ";
  sleep 20;
done

After the loop has restarted, check that you get the new rows back by querying the API:

```bash
### host
curl -s -XGET "localhost:8000/status/eva?entries=3" | jq -r '.[] | "\(.when)\t\(.status)"'
```

The API is connecting to the ZDM Proxy. The proxy, in turn, is propagating writes to both the Origin and Target databases. To verify this, check that you can read the "ModeProxy" status rows from Origin:

```bash
### host
docker exec \
  -it cassandra-origin-1 \
  cqlsh -u cassandra -p cassandra \
  -e "SELECT * FROM zdmapp.user_status WHERE user='eva' limit 3;"
```

Likewise, you can do the same check on Target, i.e. Astra DB: if you went through the Astra CLI path, you can run the following (editing the database name if different from zdmtarget):

```bash
### host
astra db cqlsh zdmtarget \
  -k zdmapp \
  -e "SELECT * FROM zdmapp.user_status WHERE user='eva' limit 3;"
```

otherwise, paste this SELECT statement directly in the Astra DB Web CQL Console:

### {"execute": false}
SELECT * FROM zdmapp.user_status WHERE user='eva' limit 3;

Note that rows inserted before this switch are not present on Target. To remedy this shortcoming, you must do something more.
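If you want to see this gap concretely, one optional check is to compare the row counts for this user on the two sides; the sketch below reuses the same container, credentials, keyspace and database names as the commands above (adjust them if yours differ):

```bash
# Row count on Origin: includes the rows written before the switch ...
docker exec -it cassandra-origin-1 \
  cqlsh -u cassandra -p cassandra \
  -e "SELECT COUNT(*) FROM zdmapp.user_status WHERE user='eva';"

# ... and on Target (Astra CLI path): expect a smaller count here,
# since Target only has the rows written through the proxy so far.
astra db cqlsh zdmtarget -k zdmapp \
  -e "SELECT COUNT(*) FROM zdmapp.user_status WHERE user='eva';"
```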

🗒️ The proxy is doing its job: in order to guarantee that the two databases have the same content, including historical data, it's time to run a migration process.

Schema, phase 1d

🔎 Monitoring suggestion

The proxy is now routing actual traffic, and this is reflected on the dashboard (note that the metrics take a few seconds to show up on the graphs). The instance-level metrics, which refer to an individual proxy instance, show the read operations issued by the curl GETs above, as well as the regular writes sent in by the running loop. You can also look at the read/write latency graphs and at the number of client connections.

The node-level metrics (further down in the dashboard) refer to Origin and Target: with those graphs, you can confirm that writes are indeed going to both clusters, while reads only go to Origin.
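Besides the Grafana dashboard, each proxy instance also exposes its metrics over HTTP in Prometheus format. If you are curious about the raw numbers behind the graphs, you can fetch them directly; the sketch below assumes the ZDM Proxy default metrics port (14001) and that you substitute the proxy address found earlier:

```bash
# Peek at the raw Prometheus metrics of one proxy instance
# (replace <ZDM_HOST_IP> with the proxy address; 14001 is assumed
#  to be the metrics port, which is the ZDM Proxy default)
curl -s "http://<ZDM_HOST_IP>:14001/metrics" | grep "zdm" | head -n 20
```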

⬅️ Back Next ➡️