A few updates. #4

Open · wants to merge 2 commits into base: main
33 changes: 21 additions & 12 deletions lab2/README.md
@@ -1,23 +1,23 @@
# Lab 2: Create Google Cloud Infrastructure Components

In this lab, you are going to:
* Create a VPC
* Create two external static IP addresses
* Create a GKE cluster
* Create a GKE cluster in the VPC created in the first step
* Provision managed Anthos Service Mesh on the GKE cluster
* Create a CloudSQL PostgreSQL database instance

Create a VPC:
```bash
export VPC_NETWORK="redis-vpc-network"
export SUBNETWORK=$VPC_NETWORK
gcloud compute networks create $VPC_NETWORK \
--subnet-mode=auto
```
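Optionally, you can also confirm the network from the command line:
```bash
# Describe the newly created VPC network (auto subnet mode)
gcloud compute networks describe $VPC_NETWORK
```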

On success, you should see your VPC network in the console as follows:
![VPC Network](./img/Redis_VPC_Network.png)


Reserve external static IP addresses:
```bash
@@ -29,9 +29,16 @@ gcloud compute addresses create redis-client-host-ip --region us-central1
export REDIS_CLIENT_HOST_IP="$(gcloud compute addresses describe redis-client-host-ip --region=us-central1 --format='value(address)')"
```

Make sure the above static IP addresses were acquired by printing their values to the console:

```bash
echo $REDIS_API_GATEWAY_IP
echo $REDIS_CLIENT_HOST_IP
```

On success, you should see the newly created reserved public IP addresses as shown below:
![Reserved IPs](./img/reserved_ips.png)

Create a GKE cluster:
```bash
export PROJECT_ID=$(gcloud info --format='value(config.project)')
@@ -50,9 +57,10 @@ gcloud container clusters create $CLUSTER_NAME \
--labels="mesh_id=proj-${PROJECT_NUMBER}"
```

The GKE cluster creation can take anywhere from 5 to 10 minutes.
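While you wait, you can poll the cluster state from the CLI (a quick check, not required for the lab):
```bash
# Shows the cluster name and its provisioning status (PROVISIONING -> RUNNING)
gcloud container clusters list --filter="name=$CLUSTER_NAME" --format="value(name,status)"
```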
On success, you should see your newly created GKE cluster as shown below:
![GKE](./img/GKE_Cluster.png)

Provision Anthos Service Mesh:
Enable Anthos Service Mesh on your project's Fleet:
```bash
@@ -67,7 +75,7 @@ gcloud container fleet memberships register $CLUSTER_NAME-membership \
```
On success, you can verify the GKE cluster's fleet membership in Google Cloud Console:
![ASM Fleet Membership](./img/ASM_Fleet_Membership_Reg.png)
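You can also list the fleet memberships from the CLI:
```bash
# The cluster's membership should appear in this list
gcloud container fleet memberships list
```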

Provision managed Anthos Service Mesh on the cluster using the Fleet API:
```bash
gcloud container fleet mesh update \
@@ -126,7 +134,7 @@ spec:
status:
phase: Active
```
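As an alternative to the console, you can inspect the managed mesh state for the fleet from the CLI (the exact output fields may vary with your gcloud version):
```bash
# Shows the membership states for managed Anthos Service Mesh
gcloud container fleet mesh describe --project $PROJECT_ID
```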

Create the Source DB - Cloud SQL for PostgreSQL:
Note: **database-flags=cloudsql.logical_decoding=on** enables logical replication and change data capture (CDC) workflows, which are required by RDI.
Create PostgreSQL instance:
@@ -143,13 +151,15 @@ gcloud sql instances create $POSTGRESQL_INSTANCE \
--root-password=postgres \
--database-flags=cloudsql.logical_decoding=on
```
The above command may take anywhere from 5 to 10 minutes to finish creating a PostgreSQL instance for you.

On success, you can see your CloudSQL PostgreSQL database in Google Cloud console like the following:
![Cloud SQL](./img/CloudSQL.png)
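Optionally, confirm that the logical decoding flag took effect (the field path below is assumed from the Cloud SQL instance resource; the console's Flags section shows the same information):
```bash
# Should include cloudsql.logical_decoding=on
gcloud sql instances describe $POSTGRESQL_INSTANCE --format="value(settings.databaseFlags)"
```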
Capture the `Public IP address` in an environment variable for later use in the lab:
```bash
export POSTGRESQL_INSTANCE_IP=$(gcloud sql instances describe $POSTGRESQL_INSTANCE --format=json | jq -r '.ipAddresses[] | select(.type == "PRIMARY") | .ipAddress')
```
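As with the reserved IP addresses earlier, print the value to confirm it was captured:
```bash
echo $POSTGRESQL_INSTANCE_IP
```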

By default, the PostgreSQL superuser (postgres) does not have permission to create a replication slot, which is required by RDI. Run the following commands to grant the permission:
```bash
cat <<EOF > alter_postgres_replication.sql
@@ -165,6 +175,5 @@ On success, you should see the following output:
Connecting to database with SQL user [postgres].Password:
ALTER ROLE
```

[<< Previous Lab (1) <<](../lab1/README.md) | [>> Next Lab (3) >>](../lab3/README.md)
17 changes: 13 additions & 4 deletions lab3/README.md
@@ -1,9 +1,9 @@
# Lab 3: Create a Redis Enterprise Cloud subscription on Google Cloud

In this lab, you are going to:
* Create a Redis Enterprise Cloud subscription
* Collect Redis Enterprise Database connection information

Create a Redis Cloud subscription:
* Follow this [link](https://docs.redis.com/latest/rc/rc-quickstart/#create-an-account) through step 6.
* In step 4, choose Google Cloud. Then come back here and continue with section 3 below to initialize two environments for this workshop.
@@ -27,6 +27,15 @@ export REDIS_TARGET_DB_HOST=<Redis Target db endpoint>
export REDIS_TARGET_DB_PORT=<Redis Target db endpoint port>
export REDIS_TARGET_DB_PASSWORD=<Redis Target db password>
```

To double-check that all the above environment variables are set correctly, print their values to the console:
```bash
echo $REDIS_URI
echo $REDIS_INSIGHT_PORT
echo $REDIS_TARGET_DB_HOST
echo $REDIS_TARGET_DB_PORT
echo $REDIS_TARGET_DB_PASSWORD
```
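Optionally, if `redis-cli` is available in your Cloud Shell and your database endpoint does not require TLS, you can verify connectivity with a quick ping (a sketch; adjust to your setup):
```bash
# Expect: PONG
redis-cli -h $REDIS_TARGET_DB_HOST -p $REDIS_TARGET_DB_PORT -a "$REDIS_TARGET_DB_PASSWORD" ping
```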


[<< Previous Lab (2) <<](../lab2/README.md) | [>> Next Lab (4) >>](../lab4/README.md)
73 changes: 41 additions & 32 deletions lab5/README.md
@@ -1,11 +1,11 @@
# Lab 5: Set up Redis Data Integration (RDI)

In this lab, you are going to:
* Set up Redis Data Integration (RDI)
* Create & Deploy two RDI jobs to replicate data from a CloudSQL PostgreSQL database to Redis Enterprise

![RDI - CloudSQL](./img/RDI_Ingest_cloudsql.png)

Deploy a Redis Enterprise cluster:
```bash
kubectl create namespace redis
@@ -25,24 +25,24 @@ spec:
nodes: 3
EOF

kubectl apply -f rec.yaml -n redis
```
It will take between 6 and 8 minutes to complete. You can run the following command to see the progress:
```bash
watch kubectl get all -n redis
```
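If you prefer a blocking wait over watching, here is a sketch that assumes the operator names the StatefulSet `rec` after the cluster (consistent with the `rec-0` pod used later):
```bash
# Waits until all 3 Redis Enterprise nodes are rolled out
kubectl rollout status statefulset/rec -n redis --timeout=10m
```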

Then, retrieve the password for the Redis Enterprise Cluster's default user: [email protected]:
```bash
export REC_PWD=$(kubectl get secrets -n redis rec -o jsonpath="{.data.password}" | base64 --decode)
```

Note: You can open another Google Cloud Shell tab to grab $REC_PWD and display its value for later use:
```bash
export REC_PWD=$(kubectl get secrets -n redis rec -o jsonpath="{.data.password}" | base64 --decode)
echo $REC_PWD
```

Install Redis Gears:
```bash
kubectl exec -it rec-0 -n redis -- curl -s https://redismodules.s3.amazonaws.com/redisgears/redisgears_python.Linux-ubuntu18.04-x86_64.1.2.6.zip -o /tmp/redis-gears.zip
@@ -55,7 +55,16 @@ Defaulted container "redis-enterprise-node" out of: redis-enterprise-node, boots
Defaulted container "redis-enterprise-node" out of: redis-enterprise-node, bootstrapper
{"action_uid":"e0a88d27-4c52-4e9e-b1f1-6095baa4d184","author":"RedisLabs","capabilities":["types","crdb","failover_migrate","persistence_aof","persistence_rdb","clustering","backup_restore","reshard_rebalance","eviction_expiry","intershard_tls","intershard_tls_pass","ipv6"],"command_line_args":"Plugin gears_python CreateVenv 1","config_command":"RG.CONFIGSET","crdb":{},"dependencies":{"gears_python":{"sha256":"5206dfc7199e66c6cfe7a9443c5705e72ceccaccc02d229607e844337e00ce7f","url":"http://redismodules.s3.amazonaws.com/redisgears/redisgears-python.Linux-ubuntu18.04-x86_64.1.2.6.tgz"}},"description":"Dynamic execution framework for your Redis data","display_name":"RedisGears","email":"[email protected]","homepage":"http://redisgears.io","is_bundled":false,"license":"Redis Source Available License Agreement","min_redis_pack_version":"6.0.12","min_redis_version":"6.0.0","module_name":"rg","semantic_version":"1.2.6","sha256":"ca9c81c7c0e523a5ea5cf41c95ea53abcd6b90094be2f0901814dd5fdbc135d6","uid":"d97a561c5e94e78d60c5b2dfa48a427a","version":10206}
```

If instead you see an error like this:
```
kubectl exec -it rec-0 -n redis -- curl -k -s -u "[email protected]:${REC_PWD}" -F "module=@/tmp/redis-gears.zip" https://localhost:9443/v2/modules
Defaulted container "redis-enterprise-node" out of: redis-enterprise-node, bootstrapper
Defaulted container "redis-enterprise-node" out of: redis-enterprise-node, bootstrapper
{"description":"Cannot perform operation while cluster is unstable","error_code":"cluster_unstable"}
```
your cluster is not ready yet. Wait another 5 minutes or so and re-run the above `Install Redis Gears` command.
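One way to check whether the cluster has stabilized before retrying (assuming the `rladmin` tool is present in the node container, as it normally is for Redis Enterprise):
```bash
# All nodes and the cluster should report OK before installing modules
kubectl exec -it rec-0 -n redis -- rladmin status
```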


Install RDI CLI container:
```bash
kubectl config set-context --current --namespace=default
@@ -88,19 +97,19 @@ spec:
EOF
kubectl apply -f /tmp/redis-di-cli-pod.yml
```
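Before using the CLI pod, you can wait for it to become ready:
```bash
kubectl wait --for=condition=Ready pod/redis-di-cli -n default --timeout=120s
```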

Create a new RDI database:
```bash
kubectl exec -it -n default pod/redis-di-cli -- redis-di create --cluster-host localhost
```

Use the following input and answer the rest of the prompts:
```bash
Host/IP of Redis Enterprise Cluster: rec.redis.svc.cluster.local
Redis Enterprise Cluster username: [email protected]
Redis Enterprise Cluster Password: grab password from $REC_PWD
Everything else: take the default values
Password for the new RDI Database: redis
```
On success, you should see output similar to the following:
```
@@ -119,41 +128,41 @@ Setting up RDI Engine on port 12001
Successfully configured RDI database on port 12001
Default Context created successfully
```
Edit config.yaml:

Now make sure you are in the `REDIS_REPO` directory and edit the `config.yaml` file:
Update the value of the following fields in the `connections:target:` section:
```
host: <Redis Enterprise database host in Lab 3>
port: <Redis Enterprise database port in Lab 3>
user: default
password: <Redis Enterprise database password in Lab 3>
```
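If the Lab 3 variables are still exported in this shell, you can print them to copy into the fields above (a convenience sketch; otherwise look the values up again in the Redis Cloud console):
```bash
echo "host:     $REDIS_TARGET_DB_HOST"
echo "port:     $REDIS_TARGET_DB_PORT"
echo "password: $REDIS_TARGET_DB_PASSWORD"
```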

Create a ConfigMap for Redis Data Integration:
```bash
kubectl create configmap redis-di-config --from-file=config.yaml -n default
```
You might need to wait 30 seconds or more for the ConfigMap to be ready for the next step.
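You can confirm the ConfigMap exists before moving on:
```bash
kubectl get configmap redis-di-config -n default
```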

Deploy the RDI configuration:
```bash
kubectl exec -n default -it pod/redis-di-cli -- redis-di deploy
```
When prompted for password (RDI Database Password []:), enter `redis` and hit return.
Edit application.properties:


Make sure you are in the `REDIS_REPO` directory and edit the `application.properties`:
Update the value of the following field with the CloudSQL PostgreSQL instance's public IP address. You can run `echo $POSTGRESQL_INSTANCE_IP` to display the IP address.
```
debezium.source.database.hostname=
```
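A quick way to confirm the edit before building the ConfigMap:
```bash
# Should print the property with your CloudSQL public IP filled in
grep '^debezium.source.database.hostname' application.properties
```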

Create a ConfigMap for Debezium Server:
```bash
kubectl create configmap debezium-config --from-file=application.properties -n default
```
You might need to wait 30 seconds or more for the ConfigMap to be ready for the next step.

Create the Debezium Server Pod:
```bash
cat << EOF > /tmp/debezium-server-pod.yml
@@ -185,7 +194,7 @@ spec:
EOF
kubectl apply -f /tmp/debezium-server-pod.yml
```
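The Debezium Server pod name comes from the manifest above (collapsed in this diff), so list the pods rather than guessing it:
```bash
# Watch until the Debezium Server pod reaches Running
kubectl get pods -n default -w
```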

Create a ConfigMap for the two RDI jobs for replicating order information from CloudSQL to Redis:
```bash
kubectl create configmap redis-di-jobs --from-file=./rdi_jobs
@@ -200,14 +209,14 @@ You should see similar output if the jobs are successfully created:
```
INFO - Reading orders.yaml job
INFO - Reading orderProducts.yaml job
RDI Database Password []:
WARNING - Property 'json_update_strategy' will be deprecated in future releases. Use 'on_update' job-level property to define the json update strategy.
Deploying settings to 10.96.0.22:12001
INFO - Connected to target database
INFO - RedisJSON is installed on the target Redis DB
Done
```

Check if the job has been created:
```bash
kubectl exec -it -n default pod/redis-di-cli -- redis-di list-jobs
@@ -223,7 +232,7 @@ Ingest Jobs
| orderProducts | | | | orderProducts | Yes | No | No |
+---------------+--------+----+--------+---------------+-----------------+--------+-----+
```

Verify the job status in RDI:
```bash
kubectl exec -n default -it pod/redis-di-cli -- redis-di status
@@ -237,12 +246,12 @@ started

Engine State
Sync mode: cdc
Last data retrieved (source): 07/22/2023 23:26:56.000000
Last data updated (target): 07/22/2023 23:26:57.075254
Last snapshot:
Number of processes: 4
Start: 07/22/2023 21:26:12.722103
End: 07/22/2023 21:30:34.350942

Connections
+--------+-------+--------------------------------------------------------+-------+----------+---------+----------+-----------+
@@ -276,5 +285,5 @@ Performance Statistics per Batch (batch size: 2000)
Last run(s) duration (ms): [4]
Average run duration (ms): 2.00
```

[<< Previous Lab (4) <<](../lab4/README.md) | [>> Next Lab (6) >>](../lab6/README.md)
8 changes: 4 additions & 4 deletions lab8/README.md
@@ -1,8 +1,8 @@
# Lab 8: Document Question Answering with Langchain, VertexAI and Redis

In this lab, you are going to:
* Work through a Colab notebook to develop a document-based Question & Answering app with Langchain, VertexAI and Redis

Access the [colab](https://colab.research.google.com/github/gmflau/google-dev-day-workshop/blob/main/lab8/VertexAI_LangChain_Redis.ipynb) for this lab
[<< Previous Lab (7) <<](../lab7/README.md)

[<< Previous Lab (7) <<](../lab7/README.md) | [>> Next Lab (9) >>](../lab9/README.md)
26 changes: 26 additions & 0 deletions lab9/README.md
@@ -0,0 +1,26 @@
# Lab 9: Cleanup

In this lab, you are going to:
* Clean up all resources so that you do not incur unintended charges from Google Cloud.

You can delete the runtime of your Colab environment from Lab 8 by clicking the `Disconnect and delete runtime` option.

![Colab Shutdown](./img/lab9-img0.png)

In GCP, deleting the project shuts down all the cloud resources you have provisioned, so the cleanup is as simple as deleting the project. Before you do so, make sure billing is not active on your project.
Go to `Billing` on your project.
![Project Billing](./img/lab9-img1.png)

If you are using a vanilla GCP account with free credit, billing has not started on it. You can confirm this by looking at the `Free trial credit` panel: the fact that the `ACTIVATE` button is enabled means your billing has not started. Do NOT activate it if you still have credits remaining; you are good because billing has not started yet.
![Free Trial](./img/lab9-img2.png)

Now you can go ahead and delete the project by selecting it and clicking the `DELETE` button.

![Project Shutdown](./img/lab9-img3.png)
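Alternatively, if you prefer the CLI and `$PROJECT_ID` is still set from Lab 2, the same can be done with:
```bash
# Schedules the project (and all resources in it) for deletion
gcloud projects delete $PROJECT_ID
```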

The project may be flagged for deletion at a later date, but do not worry: all of your cloud resources are shut down.
![Project Pending Shutdown](./img/lab9-img4.png)

You are all good.

[<< Previous Lab (8) <<](../lab8/README.md)
Binary file added lab9/img/lab9-img0.png
Binary file added lab9/img/lab9-img1.png
Binary file added lab9/img/lab9-img2.png
Binary file added lab9/img/lab9-img3.png
Binary file added lab9/img/lab9-img4.png