This proof of concept shows a simple use of Debezium: it connects to an Oracle 19c database and streams change events and transactions from the database to Kafka.
Log in to your OpenShift cluster and run the following command:
oc new-project <your namespace>
In the Administrator perspective of the OpenShift console, install the Kafka operator (Strimzi or Red Hat AMQ Streams) from OperatorHub:
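If you prefer the CLI, the operator can also be installed with an OLM Subscription. A minimal sketch, assuming the community Strimzi operator from OperatorHub (for Red Hat AMQ Streams, use its own channel and catalog source):
cat << EOF | oc create -f -
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: strimzi-kafka-operator
  namespace: openshift-operators
spec:
  channel: stable
  name: strimzi-kafka-operator
  source: community-operators
  sourceNamespace: openshift-marketplace
EOF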
Next, deploy a Kafka cluster:
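A minimal Kafka custom resource for this PoC could look like the sketch below (the cluster name my-cluster, ephemeral storage, and three replicas are assumptions; adjust them for your environment). With this name, the bootstrap address used later would be my-cluster-kafka-bootstrap:9092.
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    version: 3.1.0
    replicas: 3
    listeners:
      - name: plain
        port: 9092
        type: internal
        tls: false
    config:
      offsets.topic.replication.factor: 3
      transaction.state.log.replication.factor: 3
      transaction.state.log.min.isr: 2
    storage:
      type: ephemeral
  zookeeper:
    replicas: 3
    storage:
      type: ephemeral
  entityOperator:
    topicOperator: {}
    userOperator: {}
EOF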
Next, deploy an Oracle database by following this tutorial: https://github.com/osvaldormelo/oracle19cDatabase
Create an application user in the Oracle pod:
sqlplus sys/123456@localhost as sysdba
CREATE USER OT IDENTIFIED BY Orcl1234;
GRANT CONNECT, RESOURCE, DBA TO OT;
Log in with the user you just created and create the application table:
CREATE TABLE ot.keys(id NUMBER GENERATED BY DEFAULT AS IDENTITY, key VARCHAR2(50) NOT NULL, value VARCHAR2(50) NOT NULL, PRIMARY KEY(id));
INSERT INTO ot.keys(key,value) VALUES('99999999999','[email protected]');
SELECT * FROM ot.keys WHERE key = '99999999999' AND rownum = 1;
Log in to Oracle as the SYS user and run the following commands:
sqlplus sys as sysdba -- enter the SYS password when prompted
ALTER SESSION SET CONTAINER = CDB$ROOT;
alter system set db_recovery_file_dest_size = 4gb;
--alter system set db_recovery_file_dest = '/opt/oracle/oradata/recovery_area' scope=spfile;
--alter system set enable_goldengate_replication=true;
shutdown immediate
startup mount
alter database archivelog;
alter database open;
-- Should show "Database log mode: Archive Mode"
archive log list
exit;
Now enable supplemental logging and create a tablespace for LogMiner:
sqlplus sys/123456@localhost as sysdba
ALTER DATABASE ADD SUPPLEMENTAL LOG DATA;
ALTER TABLE ot.keys ADD SUPPLEMENTAL LOG DATA (ALL) COLUMNS;
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ABC/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
ALTER SESSION SET CONTAINER = <your PDB>;
CREATE TABLESPACE logminer_tbs DATAFILE '/opt/oracle/oradata/ABC/ABCPDB1/logminer_tbs.dbf' SIZE 25M REUSE AUTOEXTEND ON MAXSIZE UNLIMITED;
Create a user for Debezium:
CREATE USER c##dbzuser IDENTIFIED BY dbz DEFAULT TABLESPACE logminer_tbs QUOTA UNLIMITED ON logminer_tbs CONTAINER=ALL;
Grant the required permissions to the new user:
GRANT CREATE SESSION TO c##dbzuser CONTAINER=ALL;
GRANT SET CONTAINER TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$DATABASE to c##dbzuser CONTAINER=ALL;
GRANT FLASHBACK ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE_CATALOG_ROLE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ANY TRANSACTION TO c##dbzuser CONTAINER=ALL;
GRANT LOGMINING TO c##dbzuser CONTAINER=ALL;
GRANT CREATE TABLE TO c##dbzuser CONTAINER=ALL;
GRANT LOCK ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT ALTER ANY TABLE TO c##dbzuser CONTAINER=ALL;
GRANT CREATE SEQUENCE TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR TO c##dbzuser CONTAINER=ALL;
GRANT EXECUTE ON DBMS_LOGMNR_D TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOG_HISTORY TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_LOGS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_CONTENTS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGMNR_PARAMETERS TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$LOGFILE TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVED_LOG TO c##dbzuser CONTAINER=ALL;
GRANT SELECT ON V_$ARCHIVE_DEST_STATUS TO c##dbzuser CONTAINER=ALL;
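To verify the new account, you can open a session with it using the same local connection string as above:
sqlplus c##dbzuser/dbz@localhost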
Clone this repo, download the Oracle JDBC driver from this link, and copy it to the driver folder. Then build your custom image:
podman build -t <your registry>/<project>/debezium-container-for-oracle:latest .
Log in to your private registry and push the image:
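For example, authenticate to the registry first (same placeholder as in the build command):
podman login <your registry>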
podman push <your registry>/<project>/debezium-container-for-oracle:latest
Log in to your cluster with the oc login command (if you are not already logged in) and create a secret containing the Debezium user credentials:
cat << EOF | oc create -f -
apiVersion: v1
kind: Secret
metadata:
  name: debezium-secret
type: Opaque
data:
  username: YyMjZGJ6dXNlcg==
  password: ZGJ6
EOF
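The data values are the base64-encoded credentials of the Debezium user created earlier (c##dbzuser / dbz); you can generate them with:
echo -n 'c##dbzuser' | base64
echo -n 'dbz' | base64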
Create a Role:
cat << EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: connector-configuration-role
rules:
- apiGroups: [""]
  resources: ["secrets"]
  resourceNames: ["debezium-secret"]
  verbs: ["get"]
EOF
Create a RoleBinding:
cat << EOF | oc create -f -
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: connector-configuration-role-binding
subjects:
- kind: ServiceAccount
  name: debezium-connect-cluster-connect
  namespace: <your namespace>
roleRef:
  kind: Role
  name: connector-configuration-role
  apiGroup: rbac.authorization.k8s.io
EOF
Next, create a KafkaConnect cluster:
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnect
metadata:
  name: debezium-connect-cluster
  annotations:
    strimzi.io/use-connector-resources: "true"
spec:
  version: 3.1.0
  replicas: 1
  bootstrapServers: <your bootstrap server>:9092
  config:
    config.providers: secrets
    config.providers.secrets.class: io.strimzi.kafka.KubernetesSecretConfigProvider
    group.id: connect-cluster
    offset.storage.topic: connect-cluster-offsets
    config.storage.topic: connect-cluster-configs
    status.storage.topic: connect-cluster-status
    # -1 means the default replication factor configured in the broker is used
    config.storage.replication.factor: -1
    offset.storage.replication.factor: -1
    status.storage.replication.factor: -1
  image: <your registry>/<project>/debezium-container-for-oracle:latest
EOF
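Wait for the Connect deployment to come up before creating the connector; for example:
oc get kafkaconnect debezium-connect-cluster -o yaml
oc get pods -l strimzi.io/cluster=debezium-connect-cluster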
Template files for these resources are available in the yaml-files folder. Next, create the Debezium Oracle connector:
cat << EOF | oc create -f -
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaConnector
metadata:
  name: debezium-connector-oracle
  labels:
    strimzi.io/cluster: debezium-connect-cluster
spec:
  class: io.debezium.connector.oracle.OracleConnector
  tasksMax: 1
  config:
    tasks.max: 1
    connector.class: io.debezium.connector.oracle.OracleConnector
    database.server.name: <your logical server name>
    database.hostname: <your database hostname>
    database.port: 1521
    database.user: c##dbzuser
    database.password: dbz
    database.dbname: <your SID>
    database.pdb.name: <your PDB>
    database.history.kafka.bootstrap.servers: <your bootstrap servers>:9092
    database.history.kafka.topic: oracle19-history
    table.include.list: ot.keys
    message.key.columns: ot.keys:id
    provide.transaction.metadata: true
    transaction.topic: keys
EOF
A template file for this resource is also available in the yaml-files folder.
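Check that the connector was created and is running:
oc get kafkaconnector debezium-connector-oracle -o yaml
Because the Kafka Connect cluster is configured with the KubernetesSecretConfigProvider, you can optionally reference the secret created earlier instead of hardcoding the credentials, for example (replace <your namespace>; if you paste this into an unquoted heredoc, quote the EOF delimiter so the shell does not expand the placeholders):
database.user: ${secrets:<your namespace>/debezium-secret:username}
database.password: ${secrets:<your namespace>/debezium-secret:password}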
After the connector is deployed, insert a row into the table and check the Kafka topics. To make it easier to inspect the Kafka cluster, you can deploy a Kafka UI.
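For example, insert and commit a new row, then consume the change events from one of the broker pods. The topic name follows the Debezium convention <server name>.<schema>.<table>; the pod name below assumes the my-cluster example above, and the email value is only an illustration:
INSERT INTO ot.keys(key,value) VALUES('88888888888','user@example.com');
COMMIT;
oc exec -it my-cluster-kafka-0 -- /opt/kafka/bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic <your logical server name>.OT.KEYS --from-beginning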
Example of a message on the Kafka topic: the Debezium change event for the inserted row.
Congratulations! You now have Debezium ready to use!
https://debezium.io/documentation/
https://debezium.io/documentation/reference/stable/operations/openshift.html
https://debezium.io/documentation/reference/stable/connectors/oracle.html