[BUG] pg cluster restore backup failed: FATAL: password authentication failed for user "postgres" #5429

Closed
JashBook opened this issue Oct 13, 2023 · 2 comments

Labels: kind/bug (Something isn't working), severity/major (Great chance user will encounter the same problem)
Milestone: Release 0.7.0

JashBook (Collaborator) commented Oct 13, 2023

Describe the bug
Restoring a pg cluster from a pg-basebackup backup fails: every pod in the restored cluster reports the secondary role, and Patroni on the stuck secondary repeatedly logs:
FATAL: password authentication failed for user "postgres"

To Reproduce
Steps to reproduce the behavior:

  1. Create a pg cluster:
kbcli cluster create postgresql pg-test --mode replication --termination-policy WipeOut --availability-policy none
  2. Take a pg-basebackup backup:
kbcli cluster backup pg-test --method pg-basebackup
  3. Restore the backup into a new cluster:
kbcli cluster restore pg-backup --backup backup-default-pg-test-20231013105545
  4. See the error:
kubectl get cluster
NAME        CLUSTER-DEFINITION   VERSION             TERMINATION-POLICY   STATUS     AGE
pg-backup   postgresql           postgresql-14.8.0   WipeOut              Creating   2m23s
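
One thing worth checking at this point is whether the restored cluster was issued a new system-account password while the data directory recovered from the backup still carries the source cluster's password. A minimal sketch, assuming the KubeBlocks convention of a <cluster>-conn-credential secret with a password key (adjust names if your release differs):

# compare the source and restored clusters' system-account passwords
kubectl get secret pg-test-conn-credential -o jsonpath='{.data.password}' | base64 -d; echo
kubectl get secret pg-backup-conn-credential -o jsonpath='{.data.password}' | base64 -d; echo
# if the two values differ, Patroni on the restored pods authenticates with
# the new password while the restored data still expects the old one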

➜  ~ kbcli cluster describe pg-backup
Name: pg-backup	 Created Time: Oct 13,2023 10:56 UTC+0800
NAMESPACE   CLUSTER-DEFINITION   VERSION             STATUS     TERMINATION-POLICY   
default     postgresql           postgresql-14.8.0   Creating   WipeOut              

Endpoints:
COMPONENT    MODE        INTERNAL                                              EXTERNAL   
postgresql   ReadWrite   pg-backup-postgresql.default.svc.cluster.local:5432   <none>     
                         pg-backup-postgresql.default.svc.cluster.local:6432              

Topology:
COMPONENT    INSTANCE                 ROLE        STATUS    AZ              NODE                                                        CREATED-TIME                 
postgresql   pg-backup-postgresql-0   secondary   Running   us-central1-f   gke-cicd-gke-test-cicd-gke-test-6c648ea8-0hqz/10.128.0.24   Oct 13,2023 10:56 UTC+0800   
postgresql   pg-backup-postgresql-1   secondary   Running   us-central1-f   gke-cicd-gke-test-cicd-gke-test-6c648ea8-0hqz/10.128.0.24   Oct 13,2023 10:56 UTC+0800   

Resources Allocation:
COMPONENT    DEDICATED   CPU(REQUEST/LIMIT)   MEMORY(REQUEST/LIMIT)   STORAGE-SIZE   STORAGE-CLASS   
postgresql   false       500m / 500m          512Mi / 512Mi           data:20Gi      kb-default-sc   

Images:
COMPONENT    TYPE         IMAGE                                             
postgresql   postgresql   docker.io/apecloud/spilo:14.8.0-pgvector-v0.5.0   

Show cluster events: kbcli cluster list-events -n default pg-backup

Pod logs

logs of pg-backup-postgresql-0:
Defaulted container "postgresql" out of: postgresql, pgbouncer, metrics, kb-checkrole, config-manager, action-0, action-1, kb-role-probe, pg-init-container (init), role-agent-installer (init)
2023-10-13 02:57:33,309 - bootstrapping - INFO - Figuring out my environment (Google? AWS? Openstack? Local?)
2023-10-13 02:57:33,318 - bootstrapping - INFO - Looks like you are running google
2023-10-13 02:57:33,341 - bootstrapping - INFO - kubeblocks generate local configuration: 
bootstrap:
  dcs:
    postgresql:
      parameters:
        archive_command: /bin/true
        archive_mode: 'on'
        autovacuum_analyze_scale_factor: '0.1'
        autovacuum_max_workers: '3'
        autovacuum_vacuum_scale_factor: '0.05'
        checkpoint_completion_target: '0.9'
        log_autovacuum_min_duration: '10000'
        log_checkpoints: 'True'
        log_connections: 'False'
        log_disconnections: 'False'
        log_min_duration_statement: '1000'
        log_statement: ddl
        log_temp_files: 128kB
        max_connections: '56'
        max_locks_per_transaction: '64'
        max_prepared_transactions: '100'
        max_replication_slots: '16'
        max_wal_senders: '64'
        max_worker_processes: '8'
        tcp_keepalives_idle: 45s
        tcp_keepalives_interval: 10s
        track_commit_timestamp: 'False'
        track_functions: pl
        wal_compression: 'True'
        wal_keep_size: '0'
        wal_level: replica
        wal_log_hints: 'False'
  initdb:
  - auth-host: md5
  - auth-local: trust
postgresql:
  config_dir: /home/postgres/pgdata/conf
  custom_conf: /home/postgres/conf/postgresql.conf
  parameters:
    log_destination: csvlog
    log_directory: log
    log_filename: postgresql-%Y-%m-%d.log
    logging_collector: 'True'
    pg_stat_statements.track_utility: 'False'
    shared_buffers: 128MB
  pg_hba:
  - host     all             all             0.0.0.0/0                md5
  - host     all             all             ::/0                     md5
  - local    all             all                                     trust
  - host     all             all             127.0.0.1/32            trust
  - host     all             all             ::1/128                 trust
  - local     replication     all                                    trust
  - host      replication     all             0.0.0.0/0               md5
  - host      replication     all             ::/0                    md5

2023-10-13 02:57:33,402 - bootstrapping - INFO - Configuring crontab
2023-10-13 02:57:33,402 - bootstrapping - INFO - Skipping creation of renice cron job due to lack of SYS_NICE capability
2023-10-13 02:57:33,403 - bootstrapping - INFO - Configuring pgbouncer
2023-10-13 02:57:33,403 - bootstrapping - INFO - No PGBOUNCER_CONFIGURATION was specified, skipping
2023-10-13 02:57:33,403 - bootstrapping - INFO - Configuring pam-oauth2
2023-10-13 02:57:33,403 - bootstrapping - INFO - No PAM_OAUTH2 configuration was specified, skipping
2023-10-13 02:57:33,403 - bootstrapping - INFO - Configuring log
2023-10-13 02:57:33,403 - bootstrapping - INFO - Configuring wal-e
2023-10-13 02:57:33,403 - bootstrapping - INFO - Configuring pgqd
2023-10-13 02:57:33,404 - bootstrapping - INFO - Configuring certificate
2023-10-13 02:57:33,404 - bootstrapping - INFO - Generating ssl self-signed certificate
2023-10-13 02:57:34,404 - bootstrapping - INFO - Configuring bootstrap
2023-10-13 02:57:34,405 - bootstrapping - INFO - Configuring patroni
2023-10-13 02:57:34,414 - bootstrapping - INFO - Writing to file /run/postgres.yml
2023-10-13 02:57:34,415 - bootstrapping - INFO - Configuring standby-cluster
2023-10-13 02:57:35,928 INFO: Selected new K8s API server endpoint https://10.128.0.124:443
2023-10-13 02:57:36,064 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-10-13 02:57:36,071 WARNING: Postgresql is not running.
2023-10-13 02:57:36,071 INFO: Lock owner: None; I am pg-backup-postgresql-0
2023-10-13 02:57:36,073 INFO: pg_controldata:
  pg_control version number: 1300
  Catalog version number: 202107181
  Database system identifier: 7289263266622885975
  Database cluster state: in archive recovery
  pg_control last modified: Fri Oct 13 02:39:39 2023
  Latest checkpoint location: 0/3001140
  Latest checkpoint's REDO location: 0/3001108
  Latest checkpoint's REDO WAL file: 000000010000000000000003
  Latest checkpoint's TimeLineID: 1
  Latest checkpoint's PrevTimeLineID: 1
  Latest checkpoint's full_page_writes: on
  Latest checkpoint's NextXID: 0:931
  Latest checkpoint's NextOID: 24576
  Latest checkpoint's NextMultiXactId: 1
  Latest checkpoint's NextMultiOffset: 0
  Latest checkpoint's oldestXID: 727
  Latest checkpoint's oldestXID's DB: 1
  Latest checkpoint's oldestActiveXID: 931
  Latest checkpoint's oldestMultiXid: 1
  Latest checkpoint's oldestMulti's DB: 1
  Latest checkpoint's oldestCommitTsXid: 0
  Latest checkpoint's newestCommitTsXid: 0
  Time of latest checkpoint: Fri Oct 13 02:39:26 2023
  Fake LSN counter for unlogged rels: 0/3E8
  Minimum recovery ending location: 0/4000000
  Min recovery ending loc's timeline: 1
  Backup start location: 0/0
  Backup end location: 0/0
  End-of-backup record required: no
  wal_level setting: replica
  wal_log_hints setting: on
  max_connections setting: 56
  max_worker_processes setting: 8
  max_wal_senders setting: 64
  max_prepared_xacts setting: 100
  max_locks_per_xact setting: 64
  track_commit_timestamp setting: off
  Maximum data alignment: 8
  Database block size: 8192
  Blocks per segment of large relation: 131072
  WAL block size: 8192
  Bytes per WAL segment: 16777216
  Maximum length of identifiers: 64
  Maximum columns in an index: 32
  Maximum size of a TOAST chunk: 1996
  Size of a large-object chunk: 2048
  Date/time type storage: 64-bit integers
  Float8 argument passing: by value
  Data page checksum version: 0
  Mock authentication nonce: 1a2749042f2cb6072a43e2201f6e964d53c9c123dfdf80e963a28c189428b2b9

2023-10-13 02:57:36,103 INFO: Lock owner: None; I am pg-backup-postgresql-0
2023-10-13 02:57:36,166 INFO: starting as a secondary
2023-10-13 02:57:36,167 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:57:36,167 INFO: restarting after failure in progress
2023-10-13 02:57:36 GMT [100]: [1-1] 6528b220.64 0     LOG:  Auto detecting pg_stat_kcache.linux_hz parameter...
2023-10-13 02:57:36 GMT [100]: [2-1] 6528b220.64 0     LOG:  pg_stat_kcache.linux_hz is set to 1000000
2023-10-13 02:57:36,926 INFO: postmaster pid=100
/var/run/postgresql:5432 - no response
2023-10-13 02:57:37 GMT [100]: [3-1] 6528b220.64 0     LOG:  redirecting log output to logging collector process
2023-10-13 02:57:37 GMT [100]: [4-1] 6528b220.64 0     HINT:  Future log output will appear in directory "log".
2023-10-13 02:57:37,317 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:57:37,362 INFO: restarting after failure in progress
/var/run/postgresql:5432 - accepting connections
/var/run/postgresql:5432 - accepting connections
2023-10-13 02:57:38,060 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:57:38,060 INFO: establishing a new patroni connection to the postgres cluster
2023-10-13 02:57:38,093 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 02:57:38,202 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

server signaled
2023-10-13 02:57:38,389 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
2023-10-13 02:57:47,733 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:57:47,738 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 02:57:47,753 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 02:57:47,761 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
2023-10-13 02:57:57,729 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:57:57,733 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 02:57:57,747 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 02:57:57,751 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
2023-10-13 02:58:07,728 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 02:58:07,732 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 02:58:07,747 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 02:58:07,752 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
...
2023-10-13 03:11:37,726 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 03:11:37,730 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 03:11:37,745 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 03:11:37,749 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
2023-10-13 03:11:47,738 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 03:11:47,742 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 03:11:47,755 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 03:11:47,759 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
2023-10-13 03:11:57,728 INFO: Lock owner: pg-backup-postgresql-1; I am pg-backup-postgresql-0
2023-10-13 03:11:57,732 INFO: Local timeline=1 lsn=0/4000000
2023-10-13 03:11:57,745 ERROR: Exception when working with leader
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/rewind.py", line 69, in check_leader_is_not_in_recovery
    with get_connection_cursor(connect_timeout=3, options='-c statement_timeout=2000', **conn_kwargs) as cur:
  File "/usr/lib/python3.10/contextlib.py", line 135, in __enter__
    return next(self.gen)
  File "/usr/local/lib/python3.10/dist-packages/patroni/postgresql/connection.py", line 43, in get_connection_cursor
    conn = psycopg.connect(**kwargs)
  File "/usr/local/lib/python3.10/dist-packages/patroni/psycopg.py", line 42, in connect
    ret = _connect(*args, **kwargs)
  File "/usr/lib/python3/dist-packages/psycopg2/__init__.py", line 122, in connect
    conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"
connection to server at "10.0.11.46", port 5432 failed: FATAL:  password authentication failed for user "postgres"

2023-10-13 03:11:57,749 INFO: no action. I am (pg-backup-postgresql-0), a secondary, and following a leader (pg-backup-postgresql-1)
➜  ~ 
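
The traceback above comes from Patroni's rewind safety check (check_leader_is_not_in_recovery) connecting to the leader at 10.0.11.46:5432 as postgres; with the pg_hba rules shown earlier, remote host connections require md5, so a wrong password fails exactly like this. To separate an authentication problem from a networking one, the same connection can be tried by hand from the stuck secondary (a sketch; RESTORED_PASSWORD is an illustrative variable holding whatever the restored cluster's secret contains):

# reproduce the failing leader connection manually from pod 0
kubectl exec -it pg-backup-postgresql-0 -c postgresql -- \
  env PGPASSWORD="$RESTORED_PASSWORD" \
  psql -h 10.0.11.46 -p 5432 -U postgres -c 'SELECT pg_is_in_recovery();'
# "password authentication failed" here reproduces the Patroni error; if the
# source cluster's password succeeds instead, the restored data directory
# kept the old credentials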
➜  ~ k logs pg-backup-postgresql-1
Defaulted container "postgresql" out of: postgresql, pgbouncer, metrics, kb-checkrole, config-manager, action-0, action-1, kb-role-probe, pg-init-container (init), role-agent-installer (init)
2023-10-13 02:57:32,312 - bootstrapping - INFO - Figuring out my environment (Google? AWS? Openstack? Local?)
2023-10-13 02:57:32,321 - bootstrapping - INFO - Looks like you are running google
2023-10-13 02:57:32,345 - bootstrapping - INFO - kubeblocks generate local configuration: 
bootstrap:
  dcs:
    postgresql:
      parameters:
        archive_command: /bin/true
        archive_mode: 'on'
        autovacuum_analyze_scale_factor: '0.1'
        autovacuum_max_workers: '3'
        autovacuum_vacuum_scale_factor: '0.05'
        checkpoint_completion_target: '0.9'
        log_autovacuum_min_duration: '10000'
        log_checkpoints: 'True'
        log_connections: 'False'
        log_disconnections: 'False'
        log_min_duration_statement: '1000'
        log_statement: ddl
        log_temp_files: 128kB
        max_connections: '56'
        max_locks_per_transaction: '64'
        max_prepared_transactions: '100'
        max_replication_slots: '16'
        max_wal_senders: '64'
        max_worker_processes: '8'
        tcp_keepalives_idle: 45s
        tcp_keepalives_interval: 10s
        track_commit_timestamp: 'False'
        track_functions: pl
        wal_compression: 'True'
        wal_keep_size: '0'
        wal_level: replica
        wal_log_hints: 'False'
  initdb:
  - auth-host: md5
  - auth-local: trust
postgresql:
  config_dir: /home/postgres/pgdata/conf
  custom_conf: /home/postgres/conf/postgresql.conf
  parameters:
    log_destination: csvlog
    log_directory: log
    log_filename: postgresql-%Y-%m-%d.log
    logging_collector: 'True'
    pg_stat_statements.track_utility: 'False'
    shared_buffers: 128MB
  pg_hba:
  - host     all             all             0.0.0.0/0                md5
  - host     all             all             ::/0                     md5
  - local    all             all                                     trust
  - host     all             all             127.0.0.1/32            trust
  - host     all             all             ::1/128                 trust
  - local     replication     all                                    trust
  - host      replication     all             0.0.0.0/0               md5
  - host      replication     all             ::/0                    md5

2023-10-13 02:57:32,403 - bootstrapping - INFO - Configuring pam-oauth2
2023-10-13 02:57:32,403 - bootstrapping - INFO - No PAM_OAUTH2 configuration was specified, skipping
2023-10-13 02:57:32,403 - bootstrapping - INFO - Configuring wal-e
2023-10-13 02:57:32,403 - bootstrapping - INFO - Configuring standby-cluster
2023-10-13 02:57:32,404 - bootstrapping - INFO - Configuring log
2023-10-13 02:57:32,404 - bootstrapping - INFO - Configuring pgbouncer
2023-10-13 02:57:32,404 - bootstrapping - INFO - No PGBOUNCER_CONFIGURATION was specified, skipping
2023-10-13 02:57:32,404 - bootstrapping - INFO - Configuring crontab
2023-10-13 02:57:32,404 - bootstrapping - INFO - Skipping creation of renice cron job due to lack of SYS_NICE capability
2023-10-13 02:57:32,405 - bootstrapping - INFO - Configuring bootstrap
2023-10-13 02:57:32,405 - bootstrapping - INFO - Configuring certificate
2023-10-13 02:57:32,405 - bootstrapping - INFO - Generating ssl self-signed certificate
2023-10-13 02:57:32,624 - bootstrapping - INFO - Configuring pgqd
2023-10-13 02:57:32,625 - bootstrapping - INFO - Configuring patroni
2023-10-13 02:57:32,635 - bootstrapping - INFO - Writing to file /run/postgres.yml
2023-10-13 02:57:34,261 INFO: Selected new K8s API server endpoint https://10.128.0.124:443
2023-10-13 02:57:34,343 INFO: No PostgreSQL configuration items changed, nothing to reload.
2023-10-13 02:57:34,370 WARNING: Postgresql is not running.
2023-10-13 02:57:34,371 INFO: Lock owner: None; I am pg-backup-postgresql-1
2023-10-13 02:57:34,373 INFO: pg_controldata:
  pg_control version number: 1300
  Catalog version number: 202107181
  Database system identifier: 7289263266622885975
  Database cluster state: in archive recovery
  pg_control last modified: Fri Oct 13 02:39:39 2023
  Latest checkpoint location: 0/3001140
  Latest checkpoint's REDO location: 0/3001108
  Latest checkpoint's REDO WAL file: 000000010000000000000003
  Latest checkpoint's TimeLineID: 1
  Latest checkpoint's PrevTimeLineID: 1
  Latest checkpoint's full_page_writes: on
  Latest checkpoint's NextXID: 0:931
  Latest checkpoint's NextOID: 24576
  Latest checkpoint's NextMultiXactId: 1
  Latest checkpoint's NextMultiOffset: 0
  Latest checkpoint's oldestXID: 727
  Latest checkpoint's oldestXID's DB: 1
  Latest checkpoint's oldestActiveXID: 931
  Latest checkpoint's oldestMultiXid: 1
  Latest checkpoint's oldestMulti's DB: 1
  Latest checkpoint's oldestCommitTsXid: 0
  Latest checkpoint's newestCommitTsXid: 0
  Time of latest checkpoint: Fri Oct 13 02:39:26 2023
  Fake LSN counter for unlogged rels: 0/3E8
  Minimum recovery ending location: 0/4000000
  Min recovery ending loc's timeline: 1
  Backup start location: 0/0
  Backup end location: 0/0
  End-of-backup record required: no
  wal_level setting: replica
  wal_log_hints setting: on
  max_connections setting: 56
  max_worker_processes setting: 8
  max_wal_senders setting: 64
  max_prepared_xacts setting: 100
  max_locks_per_xact setting: 64
  track_commit_timestamp setting: off
  Maximum data alignment: 8
  Database block size: 8192
  Blocks per segment of large relation: 131072
  WAL block size: 8192
  Bytes per WAL segment: 16777216
  Maximum length of identifiers: 64
  Maximum columns in an index: 32
  Maximum size of a TOAST chunk: 1996
  Size of a large-object chunk: 2048
  Date/time type storage: 64-bit integers
  Float8 argument passing: by value
  Data page checksum version: 0
  Mock authentication nonce: 1a2749042f2cb6072a43e2201f6e964d53c9c123dfdf80e963a28c189428b2b9

2023-10-13 02:57:34,406 INFO: Lock owner: None; I am pg-backup-postgresql-1
2023-10-13 02:57:34,461 INFO: starting as a secondary
2023-10-13 02:57:35 GMT [100]: [1-1] 6528b21e.64 0     LOG:  Auto detecting pg_stat_kcache.linux_hz parameter...
2023-10-13 02:57:35 GMT [100]: [2-1] 6528b21e.64 0     LOG:  pg_stat_kcache.linux_hz is set to 500000
2023-10-13 02:57:35,007 INFO: postmaster pid=100
/var/run/postgresql:5432 - no response
2023-10-13 02:57:35 GMT [100]: [3-1] 6528b21e.64 0     LOG:  redirecting log output to logging collector process
2023-10-13 02:57:35 GMT [100]: [4-1] 6528b21e.64 0     HINT:  Future log output will appear in directory "log".
/var/run/postgresql:5432 - accepting connections
/var/run/postgresql:5432 - accepting connections
2023-10-13 02:57:36,063 INFO: establishing a new patroni connection to the postgres cluster
2023-10-13 02:57:36,109 WARNING: Could not activate Linux watchdog device: "Can't open watchdog device: [Errno 2] No such file or directory: '/dev/watchdog'"
2023-10-13 02:57:36,164 INFO: promoted self to leader by acquiring session lock
server promoting
2023-10-13 02:57:36,171 INFO: cleared rewind state after becoming the leader
DO
NOTICE:  role "admin" is already a member of role "cron_admin"
GRANT ROLE
DO
DO
NOTICE:  extension "pg_auth_mon" already exists, skipping
CREATE EXTENSION
NOTICE:  version "1.1" of extension "pg_auth_mon" is already installed
ALTER EXTENSION
GRANT
NOTICE:  extension "pg_cron" already exists, skipping
CREATE EXTENSION
DO
NOTICE:  version "1.5" of extension "pg_cron" is already installed
ALTER EXTENSION
ALTER POLICY
2023-10-13 02:57:37,705 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
REVOKE
GRANT
REVOKE
GRANT
ALTER POLICY
REVOKE
GRANT
CREATE FUNCTION
REVOKE
GRANT
REVOKE
GRANT
NOTICE:  extension "file_fdw" already exists, skipping
CREATE EXTENSION
DO
NOTICE:  relation "postgres_log" already exists, skipping
CREATE TABLE
GRANT
NOTICE:  column "backend_type" of relation "postgres_log" already exists, skipping
ALTER TABLE
NOTICE:  column "leader_pid" of relation "postgres_log" already exists, skipping
ALTER TABLE
NOTICE:  column "query_id" of relation "postgres_log" already exists, skipping
ALTER TABLE
NOTICE:  relation "postgres_log_0" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_1" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_2" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_3" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_4" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_5" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_6" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
NOTICE:  relation "postgres_log_7" already exists, skipping
CREATE FOREIGN TABLE
GRANT
CREATE VIEW
ALTER VIEW
GRANT
RESET
SET
NOTICE:  drop cascades to 5 other objects
DETAIL:  drop cascades to type zmon_utils.system_information
drop cascades to function zmon_utils.get_database_cluster_information()
drop cascades to function zmon_utils.get_database_cluster_system_information()
drop cascades to function zmon_utils.get_last_status_active_cronjobs()
drop cascades to view zmon_utils.last_status_active_cronjobs
DROP SCHEMA
NOTICE:  extension "plpython3u" already exists, skipping
DO
NOTICE:  language "plpythonu" does not exist, skipping
DROP LANGUAGE
NOTICE:  function plpython_call_handler() does not exist, skipping
DROP FUNCTION
NOTICE:  function plpython_inline_handler(internal) does not exist, skipping
DROP FUNCTION
NOTICE:  function plpython_validator(oid) does not exist, skipping
DROP FUNCTION
CREATE SCHEMA
GRANT
SET
CREATE TYPE
CREATE FUNCTION
CREATE FUNCTION
CREATE FUNCTION
REVOKE
CREATE VIEW
REVOKE
GRANT
GRANT
You are now connected to database "postgres" as user "postgres".
NOTICE:  schema "user_management" already exists, skipping
CREATE SCHEMA
GRANT
SET
CREATE FUNCTION
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
GRANT
RESET
NOTICE:  extension "pg_stat_statements" already exists, skipping
CREATE EXTENSION
NOTICE:  extension "pg_stat_kcache" already exists, skipping
CREATE EXTENSION
NOTICE:  extension "set_user" already exists, skipping
CREATE EXTENSION
NOTICE:  version "3.0" of extension "set_user" is already installed
ALTER EXTENSION
GRANT
GRANT
GRANT
NOTICE:  schema "metric_helpers" already exists, skipping
CREATE SCHEMA
GRANT
SET
CREATE FUNCTION
CREATE VIEW
CREATE FUNCTION
CREATE VIEW
CREATE FUNCTION
CREATE VIEW
REVOKE
GRANT
REVOKE
GRANT
RESET
You are now connected to database "template1" as user "postgres".
NOTICE:  schema "user_management" already exists, skipping
CREATE SCHEMA
GRANT
SET
CREATE FUNCTION
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
CREATE FUNCTION
REVOKE
GRANT
COMMENT
GRANT
RESET
NOTICE:  extension "pg_stat_statements" already exists, skipping
CREATE EXTENSION
NOTICE:  extension "pg_stat_kcache" already exists, skipping
CREATE EXTENSION
NOTICE:  extension "set_user" already exists, skipping
CREATE EXTENSION
NOTICE:  version "3.0" of extension "set_user" is already installed
ALTER EXTENSION
GRANT
GRANT
GRANT
NOTICE:  schema "metric_helpers" already exists, skipping
CREATE SCHEMA
GRANT
SET
CREATE FUNCTION
CREATE VIEW
CREATE FUNCTION
CREATE VIEW
CREATE FUNCTION
CREATE VIEW
REVOKE
GRANT
REVOKE
GRANT
RESET
2023-10-13 02:57:47,773 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:57:57,760 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:07,761 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:17,722 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:27,778 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:33.754 UTC [69] LOG Starting pgqd 3.5
2023-10-13 02:58:33.754 UTC [69] LOG auto-detecting dbs ...
2023-10-13 02:58:37,722 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:47,724 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 02:58:57,723 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
...
2023-10-13 03:11:04.094 UTC [69] LOG {ticks: 0, maint: 0, retry: 0}
2023-10-13 03:11:07,724 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 03:11:17,723 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 03:11:27,721 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 03:11:34.098 UTC [69] LOG {ticks: 0, maint: 0, retry: 0}
2023-10-13 03:11:37,722 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 03:11:47,738 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
2023-10-13 03:11:57,722 INFO: no action. I am (pg-backup-postgresql-1), the leader with the lock
➜  ~ 
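
Note the asymmetry: pod 1 promoted itself and runs fine because its own connections go over the local socket, which pg_hba trusts, while pod 0 must authenticate remotely with md5. Since the bootstrap log shows the rendered Patroni config being written to /run/postgres.yml, the password Patroni actually presents can be compared against what the restored data expects (a sketch; the exact YAML layout of the Spilo config may differ):

# password Patroni presents when connecting to the leader
kubectl exec -it pg-backup-postgresql-0 -c postgresql -- \
  grep -A 4 'authentication' /run/postgres.yml
# password hash actually stored in the restored data directory,
# queried on the leader over the trusted local socket
kubectl exec -it pg-backup-postgresql-1 -c postgresql -- \
  psql -U postgres -tAc "SELECT rolpassword FROM pg_authid WHERE rolname = 'postgres';"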

Role-probe container logs

➜  ~ k logs pg-backup-postgresql-0  kb-role-probe
{"level":"error","ts":1697165823.4257655,"caller":"binding/operation_volume_protection.go:182","msg":"unmarshal volume protection spec error","raw spec":"","error":"unexpected end of JSON input","stacktrace":"github.com/apecloud/kubeblocks/lorry/binding.(*operationVolumeProtection).initVolumes\n\t/src/lorry/binding/operation_volume_protection.go:182\ngithub.com/apecloud/kubeblocks/lorry/binding.init.1\n\t/src/lorry/binding/operation_volume_protection.go:103\nruntime.doInit1\n\t/usr/local/go/src/runtime/proc.go:6740\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:6707\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:249"}
{"level":"info","ts":1697165823.4258568,"caller":"binding/operation_volume_protection.go:104","msg":"init volumes to monitor failed","error":"unexpected end of JSON input"}
{"level":"info","ts":1697165823.4259064,"caller":"binding/operation_volume_protection.go:106","msg":"succeed to init volume protection, pod: pg-backup-postgresql-0, spec: {\"highWatermark\":\"0\",\"volumes\":[]}"}
2023-10-13T02:57:03Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:03Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:03Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:03Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:03Z	INFO	Custom	send event: map[event:Success operation:checkRole originalRole: role:secondary]
2023-10-13T02:57:03Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"","role":"secondary"}}
2023-10-13T02:57:03Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"26.458811ms\",\"end-time\":\"2023-10-13T02:57:03.62162615Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:03.595167348Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"\",\"role\":\"secondary\"}"}
2023-10-13T02:57:04Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:04Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:04Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:04Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:04Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:04Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"6.906141ms\",\"end-time\":\"2023-10-13T02:57:04.296443688Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:04.289537541Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T02:57:05Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:05Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:05Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:05Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:05Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:05Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"7.755681ms\",\"end-time\":\"2023-10-13T02:57:05.295358988Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:05.287603314Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T02:57:06Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:06Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:06Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:06Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:06Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:06Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"8.441071ms\",\"end-time\":\"2023-10-13T02:57:06.291297712Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:06.282856647Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
...
2023-10-13T03:20:07Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T03:20:07Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T03:20:07Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T03:20:07Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T03:20:07Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T03:20:07Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"21.478861ms\",\"end-time\":\"2023-10-13T03:20:07.310679609Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T03:20:07.289200752Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T03:20:08Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T03:20:08Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T03:20:08Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T03:20:08Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T03:20:08Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T03:20:08Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"22.270426ms\",\"end-time\":\"2023-10-13T03:20:08.3130801Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T03:20:08.290809712Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
➜  ~ 

➜  ~ k logs pg-backup-postgresql-1  kb-role-probe
{"level":"error","ts":1697165822.433919,"caller":"binding/operation_volume_protection.go:182","msg":"unmarshal volume protection spec error","raw spec":"","error":"unexpected end of JSON input","stacktrace":"github.com/apecloud/kubeblocks/lorry/binding.(*operationVolumeProtection).initVolumes\n\t/src/lorry/binding/operation_volume_protection.go:182\ngithub.com/apecloud/kubeblocks/lorry/binding.init.1\n\t/src/lorry/binding/operation_volume_protection.go:103\nruntime.doInit1\n\t/usr/local/go/src/runtime/proc.go:6740\nruntime.doInit\n\t/usr/local/go/src/runtime/proc.go:6707\nruntime.main\n\t/usr/local/go/src/runtime/proc.go:249"}
{"level":"info","ts":1697165822.434032,"caller":"binding/operation_volume_protection.go:104","msg":"init volumes to monitor failed","error":"unexpected end of JSON input"}
{"level":"info","ts":1697165822.434092,"caller":"binding/operation_volume_protection.go:106","msg":"succeed to init volume protection, pod: pg-backup-postgresql-1, spec: {\"highWatermark\":\"0\",\"volumes\":[]}"}
2023-10-13T02:57:03Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:03Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:03Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:03Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:03Z	INFO	Custom	send event: map[event:Success operation:checkRole originalRole: role:secondary]
2023-10-13T02:57:03Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"","role":"secondary"}}
2023-10-13T02:57:03Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"32.110533ms\",\"end-time\":\"2023-10-13T02:57:03.175864414Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:03.14375389Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"\",\"role\":\"secondary\"}"}
2023-10-13T02:57:04Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:04Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:04Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:04Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:04Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:04Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"7.607811ms\",\"end-time\":\"2023-10-13T02:57:04.14603782Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:04.138430008Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T02:57:05Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:05Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:05Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:05Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:05Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:05Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"7.272546ms\",\"end-time\":\"2023-10-13T02:57:05.147505241Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:05.140232707Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T02:57:06Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:06Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:06Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:06Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:06Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:06Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"6.883734ms\",\"end-time\":\"2023-10-13T02:57:06.145198857Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:06.138315101Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T02:57:07Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T02:57:07Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T02:57:07Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T02:57:07Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T02:57:07Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T02:57:07Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"7.882263ms\",\"end-time\":\"2023-10-13T02:57:07.149015465Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T02:57:07.141133198Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
...
2023-10-13T03:21:32Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T03:21:32Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T03:21:32Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T03:21:32Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T03:21:32Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T03:21:32Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"20.266146ms\",\"end-time\":\"2023-10-13T03:21:32.161484586Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T03:21:32.141218452Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
2023-10-13T03:21:33Z	INFO	grpc	role probe request	{"type": "custom"}
2023-10-13T03:21:33Z	INFO	middleware	No correspond builtin type, use the custom...
2023-10-13T03:21:33Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36501/role?KB_RSM_LAST_STDOUT=", "output": ""}
2023-10-13T03:21:33Z	INFO	Custom	action succeed	{"url": "http://127.0.0.1:36502/role?KB_RSM_LAST_STDOUT=", "output": "secondary"}
2023-10-13T03:21:33Z	INFO	Custom	operation called	{"operation": "checkRole", "result": {"event":"Success","operation":"checkRole","originalRole":"secondary","role":"secondary"}}
2023-10-13T03:21:33Z	INFO	grpc	No event detected	{"meta": "{\"duration\":\"24.033901ms\",\"end-time\":\"2023-10-13T03:21:33.171740036Z\",\"operation\":\"checkRole\",\"start-time\":\"2023-10-13T03:21:33.147706139Z\"}", "data": "{\"event\":\"Success\",\"operation\":\"checkRole\",\"originalRole\":\"secondary\",\"role\":\"secondary\"}"}
➜  ~ 
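
The probe output also explains why kbcli shows both pods as secondary even though Patroni on pod 1 holds the leader lock: the role agent on port 36502 keeps answering "secondary". The endpoint can be queried directly to rule out a stale probe (a sketch, assuming the agent answers plain GETs and curl is available in the image):

# ask the role agent directly what it thinks the pod's role is
kubectl exec -it pg-backup-postgresql-1 -c postgresql -- \
  curl -s 'http://127.0.0.1:36502/role'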

Expected behavior
The pg cluster restore succeeds: a primary is elected and the restored cluster reaches Running status.


JashBook added the kind/bug label Oct 13, 2023
JashBook assigned Y-Rookie and unassigned wangyelei Oct 13, 2023
JashBook added the severity/critical and severity/major labels and removed the severity/critical label Oct 13, 2023
JashBook added this to the Release 0.7.0 milestone Oct 13, 2023
JashBook assigned free6om and unassigned Y-Rookie Oct 13, 2023
JashBook (Collaborator, Author) commented Oct 13, 2023

Duplicate of #4756; tracking will continue in this issue.

JashBook assigned Y-Rookie and unassigned free6om Oct 13, 2023
JashBook changed the title from "[BUG] pg cluster restore pg-basebackup backup failed" to "[BUG] pg cluster restore backup failed: FATAL: password authentication failed for user "postgres"" Oct 13, 2023
wangyelei (Contributor):

Fixed in #5637.
