How to recover a MariaDB Galera cluster after a complete crash?
All 3 of my nodes crashed. After the servers came back up, I noticed that mariadb was dead, and I cannot start it again.
I am using CentOS 7 on all servers.
I tried to start the first node and then the others, without success.
First, as the documentation describes, I tried to find the most recent seqno, so I looked at this file on all 3 nodes:
/var/lib/mysql/grastate.dat
and noticed that the content is identical on all 3 nodes (same uuid, same seqno)! This is the file:
# GALERA saved state
version: 2.1
uuid: ec3e180d-bbff-11e6-b989-3273ac13ba57
seqno: -1
cert_index:
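To compare the files without eyeballing the whole contents, a small helper like this can pull out just the seqno on each node (a sketch; the function name is mine, and the path is the default CentOS data directory):

```shell
#!/bin/sh
# Print the seqno recorded in a grastate.dat file.
# Usage: get_seqno /var/lib/mysql/grastate.dat
get_seqno() {
    # The file contains a line of the form "seqno:   <number>".
    awk '$1 == "seqno:" { print $2 }' "$1"
}
```

Running this on each node (for example over SSH) makes it easy to see at a glance which node, if any, has a real seqno rather than -1.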
OK. Since all the nodes are identical, I should be able to bootstrap any one of them as a new cluster and add the other nodes to it. I used the following command:
galera_new_cluster
It did not work. The node did not start.
This is what I got:
-- Unit mariadb.service has begun starting up.
Dec 07 18:20:55 GlusterDC1_1 sh[4298]: 2016-12-07 18:20:55 139806456780992 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4332 ...
Dec 07 18:20:58 GlusterDC1_1 sh[4298]: WSREP: Recovered position ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4364 ...
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Read nil XID from storage engines, skipping position init
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_load(): Galera 25.3.18(r3632) by Codership Oy <info@codership.com> loaded successfully.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: CRC-32C: using hardware acceleration.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Found saved state: ec3e180d-bbff-11e6-b989-3273ac13ba57:-1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.0.120; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830658434816 [Note] WSREP: Service thread queue flushed.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Assign initial position for certification: 83, protocol version: -1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: wsrep_sst_grab()
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Start replication
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: 'wsrep-new-cluster' option used, bootstrapping the cluster
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Setting initial position to ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: protonet asio version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: Using CRC-32C for message checksums.
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: backend: asio
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: gcomm thread scheduling priority set to other:0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: restore pc from disk failed
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: GMCast version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: (23356fd8, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: (23356fd8, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: EVS version 0
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: gcomm: bootstrapping new group 'my_cluster'
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [Note] WSREP: start_prim is enabled, turn off pc_recovery
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: Address already in use
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: at gcomm/src/asio_tcp.cpp:listen():810
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'my_cluster' at 'gcomm://192.168.0.120,192.168.0.121,192.168.0.122': -98 (Address already in use)
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: gcs connect failed: Address already in use
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] WSREP: wsrep::connect(gcomm://192.168.0.120,192.168.0.121,192.168.0.122) failed: 7
Dec 07 18:20:58 GlusterDC1_1 mysqld[4364]: 2016-12-07 18:20:58 139830894778560 [ERROR] Aborting
Dec 07 18:20:59 GlusterDC1_1 systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Dec 07 18:20:59 GlusterDC1_1 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
OK, then I tried to start the node manually with the following command:
systemctl start mariadb
I got:
-- Unit mariadb.service has begun starting up.
Dec 07 18:31:55 GlusterDC1_1 sh[4505]: 2016-12-07 18:31:55 139834720598208 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4539 ...
Dec 07 18:31:58 GlusterDC1_1 sh[4505]: WSREP: Recovered position ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] /usr/sbin/mysqld (mysqld 10.1.19-MariaDB) starting as process 4571 ...
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Read nil XID from storage engines, skipping position init
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_load(): loading provider library '/usr/lib64/galera/libgalera_smm.so'
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_load(): Galera 25.3.18(r3632) by Codership Oy <info@codership.com> loaded successfully.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: CRC-32C: using hardware acceleration.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Found saved state: ec3e180d-bbff-11e6-b989-3273ac13ba57:-1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Passing config to GCS: base_dir = /var/lib/mysql/; base_host = 192.168.0.120; base_port = 4567; cert.log_conflicts = no; debug = no; evs.auto_evict = 0; evs.delay_margin = PT1S; evs.delayed_keep_period = PT30S; evs.inactive_check_period = PT0.5S; evs.inactive_timeout = PT15S; evs.join_retrans_period = PT1S; evs.max_install_timeouts = 3; evs.send_window = 4; evs.stats_report_period = PT1M; evs.suspect_timeout = PT5S; evs.user_send_window = 2; evs.view_forget_timeout = PT24H; gcache.dir = /var/lib/mysql/; gcache.keep_pages_size = 0; gcache.mem_size = 0; gcache.name = /var/lib/mysql//galera.cache; gcache.page_size = 128M; gcache.size = 128M; gcomm.thread_prio = ; gcs.fc_debug = 0; gcs.fc_factor = 1.0; gcs.fc_limit = 16; gcs.fc_master_slave = no; gcs.max_packet_size = 64500; gcs.max_throttle = 0.25; gcs.recv_q_hard_limit = 9223372036854775807; gcs.recv_q_soft_limit = 0.25; gcs.sync_donor = no; gmcast.segment = 0; gmcast.version = 0; pc.announce_timeout = PT3S; pc.checksum = false; pc.ignore_quorum = false
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525285508864 [Note] WSREP: Service thread queue flushed.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Assign initial position for certification: 83, protocol version: -1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: wsrep_sst_grab()
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Start replication
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Setting initial position to ec3e180d-bbff-11e6-b989-3273ac13ba57:83
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: protonet asio version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: Using CRC-32C for message checksums.
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: backend: asio
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: gcomm thread scheduling priority set to other:0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Warning] WSREP: access file(/var/lib/mysql//gvwstate.dat) failed(No such file or directory)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: restore pc from disk failed
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: GMCast version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: (acad4591, 'tcp://0.0.0.0:4567') listening at tcp://0.0.0.0:4567
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: (acad4591, 'tcp://0.0.0.0:4567') multicast: , ttl: 1
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: EVS version 0
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [Note] WSREP: gcomm: connecting to group 'my_cluster', peer '192.168.0.120:,192.168.0.121:,192.168.0.122:'
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: Address already in use
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: failed to open gcomm backend connection: 98: error while trying to listen 'tcp://0.0.0.0:4567?socket.non_blocking=1', asio error 'Address already in use': 98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: at gcomm/src/asio_tcp.cpp:listen():810
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs/src/gcs_core.cpp:gcs_core_open():208: Failed to open backend connection: -98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs/src/gcs.cpp:gcs_open():1380: Failed to open channel 'my_cluster' at 'gcomm://192.168.0.120,192.168.0.121,192.168.0.122': -98 (Address already in use)
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: gcs connect failed: Address already in use
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] WSREP: wsrep::connect(gcomm://192.168.0.120,192.168.0.121,192.168.0.122) failed: 7
Dec 07 18:31:58 GlusterDC1_1 mysqld[4571]: 2016-12-07 18:31:58 140525521279168 [ERROR] Aborting
Dec 07 18:31:59 GlusterDC1_1 systemd[1]: mariadb.service: main process exited, code=exited, status=1/FAILURE
Dec 07 18:31:59 GlusterDC1_1 systemd[1]: Failed to start MariaDB database server.
-- Subject: Unit mariadb.service has failed
I tried both commands on the other nodes and got the same errors.
I also tried the following commands, again without success:
/etc/init.d/mysql start --wsrep-new-cluster
service mysql start --wsrep_cluster_address="gcomm://192.168.0.120,192.168.0.121,192.168.0.122" --wsrep_cluster_name="my_cluster"
Is it possible to recover the cluster in this situation?
Pre-recovery setup:
- Make sure the MYSQL_HOME path is exported in .profile. If MySQL is installed in a different location, adjust MYSQL_HOME accordingly. (Example: MYSQL_HOME=/path/to/mysql)
Crash recovery steps:
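As a sketch, the .profile entry might look like the following; the install path here is only an example and must match your actual installation:

```shell
# Example ~/.profile entry; /usr/local/mysql is a placeholder install path.
export MYSQL_HOME=/usr/local/mysql
# Put the MySQL binaries (mysqld, mysqld_safe, ...) on the PATH as well.
export PATH="$MYSQL_HOME/bin:$PATH"
```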
- Find a valid seqno. Look at the grastate.dat file on each server to see which machine has the most recent data. The node with the highest seqno is the one with the current data.
- Next, examine the three grastate.dat files.
a) Node0: this grastate.dat shows a clean shutdown. Note the sequence number; we are looking for the node with the highest seqno.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: cbd332a9-f617-11e2-b77d-3ee9fa637069
seqno: 43760
b) Node1: this grastate.dat shows -1 for the seqno. The node crashed during transaction processing. Start this node with the --wsrep-recover option; MySQL stores the last committed GTID in the InnoDB data header.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: cbd332a9-f617-11e2-b77d-3ee9fa637069
seqno: -1
c) Node2: this grastate.dat has no seqno or group ID. The node crashed during DDL.
/var/lib/mysql/grastate.dat
version: 2.1
uuid: 00000000-0000-0000-0000-000000000000
seqno: -1
- Next, recover the node that has a uuid but no seqno (Node1). To obtain its seqno, use the --wsrep-recover option:
/path/to/mysql/bin/mysqld --wsrep-recover
mysqld will read the InnoDB header, print the last wsrep position to the mysqld.log file, and shut down immediately.
Example log line:
140716 12:55:45 [Note] WSREP: Found saved state: cbd332a9-f617-11e2-b77d-3ee9fa637069:36742
- Compare the seqno values from Node0 (seqno: 43760) and Node1 (seqno: -1, recovered above). Node0 has the current data snapshot and should be started first.
- On Node0, issue the following command to start the node:
a) nohup /path/to/mysql/bin/mysqld_safe --wsrep_cluster_address=gcomm:// &
Wait for that node to come online.
b) Then start Node1 and Node2. These nodes should be started one at a time, and can be started in the usual way.
c) Once all three nodes are up and in the Primary state, restart Node0 in the normal way (so it comes up as part of the whole cluster rather than as the bootstrap node).
- If Node1 or Node2 has the highest seqno, start that node as the bootstrap node instead, and let the remaining nodes join one at a time (connecting to the node with the highest seqno).
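Putting the steps together: once every node's seqno is known (from grastate.dat, or from --wsrep-recover for nodes showing -1), picking the bootstrap node is just a numeric comparison. A minimal sketch, with a helper function name of my own choosing:

```shell
#!/bin/sh
# Given "name seqno" pairs on stdin, print the name with the highest seqno.
# Nodes showing seqno -1 must be run through mysqld --wsrep-recover first
# so a real position is compared, not the placeholder.
pick_bootstrap_node() {
    sort -k2 -n | tail -n 1 | awk '{ print $1 }'
}

# Sketch of the actual recovery sequence (run manually, one node at a time):
#   on the chosen node:    galera_new_cluster        # bootstrap the cluster
#   on each other node:    systemctl start mariadb   # join one at a time
```

For example, feeding it `Node0 43760`, `Node1 36742`, `Node2 0` prints `Node0`, which matches the walkthrough above where Node0 holds the current snapshot.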