== conf-sanity test 84: check recovery_hard_time ========= 11:04:08 (1713279848)
start mds service on oleg120-server
start mds service on oleg120-server
Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0000
start mds service on oleg120-server
Starting mds2: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds2_flakey /mnt/lustre-mds2
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0001
oleg120-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0000-mdc-*.mds_server_uuid
oleg120-client.virtnet: executing wait_import_state_mount FULL mdc.lustre-MDT0001-mdc-*.mds_server_uuid
start ost1 service on oleg120-server
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-OST0000
oleg120-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
oleg120-server: oleg120-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg120-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg120-server: oleg120-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid 50
oleg120-server: os[cp].lustre-OST0000-osc-MDT0001.ost_server_uuid in FULL state after 1 sec
start ost2 service on oleg120-server
Starting ost2: -o localrecov /dev/mapper/ost2_flakey /mnt/lustre-ost2
seq.cli-lustre-OST0001-super.width=65536
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Commit the device label on /dev/mapper/ost2_flakey
Started lustre-OST0001
oleg120-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0001-osc-[-0-9a-f]*.ost_server_uuid
oleg120-server: oleg120-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid 50
oleg120-server: os[cp].lustre-OST0001-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
oleg120-server: oleg120-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid 50
oleg120-server: os[cp].lustre-OST0001-osc-MDT0001.ost_server_uuid in FULL state after 0 sec
recovery_time=60, timeout=20, wrap_up=5
mount lustre on /mnt/lustre.....
Starting client: oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre
mount lustre on /mnt/lustre2.....
Starting client: oleg120-client.virtnet: -o user_xattr,flock oleg120-server@tcp:/lustre /mnt/lustre2
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID        95248        1668       84924   2% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID        95248        1532       85060   2% /mnt/lustre[MDT:1]
lustre-OST0000_UUID       142216        1524      126692   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID       142216        1524      126692   2% /mnt/lustre[OST:1]

filesystem_summary:       284432        3048      253384   2% /mnt/lustre

total: 1000 open/close in 2.72 seconds: 367.40 ops/second
fail_loc=0x20000709
fail_val=5
Failing mds1 on oleg120-server
Stopping /mnt/lustre-mds1 (opts:) on oleg120-server
11:04:46 (1713279886) shut down
Failover mds1 to oleg120-server
e2fsck -d -v -t -t -f -n /dev/mapper/mds1_flakey -m8
oleg120-server: e2fsck 1.46.2.wc5 (26-Mar-2022)
oleg120-server: Use max possible thread num: 1 instead
Warning: skipping journal recovery because doing a read-only filesystem check.
Pass 1: Checking inodes, blocks, and sizes
[Thread 0] Scan group range [0, 3)
[Thread 0] jumping to group 0
[Thread 0] e2fsck_pass1_run:2564: increase inode 81 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 82 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 83 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 84 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 85 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 86 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 87 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 88 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 89 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 90 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 91 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 92 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 93 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 94 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 95 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 96 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 97 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 98 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 99 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 100 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 101 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 102 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 103 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 104 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 105 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 106 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 107 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 108 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 109 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 110 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 111 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 112 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 113 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 114 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 115 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 116 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 117 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 118 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 119 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 120 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 121 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 122 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 123 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 124 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 125 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 126 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 127 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 128 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 129 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 130 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 131 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 132 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 133 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 134 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 135 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 136 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 137 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 138 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 139 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 140 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 141 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 142 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 143 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 144 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 145 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 146 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 147 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 148 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 149 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 150 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 151 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 152 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 153 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 154 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 155 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 156 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 157 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 158 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 159 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 160 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 161 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 162 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 163 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 164 badness 0 to 2 for 10084
[Thread 0] group 1 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 26697 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26724 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26725 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26726 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26727 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26728 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 26729 badness 0 to 2 for 10084
[Thread 0] group 2 finished
[Thread 0] e2fsck_pass1_run:2564: increase inode 53372 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53373 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53374 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53375 badness 0 to 2 for 10084
[Thread 0] e2fsck_pass1_run:2564: increase inode 53376 badness 0 to 2 for 10084
[Thread 0] group 3 finished
[Thread 0] Pass 1: Memory used: 264k/0k (140k/125k), time: 0.01/ 0.00/ 0.00
[Thread 0] Pass 1: I/O read: 1MB, write: 0MB, rate: 172.77MB/s
[Thread 0] Scanned group range [0, 3), inodes 277
Pass 2: Checking directory structure
Pass 2: Memory used: 264k/0k (97k/168k), time: 0.01/ 0.00/ 0.00
Pass 2: I/O read: 1MB, write: 0MB, rate: 175.10MB/s
Pass 3: Checking directory connectivity
Peak memory: Memory used: 264k/0k (97k/168k), time: 0.02/ 0.01/ 0.01
Pass 3: Memory used: 264k/0k (96k/169k), time: 0.00/ 0.00/ 0.00
Pass 3: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 4: Checking reference counts
Pass 4: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00
Pass 4: I/O read: 0MB, write: 0MB, rate: 0.00MB/s
Pass 5: Checking group summary information
Free blocks count wrong (25455, counted=25443). Fix? no
Free inodes count wrong (79719, counted=79715). Fix? no
Pass 5: Memory used: 264k/0k (67k/198k), time: 0.00/ 0.00/ 0.00
Pass 5: I/O read: 1MB, write: 0MB, rate: 212.04MB/s

     273 inodes used (0.34%, out of 79992)
       5 non-contiguous files (1.8%)
       0 non-contiguous directories (0.0%)
         # of inodes with ind/dind/tind blocks: 0/0/0
   24545 blocks used (49.09%, out of 50000)
       0 bad blocks
       1 large file

     150 regular files
     117 directories
       0 character device files
       0 block device files
       0 fifos
       0 links
       0 symbolic links (0 fast symbolic links)
       0 sockets
------------
     267 files
Memory used: 264k/0k (66k/199k), time: 0.03/ 0.02/ 0.01
I/O read: 1MB, write: 0MB, rate: 32.07MB/s
mount facets: mds1
Starting mds1: -o localrecov -o recovery_time_hard=60,recovery_time_soft=60 /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg120-server: oleg120-server.virtnet: executing set_default_debug -1 all
pdsh@oleg120-client: oleg120-server: ssh exited with exit code 1
Started lustre-MDT0000
11:05:00 (1713279900) targets are mounted
11:05:00 (1713279900) facet_failover done
oleg120-client: error: invalid path '/mnt/lustre': Input/output error
pdsh@oleg120-client: oleg120-client: ssh exited with exit code 5
recovery status
status: COMPLETE
recovery_start: 1713279904
recovery_duration: 60
completed_clients: 2/3
replayed_requests: 156
last_transno: 8589934748
VBR: DISABLED
IR: DISABLED
fail_loc=0
umount lustre on /mnt/lustre.....
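The recovery status dump above is the interesting part of this run: with recovery_time_hard=60 the recovery window closed after 60 seconds with only 2 of 3 clients completing, which is consistent with the delayed client being evicted (and with the `Input/output error` seen on its mount). As a sketch, the fields can be pulled out of such a dump with awk; the heredoc text below is copied verbatim from the log, and `parse_recovery` is a hypothetical helper name:

```shell
# Hypothetical helper: extract status, duration and completed_clients
# from a recovery_status dump fed on stdin.
parse_recovery() {
    awk -F': *' '
        $1 == "status"            { print "status=" $2 }
        $1 == "recovery_duration" { print "duration=" $2 }
        $1 == "completed_clients" { print "clients=" $2 }
    '
}

# → status=COMPLETE, duration=60, clients=2/3
parse_recovery <<'EOF'
status: COMPLETE
recovery_start: 1713279904
recovery_duration: 60
completed_clients: 2/3
replayed_requests: 156
last_transno: 8589934748
VBR: DISABLED
IR: DISABLED
EOF
```

On a live system the same text would presumably come from `lctl get_param mdt.*.recovery_status` on the MDS rather than a heredoc.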
Stopping client oleg120-client.virtnet /mnt/lustre (opts:)
umount lustre on /mnt/lustre2.....
Stopping client oleg120-client.virtnet /mnt/lustre2 (opts:)
stop ost1 service on oleg120-server
Stopping /mnt/lustre-ost1 (opts:-f) on oleg120-server
stop ost2 service on oleg120-server
Stopping /mnt/lustre-ost2 (opts:-f) on oleg120-server
stop mds service on oleg120-server
Stopping /mnt/lustre-mds1 (opts:-f) on oleg120-server
stop mds service on oleg120-server
Stopping /mnt/lustre-mds2 (opts:-f) on oleg120-server
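A note on the `e2fsck -n` check run during the failover above: e2fsck returns a bitmask exit status (documented in e2fsck(8)), and with `-n` any problems it reports, such as the free block/inode count mismatches left unfixed here, would surface as the "errors left uncorrected" bit. A small decoder, as a sketch (the function name is made up; the bit values are from e2fsck(8)):

```shell
# Decode an e2fsck exit status bitmask into its documented meanings.
# Bit values per e2fsck(8): 1=corrected, 2=corrected+reboot needed,
# 4=errors left uncorrected, 8=operational error, 16=usage error,
# 32=cancelled by user, 128=shared library error.
decode_e2fsck_status() {
    rc=$1; out=""
    [ "$rc" -eq 0 ] && { echo "no errors"; return; }
    [ $((rc & 1)) -ne 0 ]   && out="$out errors-corrected"
    [ $((rc & 2)) -ne 0 ]   && out="$out reboot-needed"
    [ $((rc & 4)) -ne 0 ]   && out="$out errors-uncorrected"
    [ $((rc & 8)) -ne 0 ]   && out="$out operational-error"
    [ $((rc & 16)) -ne 0 ]  && out="$out usage-error"
    [ $((rc & 32)) -ne 0 ]  && out="$out cancelled"
    [ $((rc & 128)) -ne 0 ] && out="$out shared-lib-error"
    echo "${out# }"
}

decode_e2fsck_status 4   # → errors-uncorrected
decode_e2fsck_status 3   # → errors-corrected reboot-needed
```

Test frameworks that run e2fsck read-only typically treat bits 4 and above as a failure while tolerating 0, which is why a run like the one above can pass despite the `Fix? no` messages.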