== replay-ost-single test 9: Verify that no req deadline happened during recovery ========================================================== 15:49:32 (1713296972)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0138159 s, 75.9 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1540     3604432   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1524     3605496   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232        3064     7209928   1% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00762423 s, 138 MB/s
fail_loc=0x00000714
fail_val=20
Failing ost1 on oleg406-server
Stopping /mnt/lustre-ost1 (opts:) on oleg406-server
15:49:35 (1713296975) shut down
Failover ost1 to oleg406-server
mount facets: ost1
Starting ost1: -o localrecov /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg406-server: oleg406-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg406-client: oleg406-server: ssh exited with exit code 1
Started lustre-OST0000
15:49:49 (1713296989) targets are mounted
15:49:49 (1713296989) facet_failover done
oleg406-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
fail_loc=0