== replay-ost-single test 9: Verify that no req deadline happened during recovery ========================================================== 03:32:07 (1713425527)
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0178376 s, 58.8 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1772     1285916   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1612     1286076   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1540     3604432   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1524     3605496   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232        3064     7209928   1% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00978754 s, 107 MB/s
fail_loc=0x00000714
fail_val=20
Failing ost1 on oleg313-server
Stopping /mnt/lustre-ost1 (opts:) on oleg313-server
03:32:11 (1713425531) shut down
Failover ost1 to oleg313-server
mount facets: ost1
Starting ost1:   -o localrecov  /dev/mapper/ost1_flakey /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg313-server: oleg313-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg313-client: oleg313-server: ssh exited with exit code 1
Started lustre-OST0000
03:32:25 (1713425545) targets are mounted
03:32:25 (1713425545) facet_failover done
oleg313-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
fail_loc=0
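For readers stepping through the log, the sequence above corresponds roughly to the shell commands below. This is a minimal sketch, not the actual test script: the file names, the grep-based wait loop, and running the set_param calls directly on the nodes (rather than through the test framework's do_facet helper) are assumptions for illustration. Only the fail_loc/fail_val values, the lfs df invocation, and the osc ost_server_uuid parameter name are taken from the log itself.

# On the client: write 1 MiB (first "1048576 bytes ... copied" line),
# then snapshot per-target space usage (the UUID/1K-blocks table).
dd if=/dev/zero of=/mnt/lustre/tfile-9 bs=1M count=1
lfs df /mnt/lustre

# Second 1 MiB write, as reported before the failure is armed.
dd if=/dev/zero of=/mnt/lustre/tfile-9b bs=1M count=1

# On the OSS: arm fault injection with the values shown in the log.
# That 0x714 injects a delay of fail_val seconds during recovery is an
# assumption based on the test's stated purpose.
lctl set_param fail_loc=0x00000714 fail_val=20

# The framework then stops and remounts the OST ("Failing ost1" through
# "Started lustre-OST0000" in the log); omitted here, as the exact mount
# options and device paths are environment-specific.

# Back on the client: wait until the OSC import to OST0000 reports FULL
# (or IDLE), matching the wait_import_state_mount line, then disarm.
until lctl get_param -n osc.lustre-OST0000-osc-*.ost_server_uuid |
        grep -qE 'FULL|IDLE'; do
        sleep 1
done
lctl set_param fail_loc=0

The point of the wait loop is the test's pass condition: the import returned to FULL immediately ("after 0 sec"), showing that the injected recovery delay did not cause any request to miss its deadline.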