== replay-ost-single test 7: Fail OST before obd_destroy ========================================================== 17:07:04 (1713388024)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg138-server mds-ost sync done.
Waiting for MDT destroys to complete
1280+0 records in
1280+0 records out
5242880 bytes (5.2 MB) copied, 0.354001 s, 14.8 MB/s
before: 7536640 after_dd: 7531520 took 6 seconds
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2210688        3200     2205440   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3771392        8192     3761152   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3771392        3072     3766272   1% /mnt/lustre[OST:1]
filesystem_summary:      7542784       11264     7527424   1% /mnt/lustre
Failing ost1 on oleg138-server
Stopping /mnt/lustre-ost1 (opts:) on oleg138-server
reboot facets: ost1
Failover ost1 to oleg138-server
mount facets: ost1
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
oleg138-server: oleg138-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg138-client: oleg138-server: ssh exited with exit code 1
Started lustre-OST0000
oleg138-client.virtnet: executing wait_import_state_mount (FULL|IDLE) osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 0 sec
affected facets: ost1
oleg138-server: oleg138-server.virtnet: executing _wait_recovery_complete *.lustre-OST0000.recovery_status 1475
oleg138-server: *.lustre-OST0000.recovery_status status: COMPLETE
Can't lstat /mnt/lustre/d0.replay-ost-single/f7.replay-ost-single: No such file or directory
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg138-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
before: 7536640 after: 7536640