== replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 07:01:42 (1713438102)
/mnt/lustre/f20b.replay-single
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx		 objid		 objid		 group
	     0	          1090	       0x442	   0x240000400

10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 1.28981 s, 31.8 MB/s
Failing mds1 on oleg308-server
Stopping /mnt/lustre-mds1 (opts:) on oleg308-server
07:01:46 (1713438106) shut down
Failover mds1 to oleg308-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg308-server: oleg308-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg308-client: oleg308-server: ssh exited with exit code 1
Started lustre-MDT0000
07:01:59 (1713438119) targets are mounted
07:01:59 (1713438119) facet_failover done
oleg308-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
affected facets: mds1
oleg308-server: oleg308-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
oleg308-server: *.lustre-MDT0000.recovery_status status: COMPLETE
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
before 6144, after 6144
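As a sanity check on the dd statistics in the log above, a short Python sketch (entirely illustrative; the variable names and regex are my own, not part of the Lustre test framework) can parse the summary line and confirm that the reported throughput and the implied block size are internally consistent:

```python
import re

# The dd summary line copied from the log above.
summary = "40960000 bytes (41 MB) copied, 1.28981 s, 31.8 MB/s"

m = re.match(r"(\d+) bytes \(.*\) copied, ([\d.]+) s, ([\d.]+) MB/s", summary)
nbytes = int(m.group(1))
seconds = float(m.group(2))
reported_rate = float(m.group(3))

# dd reports decimal megabytes (10^6 bytes) per second.
rate = nbytes / seconds / 1e6
print(f"computed {rate:.1f} MB/s, dd reported {reported_rate} MB/s")

# 10000+0 records totalling 40960000 bytes implies a 4096-byte block size.
block_size = nbytes // 10000
print(f"implied dd block size: {block_size} bytes")
```

Running this shows the computed rate rounds to the 31.8 MB/s dd printed, and that the 40 MB write was issued as 10000 blocks of 4 KiB each, consistent with the `10000+0 records in/out` lines.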