== replay-single test 20b: write, unlink, eviction, replay (test mds_cleanup_orphans) ========================================================== 15:54:03 (1713297243)
/mnt/lustre/f20b.replay-single
lmm_stripe_count:  1
lmm_stripe_size:   4194304
lmm_pattern:       raid0
lmm_layout_gen:    0
lmm_stripe_offset: 0
	obdidx		 objid		 objid		 group
	     0	          1090	       0x442	   0x240000400

10000+0 records in
10000+0 records out
40960000 bytes (41 MB) copied, 1.40713 s, 29.1 MB/s
Failing mds1 on oleg130-server
Stopping /mnt/lustre-mds1 (opts:) on oleg130-server
15:54:07 (1713297247) shut down
Failover mds1 to oleg130-server
mount facets: mds1
Starting mds1:   -o localrecov  lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg130-server: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg130-client: oleg130-server: ssh exited with exit code 1
Started lustre-MDT0000
15:54:20 (1713297260) targets are mounted
15:54:20 (1713297260) facet_failover done
oleg130-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
affected facets: mds1
oleg130-server: oleg130-server.virtnet: executing _wait_recovery_complete *.lustre-MDT0000.recovery_status 1475
oleg130-server: *.lustre-MDT0000.recovery_status status: COMPLETE
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
before 6144, after 6144