== replay-single test 89: no disk space leak on late ost connection ========================================================== 20:02:43 (1713484963)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg255-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.210935 s, 199 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg255-server
Failing mds1 on oleg255-server
Stopping /mnt/lustre-mds1 (opts:) on oleg255-server
20:02:55 (1713484975) shut down
Failover mds1 to oleg255-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg255-server: oleg255-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg255-client: oleg255-server: ssh exited with exit code 1
Started lustre-MDT0000
20:03:09 (1713484989) targets are mounted
20:03:09 (1713484989) facet_failover done
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg255-server: oleg255-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg255-client: oleg255-server: ssh exited with exit code 1
Started lustre-OST0000
Starting client: oleg255-client.virtnet: -o user_xattr,flock oleg255-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg255-server mds-ost sync done.
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
free_before: 7517184
free_after: 7517184
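
The final `free_before`/`free_after` lines are the pass/fail check: after stopping ost1, failing over mds1, writing 40 MB, and reconnecting the OST late, the test verifies that no blocks leaked by comparing OST free space from before and after the cycle. A minimal sketch of that comparison, using the values from the log above (variable names are illustrative assumptions, not the test framework's actual helper names):

```shell
#!/bin/sh
# Sketch of the space-leak check at the end of test 89 (illustrative only).
# Values are the KB counts reported in the log above.
free_before=7517184
free_after=7517184

# The test passes when free space is fully recovered, i.e. nothing leaked
# on the late-connecting OST after orphan cleanup and MDT destroys finish.
if [ "$free_after" -lt "$free_before" ]; then
    echo "FAIL: $((free_before - free_after)) KB leaked"
    exit 1
fi
echo "PASS: no disk space leak (free_before=$free_before free_after=$free_after)"
```

In the run above both counts are 7517184 KB, so the check passes.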