== replay-single test 89: no disk space leak on late ost connection ========================================================== 12:01:19 (1713283279)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg438-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.269764 s, 155 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg438-server
Failing mds1 on oleg438-server
Stopping /mnt/lustre-mds1 (opts:) on oleg438-server
12:01:32 (1713283292) shut down
Failover mds1 to oleg438-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg438-server: oleg438-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg438-client: oleg438-server: ssh exited with exit code 1
Started lustre-MDT0000
12:01:45 (1713283305) targets are mounted
12:01:45 (1713283305) facet_failover done
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg438-server: oleg438-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg438-client: oleg438-server: ssh exited with exit code 1
Started lustre-OST0000
Starting client: oleg438-client.virtnet: -o user_xattr,flock oleg438-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg438-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
free_before: 7517184 free_after: 7517184
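The final `free_before: 7517184 free_after: 7517184` line is the pass/fail criterion: the test records the OST's free blocks before writing and failing over, and again after recovery and orphan cleanup, and passes only when the two match. A minimal sketch of that comparison, using the values reported in this log (in the real test script the numbers come from the filesystem, e.g. `df`/`lfs df` output, not hard-coded constants):

```shell
#!/bin/sh
# Free-block counts as reported in the log above; the actual test samples
# these from the OST before the workload and after MDT destroys complete.
free_before=7517184
free_after=7517184

# No-leak check: after failover and orphan cleanup, all space from the
# deleted 40 MB file must have been reclaimed.
if [ "$free_before" -ne "$free_after" ]; then
    result="FAIL: $((free_before - free_after)) KB leaked"
else
    result="PASS: no space leak (free_before=$free_before free_after=$free_after)"
fi
echo "$result"
```

In the log both samples equal 7517184 KB, so the check passes and no disk space was leaked by the late OST connection.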