== replay-single test 89: no disk space leak on late ost connection ========================================================== 10:00:10 (1713535210)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg119-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.196492 s, 213 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg119-server
Failing mds1 on oleg119-server
Stopping /mnt/lustre-mds1 (opts:) on oleg119-server
10:00:21 (1713535221) shut down
Failover mds1 to oleg119-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg119-server: oleg119-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg119-client: oleg119-server: ssh exited with exit code 1
Started lustre-MDT0000
10:00:34 (1713535234) targets are mounted
10:00:34 (1713535234) facet_failover done
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg119-server: oleg119-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg119-client: oleg119-server: ssh exited with exit code 1
Started lustre-OST0000
Starting client: oleg119-client.virtnet: -o user_xattr,flock oleg119-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg119-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
free_before: 7517184 free_after: 7517184