== replay-single test 89: no disk space leak on late ost connection ========================================================== 16:48:57 (1713300537)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg130-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.267655 s, 157 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg130-server
Failing mds1 on oleg130-server
Stopping /mnt/lustre-mds1 (opts:) on oleg130-server
16:49:10 (1713300550) shut down
Failover mds1 to oleg130-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg130-server: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg130-client: oleg130-server: ssh exited with exit code 1
Started lustre-MDT0000
16:49:24 (1713300564) targets are mounted
16:49:24 (1713300564) facet_failover done
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
seq.cli-lustre-OST0000-super.width=65536
oleg130-server: oleg130-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg130-client: oleg130-server: ssh exited with exit code 1
Started lustre-OST0000
Starting client: oleg130-client.virtnet: -o user_xattr,flock oleg130-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 67 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg130-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
free_before: 7517184 free_after: 7517184