== replay-single test 89: no disk space leak on late ost connection ========================================================== 18:15:16 (1713392116)
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg428-server mds-ost sync done.
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
10+0 records in
10+0 records out
41943040 bytes (42 MB) copied, 0.247617 s, 169 MB/s
Stopping /mnt/lustre-ost1 (opts:) on oleg428-server
Failing mds1 on oleg428-server
Stopping /mnt/lustre-mds1 (opts:) on oleg428-server
reboot facets: mds1
Failover mds1 to oleg428-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg428-server: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg428-client: oleg428-server: ssh exited with exit code 1
Started lustre-MDT0000
Starting ost1: -o localrecov lustre-ost1/ost1 /mnt/lustre-ost1
oleg428-server: oleg428-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg428-client: oleg428-server: ssh exited with exit code 1
Started lustre-OST0000
Starting client: oleg428-client.virtnet: -o user_xattr,flock oleg428-server@tcp:/lustre /mnt/lustre
osc.lustre-OST0000-osc-[-0-9a-f]*.ost_server_uuid in FULL state after 68 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg428-server mds-ost sync done.
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
free_before: 7518208 free_after: 7518208
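
The closing "free_before" / "free_after" pair is the actual leak check: the test records free space, runs the write/failover sequence, waits for orphan cleanup and MDT destroys, then verifies that free space returned to its starting value. A minimal bash sketch of that kind of check is below; it is not the replay-single implementation, and the free_kb helper and the use of "lfs df" Available blocks are assumptions made for illustration.

    #!/bin/bash
    # Hypothetical sketch of a space-leak check; the real test framework differs.
    MOUNT=${MOUNT:-/mnt/lustre}

    free_kb() {
        # Sum the Available column (KB) over all OST lines of 'lfs df'.
        lfs df "$MOUNT" | awk '/OST/ { sum += $4 } END { print sum }'
    }

    free_before=$(free_kb)

    # ... write data, stop the OST, fail over and restart the MDS here ...

    # give orphan cleanup and MDT destroys time to finish before re-measuring
    sleep 5
    free_after=$(free_kb)

    echo "free_before: $free_before free_after: $free_after"
    if [ "$free_after" -lt "$free_before" ]; then
        echo "possible space leak: $((free_before - free_after)) KB not returned"
        exit 1
    fi

In the log above the two values match (7518208 KB before and after), so no space was leaked by the late OST connection.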