== ost-pools test 25: Create new pool and restart MDS ==== 15:16:14 (1713294974)
oleg245-server: Pool lustre.testpool1 created
oleg245-server: OST lustre-OST0000_UUID added to pool lustre.testpool1
Failing mds1 on oleg245-server
Stopping /mnt/lustre-mds1 (opts:) on oleg245-server
15:16:21 (1713294981) shut down
Failover mds1 to oleg245-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg245-server: oleg245-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg245-client: oleg245-server: ssh exited with exit code 1
Started lustre-MDT0000
15:16:34 (1713294994) targets are mounted
15:16:34 (1713294994) facet_failover done
oleg245-server: oleg245-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg245-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg245-server mds-ost sync done.
Creating a file in pool1
Destroy the created pools: testpool1
lustre.testpool1
oleg245-server: OST lustre-OST0000_UUID removed from pool lustre.testpool1
oleg245-server: Pool lustre.testpool1 destroyed
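For reference, the log above can be approximated by the following command sequence. This is a hedged sketch, not the test framework's actual code: the pool, OST, device, and mount-point names are taken from the log itself, the file path passed to `lfs setstripe` is illustrative, and the commands must run on a node with a live Lustre filesystem (the real test drives them over `pdsh`/`ssh` with recovery handling the log only hints at).

```shell
# Sketch of the sequence recorded in the test 25 log. Assumes a running
# Lustre filesystem named "lustre" and root access on the MGS/MDS node.

# Create the pool and add one OST ("Pool ... created" / "OST ... added").
lctl pool_new lustre.testpool1
lctl pool_add lustre.testpool1 lustre-OST0000

# Restart the MDS: stop the target, then mount it again with local
# recovery ("Failing mds1" / "Failover mds1" / "Started lustre-MDT0000").
umount /mnt/lustre-mds1
mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1

# Create a file striped on the pool ("Creating a file in pool1");
# the target path here is illustrative, not from the log.
lfs setstripe --pool testpool1 /mnt/lustre/pool1-file

# Tear down ("Destroy the created pools").
lctl pool_remove lustre.testpool1 lustre-OST0000
lctl pool_destroy lustre.testpool1
```

The point of the test is that the pool definition, stored in the MGS configuration, survives an MDS restart: the file creation in the pool happens only after the failover completes and the MDT's OSC imports return to FULL.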