== ost-pools test 25: Create new pool and restart MDS ==== 10:59:25 (1713452365)
oleg249-server: Pool lustre.testpool1 created
oleg249-server: OST lustre-OST0000_UUID added to pool lustre.testpool1
Failing mds1 on oleg249-server
Stopping /mnt/lustre-mds1 (opts:) on oleg249-server
10:59:32 (1713452372) shut down
Failover mds1 to oleg249-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg249-server: oleg249-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg249-client: oleg249-server: ssh exited with exit code 1
Started lustre-MDT0000
10:59:46 (1713452386) targets are mounted
10:59:46 (1713452386) facet_failover done
oleg249-server: oleg249-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg249-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg249-server mds-ost sync done.
Creating a file in pool1
Destroy the created pools: testpool1
lustre.testpool1
oleg249-server: OST lustre-OST0000_UUID removed from pool lustre.testpool1
oleg249-server: Pool lustre.testpool1 destroyed
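
For reference, the pool setup at the top of the log maps to standard lctl pool commands. A minimal sketch, assuming a filesystem named "lustre" and a shell on the MGS node (pool configuration commands must run there); the pool and OST names are taken from the log:

    # Create an empty pool, then add one OST to it (run on the MGS).
    lctl pool_new lustre.testpool1
    lctl pool_add lustre.testpool1 lustre-OST0000

    # List pool membership from any node to confirm the add took effect.
    lctl pool_list lustre.testpool1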
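
The restart in the middle of the log is driven by the test framework's facet_failover helper. Outside the harness, the rough equivalent is a stop/start of the MDT followed by waiting for the MDT's connection to each OST to return to FULL. A sketch, assuming the device and mount point shown in the log; the osp import parameter path is an assumption modeled on the osp.*.old_sync_processed names above:

    # On the MDS node: stop and restart the MDT.
    umount /mnt/lustre-mds1
    mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1

    # Poll until the MDT's import of the OST reports FULL again,
    # mirroring what wait_import_state does in the log.
    until lctl get_param -n osp.lustre-OST0000-osc-MDT0000.import 2>/dev/null \
          | grep -q 'state: *FULL'; do
        sleep 1
    done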
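
The final steps, creating a file in the pool after the restart and then destroying the pool, correspond roughly to the commands below; the file path is illustrative, not taken from the log. Note that a pool must be emptied with pool_remove before pool_destroy will succeed:

    # Creating a file striped on the pool only works if the pool
    # definition survived the MDS restart.
    lfs setstripe -p testpool1 /mnt/lustre/testfile
    lfs getstripe -p /mnt/lustre/testfile    # prints the pool name, testpool1

    # Teardown, on the MGS: remove the OST from the pool, then destroy it.
    lctl pool_remove lustre.testpool1 lustre-OST0000
    lctl pool_destroy lustre.testpool1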