== ost-pools test 25: Create new pool and restart MDS ==== 19:09:51 (1713481791)
oleg127-server: Pool lustre.testpool1 created
oleg127-server: OST lustre-OST0000_UUID added to pool lustre.testpool1
Failing mds1 on oleg127-server
Stopping /mnt/lustre-mds1 (opts:) on oleg127-server
19:09:59 (1713481799) shut down
Failover mds1 to oleg127-server
mount facets: mds1
Starting mds1:   -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg127-server: oleg127-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg127-client: oleg127-server: ssh exited with exit code 1
Started lustre-MDT0000
19:10:13 (1713481813) targets are mounted
19:10:13 (1713481813) facet_failover done
oleg127-server: oleg127-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg127-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg127-server mds-ost sync done.
Creating a file in pool1
Destroy the created pools: testpool1
lustre.testpool1
oleg127-server: OST lustre-OST0000_UUID removed from pool lustre.testpool1
oleg127-server: Pool lustre.testpool1 destroyed