== ost-pools test 25: Create new pool and restart MDS == 11:08:23 (1713280103)
oleg328-server: Pool lustre.testpool1 created
oleg328-server: OST lustre-OST0000_UUID added to pool lustre.testpool1
Failing mds1 on oleg328-server
Stopping /mnt/lustre-mds1 (opts:) on oleg328-server
11:08:31 (1713280111) shut down
Failover mds1 to oleg328-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg328-server: oleg328-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg328-client: oleg328-server: ssh exited with exit code 1
Started lustre-MDT0000
11:08:45 (1713280125) targets are mounted
11:08:45 (1713280125) facet_failover done
oleg328-server: oleg328-server.virtnet: executing wait_import_state FULL os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid 50
oleg328-server: os[cp].lustre-OST0000-osc-MDT0000.ost_server_uuid in FULL state after 0 sec
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg328-server mds-ost sync done.
Creating a file in pool1
Destroy the created pools: testpool1
lustre.testpool1
oleg328-server: OST lustre-OST0000_UUID removed from pool lustre.testpool1
oleg328-server: Pool lustre.testpool1 destroyed