== sanity-pfl test 22a: Test repeat component behavior with degraded OST ========================================================== 11:14:28 (1713280468)
striped dir -i0 -c2 -H crush /mnt/lustre/d22a.sanity-pfl
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0106128 s, 98.8 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        3588     1284100   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        3524     1284164   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       22600     3584420   1% /mnt/lustre[OST:0] D
lustre-OST0001_UUID      3833116       39044     3566928   2% /mnt/lustre[OST:1]

filesystem_summary:      7666232       61644     7151348   1% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.008334 s, 126 MB/s
Failing mds1 on oleg405-server
Stopping /mnt/lustre-mds1 (opts:) on oleg405-server
11:14:42 (1713280482) shut down
Failover mds1 to oleg405-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg405-server: oleg405-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg405-client: oleg405-server: ssh exited with exit code 1
Started lustre-MDT0000
11:14:57 (1713280497) targets are mounted
11:14:57 (1713280497) facet_failover done
oleg405-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
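
For reference, a minimal shell sketch of the steps the log above reflects. It assumes the standard Lustre test-framework helpers (do_facet, fail) and variables ($LCTL, $FSNAME); the file name and the exact component layout are illustrative, since the log does not show the setstripe call itself.

# Create a PFL test file; this two-component layout is a hypothetical
# example, not necessarily the exact layout test 22a uses.
lfs setstripe -E 1M -c 1 -E eof -c 1 /mnt/lustre/d22a.sanity-pfl/f22a

# Write 1 MiB to instantiate the first component (matches the first
# "1048576 bytes (1.0 MB) copied" dd output above).
dd if=/dev/zero of=/mnt/lustre/d22a.sanity-pfl/f22a bs=1M count=1

# Mark OST0000 degraded so the MDS deprioritizes it when instantiating
# new components; the "D" flag on the lustre-OST0000_UUID line in the
# lfs df output reflects this state.
do_facet ost1 $LCTL set_param obdfilter.$FSNAME-OST0000.degraded=1

# Show per-target usage, including the degraded flag.
lfs df /mnt/lustre

# Write again, then restart the MDS to verify the behavior survives
# failover ("Failing mds1 ... facet_failover done").
dd if=/dev/zero of=/mnt/lustre/d22a.sanity-pfl/f22a bs=1M count=1
fail mds1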