== sanity-pfl test 22a: Test repeat component behavior with degraded OST ========================================================== 19:15:35 (1713482135)
striped dir -i0 -c2 -H crush2 /mnt/lustre/d22a.sanity-pfl
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00905152 s, 116 MB/s
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        2608     1285080   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        4492     1283196   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       20612     3586408   1% /mnt/lustre[OST:0] D
lustre-OST0001_UUID      3833116       41032     3561844   2% /mnt/lustre[OST:1]

filesystem_summary:      7666232       61644     7148252   1% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00983614 s, 107 MB/s
Failing mds1 on oleg438-server
Stopping /mnt/lustre-mds1 (opts:) on oleg438-server
19:15:49 (1713482149) shut down
Failover mds1 to oleg438-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg438-server: oleg438-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg438-client: oleg438-server: ssh exited with exit code 1
Started lustre-MDT0000
19:16:03 (1713482163) targets are mounted
19:16:03 (1713482163) facet_failover done
oleg438-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec