== sanity-pfl test 19e: Replay of layout instantiation & extension ========================================================== 02:26:54 (1713508014)
striped dir -i1 -c2 -H all_char /mnt/lustre/d19e.sanity-pfl
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        2460     1285228   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        2320     1285368   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116      138616     3333248   4% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       16756     3588060   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232      155372     6921308   3% /mnt/lustre

1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.0352367 s, 119 MB/s
before MDS recovery, the ost fid of 2nd component is [0x2c0000400:0x650:0x0]
Failing mds1 on oleg248-server
Stopping /mnt/lustre-mds1 (opts:) on oleg248-server
reboot facets: mds1
Failover mds1 to oleg248-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg248-server: oleg248-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg248-client: oleg248-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg248-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
after MDS recovery, the ost fid of 2nd component is [0x2c0000400:0x650:0x0]
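The pass condition of this test is that the OST FID of the file's second PFL component is identical before and after MDS failover, i.e. the replayed layout instantiation reproduced the same object. As a rough illustration only (not part of the actual test script), the before/after comparison can be sketched in shell, using the two FID log lines above as input:

```shell
#!/bin/sh
# Sketch: extract the bracketed FID from log lines like the ones in this
# test output and check that it is unchanged across recovery. The two
# strings below are copied from the log; the parsing is generic.
before='before MDS recovery, the ost fid of 2nd component is [0x2c0000400:0x650:0x0]'
after='after MDS recovery, the ost fid of 2nd component is [0x2c0000400:0x650:0x0]'

# Pull out the "[seq:oid:ver]" FID token from a line of log output.
fid() { printf '%s\n' "$1" | sed -n 's/.*\(\[0x[^]]*\]\).*/\1/p'; }

if [ "$(fid "$before")" = "$(fid "$after")" ]; then
    echo "FID preserved across recovery: $(fid "$before")"
else
    echo "FID changed: $(fid "$before") -> $(fid "$after")" >&2
    exit 1
fi
```

In the real test the FID would come from `lfs getstripe` on the file rather than from captured log text; this sketch only shows the shape of the comparison.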