== sanity-pfl test 19e: Replay of layout instantiation & extension ========================================================== 11:06:28 (1713279988)
striped dir -i1 -c2 -H fnv_1a_64 /mnt/lustre/d19e.sanity-pfl
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        2440     1285248   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        2448     1285240   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       13896     3589444   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       12932     3592476   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       26828     7181920   1% /mnt/lustre

1+0 records in
1+0 records out
4194304 bytes (4.2 MB) copied, 0.061777 s, 67.9 MB/s
before MDS recovery, the ost fid of 2nd component is [0x2c0000408:0x205:0x0]
Failing mds1 on oleg405-server
Stopping /mnt/lustre-mds1 (opts:) on oleg405-server
11:06:33 (1713279993) shut down
Failover mds1 to oleg405-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg405-server: oleg405-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg405-client: oleg405-server: ssh exited with exit code 1
Started lustre-MDT0000
11:06:48 (1713280008) targets are mounted
11:06:48 (1713280008) facet_failover done
oleg405-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
after MDS recovery, the ost fid of 2nd component is [0x2c0000408:0x205:0x0]
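
The log above corresponds to a check along these lines: record the OST object FID of the file's second PFL component, fail over mds1, then verify the FID is unchanged after recovery, proving the layout instantiation was replayed rather than re-created with new objects. Below is a minimal shell sketch of that check, not the verbatim sanity-pfl.sh test body: the test file path and the awk field selection are illustrative assumptions, and "fail" and "error" stand in for the Lustre test-framework helpers of those names.

    # Hypothetical test file path; the log only shows the test directory.
    tf=/mnt/lustre/d19e.sanity-pfl/f19e.sanity-pfl

    # OST object FID of the 2nd component before failing the MDS.
    # Assumes "lfs getstripe -I2" prints an l_fid entry for an
    # instantiated component, with the FID as the last field.
    fid_before=$(lfs getstripe -I2 "$tf" | awk '/l_fid/ { print $NF }')
    echo "before MDS recovery, the ost fid of 2nd component is $fid_before"

    fail mds1   # test-framework helper: stop, fail over, remount mds1

    # Same query after recovery; replay must preserve the object FID.
    fid_after=$(lfs getstripe -I2 "$tf" | awk '/l_fid/ { print $NF }')
    echo "after MDS recovery, the ost fid of 2nd component is $fid_after"
    [ "$fid_before" = "$fid_after" ] ||
            error "ost fid of 2nd component changed: $fid_before != $fid_after"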