== sanity-pfl test 22d: out of/low on space + failed to repeat + forced extension ========================================================== 09:19:07 (1713532747)
striped dir -i0 -c2 -H crush /mnt/lustre/d22d.sanity-pfl
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg101-server mds-ost sync done.
Creating new pool
oleg101-server: Pool lustre.test_22d created
Adding targets to pool
oleg101-server: OST lustre-OST0000_UUID added to pool lustre.test_22d
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        3728     1283960   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        3352     1284336   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       36108     3570912   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       35024     3571996   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       71132     7142908   1% /mnt/lustre

348+0 records in
348+0 records out
364904448 bytes (365 MB) copied, 2.38037 s, 153 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0160952 s, 65.1 MB/s
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg101-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Failing mds1 on oleg101-server
Stopping /mnt/lustre-mds1 (opts:) on oleg101-server
09:20:14 (1713532814) shut down
Failover mds1 to oleg101-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg101-server: oleg101-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg101-client: oleg101-server: ssh exited with exit code 1
Started lustre-MDT0000
09:20:30 (1713532830) targets are mounted
09:20:30 (1713532830) facet_failover done
oleg101-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        3688     1284000   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        3312     1284376   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       37132     3569888   2% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       35024     3571996   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       72156     7141884   2% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.014512 s, 72.3 MB/s
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg101-server mds-ost sync done.
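(For context: the pool setup, fill writes, and first failover in the log above correspond roughly to the commands sketched below. The pool name, OST name, and dd record counts are taken from the log; the target file path and the exact options the sanity-pfl script passes are assumptions.)

    # On the MGS node: create the OST pool and add OST0 (names from the log)
    lctl pool_new lustre.test_22d
    lctl pool_add lustre.test_22d lustre-OST0000

    # Per-target free space, as in the UUID/1K-blocks tables above
    lfs df /mnt/lustre

    # Fill writes matching the dd record counts in the log
    # (the file path under d22d.sanity-pfl is an assumption)
    dd if=/dev/zero of=/mnt/lustre/d22d.sanity-pfl/f22d bs=1M count=348
    dd if=/dev/zero of=/mnt/lustre/d22d.sanity-pfl/f22d bs=1M count=1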
set OST0 lwm back to 3, hwm back to 7
Failing mds1 on oleg101-server
Stopping /mnt/lustre-mds1 (opts:) on oleg101-server
09:21:05 (1713532865) shut down
Failover mds1 to oleg101-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg101-server: oleg101-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg101-client: oleg101-server: ssh exited with exit code 1
Started lustre-MDT0000
09:21:21 (1713532881) targets are mounted
09:21:21 (1713532881) facet_failover done
oleg101-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Destroy the created pools: test_22d
lustre.test_22d
oleg101-server: OST lustre-OST0000_UUID removed from pool lustre.test_22d
oleg101-server: Pool lustre.test_22d destroyed
Waiting 90s for 'foo'
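(For context: the second failover and the pool teardown in this segment map roughly onto the commands below. The device path, mount point, and mount option come from the log's "Starting mds1" line; running them by hand rather than through the test framework's facet helpers is an assumption.)

    # On the MDS node: stop and restart the MDT, as the failover steps show
    umount /mnt/lustre-mds1
    mount -t lustre -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1

    # On the MGS node: remove the OST from the pool, then destroy the pool
    lctl pool_remove lustre.test_22d lustre-OST0000
    lctl pool_destroy lustre.test_22d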