== sanity-pfl test 22d: out of/low on space + failed to repeat + forced extension ========================================================== 16:09:26 (1713298166)
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg238-server mds-ost sync done.
Creating new pool
oleg238-server: Pool lustre.test_22d created
Adding targets to pool
oleg238-server: OST lustre-OST0000_UUID added to pool lustre.test_22d
Waiting 90s for 'lustre-OST0000_UUID '
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2210560        6144     2202368   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3771392       26624     3742720   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3771392       50176     3719168   2% /mnt/lustre[OST:1]

filesystem_summary:      7542784       76800     7461888   2% /mnt/lustre

365+0 records in
365+0 records out
382730240 bytes (383 MB) copied, 2.03015 s, 189 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0125428 s, 83.6 MB/s
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg238-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Failing mds1 on oleg238-server
Stopping /mnt/lustre-mds1 (opts:) on oleg238-server
16:10:49 (1713298249) shut down
Failover mds1 to oleg238-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg238-server: oleg238-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg238-client: oleg238-server: ssh exited with exit code 1
Started lustre-MDT0000
16:11:03 (1713298263) targets are mounted
16:11:03 (1713298263) facet_failover done
oleg238-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2210560        6144     2202368   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3771392       27648     3741696   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3771392       50176     3719168   2% /mnt/lustre[OST:1]

filesystem_summary:      7542784       77824     7460864   2% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0115989 s, 90.4 MB/s
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg238-server mds-ost sync done.
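The pool-creation and space-fill steps above can be reproduced by hand with the standard lctl/lfs tools; a minimal sketch, assuming a client mount at /mnt/lustre and the pool and OST names shown in the log (the dd target paths and sizes are illustrative, not the test script's own):

    # on the MGS: create the OST pool and add the first OST to it
    lctl pool_new lustre.test_22d
    lctl pool_add lustre.test_22d lustre-OST0000
    lctl pool_list lustre.test_22d

    # on a client: check per-target space, then write a large file followed by a small one
    lfs df /mnt/lustre
    dd if=/dev/zero of=/mnt/lustre/f22d-big bs=1M count=365
    dd if=/dev/zero of=/mnt/lustre/f22d-small bs=1M count=1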
set OST0 lwm back to 3, hwm back to 7
Failing mds1 on oleg238-server
Stopping /mnt/lustre-mds1 (opts:) on oleg238-server
16:11:40 (1713298300) shut down
Failover mds1 to oleg238-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg238-server: oleg238-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg238-client: oleg238-server: ssh exited with exit code 1
Started lustre-MDT0000
16:11:53 (1713298313) targets are mounted
16:11:53 (1713298313) facet_failover done
oleg238-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Destroy the created pools: test_22d
lustre.test_22d
oleg238-server: OST lustre-OST0000_UUID removed from pool lustre.test_22d
oleg238-server: Pool lustre.test_22d destroyed
Waiting 90s for 'foo'
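The "lwm"/"hwm" reset and the pool teardown at the end map onto lctl calls along these lines; a minimal sketch, assuming the watermarks are the OSP reserved_mb_low/reserved_mb_high parameters (the parameter names are an assumption based on the log's "lwm back to 3, hwm back to 7"):

    # on the MDS: restore OST0000's low/high free-space watermarks (assumed parameter names)
    lctl set_param osp.lustre-OST0000-osc-MDT0000.reserved_mb_low=3
    lctl set_param osp.lustre-OST0000-osc-MDT0000.reserved_mb_high=7

    # on the MGS: remove the OST from the pool, then destroy the pool
    lctl pool_remove lustre.test_22d lustre-OST0000
    lctl pool_destroy lustre.test_22d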