== sanity-pfl test 20b: Remove component without instantiation when there is no space ========================================================== 02:12:34 (1713507154)
Creating new pool
oleg246-server: Pool lustre.test_20b created
Adding targets to pool
oleg246-server: OST lustre-OST0000_UUID added to pool lustre.test_20b
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1932     1285756   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3833116        1252     3603672   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1252     3604720   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232        2504     7208392   1% /mnt/lustre

351+0 records in
351+0 records out
368050176 bytes (368 MB) copied, 3.52742 s, 104 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0155511 s, 67.4 MB/s
/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
Failing mds1 on oleg246-server
Stopping /mnt/lustre-mds1 (opts:) on oleg246-server
reboot facets: mds1
Failover mds1 to oleg246-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg246-server: oleg246-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg246-client: oleg246-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg246-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg246-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Destroy the created pools: test_20b
lustre.test_20b
oleg246-server: OST lustre-OST0000_UUID removed from pool lustre.test_20b
oleg246-server: Pool lustre.test_20b destroyed
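The dd throughput figures above can be sanity-checked without a Lustre setup. A minimal shell sketch (assuming only POSIX shell and awk; the byte count, record count, and elapsed time are taken from the log, and note that dd reports decimal MB, i.e. 10^6 bytes):

```shell
# 351 records of bs=1M (1 MiB = 1048576 bytes) -> total bytes written
bytes=$((351 * 1048576))
echo "total: $bytes bytes"          # prints "total: 368050176 bytes"

# dd's reported rate: bytes / elapsed seconds, in decimal MB/s
awk 'BEGIN { printf "rate: %.0f MB/s\n", 368050176 / 3.52742 / 1000000 }'
# prints "rate: 104 MB/s", matching the log line
```

The same arithmetic applied to the second dd (1 record, 0.0155511 s) gives 1048576 / 0.0155511 / 10^6 ≈ 67.4 MB/s, also matching the log.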