== sanity-pfl test 20b: Remove component without instantiation when there is no space ========================================================== 05:00:00 (1713344400)
Creating new pool
oleg301-server: Pool lustre.test_20b created
Adding targets to pool
oleg301-server: OST lustre-OST0000_UUID added to pool lustre.test_20b
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2210560        3840     2204672   1% /mnt/lustre[MDT:0]
lustre-OST0000_UUID      3771392       19456     3747840   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3771392       29696     3737600   1% /mnt/lustre[OST:1]

filesystem_summary:      7542784       49152     7485440   1% /mnt/lustre

366+0 records in
366+0 records out
383778816 bytes (384 MB) copied, 3.1579 s, 122 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0115778 s, 90.6 MB/s
/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
  lcm_layout_gen:    6
  lcm_mirror_count:  1
  lcm_entry_count:   3
    lcme_id:             1
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   10485760
      lmm_stripe_count:  1
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x100000000:0x46a5:0x0] }

    lcme_id:             4
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 10485760
    lcme_extent.e_end:   144703488
      lmm_stripe_count:  1
      lmm_stripe_size:   1048576
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 1
      lmm_objects:
      - 0: { l_ost_idx: 1, l_fid: [0x100010000:0x4684:0x0] }

    lcme_id:             5
    lcme_mirror_id:      0
    lcme_flags:          extension
    lcme_extent.e_start: 144703488
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  0
      lmm_extension_size: 134217728
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1

Failing mds1 on oleg301-server
Stopping /mnt/lustre-mds1 (opts:) on oleg301-server
reboot facets: mds1
Failover mds1 to oleg301-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg301-server: oleg301-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg301-client: oleg301-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg301-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
wait 40 secs maximumly for oleg301-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Destroy the created pools: test_20b
lustre.test_20b
oleg301-server: OST lustre-OST0000_UUID removed from pool lustre.test_20b
oleg301-server: Pool lustre.test_20b destroyed
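The commands below are a minimal sketch of how the scenario in this log can be reproduced by hand; they are inferred from the output above, not copied from the sanity-pfl test_20b script. The pool name, mount point, file path, and sizes are taken from the log; the OST argument form (OST0000) and the dd count are illustrative, and SEL support (lfs setstripe -z, Lustre 2.13 and later) is assumed.

  # Create the OST pool seen in the log and add the first OST to it.
  lctl pool_new lustre.test_20b
  lctl pool_add lustre.test_20b OST0000

  # Create a composite (PFL) file whose last component is self-extending:
  # [0, 10M) with a 1M stripe, then a component that grows toward EOF in
  # 128M steps (-z sets the extension size of the SEL component).
  mkdir -p /mnt/lustre/d20b.sanity-pfl
  lfs setstripe -E 10M -S 1M -E -1 -S 1M -z 128M \
      /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl

  # Write through the first boundary; writes past the instantiated region
  # grow the SEL component one extension size at a time.
  dd if=/dev/zero of=/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl bs=1M count=366

  # Dump the composite layout (the lcm_*/lcme_* listing above).
  lfs getstripe /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl

  # Tear down the pool, as at the end of the test.
  lctl pool_remove lustre.test_20b OST0000
  lctl pool_destroy lustre.test_20b

The getstripe dump above shows such a layout mid-growth: component 1 covers [0, 10485760) on OST 0; component 4 covers [10485760, 144703488) on OST 1, which is exactly one 134217728-byte (128 MiB) extension past the 10 MiB boundary; and component 5 is the still-uninstantiated extension component (lcme_flags: extension, lmm_stripe_count: 0) covering the remainder of the file up to EOF.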