== sanity-pfl test 20d: Low on space + 0-length comp: force extension ========================================================== 09:12:59 (1713532379)
striped dir -i0 -c2 -H all_char /mnt/lustre/d20d.sanity-pfl
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg101-server mds-ost sync done.
Creating new pool
oleg101-server: Pool lustre.test_20d created
Adding targets to pool
oleg101-server: OST lustre-OST0000_UUID added to pool lustre.test_20d
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        2592     1285096   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        2276     1285412   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116       18700     3588320   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116       29904     3574020   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232       48604     7162340   1% /mnt/lustre

1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.00770375 s, 136 MB/s
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg101-server mds-ost sync done.
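The pool setup, space report, and 1 MiB write traced above were likely produced by commands along these lines. The pool name (`test_20d`), mount point, and OST come from the log; the exact flags and file path used by the test script are assumptions:

```shell
# Hypothetical reconstruction of the setup steps in the log above.
# Requires a mounted Lustre client at /mnt/lustre and server access.

# Create an OST pool and add OST0000 to it
# ("Pool lustre.test_20d created" / "OST lustre-OST0000_UUID added").
lctl pool_new lustre.test_20d
lctl pool_add lustre.test_20d OST0000

# Report per-target space usage, as in the UUID/1K-blocks table above.
lfs df /mnt/lustre

# Write 1 MiB of data, matching "1048576 bytes (1.0 MB) copied".
# The file name is an assumption based on the test directory.
dd if=/dev/zero of=/mnt/lustre/d20d.sanity-pfl/f20d.sanity-pfl bs=1M count=1
```

These commands run only against a live Lustre cluster, so they are a sketch of the trace rather than a standalone script.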
set OST0 lwm back to 3, hwm back to 7
/mnt/lustre/d20d.sanity-pfl/f20d.sanity-pfl
  lcm_layout_gen:    4
  lcm_mirror_count:  1
  lcm_entry_count:   3
    lcme_id:             1
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 0
    lcme_extent.e_end:   67108864
      lmm_stripe_count:  1
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x28000040a:0x204:0x0] }

    lcme_id:             2
    lcme_mirror_id:      0
    lcme_flags:          init
    lcme_extent.e_start: 67108864
    lcme_extent.e_end:   134217728
      lmm_stripe_count:  1
      lmm_stripe_size:   4194304
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: 0
      lmm_pool:          test_20d
      lmm_objects:
      - 0: { l_ost_idx: 0, l_fid: [0x28000040a:0x205:0x0] }

    lcme_id:             3
    lcme_mirror_id:      0
    lcme_flags:          extension
    lcme_extent.e_start: 134217728
    lcme_extent.e_end:   EOF
      lmm_stripe_count:  0
      lmm_extension_size: 67108864
      lmm_pattern:       raid0
      lmm_layout_gen:    0
      lmm_stripe_offset: -1
      lmm_pool:          test_20d

Failing mds1 on oleg101-server
Stopping /mnt/lustre-mds1 (opts:) on oleg101-server
09:13:36 (1713532416) shut down
Failover mds1 to oleg101-server
mount facets: mds1
Starting mds1: -o localrecov /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg101-server: oleg101-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all
pdsh@oleg101-client: oleg101-server: ssh exited with exit code 1
Started lustre-MDT0000
09:13:50 (1713532430) targets are mounted
09:13:50 (1713532430) facet_failover done
oleg101-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Destroy the created pools: test_20d
lustre.test_20d
oleg101-server: OST lustre-OST0000_UUID removed from pool lustre.test_20d
oleg101-server: Pool lustre.test_20d destroyed
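The composite layout dumped above (two initialized 64 MiB components plus a trailing self-extending component flagged `extension`) could be created and inspected roughly as follows. Component boundaries (64 MiB, 128 MiB, EOF), stripe size (4 MiB), extension size (64 MiB), and the pool name are taken from the `lfs getstripe` output in the log; the exact invocation used by the test script is an assumption:

```shell
# Hypothetical sketch: build a PFL layout whose last component is a
# self-extending (SEL) component, matching the log's getstripe dump.
# Must target a file that does not yet exist on a Lustre mount.
lfs setstripe \
    -E 64M  -c 1 -S 4M \
    -E 128M -c 1 -S 4M --pool test_20d \
    -E -1   -z 64M     --pool test_20d \
    /mnt/lustre/d20d.sanity-pfl/f20d.sanity-pfl

# Dump the composite layout; the third component carries
# lcme_flags: extension and lmm_extension_size instead of
# instantiated stripes (lmm_stripe_count: 0).
lfs getstripe /mnt/lustre/d20d.sanity-pfl/f20d.sanity-pfl
```

The `-z` option sets the extension size of a self-extending component; when OST0 runs low on space, the MDS extends or spills the layout in such increments, which is the behavior test 20d exercises.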