-----============= acceptance-small: sanity-pfl ============----- Fri Apr 19 02:11:54 EDT 2024
excepting tests: 24a
oleg402-client.virtnet: executing check_config_client /mnt/lustre
oleg402-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg402-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b6633000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b6633000.idle_timeout=debug
disable quota as required
oleg402-server: oleg402-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
running as uid/gid/euid/egid 500/500/500/500, groups: [true]
running as uid/gid/euid/egid 500/500/500/500, groups: [touch] [/mnt/lustre/d0_runas_test/f7649]

== sanity-pfl test 1c: Test overstriping w/max stripe count ========================================================== 02:12:20 (1713507140)
striped dir -i1 -c2 -H crush /mnt/lustre/d1c.sanity-pfl
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.00739369 s, 138 kB/s
Pass!
PASS 1c (18s)

== sanity-pfl test 14: Verify setstripe poolname/stripe_count/stripe_size inheritance ========================================================== 02:12:39 (1713507159)
striped dir -i0 -c2 -H fnv_1a_64 /mnt/lustre/d14.sanity-pfl
oleg402-server: Pool lustre.pool1 created
oleg402-server: Pool lustre.pool2 created
Destroy the created pools: pool1,pool2
lustre.pool1
oleg402-server: Pool lustre.pool1 destroyed
lustre.pool2
oleg402-server: Pool lustre.pool2 destroyed
PASS 14 (22s)
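For reference, a minimal sketch of the kind of command test 1c exercises: creating a PFL file whose last component requests overstriping via lfs setstripe -C. The file name, component boundaries, and the 2000 overstripe count (the usual LOV_MAX_STRIPE_COUNT ceiling) are assumptions, not taken from this log:

    # two-component PFL file; the second component asks for overstriping
    # at an assumed maximum stripe count of 2000
    lfs setstripe -E 1M -c 1 -E -1 -C 2000 /mnt/lustre/d1c.sanity-pfl/f1c
    dd if=/dev/zero of=/mnt/lustre/d1c.sanity-pfl/f1c bs=1k count=1  # matches the 1 KB write above
    lfs getstripe /mnt/lustre/d1c.sanity-pfl/f1c                     # inspect per-component layout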
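The pool messages in test 14 correspond to the standard lctl pool commands, run on the MGS node. A hedged sketch of that lifecycle; the setstripe/getstripe lines are assumed examples of the inheritance being verified, and the OST list and component sizes are illustrative:

    lctl pool_new lustre.pool1                      # "Pool lustre.pool1 created"
    lctl pool_add lustre.pool1 lustre-OST0000_UUID  # populate the pool
    # set a composite default layout on the directory; new files under it
    # should inherit poolname/stripe_count/stripe_size per component
    lfs setstripe -E 1M -c 1 -p pool1 -E -1 -c 1 -p pool2 /mnt/lustre/d14.sanity-pfl
    lfs getstripe /mnt/lustre/d14.sanity-pfl/newfile   # new file shows the inherited layout
    lctl pool_remove lustre.pool1 lustre-OST0000_UUID  # empty the pool before destroying it
    lctl pool_destroy lustre.pool1                     # "Pool lustre.pool1 destroyed"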
== sanity-pfl test 20b: Remove component without instantiation when there is no space ========================================================== 02:13:02 (1713507182)
striped dir -i0 -c2 -H crush /mnt/lustre/d20b.sanity-pfl
Creating new pool
oleg402-server: Pool lustre.test_20b created
Adding targets to pool
oleg402-server: OST lustre-OST0000_UUID added to pool lustre.test_20b
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      2210688        3584     2205056   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      2210688        3328     2205312   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3771392        3072     3766272   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3771392        3072     3766272   1% /mnt/lustre[OST:1]

filesystem_summary:      7542784        6144     7532544   1% /mnt/lustre

367+0 records in
367+0 records out
384827392 bytes (385 MB) copied, 2.45204 s, 157 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0172538 s, 60.8 MB/s
/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
Failing mds1 on oleg402-server
Stopping /mnt/lustre-mds1 (opts:) on oleg402-server
reboot facets: mds1
Failover mds1 to oleg402-server
mount facets: mds1
Starting mds1: -o localrecov lustre-mdt1/mdt1 /mnt/lustre-mds1
oleg402-server: oleg402-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg402-client: oleg402-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg402-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
sleep 5 for ZFS zfs
sleep 5 for ZFS zfs
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait at most 40 secs for oleg402-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Destroy the created pools: test_20b
lustre.test_20b
oleg402-server: OST lustre-OST0000_UUID removed from pool lustre.test_20b
oleg402-server: Pool lustre.test_20b destroyed
PASS 20b (106s)

== sanity-pfl test 24a: FIEMAP upon PFL file ============= 02:14:48 (1713507288)
SKIP: sanity-pfl test_24a needs >= 3 OSTs
SKIP 24a (1s)

== sanity-pfl test complete, duration 174 sec ============ 02:14:49 (1713507289)
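For test 20b above, the shape the log reflects is roughly: fill the pool's only OST, create a PFL file whose later component points at that pool, then delete the component even though it was never instantiated. A hedged sketch; the component IDs, sizes, and layout are assumed values, not read from this run:

    lfs setstripe -E 1M -E -1 -p test_20b /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
    dd if=/dev/zero of=/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl bs=1M count=1  # instantiates only component 1
    # drop the never-instantiated second component despite the full OST
    lfs setstripe --component-del -I 2 /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
    lfs getstripe /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl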
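Test 24a (skipped here because it needs >= 3 OSTs) checks FIEMAP on a PFL file. On a setup with enough OSTs, the same ioctl can be driven by hand with filefrag; the path below is illustrative only:

    filefrag -v /mnt/lustre/f24a.sanity-pfl   # filefrag issues the FIEMAP ioctl and prints per-extent mappings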