-----============= acceptance-small: sanity-pfl ============----- Fri Apr 19 02:12:02 EDT 2024
excepting tests:
oleg343-client.virtnet: executing check_config_client /mnt/lustre
oleg343-client.virtnet: Checking config lustre mounted on /mnt/lustre
Checking servers environments
Checking clients oleg343-client.virtnet environments
Using TIMEOUT=20
osc.lustre-OST0000-osc-ffff8800b5529000.idle_timeout=debug
osc.lustre-OST0001-osc-ffff8800b5529000.idle_timeout=debug
disable quota as required
oleg343-server: oleg343-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
osd-ldiskfs.track_declares_assert=1
running as uid/gid/euid/egid 500/500/500/500, groups:
 [true]
running as uid/gid/euid/egid 500/500/500/500, groups:
 [touch] [/mnt/lustre/d0_runas_test/f7506]

== sanity-pfl test 1c: Test overstriping w/max stripe count ========================================================== 02:12:24 (1713507144)
striped dir -i1 -c2 -H crush /mnt/lustre/d1c.sanity-pfl
1+0 records in
1+0 records out
1024 bytes (1.0 kB) copied, 0.01131 s, 90.5 kB/s
Pass!
PASS 1c (13s)

== sanity-pfl test 14: Verify setstripe poolname/stripe_count/stripe_size inheritance ========================================================== 02:12:37 (1713507157)
striped dir -i0 -c2 -H crush /mnt/lustre/d14.sanity-pfl
oleg343-server: Pool lustre.pool1 created
oleg343-server: Pool lustre.pool2 created
Destroy the created pools: pool1,pool2
lustre.pool1
oleg343-server: Pool lustre.pool1 destroyed
lustre.pool2
oleg343-server: Pool lustre.pool2 destroyed
PASS 14 (19s)

== sanity-pfl test 20b: Remove component without instantiation when there is no space ========================================================== 02:12:57 (1713507177)
striped dir -i0 -c2 -H crush /mnt/lustre/d20b.sanity-pfl
Creating new pool
oleg343-server: Pool lustre.test_20b created
Adding targets to pool
oleg343-server: OST lustre-OST0000_UUID added to pool lustre.test_20b
UUID                   1K-blocks        Used   Available Use% Mounted on
lustre-MDT0000_UUID      1414116        1984     1285704   1% /mnt/lustre[MDT:0]
lustre-MDT0001_UUID      1414116        1920     1285768   1% /mnt/lustre[MDT:1]
lustre-OST0000_UUID      3833116        1388     3603536   1% /mnt/lustre[OST:0]
lustre-OST0001_UUID      3833116        1388     3604584   1% /mnt/lustre[OST:1]

filesystem_summary:      7666232        2776     7208120   1% /mnt/lustre

351+0 records in
351+0 records out
368050176 bytes (368 MB) copied, 3.45427 s, 107 MB/s
1+0 records in
1+0 records out
1048576 bytes (1.0 MB) copied, 0.0289288 s, 36.2 MB/s
/mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
Failing mds1 on oleg343-server
Stopping /mnt/lustre-mds1 (opts:) on oleg343-server
reboot facets: mds1
Failover mds1 to oleg343-server
mount facets: mds1
Starting mds1:   -o localrecov  /dev/mapper/mds1_flakey /mnt/lustre-mds1
oleg343-server: oleg343-server.virtnet: executing set_default_debug vfstrace rpctrace dlmtrace neterror ha config ioctl super lfsck all 8
pdsh@oleg343-client: oleg343-server: ssh exited with exit code 1
Started lustre-MDT0000
oleg343-client.virtnet: executing wait_import_state_mount (FULL|IDLE) mdc.lustre-MDT0000-mdc-*.mds_server_uuid
mdc.lustre-MDT0000-mdc-*.mds_server_uuid in FULL state after 0 sec
Waiting for MDT destroys to complete
Waiting for orphan cleanup...
osp.lustre-OST0000-osc-MDT0000.old_sync_processed
osp.lustre-OST0000-osc-MDT0001.old_sync_processed
osp.lustre-OST0001-osc-MDT0000.old_sync_processed
osp.lustre-OST0001-osc-MDT0001.old_sync_processed
wait 40 secs maximumly for oleg343-server mds-ost sync done.
set OST0 lwm back to 3, hwm back to 7
Destroy the created pools: test_20b
lustre.test_20b
oleg343-server: OST lustre-OST0000_UUID removed from pool lustre.test_20b
oleg343-server: Pool lustre.test_20b destroyed
Waiting 90s for 'foo'
PASS 20b (109s)

== sanity-pfl test 24a: FIEMAP upon PFL file ============= 02:14:45 (1713507285)
SKIP: sanity-pfl test_24a needs >= 3 OSTs
SKIP 24a (2s)

== sanity-pfl test complete, duration 164 sec ============ 02:14:47 (1713507287)
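For readers who want to reproduce the operations behind tests 1c and 14 by hand, a minimal sketch follows. It is not the exact invocation sanity-pfl.sh uses: the file names, pool name, stripe counts, and component boundaries are illustrative assumptions, and the lctl pool_* commands must run on the MGS node.

    # Test 1c exercises overstriping: -C requests more stripes than there
    # are OSTs, so each OST holds several stripe objects. 4 stripes on this
    # 2-OST setup is an assumed example, not the count the test passes.
    lfs setstripe -C 4 /mnt/lustre/d1c.sanity-pfl/f1c
    lfs getstripe /mnt/lustre/d1c.sanity-pfl/f1c

    # Test 14 checks inheritance: the second PFL component names no
    # pool/stripe_count/stripe_size, so it should inherit them from the
    # first component.
    lctl pool_new lustre.pool1                 # run on the MGS
    lctl pool_add lustre.pool1 OST0000         # run on the MGS
    lfs setstripe -E 1M -c 1 -S 1M -p pool1 -E eof /mnt/lustre/d14.sanity-pfl/f14
    lfs getstripe /mnt/lustre/d14.sanity-pfl/f14

If inheritance works, the getstripe output should show pool1 and the same stripe count and size on both components, even though they were given only for the first.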
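Likewise, the pool lifecycle and free-space listing visible in test 20b, and the FIEMAP query the skipped test 24a would have issued, map onto commands roughly like these. Again a sketch under assumptions: the dd-based ENOSPC setup and the lwm/hwm tuning the test performs are omitted, the component id is hypothetical, and the test-24a file name merely follows the suite's naming convention.

    # Test 20b: pool setup and the lfs df listing seen above.
    lctl pool_new lustre.test_20b              # run on the MGS
    lctl pool_add lustre.test_20b OST0000      # run on the MGS
    lfs df /mnt/lustre
    # Delete a not-yet-instantiated component by id (-I 2 is assumed).
    lfs setstripe --component-del -I 2 /mnt/lustre/d20b.sanity-pfl/f20b.sanity-pfl
    # Teardown, matching "Destroy the created pools" in the log.
    lctl pool_remove lustre.test_20b OST0000   # run on the MGS
    lctl pool_destroy lustre.test_20b          # run on the MGS

    # Test 24a (skipped here: needs >= 3 OSTs) checks FIEMAP on a PFL file;
    # filefrag -v issues the FIEMAP ioctl and prints per-extent mappings.
    filefrag -v /mnt/lustre/d24a.sanity-pfl/f24a.sanity-pfl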